The candidate-led case interview is one where the interviewer presents you with an open-ended question like "Your client is considering a merger with its next largest competitor. What should the client do?"
With the exception of McKinsey, which has moved to an interviewer-led format, most other firms use some variation of the candidate-led case. It is important that you are familiar with this format.
When I gave a candidate-led case, I expected the candidate to drive it. I might interrupt briefly to ask a specific question I wanted to test the candidate on, but generally I would let the candidate lead.
So I'd say it's reasonable to assume the interviewer will expect you to lead the case from start to finish -- just as a client would expect you to do once you start working as a consultant.
In case you're not familiar with this style of case interview, let me provide you with some context.
Some firms, particularly in early rounds or prior to the first round, have been using more pre-structured cases. These are case interviews that have essentially been started by someone else: you're presented with the information discovered so far, and then you are asked a specific question like "Given this data, what conclusion can you draw (or not draw)?" or "Given the data, would you recommend A, B, or C?"
The McKinsey Problem Solving test is one example of this.
I understand Monitor has been doing something equivalent.
Many of the written case interviews, or in-person interviews where you are handed a standardized, pre-structured case but are expected to present the rest of the case in person, are variations of this approach.
This was not the norm when I was interviewing, either as a candidate or as an interviewer. That being said, I do understand why this is being done today -- and importantly why it is being done in early rounds / pre-1st round (but much less so in later rounds).
Let me put this in some context.
If you take 10 McKinsey consultants from 10 different countries around the world and show them 5 pieces of data, they will all arrive at exactly the same conclusion. I've actually seen this, and it's amazing and scary at the same time. The only difference is how they pronounce certain words.
Logically speaking, when you have a certain set of facts, there are a finite number of conclusions that you can draw that are factually supported.
(By the way, if you want to practice this "data sufficiency" type of skill, get a GMAT test prep book and do the data sufficiency practice questions.) My guess is that the case interview processes of prior years, which did not test this skill, let through candidates who could not look at the same data as one of the firm's partners and reach the same logical, fact-based conclusion.
I know this would drive partners absolutely crazy, because you need to trust your analysts and associates to do two things without double-checking their work.
The first is to never make a math mistake (so 2 + 2 never equals 5). If you screw that up in front of a client, get a conclusion wrong, and embarrass a partner, you might as well quit -- your career is toast.
The second is, given a set of data, to draw the correct logical conclusion. And (very, very, very important) if the data is not complete enough to draw a conclusion, you have to be able to recognize that situation and communicate it to the partner / project manager / case interviewer.
The difference between a conclusion and hypothesis is the completeness of the data.
This is an important fundamental skill in day-to-day consulting.
So my guess is that the firms decided to test for this skill early in the recruiting process (hence the McKinsey Problem Solving Test, Monitor's equivalent, and case interviews involving half-solved cases that you need to finish).
Now once a firm knows you can do this, it wants to know whether you can handle the entire end-to-end problem solving process -- also known as a "candidate-led case".
The best analogy I can think of is that an interviewer-led case is like taking a math test with a pre-defined, multiple-choice question (the answer is A, B, or C). A candidate-led case is more like an essay question: very open-ended.
So the key skill needed to perform on the more open-ended cases is one called structuring. It is the process of converting an "essay"-type case (which is what the case feels like when it starts) into one that is structured (like a multiple choice question).
Actually, to be more precise, the skill is more accurately described as "structuring" and "investigating".
One person who emailed me earlier this week said the following: "I get it now... a case interview is not really an interview; it really should be called case investigating" (like trying to solve a mystery). I think that's a very appropriate image to keep in mind.
The other difference to keep in mind is that in a final round, the percentage of interviewers who are partners goes way up. The partners tend (sometimes) to focus on different aspects of the case. The slight bias I've noticed is that they tend to be more concerned with "synthesis" -- how you draw a very specific conclusion from a very wide-open case problem.
(Also on a side note, some partners will ask you to do a case while standing up. Don't be surprised by this. And instead of making notes on your pad of paper, they want you to make notes on a flip chart or white board.
A McKinsey partner in Los Angeles did this to me, and I was a bit taken aback by it. It took me a while to adapt, but I ultimately did well and got an offer. So don't be surprised if this happens to you.)
In my free online videos, you'll want to pay attention to the videos on "opening" a case (the structuring part) and "closing" a case (the synthesis part). An additional resource you'll want to be aware of is my Look Over My Shoulder® Program (note: this is a paid program). It consists of 13 interviews with 5 different cases, all of which are "candidate-led" cases.
I recently conducted an entire round of McKinsey style mock interviews with candidates who were actively interviewing with the major firms.
I recorded the interviews (audio only, so the candidates' names and appearances could remain anonymous), had all 13 interviews transcribed, and then I went back and did a detailed step-by-step analysis of what each person said vs. what they should have said -- minute-by-minute.
There were a few things that were very interesting about this round of interviews. First, by my standards, none of the 13 people I interviewed "passed" (though, significantly, 1/3 of them ended up getting offers from McKinsey & BCG a few weeks later, based on their skills plus, presumably, some of the feedback I gave them).
In particular, two of the cases I gave clearly separated candidates with A- skills vs. those with A+ skills.
What was the most common mistake made by those who ultimately got offers from the top firms but stumbled in my interview? Basically, it was the transition from case "structuring" to "investigating".
If you structure the case poorly up front, it's impossible to get the right answer. If you initially investigate a case well but fail to re-structure it to reflect newly discovered qualitative or quantitative insights, it's really difficult to crack the case.
A lot of people had real difficulty re-structuring a case in the middle.
(You'll see first hand examples of this in the Look Over My Shoulder® Program and you'll see me analyze how they boxed themselves into a corner and you'll hear my re-enactment of what they should have done instead, and the precise moment they made the mistake.)
You are not supposed to mechanically go through the framework, asking a set of 15 questions even when they are not relevant.
You are supposed to start a case that way, but in many cases you're also supposed to deviate from the framework once you've developed a hypothesis, or to go through the remainder of the framework while filtering out questions that are not necessary to test your hypothesis.
When an interviewer complains that a candidate is "overly framework driven," this is what they mean: the candidate acts as if reading from a list of questions associated with a framework, and never deviates from that list, even when new information is uncovered.
It is the equivalent of a police detective who discovers some new piece of evidence (a.k.a. data) and ignores it because it's not part of his checklist of questions to ask.
Instead, a good detective (and case interview candidate) will start by asking a set of standard questions (a.k.a. a framework) such as, "Where were you at the time of the crime?", "Can anyone prove you were there?", etc.
Then in the course of asking "standard questions" (a.k.a. standard framework questions), the detective discovers some piece of unexpected information (a.k.a. new insightful data).
This leads the detective to develop a "hunch" (a.k.a. a hypothesis) about who is likely guilty of the crime, and the detective will then focus on trying to prove (or disprove) that hunch by investigating -- looking for evidence to prove the case (the equivalent of a case interview candidate asking for factual data to test a hypothesis).
So circling back to the two especially tough cases I gave: these cases required excellent quantitative problem solving skills, excellent qualitative problem solving skills, good structuring skills, and good investigating skills -- all at the same time.
Those with physics PhDs got the quantitative part right but missed the qualitative aspects of the case. Those with less math-oriented backgrounds grasped the right qualitative issues but failed to analyze them quantitatively.