Question:
My question is regarding Monitor's written case interview format. The candidate spends 15-30 minutes reading 2-3 pages of text with graphs/tables about a business problem (I guess during this time the candidate CAN take notes but CANNOT ask the interviewer any clarifying questions), and then takes a series of questions from the interviewer.

This process, to me, doesn't seem to fall into the traditional "stall-verify" category: the candidate asking for the data he needs as the case unfolds (i.e. the candidate driving the case) versus the candidate being inundated with data even before the case opens up. The latter, which the written case requires, is unnerving to me. Specifically, I've had issues with:

a) how to open a written case, or what to expect at the start of the case when the information is already "out there"
b) frameworks, and whether they even matter here
c) weeding out extraneous information to get the right data
d) not being sure whether the information provided is sufficient to perform a certain analysis (and whether I must ask for additional information or estimate…)

My Reply:

Let me start with a caveat. I don't have any direct experience with the Monitor Written Case Interview. When I got an offer from them years ago, they used the traditional verbal, interactive case format.

That being said, I’ve seen many firms move to incorporating some type of written case such as the McKinsey Problem Solving Test.

I'll start with my thoughts as to WHY firms are using this type of format, in part or in whole.

The three biggest things this format does for the consulting firm are to provide an evaluation that 1) is frankly hard to prepare for, 2) really tests whether the candidate can ignore extraneous, irrelevant data (of which there is a ton on the job), and 3) tests whether the candidate can draw logical conclusions based on the facts available.

With a verbal case, you’re the one coming up with ideas, you’re the one asking for data, you’re the one determining if your hypothesis is right or not.

With a written case, someone else has come up with the data (typically way more data than you actually need to "solve the case"). For written case interviews like the McKinsey Problem Solving Test, the test gives you the hypothesis and you have to figure out whether it's correct. It sounds like in the Monitor Written Case the interviewer gives you the hypothesis / question and asks you whether it's correct based on the data.

In both cases, you’re taking someone else’s hypothesis and testing it against the data. This very much mirrors what day-to-day consulting work is like especially for a new consultant. Typically the engagement manager or partner comes up with the initial hypotheses for the client team.

As an Associate, your job is to go get the data and figure out whether the hypothesis is right or not. So in this respect, the Monitor Written Case and others like it really test this one specific aspect of the job more so than a verbal case could.

Also, in a real client engagement, the first week or two is often just gathering basic facts about the business without any hypotheses. These facts would be things like a 10-year sales history, 10-year profit history, 10-year profit history by customer segment, 10-year growth rate by customer segment, 10-year growth rate by customer segment for the company vs. the same growth rates for the industry, and literally dozens of other such basic facts.

These initial facts are then used by the consulting team to come up with the hypotheses that will drive the rest of the engagement. More likely, these facts will be used to check whether the initial hypothesis that triggered the engagement is in fact the most important issue, before devoting resources to trying to resolve it.

So in the first week or two of an actual engagement, you end up gathering enormous amounts of data that you don't actually need (but you aren't able to determine that until AFTER you get the data).

With respect to your specific questions, let me answer them the best I can:

a/b) In terms of how to open such a case, I’m not actually sure what Monitor is looking for here. My guess is it will probably be one of two things. Either they will ask you to open the case as you would a verbal case interview such as the ones described in my videos.

Or they will follow a Q&A format and ask you specific questions to see if you can draw a conclusion that’s fully supported by the facts or determine that you don’t have sufficient information.

If it's a traditional case, then I would just open the case as I suggest in my videos. Use a framework. And when you need data, rather than just telling the interviewer you're looking for X data, you would probably tell the interviewer you're looking for X data and then actually look in the written case to see if the data is there.

In this open-ended format, the key is that YOU bring structure to the case and refer to the data in the written case. (Separately, it's a good idea to scan the written case to see if any interesting patterns emerge that might refine your initial hypotheses. For example, if profits are terrible, it might lean you towards a Profit = Revenue – Cost framework.)

If the interviewer is looking more for a Q&A type dialog, where they ask questions about the case and you answer them, then I think the framework is less important and probably not needed at all.

c/d) In terms of having difficulty weeding out extraneous data, this is a tough one to practice. I never practiced this myself; I just happen to be naturally good at it. I can look at the data and the question and just tell whether a given piece of information is relevant or not.

The closest thing I can see to practicing this skill is pulling out a standardized test prep book (GMAT, GRE, SAT, LSAT, MCAT), especially for those tests that have a data sufficiency section.

This is a pure logic test. You don’t need to know anything about business to do well in this particular aspect of a written case. So if you need practice, go to the standardized test prep programs.

Incidentally, at McKinsey the applicants who did well as consultants all had very high math test scores. If you are really good at math, you tend to naturally be really good at this data sufficiency / logic stuff.

Hope this helps. Again, I think this kind of format is a very useful tool from the employer/interviewer's standpoint. It tests some of the raw-talent aspects of the candidate rather than what they have learned.

In the overall scheme of things, I think this is probably not the most important area to focus on in your prep for consulting interviews. The payoff, relative to the time required, of improving your logic and data sufficiency skills is going to be low.

In contrast, I think the time spent familiarizing yourself with frameworks and the case process is a good investment, because it is very possible to be talented but poorly prepared for verbal cases, and to do poorly as a result. So the first 5 – 10 hours of prep via my videos can very easily double or triple the number of interviews you pass.

My sense is that 50 – 100 hours of prep on logic and data sufficiency might improve your success rate by 20% – 30% at most.

From the consulting firm's point of view, if you have the raw talent to be a good consultant but are completely unprepared and unfamiliar with cases/frameworks, it IS quite likely you can do well on written tests like the Monitor Written Case (especially if it's oriented around data sufficiency / drawing logical conclusions) or the McKinsey Problem Solving Test, which is why I think firms are adding this type of element to the interview process.

(And my sense is that firms that are set up to train new consultants who have great raw talent but little formal business training will tend to gravitate toward this type of evaluation measure.) Those that want talent that is more polished and ready to go without much training will probably lean more toward verbal cases to test readiness.

(NOTE: Monitor has since been acquired by Deloitte. It remains to be seen whether Deloitte adopts the same interviewing process or drops it entirely.)