Case in Point by Marc Cosentino is a popular case interview preparation resource. First published in 1999, it is one of the oldest case prep resources.

I am often asked for my thoughts on this book and how to use it alongside the resources here at CaseInterview.com, so in this post I will share a recent exchange with one of my readers about Case in Point.

Reader's Question:

Victor, first, thank you so much for the resources you’ve provided.  I have watched your lectures twice and memorized your frameworks, and I feel like they are very helpful.

My question pertains to differences between what I’ve seen from your approach and that put forth in Case in Point.  Specifically, when reading through the 25 cases offered in the book, I’ve noticed a few things:

Interviewees are not hypothesis driven.  They ask questions all the time without stating what their relevance is to the broader problem and what they hope to learn from them.

Interviewees will only sometimes lay out a framework.  Often they will jump right into asking questions and then lay out an improvised framework later.  Sometimes they will never state a framework or organizational structure for their thoughts.

Interviewees will often offer ideas without supporting data.  For example, in a competitive response case (#4 if you're interested), they are asked whether they think the client should cut prices if IBM enters the market and does so.  They say no, reasoning, "I believe that Eastern offers great products at competitive prices.  If customers like us, they're not going to go to IBM to save a little money."  The problem is that none of this was established by prior questioning in the case.  This goes against your mantra of always supporting your ideas with data, or at the least hedging and saying, "I don't have the data to back this up, but my guess is…"

In general, the interviewee will maybe ask 5 questions to gather data before starting to offer ideas.  Just from watching your examples, it seems that you generally ask far more questions.  Even if you have a feeling, you seek greater confirmation.

Interviewers will often ask for brainstorm-style recommendations.  These are not the type of concluding recommendations you talk about, with three reasons and data to support them, but more like brainstorm recommendations.

Interviewers will often turn the case in new directions. They will say something like "enough of that, I want to look at this other thing."  In that case, you really have a series of short idea developments and answers, as opposed to one big problem that you have to deconstruct and solve.

When I did a practice case interview on the McKinsey website, it was also really a series of 6 small problems, offered by the interviewer, rather than 1 big problem to break apart.  Maybe this is the only way they could do it, given it was not a two-person case.  The interviewers in these cases don't seem to have any problems with this.  Nor do the post-case comments.  This is put forth as the right way to do a case.

In short, I’m confused.  The cases that I see in this book look very different from the cases that you modeled.  Yet you put Case in Point forward as a great resource (that’s why I’ve worked through all of it).  My feeling is that your way seems better, because it is more data driven.  Some of the things that these interviewees say seem fairly foolish to me.  But I also worry that maybe I’m setting my standard for data too high.

Could you please offer your thoughts on this?  Should I be expecting cases like those in Case in Point, or more like your examples?

Thanks so much for your time and help.

P.S. I haven’t listened to LOMS yet, but I plan to soon, so maybe that will answer my question.

My Reply:

There’s one chapter in Case in Point that’s really well done — the one on estimation questions. The other chapters I would generally disagree with.

So the short answer is (and of course I'm pretty biased on this): refer to my point of view anytime there's a conflict between what I say and what's said in Case in Point.

My approach to case interviews is not really an approach to case interviews. Rather, it's the McKinsey approach to solving client problems, which in many ways is very similar to the approach Bain, BCG, Monitor, LEK, etc., use to solve their clients' problems.

As a general rule, clients value a consultant's opinion, but they value a consultant's factually justified conclusion more.  It is what they ultimately pay for and can use as justification to others (shareholders, their board of directors, the CEO) for a particular course of action.

At McKinsey, we almost always had a hypothesis. Often the initial hypothesis was wrong, but we figured that out very quickly, precisely because we had a concrete hypothesis that was disprovable.

So “my” approach to the recruiting process is based on being a good consultant at McKinsey, not merely being a good candidate.  It’s a subtle but important distinction.

Also my perspective is based on landing offers at 6+ consulting firms, being in the top 10% at McKinsey, and being a former interviewer at McKinsey. So my biases are based on what I’ve personally experienced.

Now for the case interview itself, the specifics will vary a bit by firm and change a little from year to year. But, the fundamental premise behind everything the firms ask you in a case interview is to find the people who have a very high probability of being a good consultant.

Now let me address each of your questions.

1) If the problem is open-ended, you should always state a hypothesis as a way to structure your problem solving. If the problem is closed-ended, then in some cases a hypothesis is already implied by the interviewer's question.

So if the interviewer says, "Based on the following exhibit, what is the profit margin for product X under the new pricing changes?" that's a "closed" question… meaning there's only one right answer. Just answer the question.

Other times, the interviewer might give you a case and might say something like, “The client wants to do X in this situation, but they’re not 100% sure it’s a good idea.” In this case, the client has already provided the hypothesis and it makes sense to just go with it.

So in this case, there is a hypothesis, but because it’s already implied based on how the case was framed, you don’t need to state one independent of what was already provided. Just use their hypothesis, say you’re doing so, and figure out how you will test it.

For example, a client (or in your case an interviewer) might ask you, "What should I do?" Given there are a million possible answers you could give, you want to start with a hypothesis.

If the client asks something more specific like, "I want to acquire XYZ company. Should I, yes or no?" Well, the in-going hypothesis is "Yes, buy the company." Now you just need to find a way to prove or disprove it.

If you are not hypothesis driven when the question clearly calls for one, you will get “killed” in an interview. That’s because interviewers hate unfocused, meandering question answering that doesn’t have a particular line of reasoning (the hypothesis) behind it.

2) In terms of whether it makes sense to use a framework or organizing structure, I would say 90%+ of the time you want to organize your thoughts in some way. Sometimes this is a framework; other times it's just using a MECE issue tree.

One of the big reasons clients hire consultants is to take a highly ambiguous problem, one where you don't even know where to start, and simplify it in some way. The most common technique for simplifying a problem is to organize it so that each of its parts is easy to solve… and by aggregating your findings for each of the parts, you get your overall answer.

Having said that, you don't want to force-fit the wrong or an inappropriate framework onto the problem. It needs to be a relevant way of organizing your thoughts.

For example, let's say a friend of yours says, "I think I'm in love. I think I want to marry Jennifer. What do you think?"

In this case, the question is a binary one. The client (your friend) wants a yes or no answer. So you could say, "Well, let's assume it is a good idea for you to marry Jennifer. What would have to be true for this conclusion to make sense?"  (Note – consultants are known for being highly analytical, though not necessarily very romantic).

Well, let’s look at two groups of factors —

a) Emotional

b) Non-Emotional

Within emotional, the most important question is: do you love her?

Within non-emotional, let's break things down into six groups (unfortunately, not totally MECE):

B1) Values – do you and Jennifer share the same values?

B2) Goals – do you share similar life goals?

B3) Financial – Does marrying Jennifer have a positive, negative, or neutral impact on your financial situation?

B4) Social – Does marrying Jennifer have a positive, negative or neutral impact on your social situation and standing?

B5) Companionship – Do you enjoy her company?

B6) Other Stuff – (just to be somewhat MECE)

Now, I just made up what I said above. It’s obviously not an official framework. But it is a way of organizing the decision into “bite-size” chunks.  And if the friend says he loves her and if all the non-emotional factors are positive, it supports the “hypothesis” that yes, he should marry her.

And, very importantly, if one of the factors goes against marrying Jennifer, it allows him to have a more specific deliberation. So maybe he had some vague reservations in the back of his head about Jennifer. By organizing or breaking down the problem/decision at hand into its pieces, it’s easier to pinpoint the precise factor that’s bothering him.

The same is true with clients. If a client is 100% sure about a decision, she doesn't bother calling in the consultants. When the client is uncertain, there is usually some reservation that's giving her pause. Sometimes she will have a high awareness of what's bothering her. Other times, it's a "gut feeling" that the decision isn't 100% right yet, but it can be hard to articulate why.

3) Unless you're specifically asked to offer unproven ideas, in a brainstorming kind of way, I would not just shoot from the hip. Or if you do so, you can "think out loud" as part of your thinking process, but then immediately focus on turning one of those ideas into a hypothesis or something you can analyze.

[Of the major firms, McKinsey does ask towards the end of a case a “brainstorming” type of question. And it’s usually framed as “What other factors do you think would be relevant to consider (even if you don’t have time to analyze)?” It’s more a test of whether you can use common sense or business judgment to have an intuitive feel for “what’s important,” which is a slightly different skill than being able to prove what you say.]

The reason for this approach is that you’ll get killed by a client if you offer up an idea that they interpret as a conclusion, especially if you did not qualify it as an opinion or a hypothesis worth testing.

Keep in mind, when you work for a top consulting firm, your word is treated like gold.

So if you (without any data) say, "Mr. Client, you should consider shutting down the Eastern region factory," that client immediately tells all the other clients that "BCG says we should shut down the Eastern region." And you get counter-attacked by clients who have a vested interest in the Eastern region.

The reason you have to prove what you say is because you aren’t speaking for yourself. You’re speaking on behalf of McKinsey, Bain or BCG. And it is assumed that what you say is what your firm says.

Now if you "conclude" that the client should shut down the Eastern region, and you can prove why that makes sense, then you would of course state that conclusion to the senior client, even if they don't want to hear the news.

In short, consultants are always in the cross-hairs for getting their ideas shot down. Oftentimes clients (often junior clients in the organization) do not want your recommendations to be implemented. They will try to discredit your recommendations, call “BS” on them, or find flaws in your logic or reasoning.  At times, it can be an intellectually skeptical and even hostile environment.

Consulting firms want to know if you can hold your own (and protect the reputation of the firm) in that environment.

If you’re just a smooth talker who can spout ideas, but when critically challenged, you collapse, they don’t want you.

4) In terms of how many questions to ask before you offer solutions or a conclusion to a client, you want to ask the minimum number of questions needed to logically prove your conclusions. Often this takes more than five questions to figure out what’s going on.

If you can do it in five questions, do it in five. If you can do it in two, do it in two. If you need 20, and 20 is truly the absolute minimum number of questions needed to logically prove the conclusion, you ask the 20 questions.

Bottom line: proving your point efficiently (e.g., not asking unnecessary questions where the answers to the unnecessary questions would not actually help you prove or disprove the hypothesis) is the name of the game.

Also, 80% of the work in a case (and in a client engagement) is isolating the problem… figuring out the root cause of the problem. Often once you’ve defined the problem, fixing it is quite easy.

For example, if sales are down the last two years, the wrong approach is to ask a few questions and then start suggesting specific solutions like increasing the sales commission rate, or cutting prices. The right approach is to quantitatively and logically determine why sales have gone down the last two years… and then address the underlying cause which may or may not have anything to do with sales commission rates or pricing changes.

(As an example, maybe the competition introduced a new entry level product that’s taking market share away from the traditional product type that the client sells. In this case, the key question is whether the client should enter that market segment or not. If the market is moving away from the client, then sales commission and pricing changes end up solving the wrong problem.)

5) Sometimes interviewers, especially those from McKinsey, will ask for brainstorming-type opinions. This does happen, and if it does, it is worth clarifying whether the interviewer wants brainstorming-type answers or only those conclusions you can prove.  If they're looking for brainstorming, by all means give them brainstorming — though preferably by introducing categories of brainstorming ideas before you introduce the specific ones.

6) Some case interviews are deliberately “choppy” or fragmented. McKinsey in particular no longer has one big open-ended case. Instead it has a case that’s been artificially chopped up into five or six specific modules that are only loosely related to each other.  See LOMS for examples of this.  In these cases, McKinsey has taken an open-ended case and broken it down for you into discrete “chunks” that are more “closed.”

When that happens, you should answer their discrete questions.

For example, a case in this style might have a section where they want you to calculate profits for a particular product line under two different sets of assumptions. This is a test of being able to do word problem arithmetic. Just solve the case interview math problem and give the specific answer.

Other firms will ask more open-ended case questions. In general, the open-ended cases are harder. The "closed" cases are easier because the underlying open-ended problem has already been pre-structured or pre-organized for you into discrete chunks.

7) Overall, the standard for data is high amongst clients and what they expect from their consultants. In turn, consultants set the standard high for candidates.  The only exception is when the data you’re asking for is more precise than is necessary to reach a conclusion.

For example, let’s say a company has a policy that they will only invest in new construction projects that have an annual return on investment of 15%.

In other words, a $100M up-front investment needs to generate at least $15M per year to be considered a good investment.

Let’s say you’re analyzing a proposed project to build a new factory. Let’s say the annual profit from this $100M investment is somewhere between $22M – $35M per year.

Once you know the return on investment (ROI) is clearly above 15%, there’s no need to waste energy in determining more precisely where in the $22M – $35M range the return will likely end up being.
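
To spell out the arithmetic behind that judgment, using only the figures above: the 15% hurdle on a $100M investment works out to $15M per year. The low end of the estimated range, $22M per year, is a 22% annual return; the high end, $35M per year, is 35%. Since even the low end comfortably clears the hurdle, pinning down exactly where in the $22M – $35M range the return lands would not change the conclusion.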

But if you conclude that this project is a good investment and you cannot justify why you think that (and preferably prove it with some data, quantitative and qualitative), you will get ripped apart.  I know this because, as a management consultant, if I went into a practice presentation with a partner and presented something as a conclusion that I could not prove, he would yell at me and call me a moron — okay, not really, but he would talk to the engagement manager behind closed doors and take him or her to task for letting it happen.

Why does this happen? Why does anything happen in management consulting?

Because some poor soul did this in front of a super smart client and got intellectually ripped to shreds for presenting a factually unsupported conclusion… and some partner lost a client because of it… and some consulting firm got unofficially blacklisted from ever working at that company because of the crappy work that was done the one time some new consultant presented a factually unsupported conclusion.

Everything in the case interview process happens for a reason. What firms want more than anything is to have happy clients who gladly pay very large fees. Literally every aspect of a case interview can be traced back to some specific type of client situation the consulting firm wants to replicate, or a negative situation they desperately want to avoid.

Saying something you cannot prove, unless you were explicitly asked for an opinion or you qualified the statement as a not-yet-proven opinion, will get you fired from a consulting firm for the simple reason that doing so in front of the client will get the firm fired.

Finally, I would strongly recommend listening to the examples in Look Over My Shoulder®. But, rather than pay overly close attention to the specific question format (which is essentially the premise behind all of your questions), I would encourage you to carefully study and emulate the thinking and communication process demonstrated by the candidates who delivered exceptional performances.

Many of the people who use LOMS, especially those who use it a lot (50+ hours of LOMS practice), end up internalizing the thinking and communication process demonstrated.  And when they actually interview, even if they get some unusual case question they've never seen before or encounter some brand-new case format variation, they still do well.

The skills I recommend that candidates adopt are very transferable across case formats. Those who only "study to the test"… meaning they prepare for a very narrow set of specific case questions and formats without developing the more general "consulting skills" that firms seek… will get thrown when they encounter a case question or variation they haven't seen before.

Similarly, consultants routinely work in industries, on project types, solving problems they’ve personally never encountered before. But the good consultants still do well because they have the kinds of thinking skills valued by the top firms.