The position I applied to at McKinsey was the Fellow program (I applied holding a Master’s degree in international business).
The interview format I encountered in all of the McKinsey interviews was slightly different from the format used in the LOMS program. Rather than having the interviewee work through the whole framework to determine the root cause of the problem, the interviewer only wanted to see how the interviewee would structure the problem – i.e. the framework.
Once the interviewer saw that the interviewee had approached the problem in a very structured way, he/she would move on and say something like: “Okay, let’s say we did the analysis you proposed and we found out the problem is [problem area], and the data we gathered is the following:”
The interviewer would then present a series of data, most of which would be irrelevant to answering the question, to see if the candidate could quickly find his way through the numbers and correctly compute the relevant figures.
(Your advice to practice data interpretation tests under time pressure helped a lot.)
To my great surprise, at the end of the interview, the interviewers asked me a business acumen question (such as: “Do you think it would be smarter to outsource the service or to do it yourself?”) without wanting me to go through a quantitative estimate (cost/benefit analysis), but rather to answer the question based solely on “gut feeling”.
The interview format you describe is known as an interviewer-led format. In this format, the candidate is expected to do a branch of the analysis while the interviewer controls the "trunk"... as opposed to a candidate-led interview, where you as the candidate control both the trunk (deciding which topics are branches, and in which order to tackle them) and the branches.
From the success story reports I've been receiving, it seems McKinsey is pretty far along in what appears to be a firm-wide transition to this kind of interviewer-led case.
I've been watching this carefully, as some offices in some countries were still using the candidate-led approach, but my conclusion is that the firm is moving in this direction.
For Look Over My Shoulder® members, I reference these differences in a note included with LOMS on the "McKinsey 1st Round Interview." It turns out the subsequent rounds follow the same format as the first round.
I will be adding more interviews to LOMS to show cases presented in this interviewer-led format. Current LOMS members will automatically receive the updated version when it becomes available (no ETA as of yet).
The interview format is typically as follows:
1) Give you a case and have you set up the framework or issue tree... It's useful to state a hypothesis and then the structure you would use to test that hypothesis. Suggested time: about five minutes.
Often the issue tree consists of the key "drivers" of the key problem at hand.
2) The interviewer takes over and picks one branch of the framework for you to explore. If your framework did not match the interviewer's, there is a good chance the interviewer will simply tell you which framework you should be using and which branch they want you to solve.
3) You do the analysis for that branch, which quite often involves a math problem where you need to solve for a single unknown variable. Basically, this is the verbal equivalent of a GMAT or GRE math "word problem".
4) Somewhere in the case, the interviewer simply asks you, qualitatively, what you think in your gut is really going on. They want a best-guess, intuition-based hypothesis (they don't care about the structure here because you have hopefully already demonstrated that you can create structure in a previous question).
So overall, they are breaking the case down into modules and testing you one module at a time -- independent of the other modules. In many respects, this is easier than having to tie all the pieces of a case together yourself. As one success story contributor wrote in, because the case is much easier, it is now much harder to differentiate yourself.
So overall, the same skills are being tested but in a somewhat artificially-determined order and in modular "chunks," rather than as an integrated whole.