The Skill of Extrapolation as a Consultant

One of the most vital skills a new consultant can have once they're on the job is the skill of extrapolation.

This involves taking data from a client and extrapolating it to identify some kind of trend line.

In consulting, the data is rarely perfect and never complete. You are always missing something.

So, you have to end up estimating, taking a sample, "backing into" the number you are looking for (by quantifying everything else except the number you are looking for), and many other ways of drawing reasonable conclusions from imperfect information.

Some new consultants who do well in case interviews actually have a very hard time with this on-the-job reality.  This is particularly the case for those who come from science and math backgrounds -- especially those with PhDs.

Often, people from these backgrounds are looking for an exact, definitive answer.  And the reality is, in business, often there are multiple right answers.  This is troublesome for some people.

What is also difficult for some is extrapolating a trend from a bunch of data points.

Here's an overly simple example.

A few days ago, I received an email from someone who has a BCG interview coming up.

This person asked me what I thought BCG's policy is for facial hair... in particular a man's beard.

He astutely noticed that not a single male on the BCG recruiting website had any facial hair.

In my reply, I said, "I am not aware of any facial hair policy at BCG or any other firm.

"But when I was at McKinsey, I do not recall meeting any consultants in North America, Europe or Asia that had facial hair."

I also suggested the person draw his own conclusions from my statement.

Two days later, he cut off the beard he had had for several years.

That's a pretty simple example of extrapolation.

There's a joke within McKinsey that goes like this:

How many data points do you need to extrapolate a trend line?

Business Analyst/Associate:  150 data points

Engagement Manager: 25 data points

Partner: 4 data points

Director: 1 data point 🙂

This joke illustrates the trade-off between extrapolation accuracy and sample size.

There are other kinds of extrapolations too, including using data proxies.

If the data you really want does not exist, the idea is to use data that is correlated with the data you wish you had.

This is often the case in estimating market demand for a product that does not exist.

For example, when the Apple iPod/iTunes combination first came out, it would have been difficult to estimate the demand for the iPod because consumers had never seen the product before.

This is where some creativity comes into play (yet another form of using extrapolation to find the answer you seek).

Suppose Apple was your client and they wanted to know what percentage of the mp3 player market you think the iPod could get... keeping in mind, they asked you this question one year before the iPod came to market for the first time.

Well, one way you could try to extrapolate an answer to that is to say, "What functional value does the iPod/iTunes combo provide?  What is the next closest thing to this particular solution?"

The answer that comes to mind is the combination of an mp3 player, a car, and a trip to the music store.

So, you could conduct a market research project in the music industry to find out what percentage of the market hates driving to the music store and is willing to pay to avoid the trip.

Also, since iTunes sells individual songs instead of full albums, you could also ask consumers, "What would you pay to avoid having to buy the entire album and just be able to buy the one song on the album you really want most?"

Keep in mind, all of these analyses are imperfect (welcome to consulting!).

You could then triangulate these different analyses to see if you could determine a likely range of potential market demand. This too is a form of extrapolating... a specific technique called "bounding the problem."

This refers to estimating an upper boundary to the number you seek, and a lower boundary too.
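As an illustrative sketch (with entirely made-up numbers, since the original iPod analysis is hypothetical), triangulating several independent high/low estimates into a plausible range might look like this:

```python
# Hypothetical sketch of "bounding the problem": triangulate several
# imperfect, independent estimates of first-year demand into a likely
# range. All figures below are invented for illustration.

# Each method yields its own (low, high) estimate, in millions of units.
estimates = {
    "share of mp3 player market": (0.8, 2.5),
    "survey: willing to pay to skip the store trip": (1.0, 3.0),
    "analogy to portable CD player adoption": (0.5, 2.0),
}

# A simple triangulation: the plausible range is where the individual
# ranges overlap -- the highest low bound and the lowest high bound.
low = max(lo for lo, hi in estimates.values())
high = min(hi for lo, hi in estimates.values())

if low <= high:
    print(f"Triangulated demand range: {low:.1f}M - {high:.1f}M units")
else:
    print("Estimates do not overlap; revisit the assumptions")
```

The point is not the arithmetic but the discipline: each imperfect analysis narrows the band within which the true number plausibly lies.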

I mention all of this because many of the skills needed to be successful in consulting are subtle.

They are the kinds of things you figure out in consulting... eventually.

The challenge is not whether or not you will figure it out.  It's whether you will do so before the managers and partners in your firm judge your performance.

If you have a summer associate or internship-type position, you really only have a few weeks or months (depending on your role) to make the right impression and hopefully secure a full-time offer.

So, the question becomes: Do you figure out these subtle aspects to doing well in consulting before or after the partners evaluate your performance?

As you can imagine, when it comes to performing well, solidifying your reputation in your office, or developing your personal "brand," timing makes an enormous difference.

Additional Resources

If you found this post useful, I suggest becoming a registered member (it's free) to get access to the materials I used to pass 60 out of 61 case interviews, land 7 job offers, and end up working at McKinsey.

Members get access to 6 hours of video tutorials on case interviews, the actual frameworks I used to pass my interviews, and over 500 articles on case interviews.


  • Peter Oct 15, 2014, 5:54 am


    Really nice article! Thank you for such insightful writing!

    The fact is that articles like this make me realize that there are some things going on in consulting that are in my unconscious mind, but that are actually really important. Extrapolation is one of them. With my background as a signal processing engineer, I know that data is rarely complete and you sometimes have to interpolate or extrapolate, but now I see how important it is in business too!

    Thank you so much for your articles!

    Best regards,

    • Tanya Sep 5, 2018, 1:45 pm

      I just accepted an offer with an MBB firm, and I am so excited! After going through the recruitment and interview process, I realized that my background (social science phd) is extremely relevant for consulting. At first, I thought I would be at a disadvantage in comparison to the MBAs and STEM phds. However, after talking to several people and practicing over 60 cases, I realized that my skills as a social scientist were very important! Having comfort with big, messy and incomplete data. Drawing conclusions about complex questions with imperfect (but good enough!) data. Triangulating data, and often toggling between quant and qual data points. Being comfortable with estimates and being fully aware of their limitations. This post has put into words what I was feeling throughout the process.

      Your post makes me wonder why I don’t see more aggressive recruitment of social science phds–all of the recruiting on my campus took place at the medical school. It seems like us “fuzzy” phds have a lot to offer!

      • Victor Cheng Sep 7, 2018, 2:51 pm


        First off, congratulations! Well done!

        I totally agree. 20+ years ago it was an experiment by MBB to recruit PhDs at all. As the firms have grown, they keep expanding their search parameters to find raw talent (and then teach that talent business jargon, terms and concepts).

        Maybe you can take the lead at your firm to expand the parameters even further. Good luck and nice job.


  • Pavel Jul 15, 2014, 5:33 pm

    Hi Victor, thank you for a very interesting post.

    Before I turn to my question, briefly about myself: I was with Big4 firms as an analyst and graduated from a top MBA program in Europe last month. I am starting at one of the MBB firms in September.

    As you can judge from my background, I am aware that extrapolation is vitally important in consulting. I am also aware that in business access to data is imperfect at best, data can be incomplete and biased, and one cannot hope to achieve the level of certainty characteristic of science.

    Turning to the issue, my question is how much extrapolation is appropriate. There is a thin red line between providing a “good enough” estimate based on data and guessing at random, taking the result out of thin air, putting garbage in and taking garbage out.

    This is not a theoretical question – this is an ethical question at the very heart of the consulting profession.

    I once heard a rule of thumb that when making an extrapolation one should ask himself whether he would be comfortable making a business decision as a client on the basis of that extrapolation (something akin to the Golden Rule “Do unto others as…”). On another engagement, a project leader stressed the importance of providing clients with our assumptions and the details of our estimation methods.

    What is your opinion on the ethical implications of extrapolating from limited datasets?

    • Victor Cheng Jul 15, 2014, 5:47 pm


      I don’t think there is any ethical issue with using estimates with clients. Where there IS an ethical issue, a major one, is implying more certainty than actually exists.

      So if you have lousy data, and you’re using a lousy estimation methodology, and you’re passing the result off as certainty, THAT is a big problem.

      Clients get that perfect data doesn’t exist. They do forecasts for Wall Street every 90 days. They have to do estimates themselves all the time. It’s never exact.

      The key is to ensure everyone knows it is an estimate. If the estimate varies widely with the assumptions (e.g., the model is highly sensitive to the assumptions), I personally would state them. Very commonly at McKinsey we would estimate a range based on the high / low extremes of the assumptions we were using. Then on the chart we would show a bar for the low end of the range in a solid color, and show the high end of the range using a stacked bar (on top of the solid bar) representing the incremental difference up to the high end of the range. This incremental piece was usually drawn with a dashed line.

      If a decision makes logical sense at both the high and low end of the range, then the decision makes sense period.

      So if your minimum Return on Investment threshold for a new project is at least a 15% annual return, and your analysis shows that for a particular project your high/low estimate is 24% – 39%, then the decision makes sense regardless of where in the range the actual number lies.

      Estimates are GREAT for situations like the example I showed above. Similarly, if the estimated ROI range were 5% – 12%, this too would be a no-brainer decision. The project stinks. Don’t do it.

      Where an estimate is less helpful is when the ROI range is, say, 10% – 20%. In this case the go / no go decision could completely reverse itself depending on where in the estimate range the actual ROI lies. One could argue we don’t know enough. If it were my company, I’d say there’s not enough margin for error, and I personally would not pursue the project because it’s not clearly a good idea even if our estimate is off a little. I like betting on “sure things” rather than borderline cases.
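The decision rule described above can be sketched in a few lines of Python (the 15% hurdle rate and the ROI ranges are simply the illustrative numbers from the examples above):

```python
# Sketch of the high/low range decision rule: a project is a clear call
# only if BOTH ends of the estimated ROI range fall on the same side of
# the hurdle rate.

HURDLE = 0.15  # minimum acceptable annual ROI (15%), per the example

def decide(roi_low: float, roi_high: float) -> str:
    """Return a go / no-go / inconclusive call for an estimated ROI range."""
    if roi_low >= HURDLE:
        return "go"            # even the pessimistic case clears the bar
    if roi_high < HURDLE:
        return "no-go"         # even the optimistic case falls short
    return "inconclusive"      # the answer flips within the range; get better data

print(decide(0.24, 0.39))  # clears the hurdle at both ends -> "go"
print(decide(0.05, 0.12))  # fails at both ends -> "no-go"
print(decide(0.10, 0.20))  # straddles the hurdle -> "inconclusive"
```

Only the third case, where the range straddles the hurdle, demands better data before deciding.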

      Estimates have their place. The key is to know when and under what circumstances you can rely on them (and how much to rely on them) and to make sure your client doesn’t presume more certainty than exists.

