Your Sales Forecast Accuracy Sucks! And Why It Doesn’t Matter

“If you’re on the forecast accuracy horse, it’s dead. Get off!” intones Dave Garwood, of R. D. Garwood in Atlanta. His mellifluous baritone voice could sell gilded steak knives to vegetarians. But Garwood is a manufacturing guy, not a sales executive. In 2000, he became one of a handful of people to be honored with the prestigious Lifetime Achievement Award by the American Production & Inventory Control Society (APICS).

“I can’t think of a way to clear the room faster of salespeople than to bring up sales forecasting,” he says. Salespeople aren’t stupid. They flee because they remember the bruised backsides they earned when forecasts didn’t pan out.

Sales professionals are often accountable for forecast accuracy. So much so, it’s embedded in their job descriptions: “The Local Sales Manager will conduct sales training, sales meetings and weekly one-on-one meetings for sales staff and must be able to forecast accurately and maintain excellent client relationships.” No doubt the company that posted that opening has a lovely pond for its employees to walk across.

Sales forecast accuracy seems to be the Holy Grail. Blogs and videos tout ways to gain the required clairvoyance. “A lack of discipline around sales forecasting can kill your sales organization’s credibility,” Matt Heinz wrote in a blog post, “Seven Steps to Greater Sales Forecast Accuracy.” He’s right, but other forecasting impediments are far more consequential.

First, “the accuracy of most forecasts depends on decisions being made by people rather than by Mother Nature. Mother Nature, with all her vagaries, is a lot more dependable than a group of human beings trying to make up their minds about something,” Peter L. Bernstein wrote in his book, Against the Gods: The Remarkable Story of Risk. Second, unanticipated [stuff] happens.

This deal is in the bag! Stefano, a top-producing sales executive, forecast a $500,000 enterprise software opportunity to close in the second quarter. He had been working it for a year and had built solid relationships with the buying team. Just two weeks before the anticipated order date, the client’s project manager announced he was leaving to take an extended mountain biking trip in the Andes with his girlfriend—effective immediately. Just three days later, the client announced it had delayed a major product release because a crucial component sourced from a single supplier was no longer available. Concerned about cash flow, and not knowing when the project manager would tire of his adventure in South America, management decided to “put all IT projects on hold.” All the discipline in the world couldn’t change the outcome: Stefano incorrectly forecast a hefty chunk of revenue. He is remembered at his now-former company as a bum.

Stefano’s story is anecdotal, but it reminds us that more often than not, buying decisions hinge on events we don’t foresee, let alone control. And even the most sophisticated statistical algorithms and predictive analytics can’t account for unanticipated events—what Nassim Taleb calls Black Swans. Loss of key personnel. Supply chain disruptions. Natural disasters. Mergers and acquisitions. Consequences from ferment in Syria, nuclear Iran, and antagonism between the Koreas . . . It’s a very long list.

Taleb’s Black Swan has three attributes: “First, it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable,” he writes. The problem is, management insists on good predictions, feeding a vicious cycle of forecasting, planning, experiencing performance gaps, and recovering from operational mistakes. Repeat. As author Nate Silver wrote in The Signal and the Noise, “If you can’t make a good prediction, it is very often harmful to pretend that you can.” How would the newly hired Local Sales Manager respond to that thought on his or her first day on the job?

Forecasting got its start a long time ago, when people recognized fate alone didn’t govern outcomes. “Capitalism could not have flourished without two new activities that had been unnecessary so long as the future was a matter of chance or of God’s will. The first was bookkeeping, a humble activity but one that encouraged the dissemination of the new techniques of numbering and counting. The other was forecasting, a much less humble and far more challenging activity that links risk-taking with direct payoffs,” Bernstein wrote. Silver adds to this idea: “Forecasting reflected the new Protestant worldliness rather than the otherworldliness of the Holy Roman Empire. Making a forecast typically implied planning under conditions of uncertainty. It suggested having prudence, wisdom, and industriousness . . .”

It’s interesting that neither author mentions accuracy in these explanations. Though as Silver points out in his book, accuracy has become implicit in forecasting. He devotes considerable effort toward developing that idea when he describes the science of weather forecasting. But like your annoying Uncle Mel who arrives at your home unannounced and then won’t leave, that same demand for pinpoint accuracy has infected sales forecasting, and it just won’t go away. “There’s very little that’s really predictive,” said Jan Hatzius, Chief Economist at Goldman Sachs. “Figuring out what’s truly causal and what’s correlation is very difficult to do.”

Tweet Me! Friend Me! Make Me Buy! “What happens in systems with noisy data and underdeveloped theory . . . is a two-step process. First, people start to mistake the noise for a signal. Second, this noise pollutes journals, blogs, and news accounts with false alarms, undermining good science and setting back our ability to understand how the system really works . . . As the memory of our mistakes fades, the signal will again seem to shimmer over the horizon. Parched for prediction, we will pursue it, even if it is a mirage,” according to Silver. Think of him the next time your CFO says “these sales forecasts stink!”

My algorithm is better than your algorithm! As a pre-sales consultant for a logistics software company in the mid-1980s, I visited a large pharmaceutical manufacturer that used my company’s software for sales forecasting. The customer’s distribution planners were complaining, frustrated that their supply-chain performance metrics hadn’t improved. We brought in the heavy gun—in this case, the grand Pooh-bah logistician who developed the near-unexplainable linear regression models, written in APL, that cranked out the company’s complex logistics plans from deep within their IBM mainframe. Toward the end of the meeting, his patience snapped and he shouted, “The software IS doing what it’s supposed to do, and I know, because I wrote the damn code!” A classic mismatch of expectations: the customer wanted accurate forecasts, but the software wasn’t designed to provide them. As Garwood points out, the purpose of a forecast is to get to a set of numbers that are reliable—to “reduce variation, not eliminate it.”
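To make Garwood’s distinction concrete, here is a minimal sketch in Python (not the vendor’s APL models, and using entirely invented numbers): a simple least-squares trend fitted to hypothetical monthly demand narrows the spread of forecast errors compared with a naive “same as last month” forecast, but the errors never shrink to zero.

```python
# A hedged sketch: invented demand data and a toy trend model, only to
# illustrate "reduce variation, not eliminate it."
import numpy as np

rng = np.random.default_rng(42)
months = np.arange(60)
demand = 1000 + 15 * months + rng.normal(0, 60, size=60)  # hypothetical monthly demand

train, test = slice(0, 48), slice(48, 60)  # fit on 48 months, judge on the last 12

# Naive forecast: next month will look exactly like the month before it.
naive_error = demand[test] - demand[47:59]

# Trend forecast: ordinary least squares on month number, fit to the training window.
slope, intercept = np.polyfit(months[train], demand[train], 1)
trend_error = demand[test] - (intercept + slope * months[test])

print(f"naive forecast error spread (std): {naive_error.std():.1f}")
print(f"trend forecast error spread (std): {trend_error.std():.1f}")  # usually smaller, never zero
```

The point of the toy comparison is not the particular numbers; it is that even a model that fits the history well still leaves an irreducible band of error around every forecast.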

Good forecasts depend on making informed judgments, and those judgments rely on assumptions and past data that may no longer be relevant. Dr. Alex Ozonoff, an epidemiologist at the Harvard School of Public Health, described it this way: “It should be a given that whatever forecast we make on average will be wrong . . . so usually it’s about understanding how it’s wrong, and what to do when it’s wrong, and minimizing the cost to us when it’s wrong.”
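One way to act on Ozonoff’s advice is sketched below, again with invented numbers: stop hunting for the single “most accurate” number and instead pick the commitment that minimizes the expected cost of being wrong, since under-planning and over-planning rarely hurt equally. The scenario distribution and the per-unit penalties here are assumptions for illustration only.

```python
# A hedged illustration: hypothetical bookings scenarios and made-up penalty
# rates, used to pick the plan that minimizes the expected cost of being wrong.
import numpy as np

rng = np.random.default_rng(7)
scenarios = rng.normal(loc=500, scale=120, size=10_000)  # possible bookings, in $K (invented)

COST_SHORT = 3.0   # assumed cost per $K of planning too low (expedites, lost sales)
COST_EXCESS = 1.0  # assumed cost per $K of planning too high (idle capacity, stock)

def expected_cost(plan: float) -> float:
    shortfall = np.maximum(scenarios - plan, 0)
    excess = np.maximum(plan - scenarios, 0)
    return float((COST_SHORT * shortfall + COST_EXCESS * excess).mean())

candidate_plans = np.arange(300, 801, 10)
best_plan = candidate_plans[np.argmin([expected_cost(p) for p in candidate_plans])]

print(f"average of scenarios: {scenarios.mean():.0f}K")
print(f"cost-minimizing plan: {best_plan}K")  # sits above the average when shortfalls cost more
```

With these assumed penalties the best plan lands above the average scenario, which is the practical version of Ozonoff’s point: the question is not whether the forecast will be wrong, but how expensive each direction of wrongness is.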

Republished with author's permission from original post.
