Reprint: R0707K

The primary goal of forecasting is to identify the full range of possibilities facing a company, society, or the world at large. In this article, Saffo demythologizes the forecasting process to help executives become sophisticated and participative consumers of forecasts, rather than passive absorbers. He illustrates how to use forecasts to at once broaden understanding of possibilities and narrow the decision space within which one must exercise intuition.

The events of 9/11, for example, were a much bigger surprise than they should have been. After all, airliners flown into monuments were the stuff of Tom Clancy novels in the 1990s, and everyone knew that terrorists had a very personal antipathy toward the World Trade Center. So why was 9/11 such a surprise? What can executives do to avoid being blindsided by other such wild cards, be they radical shifts in markets or the seemingly sudden emergence of disruptive technologies?

In describing what forecasters are trying to achieve, Saffo outlines six simple, commonsense rules that smart managers should observe as they embark on a voyage of discovery with professional forecasters. Map a cone of uncertainty, he advises, look for the S curve, embrace the things that don’t fit, hold strong opinions weakly, look back twice as far as you look forward, and know when not to make a forecast.

People at cocktail parties are always asking me for stock tips, and then they want to know how my predictions have turned out. Their requests reveal the common but fundamentally erroneous perception that forecasters make predictions. We don’t, of course: Prediction is possible only in a world in which events are preordained and no amount of action in the present can influence future outcomes. That world is the stuff of myth and superstition. The one we inhabit is quite different—little is certain, nothing is preordained, and what we do in the present affects how events unfold, often in significant, unexpected ways.

The role of the forecaster in the real world is quite different from that of the mythical seer. Prediction is concerned with future certainty; forecasting looks at how hidden currents in the present signal possible changes in direction for companies, societies, or the world at large. Thus, the primary goal of forecasting is to identify the full range of possibilities, not a limited set of illusory certainties. Whether a specific forecast actually turns out to be accurate is only part of the picture—even a broken clock is right twice a day. Above all, the forecaster’s task is to map uncertainty, for in a world where our actions in the present influence the future, uncertainty is opportunity.

Unlike a prediction, a forecast must have a logic to it. That’s what lifts forecasting out of the dark realm of superstition. The forecaster must be able to articulate and defend that logic. Moreover, the consumer of the forecast must understand enough of the forecast process and logic to make an independent assessment of its quality—and to properly account for the opportunities and risks it presents. The wise consumer of a forecast is not a trusting bystander but a participant and, above all, a critic.

Even after you have sorted out your forecasters from the seers and prophets, you still face the task of distinguishing good forecasts from bad, and that’s where this article comes in. In the following pages, I try to demythologize the forecasting process so that you can become a more sophisticated and participative consumer of forecasts, rather than a passive absorber. I offer a set of simple, commonsense rules that you can use as you embark on a voyage of discovery with professional forecasters. Most important, I hope to give you the tools to evaluate forecasts for yourself.

Rule 1: Define a Cone of Uncertainty

As a decision maker, you ultimately have to rely on your intuition and judgment. There’s no getting around that in a world of uncertainty. But effective forecasting provides essential context that informs your intuition. It broadens your understanding by revealing overlooked possibilities and exposing unexamined assumptions regarding hoped-for outcomes. At the same time, it narrows the decision space within which you must exercise your intuition.

I visualize this process as mapping a cone of uncertainty, a tool I use to delineate possibilities that extend out from a particular moment or event. The forecaster’s job is to define the cone in a manner that helps the decision maker exercise strategic judgment. Many factors go into delineating the cone of uncertainty, but the most important is defining its breadth, which is a measure of overall uncertainty. Other factors—relationships among elements, for example, and the ranking of possible outcomes—must also be considered in developing a forecast, but determining the cone’s breadth is the crucial first step. Imagine it is 1997, the Toyota Prius has just gone on sale in Japan, and you are forecasting the future of the market for hybrid cars in the United States. External factors to consider would be oil price trends and consumer attitudes regarding the environment, as well as more general factors such as economic trends. Inside the cone would be factors such as the possible emergence of competing technologies (for instance, fuel cells) and an increased consumer preference for small cars (such as the Mini). At the edge of the cone would be wild cards like a terrorist attack or a war in the Middle East. These are just a very few representative examples. (See the exhibit “Mapping the Cone of Uncertainty” for more on the process.)

A cone of uncertainty delineates the possibilities that extend out from a particular moment or event. The most important factor in mapping a cone is defining its breadth, which is a measure of overall uncertainty. In other words, the forecaster determines what range of events or products the cone should encompass. Drawing the cone is a dynamic process, and what we see here is just one iteration.

Let’s take the case of robot products, a minicraze that has been emerging and subsiding since the mid-1980s. The events before 2007 indicate that activity in this area is building, and it seems only a matter of time before this industry takes off, in the same way PCs took off in the mid-1980s and the Web took off in the mid-1990s.

Trend lines are usually the last things considered when developing a forecast.

In drawing this cone, my first step was to note the distinction between appliance-centric robots and entertainment-centric robots, represented by the dotted line across the middle of the cone. The closer to the dotted line a particular product or event is, the more it has in common with the category on the opposite side of the line. The DARPA Grand Challenges, which may end up as indicators of robotic highway vehicles, are military projects and are thus located far from the dotted line in the middle of the cone.

In the neck of the cone is a key speculation: Who will be the entrepreneur who launches the robot craze? Deeper in the cone are several possible outcomes; the closer to the center of the cone’s main axis they are, the more likely these events are to transpire. Along the edges of the cone are less likely events—the wild cards—which, if they did happen, would be transformative (like the emergence of intelligent robot companions).

Note that I’ve left plenty of blank spaces—this is where I will add to or refine my forecast. Above all, forecasts are meant to be scribbled on, disagreed with, and tossed out—and replaced with new, better ones.

Drawing a cone too narrowly is worse than drawing it too broadly. A broad cone leaves you with a lot of uncertainty, but uncertainty is a friend, for its bedfellow is opportunity—as any good underwriter knows. The cone can be narrowed in subsequent refinements. Indeed, good forecasting is always an iterative process. Defining the cone broadly at the start maximizes your capacity to generate hypotheses about outcomes and eventual responses. A cone that is too narrow, by contrast, leaves you open to avoidable unpleasant surprises. Worse, it may cause you to miss the most important opportunities on your horizon.

The art of defining the cone’s edge lies in carefully distinguishing between the highly improbable and the wildly impossible. Outliers—variously, wild cards or surprises—are what define this edge. A good boundary is one made up of elements lying on the ragged edge of plausibility. They are outcomes that might conceivably happen but make one uncomfortable even to contemplate.

The most commonly considered outliers are wild cards. These are trends or events that have low probabilities of occurrence (under 10%) or probabilities you simply cannot quantify but that, if the events were to occur, would have a disproportionately large impact. My favorite example of a wild card, because its probability is so uncertain and its impact so great, is finding radio evidence of intelligent life somewhere else in the universe. Nobody knows if we will ever receive a message (radio astronomers have been listening since the late 1950s), but if we did, it would send a vast and unpredictable tremor through the zeitgeist. One-third of the world’s population would probably worship the remote intelligences, one-third would want to conquer them, and the final third (the readers of this magazine) would want to do some extraterrestrial market research and sell them something.

The tricky part about wild cards is that it is difficult to acknowledge sufficiently outlandish possibilities without losing your audience. The problem—and the essence of what makes forecasting hard—is that human nature is hardwired to abhor uncertainty. We are fascinated by change, but in our effort to avoid uncertainty we either dismiss outliers entirely or attempt to turn them into certainties that they are not. This is what happened with the Y2K problem in the final years before January 1, 2000. Opinions clustered at the extremes, with one group dismissing the predictions of calamity and another stocking up on survival supplies. The correct posture toward Y2K was that it was a wild card—an event with high potential impact but very low likelihood of occurrence, thanks to years of hard work by legions of programmers fixing old code.

The result of the Y2K nonevent was that many people concluded they had been the victims of someone crying Y2K wolf, and they subsequently rejected the possibility of other wild cards ever coming to pass. Consideration of anything unlikely became unfashionable, and as a result, 9/11 was a much bigger surprise than it should have been. After all, airliners flown into monuments were the stuff of Tom Clancy novels in the 1990s (inspired by Clancy, I helped write a scenario for the U.S. Air Force in 1997 that opened with a plane being flown into the Pentagon), and it was widely known that the terrorists had a very personal antipathy toward the World Trade Center. Yet the few people who took this wild card seriously were all but dismissed by those who should have been paying close attention.

The result of the Y2K nonevent was that many people subsequently rejected the possibility of other wild cards ever coming to pass. As a result, 9/11 was a much bigger surprise than it should have been.

Human nature being what it is, we are just as likely to overreact to an unexpected wild card by seeing new wild cards everywhere. That’s a danger because it can lead you to draw a hollow cone—one that is cluttered with distracting outliers at the edge and neglected probabilities at the center. So don’t focus on the edge to the exclusion of the center, or you will be surprised by an overlooked certainty. Above all, ask hard questions about whether a seeming wild card in fact deserves to be moved closer to the center.

Rule 2: Look for the S Curve

Change rarely unfolds in a straight line. The most important developments typically follow the S-curve shape of a power law: Change starts slowly and incrementally, putters along quietly, and then suddenly explodes, eventually tapering off and even dropping back down.

The mother of all S curves of the past 50 years is the curve of Moore’s Law, the name given to Gordon Moore’s brilliant 1965 conjecture that the density of circuits on a silicon wafer doubles every 18 months. We can all feel the consequences of Moore’s Law in the extravagant surprises served up by the digital revolution swirling around us. Of course, the curve of Moore’s Law is still unfolding—it is still a “J”—with the top of the “S” nowhere in sight. But it will flatten eventually, certainly with regard to silicon circuit density. Even here, though, engineers are sure to substitute denser circuit-carrying materials (like nanoscale and biological materials) as each successive material reaches saturation, so the broadest form of the Moore’s Law curve (density regardless of the material) will keep climbing for some time to come. This distinction reveals another important feature of S curves, which is that they are fractal in nature. Very large, broadly defined curves are composed of small, precisely defined and linked S curves. For a forecaster, the discovery of an emergent S curve should lead you to suspect a larger, more important curve lurking in the background. Miss the larger curve and your strategy may amount to standing on a whale, fishing for minnows.
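
To make that compounding concrete, here is a minimal back-of-the-envelope sketch (mine, not the article's) of how an 18-month doubling period multiplies circuit density over time. The doubling period comes from the conjecture described above; the time horizons are arbitrary illustrations.

# Sketch of Moore's Law-style compounding. The 18-month doubling period
# is the conjecture described above; the horizons below are arbitrary.

def density_multiple(years, doubling_months=18.0):
    """How many times density multiplies after `years` of steady doubling."""
    doublings = (years * 12.0) / doubling_months
    return 2.0 ** doublings

for years in (1.5, 5, 10, 20, 40):
    print(f"{years:>4} years -> roughly {density_multiple(years):,.0f}x the starting density")

Forty years of 18-month doublings multiplies density by a factor on the order of a hundred million, which is why intuition trained on straight lines fails so badly on curves like this.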

The art of forecasting is to identify an S-curve pattern as it begins to emerge, well ahead of the inflection point. The tricky part of S curves is that they inevitably invite us to focus on the inflection point, that dramatic moment of takeoff when fortunes are made and revolutions launched. But the wise forecaster will look to the left of the curve in hopes of identifying the inflection point’s inevitable precursors. Consider Columbus’s 1492 voyage. His discovery falls at the inflection point of Western exploration. Columbus was not the first fifteenth-century explorer to go to the New World—he was the first to make it back, and he did so at a moment when his discovery would land like a spark in the economic tinder of a newly emergent Europe and launch thousands upon thousands of voyages westward. Noting the earlier, less successful voyages, a good forecaster would have seen that the moment was ripe for an inflection point and could have advised the Portuguese that it would be unwise to turn down Columbus’s request.

Ironically, forecasters can do worse than ordinary observers when it comes to anticipating inflection points. Ordinary folks are simply surprised when an inflection point arrives seemingly out of nowhere, but innovators and would-be forecasters who glimpse the flat-line beginnings of the S curve often miscalculate the speed at which the inflection point will arrive. As futurist Roy Amara pointed out to me three decades ago, there is a tendency to overestimate the short term and underestimate the long term. Our hopes cause us to conclude that the revolution will arrive overnight. Then, when cold reality fails to conform to our inflated expectations, our disappointment leads us to conclude that the hoped-for revolution will never arrive at all—right before it does.

One reason for the miscalculations is that the left-hand part of the S curve is much longer than most people imagine. Television took 20 years, plus time out for a war, to go from invention in the 1930s to takeoff in the early 1950s. Even in that hotbed of rapid change, Silicon Valley, most ideas take 20 years to become an overnight success. The Internet was almost 20 years old in 1988, the year that it began its dramatic run-up to the 1990s dot-com eruption. So having identified the origins and shape of the left-hand side of the S curve, you are always safer betting that events will unfold slowly than concluding that a sudden shift is in the wind. The best advice ever given to me was by a rancher who reminded me of an old bit of folk wisdom: “Son, never mistake a clear view for a short distance.”

Once an inflection point arrives, people commonly underestimate the speed with which change will occur. The fact is, we are all by nature linear thinkers, and phenomena governed by the sudden, exponential growth of power laws catch us by surprise again and again. Even if we notice the beginning of a change, we instinctively draw a straight line diagonally through the S curve, and although we eventually arrive in the same spot, we miss both the lag at the start and the explosive growth in the middle. Timing, of course, is everything, and Silicon Valley is littered with the corpses of companies that mistook a clear view for a short distance and of others that misjudged the magnitude of the S curve they happened upon.
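
As a rough numerical sketch of that diagonal straight line, the comparison below pits a logistic S curve against a linear extrapolation that reaches the same endpoint over the same horizon. The midpoint, steepness, and ceiling are assumed, illustrative values, not drawn from any real market.

import math

# Illustrative comparison of an S curve with the straight line a linear
# thinker might draw through it. All parameters are assumed for illustration.

def s_curve(t, midpoint=20.0, steepness=0.4, ceiling=100.0):
    """Logistic adoption curve: slow start, explosive middle, eventual saturation."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

def straight_line(t, horizon=40.0, ceiling=100.0):
    """Naive linear extrapolation reaching the same ceiling over the same horizon."""
    return ceiling * t / horizon

for year in range(0, 41, 5):
    s, line = s_curve(year), straight_line(year)
    print(f"year {year:>2}: S curve {s:6.1f}   straight line {line:6.1f}   gap {s - line:+7.1f}")

The straight line overstates progress in the early, flat years and then badly understates it in the years just past the inflection point, which is the double miscalculation described above.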

Also expect the opportunities to be very different from those the majority predicts, for even the most expected futures tend to arrive in utterly unexpected ways. In the early 1980s, for example, PC makers predicted that every home would shortly have a PC on which people would do word processing and use spreadsheets or, later, read encyclopedias on CDs. But when home PC use did finally come about, it was driven by entertainment, not work, and when people finally consulted encyclopedias on-screen a decade after the PC makers said they would, the encyclopedias were online. The established companies selling their encyclopedias only on CD quickly went out of business.

Rule 3: Embrace the Things That Don’t Fit

The novelist William Gibson once observed: “The future’s already arrived. It’s just not evenly distributed yet.” The leading-edge line of an emerging S curve is like a string hanging down from the future, and the odd event you can’t get out of your mind could be a weak signal of a distant industry-disrupting S curve just starting to gain momentum.

The entire portion of the S curve to the left of the inflection point is paved with indicators—subtle pointers that when aggregated become powerful hints of things to come. The best way for forecasters to spot an emerging S curve is to become attuned to things that don’t fit, things people can’t classify or will even reject. Because of our dislike of uncertainty and our preoccupation with the present, we tend to ignore indicators that don’t fit into familiar boxes. But by definition anything that is truly new won’t fit into a category that already exists.

A classic example is the first sales of characters and in-game objects from the online game EverQuest on eBay in the late 1990s. Though eBay banned these sales in 2001, they anticipated the recent explosive growth of commerce in Second Life, Linden Lab’s virtual world in which members create 3-D avatars (digital alter egos). Through the avatars, members engage in social activities, including the creation and sale of in-world objects in a currency (Linden dollars) that can be exchanged for real dollars through various means. Today there are approximately 12 million subscribers participating in virtual world simulations like Second Life, and they’re having an impact measurable in actual dollars. Real transactions connected with Second Life and other online simulations now are (conservatively) estimated at more than $1 billion annually. Where it ends is still uncertain, but it is unquestionably a very large S curve.

More often than not, indicators look like mere oddball curiosities or, worse, failures, and just as we dislike uncertainty, we shy away from failures and anomalies. But if you want to look for the thing that’s going to come whistling in out of nowhere in the next years and change your business, look for interesting failures—smart ideas that seem to have gone nowhere.

Let’s go back to Second Life. Its earliest graphical antecedent was Habitat, an online environment developed by Lucasfilm Games in 1985. Though nongraphical MUDs (multiple user dimensions) were a cultish niche success at the time, Habitat quickly disappeared, as did a string of other graphical MUDs developed in the 1980s and 1990s. Then the tide turned in the late 1990s, when multiplayer online games like EverQuest and Ultima started to take off. It was just a matter of time before the S curve that had begun with Habitat would spike for social environments as well as for games. Linden Lab’s founders arrived on the scene with Second Life at the right time and with the right vision—that property ownership was the secret to success. (Sony missed this crucial point and insisted that everything in EverQuest, including user-created objects, was Sony’s property, thus cutting EverQuest out of the wild sales-driven growth of virtual world simulations.) So although the explosive success of Second Life came as a considerable surprise to many people, from a forecasting perspective it arrived just about on time, almost 20 years after Habitat briefly appeared and expired.

As the Second Life example illustrates, indicators come in clusters. Here’s another good example. Some readers will recall the flurry of news around the first two DARPA Grand Challenges, in which inventors and researchers were invited by the U.S. Department of Defense to design robots that could compete in a 100-mile-plus race across the Mojave Desert. The first Grand Challenge, which offered a $1 million prize, was held in March 2004. Most of the robots died in sight of the starting line, and only one robot got more than seven miles into the course. The Challenge’s ambitious goal looked as remote as the summit of Everest. But just 19 months later, at the second Grand Challenge, five robots completed the course. Significantly, 19 months is approximately one doubling period under Moore’s Law.

Around the same time I noticed a sudden new robot minicraze popping up that many people dismissed as just another passing fad. At the center of the craze was the Roomba, an inexpensive ($200 to $300) “smart” vacuum cleaner the size of a pizza pan. What was odd was that my friends with Roombas were as wildly enthusiastic about these machines as they had been about their original 128K Macs—and being engineers, they had never before shown any interest in owning, much less been excited by, a vacuum cleaner. Stranger yet, they gave their Roombas names, and when I checked with Roomba’s maker, iRobot, I learned that in fact two-thirds of Roomba owners named their Roombas and one-third confessed to having taken their Roombas on vacation with them or over to friends’ houses to show them off.

Alone, this is just a curious story, but considered with the Grand Challenge success, it is another compelling indicator that a robotics inflection point lies in the not-too-distant future. What form this approaching robot revolution will take is still too uncertain to call, but I’ll bet that it will be greeted with the same wild-eyed surprise and enthusiasm that greeted the rise of the PC in the early 1980s and the World Wide Web in the mid-1990s. Oh, and don’t look for these robots to be the multitasking intelligent machines of science fiction. More likely, they’ll be like the Roomba, more modest devices that do one or two tasks well or are simply cute and cuddly objects of affection. One indicator: Roomba owners today can even buy costumes for their robots!

Rule 4: Hold Strong Opinions Weakly

One of the biggest mistakes a forecaster—or a decision maker—can make is to overrely on one piece of seemingly strong information because it happens to reinforce the conclusion he or she has already reached. This lesson was tragically underscored when nine U.S. destroyers ran aground on the shores of central California on the fog-shrouded evening of September 8, 1923.

The lost ships were part of DesRon 11, a 14-ship squadron steaming from San Francisco to San Diego. Misled largely by overreliance on the commander’s dead-reckoning navigation, the squadron undershot the turn into the Santa Barbara Channel and instead ended up on the rocks at Point Pedernales, several miles to the northwest.

The squadron had navigated by dead reckoning for most of the trip, but as the ships approached the channel, the squadron’s commander obtained bearings from a radio direction station at Point Arguello. The bearing placed his ship, the Delphy, north of its dead-reckoning position. Convinced that his dead reckoning was accurate, the commander reinterpreted the bearing data in a way that confirmed his erroneous position and ordered a sharp course change toward the rapidly approaching coast. Nine ships followed the disastrous course.

Meanwhile, the deck officers on the Kennedy, the 11th boat in the formation, had concluded from their dead reckoning that they in fact were farther north and closer to shore than the position given by the Delphy. The skipper was skeptical, but the doubt the deck officers raised was sufficient for him to hedge his bets; an hour before the fateful turn he ordered a course change that placed his ship several hundred yards to the west of the ships in front of them, allowing the Kennedy and the three trailing destroyers to avert disaster.

The essential difference between the two skippers’ responses was that the Delphy’s skipper ignored evidence that invalidated his dead-reckoning information and narrowed his cone of uncertainty at the very moment when the data was screaming out to broaden it. In contrast, the Kennedy’s skipper listened to the multiple sources of conflicting weak information and concluded that his ship’s position was much less certain than assumed. He hedged his bets and, therefore, saved his ship.

In forecasting, as in navigation, lots of interlocking weak information is vastly more trustworthy than a point or two of strong information. The problem is that traditional research habits are based on collecting strong information. And once researchers have gone through the long process of developing a beautiful hypothesis, they have a tendency to ignore any evidence that contradicts their conclusion. This inevitable resistance to contradictory information is responsible in no small part for the nonlinear process of paradigm shifts identified by Thomas Kuhn in his classic The Structure of Scientific Revolutions. Once a theory gains wide acceptance, there follows a long stable period in which the theory remains accepted wisdom. All the while, however, contradictory evidence is quietly building that eventually results in a sudden shift.

Good forecasting is the reverse: It is a process of strong opinions, weakly held. If you must forecast, then forecast often—and be the first one to prove yourself wrong. The way to do this is to form a forecast as quickly as possible and then set out to discredit it with new data. Let’s say you are looking at the future cost of oil and its impact on the economy. Early on, you conclude that above a certain price point, say $80 a barrel, U.S. consumers will respond the way they did during the Carter administration, by putting on cardigans and conserving energy. Your next step is to try to find out why this might not happen. (So far it hasn’t—perhaps because Americans are wealthier today, and, as evidenced by the past decade’s strong SUV sales, they may not care deeply enough to change their habits on the basis of cost alone until the oil price is much higher.) By formulating a sequence of failed forecasts as rapidly as possible, you can steadily refine the cone of uncertainty to a point where you can comfortably base a strategic response on the forecast contained within its boundaries. Having strong opinions gives you the capacity to reach conclusions quickly, but holding them weakly allows you to discard them the moment you encounter conflicting evidence.

Rule 5: Look Back Twice as Far as You Look Forward

Marshall McLuhan once observed that too often people steer their way into the future while staring into the rearview mirror because the past is so much more comforting than the present. McLuhan was right, but used properly, our historical rearview mirror is an extraordinarily powerful forecasting tool. The texture of past events can be used to connect the dots of present indicators and thus reliably map the future’s trajectory—provided one looks back far enough.

Consider the uncertainty generated by the post-bubble swirl of the Web, as incumbents like Google and Yahoo, emergent players, and declining traditional TV and print media players jockey for position. It all seems to defy categorization, much less prediction, until one looks back five decades to the emergence in the early 1950s of TV and the subsequent mass-media order it helped catalyze. The present moment has eerie parallels to that era, and inspection of those similarities quickly brings today’s landscape into sharp focus: We are in a moment when the old mass-media order is being replaced by a new personal-media order, and it’s not just the traditional media players that are struggling to adjust. The cutting-edge players of the information revolution, from Microsoft to Google, are pedaling every bit as hard.

The problem with history is that our love of certainty and continuity often causes us to draw the wrong conclusions. The recent past is rarely a reliable indicator of the future—if it were, one could successfully predict the next 12 months of the Dow or Nasdaq by laying a ruler along the past 12 months and extending the line forward. But the Dow doesn’t behave that way, and neither does any other trend. You must look for the turns, not the straightaways, and thus you must peer far enough into the past to identify patterns. It’s been written that “history doesn’t repeat itself, but sometimes it rhymes.” The effective forecaster looks to history to find the rhymes, not the identical events.

One must look for the turns, not the straightaways, and thus one must peer far enough into the past to identify patterns.

So when you look back for parallels, always look back at least twice as far as you are looking forward. Search for similar patterns, keeping in mind that history—especially recent history—rarely repeats itself directly. And don’t be afraid to keep looking further back if the double interval is not enough to trigger your forecaster’s informed intuition.

The hardest part of looking back is to know when history doesn’t fit. The temptation is to use history (as the old analogy goes) the way a drunk uses a lamppost, for support rather than illumination. That’s the single worst mistake a forecaster can make, and examples, unfortunately, abound. Jerry Levin, for instance, sold Time Warner to AOL in the mistaken belief that he could use mergers and acquisitions to shoulder his company into digital media the way he did so successfully with cable and movies. He ended up closing the deal just when AOL’s decade-old model was being wiped out by new challengers with models allowing them to offer e-mail free of charge. Another case in point: A dark joke at the Pentagon is that the U.S. military is always fighting the last war, and indeed it is evident that in the case of the Iraq conflict, planners in certain areas simultaneously assumed that Iraq II would unfold like Iraq I and dismissed Vietnam as a source of insight because the U.S. had “lost that war.”

Rule 6: Know When Not to Make a Forecast

It is a peculiar human quality that we are at once fearful of—and fascinated by—change. It is embedded in our social vocabulary, as we often greet a friend with the simple salutation, “What’s new?” Yet it is a liability for forecasters to have too strong a proclivity to see change, for the simple fact is that even in periods of dramatic, rapid transformation, there are vastly more elements that do not change than new things that emerge.

Even in periods of dramatic, rapid transformation, there are vastly more elements that do not change than new things that emerge.

Consider again that whirling vortex of the 1990s, the dot-com bubble. Plenty new was happening, but underlying the revolution were deep, unchanging consumer desires and ultimately, to the sorrow of many a start-up, unchanging laws of economics. By focusing on the novelties, many missed the fact that consumers were using their new broadband links to buy very traditional items like books and engage in old human activities like gossip, entertainment, and pornography. And though the future-lookers pronounced it to be a time when the old rules no longer applied, the old economic imperatives applied with a vengeance and the dot-com bubble burst just like every other bubble before it. Anyone who had taken the time to examine the history of economic bubbles would have seen it coming.

Against this backdrop, it is important to note that there are moments when forecasting is comparatively easy—and other moments when it is impossible. The cone of uncertainty is not static; it expands and contracts as the present rolls into the future and certain possibilities come to pass while others are closed off. There are thus moments of unprecedented uncertainty when the cone broadens to a point at which the wise forecaster will demur and refrain from making a forecast at all. But even in such a moment, one can take comfort in the knowledge that things will soon settle down, and with the careful exercise of intuition, it will once again be possible to make a good forecast.

Consider the events surrounding the fall of the Berlin Wall. In January 1989, the East German leader, Erich Honecker, declared that the wall would stand for “a hundred more years,” and indeed Western governments built all their plans around this assumption. The signs of internal collapse are obvious in hindsight, but at the time, the world seemed locked in a bipolar superpower order that despite its nuclear fearfulness was remarkably stable. The cone of uncertainty, therefore, was relatively narrow, and within its boundaries there were a number of easily imaginable outcomes, including the horror of mutual destruction. Uncertainties popped up only where the two superpowers’ spheres of influence touched and overlapped. But even here, there was a hierarchy of uncertainty: When change eventually came, it would likely unfold first in South Asia or restive Poland rather than in Berlin, safely encircled by its wall.

But the Berlin Wall came crashing down in the fall of 1989, and with it crumbled the certainty of a forecast rooted in the assumption of a world dominated by two superpowers. A comfortably narrow cone dilated to 180 degrees, and at that moment the wise forecaster would have refrained from jumping to conclusions and instead would have quietly looked for indicators of what would emerge from the geopolitical rubble—both overlooked indicators leading up to the wall’s collapse and new ones emerging from its geopolitical detritus.

Indeed, the new order revealed itself within 12 months, and the indicator was Iraq’s invasion of Kuwait on August 2, 1990. Before the collapse of the USSR, such an action would have triggered a Cuban Missile Crisis–like conflict between the two superpowers, but without a strong Soviet Union either to restrain Saddam or saber-rattle back, the outcome was very different. And with it, the new geopolitical order was obvious: The cone of uncertainty had narrowed to encompass a world where the myriad players once arrayed in the orderly force field of one superpower or another now were all going in their own directions. All the uncertainty shifted to center on whether the single surviving superpower could remain one at all. Iraq II of course has provided the answer to that question: A unipolar superpower order is not possible. As others have observed, we live in a world where the sole remaining superpower is too powerful to ignore but too weak to make a difference.

Bottom line? Be skeptical about apparent changes, and avoid making an immediate forecast—or at least don’t take any one forecast too seriously. The incoming future will wash up plenty more indicators on your beach, sooner than you think.

• • •

Professional forecasters are developing ever more complex and subtle tools for peering ahead—futures markets, online expert aggregations, sophisticated computer-based simulations, and even horizon-scanning software that crawls the Web looking for surprises. That is why it is essential for executives to become sophisticated and participative consumers of forecasts. That doesn’t mean you must learn nonlinear algebra or become a forecasting expert in your own right. At the end of the day, forecasting is nothing more (nor less) than the systematic and disciplined application of common sense. It is the exercise of your own common sense that will allow you to assess the quality of the forecasts given to you—and to properly identify the opportunities and risks they present. But don’t stop there. The best way to make sense of what lies ahead is to forecast for yourself.

A version of this article appeared in the July–August 2007 issue of Harvard Business Review.
