(I admire DK's writings.) In an excerpt from his new book, Thinking, Fast and Slow, the Nobel laureate recalls how an inwardly focused forecasting approach once led him astray, and why an external perspective can help executives do better.
November 2011 • Daniel Kahneman
In the 1970s, I convinced some officials in the Israeli Ministry of Education of the need for a curriculum to teach judgment and decision making in high schools. The team that I assembled to design the curriculum and write a textbook for it included several experienced teachers, some of my psychology students, and Seymour Fox, then dean of the Hebrew University’s School of Education and an expert in curriculum development.
After meeting every Friday afternoon for about a year, we had constructed a detailed outline of the syllabus, written a couple of chapters, and run a few sample lessons. We all felt we had made good progress. Then, as we were discussing procedures for estimating uncertain quantities, an exercise occurred to me. I asked everyone to write down their estimate of how long it would take us to submit a finished draft of the textbook to the Ministry of Education. I was following a procedure that we already planned to incorporate into our curriculum: the proper way to elicit information from a group is not by starting with a public discussion, but by confidentially collecting each person’s judgment. I collected the estimates and jotted the results on the blackboard. They were narrowly centered around two years: the low end was one and a half, the high end two and a half years.
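A minimal sketch of that elicitation step in Python; the individual numbers are hypothetical, chosen only to match the narrow range the text describes:

```python
from statistics import median

# Estimates (in years) collected privately, before any group discussion,
# so that no early public number anchors the rest of the group.
# The specific values below are hypothetical, matching the text's range.
estimates = [1.5, 1.8, 2.0, 2.0, 2.2, 2.5]

print(f"low: {min(estimates)} years, high: {max(estimates)} years")
print(f"median estimate: {median(estimates)} years")
```

The design choice is the order: judgments are written down in confidence first and only then compared, so the first number spoken aloud cannot pull everyone else's toward it.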
A shocking disconnect
Then I turned to Seymour, our curriculum expert, and asked whether he could think of other teams similar to ours that had developed a curriculum from scratch. Seymour said he could think of quite a few, and it turned out that he was familiar with the details of several. I asked him to think of these teams when they were at the same point in the process as we were. How much longer did it take them to finish their textbook projects?
He fell silent. When he finally spoke, it seemed to me that he was blushing, embarrassed by his own answer: “You know, I never realized this before, but in fact not all the teams at a stage comparable to ours ever did complete their task. A substantial fraction of the teams ended up failing to finish the job.”
This was worrisome; we had never considered the possibility that we might fail. My anxiety rising, I asked how large he estimated that fraction was. “About 40 percent,” he said. By now, a pall of gloom was falling over the room. “Those who finished, how long did it take them?”
“I cannot think of any group that finished in less than seven years,” Seymour said, “nor any that took more than ten.” I grasped at a straw: “When you compare our skills and resources to those of the other groups, how good are we? How would you rank us in comparison with these teams?” Seymour did not hesitate long this time. “We’re below average,” he said, “but not by much.”
This came as a complete surprise to all of us—including Seymour, whose prior estimate had been well within the optimistic consensus of the group. Until I prompted him, there was no connection in his mind between his knowledge of the history of other teams and his forecast of our future. We should have quit that day. None of us was willing to invest six more years of work in a project with a 40 percent chance of failure. Yet although we must have sensed that persevering was not reasonable, the warning did not provide an immediately compelling reason to quit. After a few minutes of desultory debate, we gathered ourselves and carried on as if nothing had happened. Facing a choice, we gave up rationality rather than the enterprise.
The book was completed eight years later. By that time, I was no longer living in Israel and had long since ceased to be part of the team, which finished the task after many unpredictable vicissitudes. The initial enthusiasm for the idea in the Ministry of Education had waned, and the textbook was never used.
Why the inside view didn’t work
This embarrassing episode remains one of the most instructive experiences of my professional life. I had stumbled onto a distinction between two profoundly different approaches to forecasting, which Amos Tversky and I later labeled the inside view and the outside view.
The inside view is the one that all of us, including Seymour, spontaneously adopted to assess the future of our project. We focused on our specific circumstances and searched for evidence in our own experiences. We had a sketchy plan: we knew how many chapters we were going to write, and we had an idea of how long it had taken us to write the two that we had already done. The more cautious among us probably added a few months as a margin of error.
But extrapolating was a mistake. We were forecasting based on the information in front of us, but the chapters we wrote first were easier than others, and our commitment to the project was probably at its peak. The main problem was that we failed to allow for what Donald Rumsfeld famously called “unknown unknowns.” At the time, there was no way for us to foresee the succession of events that would cause the project to drag on for so long: divorces, illnesses, crises of coordination with bureaucracies. These unanticipated events not only slow the writing process but also produce long periods during which little or no progress is made at all. Of course, the same must have been true for the other teams that Seymour knew about. Like us, the members of those teams did not know the odds they were facing. There are many ways for any plan to fail, and although most of them are too improbable to be anticipated, the likelihood that something will go wrong in a big project is high.
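The arithmetic behind that last point is worth making explicit. A minimal sketch, assuming, purely for illustration, twenty roughly independent failure modes of five percent each:

```python
# Many individually improbable, roughly independent risks compound:
# the chance that at least one strikes grows quickly with their number.
# Both the risk count and the per-risk probability are hypothetical.
n_risks = 20
p_each = 0.05

p_none = (1 - p_each) ** n_risks   # probability that every risk is avoided
p_at_least_one = 1 - p_none        # probability that something goes wrong

print(f"P(at least one setback) = {p_at_least_one:.0%}")  # about 64%
```

Even though no single risk is worth planning around, under these illustrative numbers their union approaches two chances in three.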
How an outside view can help
The second question I asked Seymour directed his attention away from us and toward a class of similar cases. Seymour estimated the base rate of success in that reference class: 40 percent failure and seven to ten years for completion. His informal survey was surely not up to scientific standards of evidence, but it provided a reasonable basis for a baseline prediction: the prediction you make about a case if you know nothing except the category to which it belongs. This should be the anchor for further adjustments. If you are asked to guess the height of a woman and all you know is that she lives in New York City, for example, your baseline prediction is your best guess of the average height of women in the city. If you are now given case-specific information—that the woman’s son is the starting center of his high school basketball team—you will adjust your estimate. Seymour’s comparison of our team to others suggested that the forecast of our outcome was slightly worse than the baseline prediction, which was already grim.
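A minimal sketch of that anchor-and-adjust logic, using Seymour's reference-class figures; the size of the "slightly below average" penalty is a hypothetical assumption, not anything Seymour said:

```python
# Outside-view baseline from the reference class: about 40% of
# comparable teams never finished, and the finishers took 7-10 years.
p_failure = 0.40
baseline_years = (7 + 10) / 2      # midpoint, conditional on finishing

# Case-specific adjustment for being "below average, but not by much".
# The 5% penalty is a hypothetical illustration.
penalty = 0.05
adjusted_p_failure = min(1.0, p_failure * (1 + penalty))
adjusted_years = baseline_years * (1 + penalty)

print(f"baseline: {p_failure:.0%} failure risk, ~{baseline_years:.1f} years to finish")
print(f"adjusted: {adjusted_p_failure:.0%} failure risk, ~{adjusted_years:.1f} years to finish")
```

The point is the order of operations: start from the category's statistics, then nudge for what is genuinely known about the specific case, rather than building the forecast from the case alone.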
The spectacular accuracy of the outside-view forecast in our specific case was surely a fluke and should not count as evidence for the validity of the outside view. However, the argument for the outside view should be made on general grounds: if the reference class is properly chosen, the outside view will give an indication of where the ballpark is. It may suggest, as it did in our case, that the inside-view forecasts are not even close.