The cost of computing power has decreased exponentially over the last several decades (“Moore’s law”). This trend gets a lot of attention, but its context is not as well known: exponential cost decreases occur in a range of technologies, from wind electricity to polyester fiber production. These exponential trends normally last only a couple of decades before they taper off, and the rate of improvement is usually slower than it has been with computers (see this data set).
Moore’s original observation was about the number of transistors per integrated circuit, but others (notably Ray Kurzweil) have reframed it in terms of the cost of computing power — the number of calculations per second you can buy for $1,000. This metric is useful because you can define it broadly (“inflation-adjusted cost per unit”) and then apply the same measure to technological change in aluminum production ($ per pound), computer memory ($ per kbit), or black and white televisions ($ per TV). Of course, the metric isn’t perfect, especially because it doesn’t capture changes in the quality of units over time. Compared to the Model T, a modern Toyota Corolla is faster, safer, and more fuel efficient, so cost per unit tends to underestimate the total amount of change.
The plots below show that wind electricity, polyester fiber, and black and white TV sets all went through periods of exponential decrease in their cost per unit. This is interesting not only because of the amount of cost decrease (cheaper TVs!) but also because the changes happened with such regularity, following the same simple curve for two or three decades.
The ability to predict the progress of technology would have all sorts of advantages for military and business planning, so the exponential decrease of cost curves has gotten a lot of attention over the years. In 1936 Theodore Wright observed it in airplane manufacturing, and it’s been noted in many other industries since then. But despite all the attention, there is not really a good understanding of why the cost curve decreases exponentially, or why it conforms to a regular pattern at all. Things like learning by doing and economies of scale certainly play a role, but it’s a complicated problem. All sorts of different processes go into technological improvement, from experimentation in laboratories to setting up machines in factories.
A recent paper from the Santa Fe Institute gives a new analysis, comparing different models of the cost decrease against a database of cost and production volumes for 62 different technologies. They compare the models by hindcasting — basically, you pretend that you are at some point in the past and use the model to make “predictions” of what will happen in the portion of the data set that lies in the “future”. You then compare the hindcast to what actually happened to see how well the model would have predicted the future.
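To make that concrete, here is a minimal sketch of a hindcast in Python, using the simplest functional form (a straight line in log cost versus time) and a made-up cost series. The numbers and the single cutoff year are purely illustrative; the paper’s actual procedure repeats this kind of exercise over many cutoff points and all 62 technologies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up cost series for one technology: inflation-adjusted cost per unit by year.
years = np.arange(1980, 2000)
costs = 100.0 * np.exp(-0.15 * (years - 1980)) * np.exp(rng.normal(0, 0.1, years.size))

cutoff = 1990  # pretend we are standing here and know nothing about later years

# Fit the model (log cost linear in time) on the "past" only.
past = years <= cutoff
slope, intercept = np.polyfit(years[past], np.log(costs[past]), 1)

# Hindcast: "predict" the part of the series after the cutoff.
future = years > cutoff
predicted = intercept + slope * years[future]

# Score the hindcast against what actually happened (RMS error in log cost).
errors = predicted - np.log(costs[future])
print("RMS hindcast error (log cost):", np.sqrt(np.mean(errors ** 2)))
```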
There are six models in the paper, but they are really variations on three main ideas. The difference between them is which variable each model takes to be the important one to measure unit cost against. Moore’s law postulates that costs decrease as a function of time; Goddard’s model says they decrease as a function of the scale of production (the number of units produced in one year); and Wright’s law says they decrease as a function of cumulative production (the total number of units that have ever been produced).
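In rough functional form, the three ideas look something like the sketch below. The constants and exponents are placeholders to be fitted to each technology’s data, and the exact parameterizations in the paper may differ in detail.

```python
import numpy as np

def moore(t, c0, m):
    # Moore: cost falls exponentially with time t (e.g. years since the start of the series).
    return c0 * np.exp(-m * t)

def goddard(annual_production, c0, g):
    # Goddard: cost is a power law in the current year's production volume (economies of scale).
    return c0 * annual_production ** (-g)

def wright(cumulative_production, c0, w):
    # Wright: cost is a power law in cumulative production, the classic learning curve.
    return c0 * cumulative_production ** (-w)
```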
The upshot of the paper is that all three models perform pretty well. In principle, the models could give wildly different results, but as it turns out, the annual and cumulative production levels in the database mostly increase exponentially with time, and when production grows exponentially, a power law in production is just an exponential decline in time by another name. This makes Wright and Goddard’s models pretty much equivalent to Moore’s, so there aren’t huge differences in how they fit the data. Nevertheless, it’s interesting to try to tease out which model is a little bit better than the others because, even if they do a similarly good job of predicting future technological change, they reflect different conceptions of how technological progress happens:
Moore’s law represents the idea that the progress of technology is inexorable. Costs fall every year regardless of what happens with production volumes or research and development expenditures.
For Goddard’s model, unit cost decreases are driven purely by economies of scale. Bigger vats are more efficient at making polyester fiber, so the cost goes down when you build bigger factories and make more of it. In one sense, there is no technological “progress” in this conception because the unit cost would revert to higher historical values if the number of units produced were to decrease.
With Wright’s model the cost decrease is driven by the learning that occurs in setting up and operating factories. As more units are produced, people figure out new methods to optimize the technology and the production process. In this view, cumulative production is a proxy for effort expended — the number of man-hours of experience working with the technology. The more we make, the more we learn, and knowledge accumulates without loss.
The main result of the paper is that Goddard’s model, which only accounts for current annual production, is noticeably worse than Wright’s, which takes cumulative production into account. This suggests that the cumulative production data contain information that the annual production data do not, and that learning is involved in the cost reduction, not just economies of scale.
It would be interesting to find historical data better suited to differentiating between the models. For example, the production of sailing ships declined as steam ships matured. Goddard’s model would predict that unit costs should have increased during this period, while Wright’s would predict a continued decrease.
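A toy calculation (with invented numbers) shows how the two models come apart once annual production starts to fall: Goddard’s cost climbs back up the scale curve, while Wright’s keeps creeping down because cumulative production never decreases.

```python
import numpy as np

# Invented production history: output rises, then declines as the technology is displaced.
annual = np.array([100, 200, 400, 800, 600, 300, 150, 75], dtype=float)
cumulative = np.cumsum(annual)

c0 = 1000.0
g, w = 0.3, 0.3  # made-up exponents for the two power laws

goddard_cost = c0 * annual ** (-g)      # tracks annual output back up as production shrinks
wright_cost = c0 * cumulative ** (-w)   # keeps falling, ever more slowly, as experience accumulates

for year, (gc, wc) in enumerate(zip(goddard_cost, wright_cost)):
    print(f"year {year}: Goddard {gc:7.1f}   Wright {wc:7.1f}")
```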
Another thing to reflect on is that in order to pool all this data, the paper postulates that the process producing the cost reduction is the same across different technologies, apart from differences in the parameters. Conceptually, this is quite surprising. After all, building windmills is a very different endeavor from producing computer chips. But to the extent that it’s true, it is a powerful tool for forecasting, because you can pool data from different technologies rather than just extrapolating a single data set, and then use that pooled information to calculate error estimates for your projections.
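One way to see the payoff (not necessarily the paper’s exact procedure) is to pool the hindcast errors from many technologies under a shared model and use the spread of those errors as the uncertainty band for a forecast about a new technology, which no single short data set could supply on its own. A sketch with fabricated series:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical database: log-cost series for several technologies, each roughly a
# straight line in time with its own slope plus noise (stand-ins for the real data).
def fake_series(n_years, slope, noise=0.1):
    t = np.arange(n_years)
    return t, slope * t + rng.normal(0, noise, n_years)

technologies = [fake_series(25, s) for s in (-0.05, -0.10, -0.20, -0.08, -0.15)]

# Fit the same simple model (log cost linear in time) to the first 15 years of each
# series, hindcast the last 10, and pool the errors across all technologies.
pooled_errors = []
for t, log_cost in technologies:
    cutoff = 15
    slope, intercept = np.polyfit(t[:cutoff], log_cost[:cutoff], 1)
    prediction = intercept + slope * t[cutoff:]
    pooled_errors.extend(prediction - log_cost[cutoff:])

# The spread of the pooled errors is an error bar you can attach to a projection
# for a technology you have never seen before.
print("pooled forecast error, standard deviation (log cost):", np.std(pooled_errors))
```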