There are two prevailing views about how technological change over the last 50 years contributed to our current economic problems. The first is that progress was too slow, leading to economic stagnation (1,2). The second is that technology advanced too quickly, making it difficult for the average worker to adapt (3). In a debate at the 2011 Techonomy conference, Tyler Cowen presented the argument for stagnation against Erik Brynjolfsson’s case for rapid change.
This debate highlights our striking lack of understanding of the process of technological change and its impact on society. Even for the relatively simple task of interpreting history, as opposed to projecting into the future, Cowen and Brynjolfsson arrive at completely opposite diagnoses. They cannot agree on the proper measure of technological progress (labor productivity or total factor productivity?), and after a cursory consideration of some economic data, the discourse devolves into reasoning through anecdotes. Cowen compares the new inventions his grandmother witnessed in her first 50 years of life (electricity, flush toilets, automobiles, etc.) to what he sees as a relative paucity of new inventions in his own first 49 years (computers and the Internet). Brynjolfsson counters with recent advances in artificial intelligence, such as Jeopardy-playing computers and self-driving cars. What’s missing from the discussion is a framework for classifying technological change and gauging its importance. How important is the Internet compared to electricity? Is a self-driving car worth more than a flush toilet?
The economic debate continues in a flurry of articles and talks (4,5,6). But this lack of understanding causes problems that are even more far-reaching. The federal government spends $140 billion per year funding research and development, yet decisions on how to allocate that money are made without a guiding model. John Marburger, director of the Office of Science and Technology Policy under George W. Bush, described the situation this way (7):
"In the face of grave national challenges we were relying on anecdotes and intuitions and data disconnected from all but the most primitive interpretive frameworks."
Even the patent system stands on shaky theoretical ground. The intellectual property law scholar Mark Lemley writes (8):
"The theory of patent law is based on the idea that a lone genius can solve problems that stump the experts, and that the lone genius will do so only if properly incented. But the canonical story of the lone genius inventor is largely a myth..."
In fact, almost all significant inventions have been discovered nearly simultaneously by two or more groups working independently (9). Furthermore, inventions are almost never developed out of whole cloth by a single group; they build on a series of incremental advances made by multiple investigators over the years. The leading alternative theories of patent law are also not supported by real-world evidence (10).
Taken together, these observations make clear that we need a framework for understanding and modeling technological change. Luckily, there is a rich body of literature from the history and sociology of technology to build on. More recently, researchers in economics (11,12), climate change modeling (13), complexity science (14), and science policy (15) have begun combining insights from historical case studies with modern analytical tools from economics, evolutionary theory, and complexity science into a theory of technology. The research is fascinating, but it remains scattered across many academic disciplines, and its results have not diffused well into the popular consciousness. In this series of blog posts, I will survey the state of the theory of technological change across these various fields of study.