
Unemployment and Technology

My last post, Structural Unemployment: The Economists Just Don't Get It, generated some interesting comments. Here's an especially good one that was posted by an irate economist over at Mark Thoma's blog:
This is the second link I've seen to Mr. Ford's views on labor and technology. It needs to stop. Yes, we need to consider the interaction between the two. We do not, however, need the help of someone whose thinking is as sloppy and self-congratulatory as Ford's. Lots of work has been done on technology's influence on labor markets. Work that uses real data. Ford is essentially making the same "technology kills jobs" argument that has been around for centuries. His argument boils down to "this time it's different" and "people (economists) who don't see things exactly as I do feel threatened by my powerful view and are to be ignored."

There is a whiff of Glenn Beck in Ford's dismissal of other views.

Now, I think that saying "it needs to stop" and then comparing me to Glenn Beck is a little over the top. It seems a bit unlikely that my little blog represents an existential threat to the field of economics.

The other points, however, deserve an answer: First, am I just dredging up a tired old argument that's been around for centuries? And second, have economists in fact done extensive work on this issue---using real data---and have they arrived at a conclusion that puts all of this to rest?

It is obviously true that technology has been advancing for centuries. The fear that machines would create unemployment has indeed come up repeatedly---going back at least as far as the Luddite revolt that began in 1811. And, yes, it is true: I am arguing that "this time is different."

The reason I'm making that argument is that technology---or at least information technology---has clearly been accelerating and will continue to do so for some time to come. (* see end note)

Suppose you get in your car and drive for an hour. You start out at 5 mph and then double your speed every ten minutes. So over the six ten-minute intervals, you would be traveling at 5, 10, 20, 40, 80, and, if you and your car are up to it, 160 mph.

Now, you could say "hey, I just drove for an hour and my speed has been increasing the entire time," and that would basically be correct. But that doesn't capture the fact that you've covered an extraordinary distance in those last few minutes: more than half of the hour's total distance comes in the final ten minutes alone. And the fact that you didn't get a speeding ticket in the first 50 minutes really might not be such a good predictor of the future.
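
To put numbers on that, here's a minimal sketch (my own back-of-the-envelope arithmetic, not anything from the original discussion) that tallies the distance covered in each ten-minute interval of that drive:

```python
# Speed starts at 5 mph and doubles every 10 minutes for one hour.
speeds = [5 * 2**i for i in range(6)]          # 5, 10, 20, 40, 80, 160 mph
distances = [s * (10 / 60) for s in speeds]    # miles covered in each 10-minute interval
total = sum(distances)

for i, (s, d) in enumerate(zip(speeds, distances)):
    print(f"minutes {i*10:2d}-{(i+1)*10:2d}: {s:3d} mph, {d:5.2f} miles")

print(f"total distance: {total:.1f} miles")
print(f"share covered in the final 10 minutes: {distances[-1] / total:.0%}")
```

Run it and the total comes to 52.5 miles, with roughly half of that covered in the final ten minutes alone; that is exactly why the first fifty minutes are a poor guide to what happens next.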

Among economists and people who work in finance it seems to be almost reflexive to dismiss anyone who says "this time is different." I think that makes sense when we're dealing with things like human behavior or market psychology. If you're talking about asset bubbles, for example, then it's most likely true: things will NEVER be different. But I question whether you can apply that to a technological issue. With technology, things are ALWAYS different. Impossible things suddenly become possible all the time; that's the way technology works. And it seems to me that the question of whether machines will someday out-compete the average worker is primarily a technological, not an economic, question.

The second question is whether economists have really studied this issue at length---and by that I mean specifically the potential impact of accelerating technical progress on the worker-machine relationship. I could not find much evidence of such work. In all honesty, I did not do a comprehensive search of the literature, so it's certainly possible I missed a lot of existing research, and I invite any interested economists to point it out in the comments.

One paper I did find, and I think it is well-regarded, is the one by David H. Autor, Frank Levy and Richard J. Murnane: "The Skill Content of Recent Technological Change: An Empirical Exploration," published in The Quarterly Journal of Economics in November 2003. (PDF here). This paper analyzed the impact of computer technology on jobs over the 38 years between 1960 and 1998.

The paper points out that computers (at least from 1960-1998) were primarily geared toward performing routine and repetitive tasks. It then concludes that computer technology is most likely to substitute for those workers who perform, well, routine and repetitive tasks.

In fairness, the paper does point out (in a footnote) that work on more advanced technologies, such as neural networks, is underway. There is no discussion, however, of the fact that computing power is advancing exponentially, or of what this might imply for the future. (It does incorporate falling costs, but I could not find evidence that it gives much consideration to increasing capability. It should be clear to anyone that today's computers are BOTH cheaper and far more capable than those that existed years ago.)

Are there other papers that focus on how accelerating technology will likely alter the way that machines can be substituted for workers in the future? Perhaps, but I haven't found them.

A more general question is: Why is there not more discussion of this issue among economists? I see very little in the blogosphere and even less in academic journals. Take a look at the contents of recent issues of The Quarterly Journal of Economics. I can find nothing regarding this issue, but plenty of papers on subjects that might almost be considered "freakonomics."

The thing is that I think this is an important question. If, as I have suggested, some component of the unemployment out there is technological unemployment, and if that component will in fact worsen over time, then the implications are pretty dire. Increasing structural unemployment would clearly spawn even more cyclical unemployment as spending falls---risking a deflationary spiral.

Consider the impact on entitlements. The already disturbing projections for Medicare and Social Security must surely incorporate some assumptions regarding unemployment levels and payroll tax receipts. What if those assumptions are optimistic?

Likewise, I think economists would agree that the best way for developed countries to get their debt burdens under control is to maximize economic growth. If we got into a situation where unemployment not only remained high but actually increased over time, the impact on consumer confidence would be highly negative. Then where would GDP growth come from?

It seems to me that, from the point of view of a skeptical economist, this issue should be treated almost like the possibility of nuclear terrorism: hopefully, the probability of its actual occurrence is very low, but the consequences of such an occurrence are so dire that it has to be given some attention.

So, again, I wonder why this issue is ignored by most economists. There are a few exceptions, of course. Greg Clark at UC Davis had his article in the Washington Post. And Robin Hanson at GMU wrote a paper on the subject of machine intelligence. I don't agree with Hanson's conclusions, but clearly he understands the implications of exponential progress.

Why not more interest in this subject? Perhaps (A) conclusive research has really been done, and I've missed it; or (B) economists think this level of technology is science fiction and simply dismiss it; or (C) economists just accept what they learned in grad school and genuinely don't feel any need to do research in this area. Or maybe something like this is so far outside the mainstream as to be a "career killer" (sort of like cold fusion research).

Another issue may be the seemingly complete dominance of econometrics within the economics profession. Anything that strays from being based on rigorous analysis of hard data is likely to be regarded as speculative fluff, and that probably makes it very difficult to do work in this area. The problem is that the available data is often years or even decades old.

If any real economists drop by, please do leave your thoughts in the comments.

___________________________

* Just a brief note on the acceleration I'm talking about (which is generally expressed as "Moore's Law"). There is some debate about how long this can continue. However, I don't think we have to worry that Moore's Law is in imminent danger of falling apart: if it were, that would already be reflected in Intel's market valuation, since its whole product line would quickly become commoditized.

Here's what I wrote in The Lights in the Tunnel (Free PDF -- looks great on your iPhone) regarding the future of Moore's Law:
How confident can we be that Moore’s Law will continue to be sustainable in the coming years and decades? Evidence suggests that it is likely to hold true for the foreseeable future. At some point, current technologies will run into a fundamental limit as the transistors on computer chips are reduced in size until they approach the size of individual molecules or atoms. However, by that time, completely new technologies may be available. As this book was being written, Stanford University announced that scientists there had managed to encode the letters “S” and “U” within the interference patterns of quantum electron waves. In other words, they were able to encode digital information within particles smaller than atoms. Advances such as this may well form the foundation of future information technologies in the area of quantum computing; this will take computer engineering into the realm of individual atoms and even subatomic particles.

Even if such breakthroughs don’t arrive in time, and integrated circuit fabrication technology does eventually hit a physical limit, it seems very likely that the focus would simply shift from building faster individual processors to instead linking large numbers of inexpensive, commoditized processors together in parallel architectures. As we’ll see in the next section, this is already happening to a significant degree, but if Moore’s Law eventually runs out of steam, parallel processing may well become the primary focus for building more capable computers.

Even if the historical doubling pace of Moore’s Law does someday prove to be unsustainable, there is no reason to believe that progress would halt or even become linear in nature. If the pace fell off so that doubling took four years (or even longer) rather than the current two years, that would still be an exponential progression that would bring about staggering future gains in computing power.
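
As a rough illustration of that last point (the numbers below are my own, not from the book), here's how much computing capability would still multiply over three decades under slower doubling periods:

```python
# Capability growth factor over a horizon, given a doubling period in years.
def growth_factor(years, doubling_period):
    return 2 ** (years / doubling_period)

for period in (2, 4, 6):
    factor = growth_factor(30, period)
    print(f"doubling every {period} years -> roughly {factor:,.0f}x after 30 years")
```

Even the slowest of those scenarios leaves computers dozens of times more capable after thirty years, which is the sense in which a longer doubling period is still an exponential progression rather than a halt.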

Causes of Unemployment

Lately, there has been a fair amount of buzz in the economics blogosphere about the issue that I've been discussing extensively here: Structural Unemployment.

Several notable economists have weighed in. Paul Krugman touches on it here. Brad DeLong says this. Mark Thoma has a post in a forum focusing on structural unemployment at The Economist.

If you read through these posts, however, you won't see a lot of discussion about the case I've been making here, which is that advancing technology is the primary culprit. I've been arguing that as machines and software become more capable, they are beginning to match the capabilities of the average worker. In other words, as technology advances, a larger and larger fraction of the population will essentially become unemployable. While I think advancing information technology is the primary force driving this, globalization is certainly also playing a major role. (But keep in mind that aspects of globalization such as service offshoring---moving a job electronically to a low-wage country---are also technology-driven.)

The economists sometimes mention technology, but in general they find other "structural" issues to focus on. One that I have seen again and again is the idea that people can't move to find work because their houses are underwater (the mortgage balance exceeds the home's value). The emphasis given to this issue strikes me as almost silly. Are there any major population centers in the U.S. that have really low unemployment?

Even if people could sell their homes, would they really be motivated to load up the U-Haul and move from a city with, say, 12% unemployment to one where it is only 9%? Have the economists lost sight of the fact that 9% unemployment is still basically a disaster? The few locales I've seen with unemployment significantly lower than that are rural areas or small cities (Bismarck, ND, for example)---places that are simply incapable of absorbing huge numbers of hopeful workers. Let's get real: playing musical chairs in a generally miserable environment is not going to solve the unemployment problem.

Another thing the economists focus on is the idea of a skill mismatch. Structural unemployment, they say, occurs because workers don't have the particular skills demanded by employers. While there's little doubt that some of this is going on, again, I think this issue is given way too much emphasis. The idea that the problem would be solved if we could simply re-train everyone is just not credible. If you doubt that, ask any of the thousands of workers who have completed training programs but still can't find work.

Economists ought to realize that if a skill mismatch were really the fundamental issue, then employers would be far more willing to invest in training workers. In reality, this rarely happens, even among the most highly regarded employers. Suppose Google, for example, is looking for an engineer with very specific skills. What are the chances that Google would hire and then re-train one of the many unemployed 40+ year-old engineers with a background in a slightly different technical area? Well, basically zero.

If employers were really suffering because of a skill mismatch, they could easily help fix the problem. They don't, because they have other, far more profitable options: they can hire low-wage workers offshore, or they can invest in automation. Re-training millions of U.S. workers would be a windfall for the rapidly multiplying for-profit schools, but it won't solve the unemployment problem.

Why are economists so reluctant to seriously consider the implications of advancing technology? I think a lot of it has to do with pure denial. If the problem is a skill mismatch, then there's an easy conventional solution. If the problem is a lack of labor mobility, then that will eventually work itself out. But what if the problem is relentlessly advancing technology? What if we are getting close to a "tipping point" where autonomous technology can do the typical jobs the economy requires as well as an average worker can? Well, that is basically UNTHINKABLE. It's unthinkable because there are NO conventional solutions.

In my book, The Lights in the Tunnel, I do propose a (theoretical) solution, but I would be the first to admit that any viable solution to such a problem would have to be radical and would be politically untenable in today's environment. That's why I don't spend much time suggesting solutions here: what would be the point? (But please do read the book---it's free.) I think the first step is to get past denial and at least start to think about the problem in a serious, rational way.

The few economists who have visited this blog and commented on my previous posts have generally barricaded themselves behind economic principles that were formulated more than a century ago (see the comments on my posts about the lump of labour fallacy and comparative advantage).

Most economists seem to be unwilling to really consider this issue---perhaps because it threatens nearly all the assumptions they hold dear. I wrote about this in my first post on this blog. We'll see how long it takes for the economists to wake up to what is really happening.

Update

I've posted a followup that addresses comments and poses some questions for economists: Econometrics and Technological Unemployment -- Some Questions