Friday

Artificial Intelligence - Economics and Job Market Impact

Robin Hanson, a professor at George Mason University (who also blogs at Overcoming Bias), is one of the few economists who has given serious thought to the potential economic implications of artificial intelligence. In Dr. Hanson’s 1998 paper, “Economic Growth Given Machine Intelligence,” he suggests several variations on a growth model which assumes that machines achieve sufficient intelligence to become complete substitutes for, rather than complements to, human labor. Dr. Hanson’s conclusions are very optimistic, and to me, quite counterintuitive. His models “suggest that wholesale use of machine intelligence could increase economic growth rates by an order of magnitude or more.” At the same time, however, he notes the obvious reality that as machines become affordable, and very likely more capable, substitutes for human workers, “wages might well fall below human subsistence levels.”

My immediate reaction to this is that economic growth at any level—let alone of an order of magnitude beyond what we are accustomed to—is fundamentally incompatible with wages that are falling dramatically for the vast majority of workers. We might, perhaps, have vigorous economic growth if the falling wages applied to only a minority of human workers, but it is very difficult for me to conceive of a way in which such growth would be compatible with wages falling across the board—or even for the bulk of workers. The reason is simple: workers are also consumers (and support other consumers). If wages fall dramatically, then consumption must likewise fall because the majority of personal consumption is supported by wage income.

Additionally, I would make the point that as artificial intelligence overwhelmed the labor market, the psychological impact on consumers would almost certainly amplify the fall in consumption. As the number of viable consumers in the market rapidly diminished, the mass market business models of most industries would be numerically undermined: there simply would not be enough willing and able buyers to support the high volume production of goods and services that characterizes most of the major industries that make up today’s economy. The result would not be extreme economic growth, but instead falling revenues and, quite probably, widespread business failures. It seems to me that the current economic situation offers a fair amount of support for my position.

In his paper, it appears to me that Dr. Hanson makes two separate assumptions to get around this basic problem of a reasonable balance between production and consumption. In his initial overview of his growth models, Dr. Hanson writes: “We assume that the product of the economy can be either consumed or used to produce more of any kind of capital (i.e., human, hardware, software, other).” I read this to mean that Hanson is assuming that private sector investment might “pick up the slack” left by diminished consumption. This strikes me as an unsupportable assumption for the simple reason that business investment is not independent of consumption—or, at least, current investment is clearly a function of anticipated future consumer spending.

Businesses invest in order to better position themselves to reap the fruits of future consumption. As a business owner myself, I can really think of no other good reason for a business to make substantial investments in “human, hardware, software, or other” capital. In a world in which most workers’ jobs are being automated away and will never return, there would be very little reason to anticipate that future consumer spending would be anything but anemic. Therefore, there would be no reason for the private sector to invest. To me, it seems intuitively obvious that overall private sector investment would not increase, but instead would fall in line with plummeting consumption.

Dr. Hanson does, however, offer a second assumption that might help get around this problem:

Surviving the Economics of Artificial Intelligence: Everyone a Capitalist

Dr. Hanson suggests that his results “may be compatible with a rapidly rising per-capita income for humans, if humans retain a constant fraction of capital, perhaps including the wages of machine intelligences, either directly via ownership or indirectly via debt.” In other words, he seems to be saying that if consumers have an ownership interest in the economy of the future, then the resulting investment income will be sufficient to make up for the precipitous decline in wages. Presumably this would allow the population to continue consuming. Dr. Hanson fleshes out this view in another article on “Singularity Economics” that was published in IEEE Spectrum in June, 2008:

…human labor would no longer earn most income. Owners of real estate or of businesses that build, maintain or supply machines would see their wealth grow at a fabulous rate—about as fast as the economy grows. Interest rates would be similarly great. Any small part of this wealth would allow humans to live comfortably somewhere…

In other words, everyone (or at least most people) will have a piece of the action, and the returns on that ownership will be so fantastic that almost everyone will have a reasonable discretionary income—with which they can then go out and consume.

I’m going to leave aside the obvious problems with the distribution of wealth and income, as well as with any hope of social mobility, that this scenario implies, and instead focus on a more basic question: would asset values really increase at the extraordinary rate that Hanson suggests? Would they increase at all?

Dr. Hanson seems to be assuming that the stock market (and other productive assets) would increase dramatically in value because investors would recognize that businesses now have a fantastic new technology (intelligent machines) which will enable extraordinarily efficient production. The problem I see with this is that, according to modern financial theory, asset values are not determined by investors’ perceptions about technology. Asset values are defined by investors’ expectations for the future cash flows that will be associated with the asset in question. It seems clear to me that, in the midst of across the board job automation and plunging consumer demand, those future cash flows would be looking pretty minimal.
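The valuation logic I'm appealing to here can be sketched in a few lines. The cash flows, discount rate, and decline rate below are invented purely for illustration; the point is only that a lower cash-flow forecast mechanically produces a lower asset value.

```python
def present_value(cash_flows, discount_rate):
    """Discount a list of annual cash flows back to today's dollars."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Ten years of a steady $100/year expected cash flow, discounted at 8%.
healthy = present_value([100.0] * 10, 0.08)

# The same asset, but investors now expect cash flows to shrink 10% a
# year as wage income (and therefore consumer demand) evaporates.
shrinking = present_value([100.0 * 0.9 ** t for t in range(10)], 0.08)

print(f"steady demand:     {healthy:.2f}")
print(f"collapsing demand: {shrinking:.2f}")
```

Notice that no "technology" variable appears anywhere in the calculation: however impressive the machines, the valuation falls as soon as the expected cash flows do.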

In order for asset values to increase, investors would have to reason that, because asset values would increase dramatically, nearly everyone would have access to an investment income which could then be used to consume—and thereby create the future cash flows which would justify the current value of the asset.

That strikes me as both circular and unlikely. Dr. Hanson seems to be assuming a perpetual asset bubble that somehow gets going even though it is not even remotely initially supported by fundamentals. In fact, the initial fundamentals would point—quite dramatically—in exactly the opposite direction.

Where would these courageous initial investors come from? During the height of the financial crisis last year, I happened to see a report on CNBC which noted that extremely wealthy people were buying gold and having it shipped directly to their estates. They wanted their gold in their homes, behind their gates and walls, buried in their underground vaults. Are those perhaps the investors that Dr. Hanson is expecting to step up to the plate and begin driving up asset values when intelligent machines arrive and destroy consumer demand?

It seems to me that Professor Hanson’s views are really quite unsupportable. Nonetheless, it is of course possible that I have made an error somewhere or I have misunderstood Dr. Hanson’s arguments. I look forward to Professor Hanson’s response to my thoughts.

Update: Dr. Hanson has responded on his blog. My response to his response is here.

Unemployment from Robotics, Artificial Intelligence and Computers - Will Robots and Machines Steal Our Jobs?

Could Advancing Job Automation Technology Cause Structural Unemployment?

The unemployment situation is looking increasingly dismal. Is it possible that there’s something going on that no one wants to acknowledge?

There can be little doubt that computers, robotic technologies and other forms of job automation have been getting far more capable and that as this trend continues, more workers are certain to be displaced in the relatively near future. Most economists dismiss any concern that this might lead to long-term structural unemployment. At the risk of being labeled a “neo-Luddite,” I’d like to explore this issue a little further.

I think I can make a fairly strong argument that a very large percentage of jobs are, on some level, essentially routine and repetitive in nature. In other words, the job can be broken down into a discrete set of tasks that tend to get repeated on a regular basis. It seems likely that, as both hardware and software continue to advance, a large fraction of these job types are ultimately going to be susceptible to machine or software automation.

I’m not talking about far-fetched, science-fiction-level technology here: this is really a simple extrapolation of the expert systems and specialized algorithms that can currently land jet airplanes, trade autonomously on Wall Street, or beat nearly any human being at a game of chess. As technology progresses, I think there is little doubt that these systems will begin to match or exceed the capability of human workers in many routine job categories–and this includes a lot of workers with college degrees or other significant training. Many workers will also be increasingly threatened by the continuing trend toward self-service technologies that push tasks onto consumers.

One of the most extreme historical examples of technologically induced job losses is, of course, the mechanization of agriculture. In the mid-1800s, well over half of workers in the U.S. were employed in agriculture. Today, the number is around 2-3%. Advancing technology irreversibly eliminated millions of jobs.

Obviously, when agriculture mechanized, we did not end up with long-term structural unemployment. Workers were absorbed by other industries, and average wages and overall prosperity increased dramatically. The historical experience with agriculture is, in fact, an excellent illustration of the so-called “Luddite fallacy.” This is the idea–and I think it is generally accepted by economists–that technological progress will never lead to massive, long-term unemployment.

The reasoning behind the Luddite fallacy goes roughly like this: As labor-saving technologies improve, some workers lose their jobs in the short run, but production also becomes more efficient. That leads to lower prices for the goods and services produced, and that, in turn, leaves consumers with more money to spend on other things. When they do so, demand increases across nearly all industries–and that means more jobs. That seems to be exactly what happened with agriculture: food prices fell as efficiency increased, and then consumers went out and spent their extra money elsewhere, driving increased employment in the manufacturing and service sectors.

The question we have to ask is whether or not that same scenario is likely to play out again. The problem is that this time we are not talking about a single industry being automated: these technologies are going to penetrate across the board. When agriculture mechanized, there were clearly other labor intensive sectors capable of absorbing the workers. There’s little evidence to suggest that’s going to be the case this time around.

It seems to me that, as automation penetrates nearly everywhere, there must come a “tipping point,” beyond which the overall economy is simply not labor intensive enough to continue absorbing workers who lose their jobs due to automation (or globalization). Beyond this point, businesses will be able to ramp up production primarily by employing machines and software–and structural unemployment then becomes inevitable.

If we reach that point, then I think we also have a serious problem with consumer demand. If automation is relentless, then the basic mechanism that gets purchasing power into the hands of consumers begins to break down. As a thought experiment, imagine a fully automated economy. Virtually no one would have a job (or an income); machines would do everything. So where would consumption come from? If we’re still considering a market (rather than a planned) economy, why would production continue if there weren’t any viable consumers to purchase the output? Long before we reached that extreme point of full automation, it seems pretty clear that mass-market business models would become unsustainable.
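The thought experiment above can be made concrete with a deliberately crude toy model. Every number here is invented for illustration (the automation rate, the starting workforce, the assumption that consumption equals wage income); it simply traces the feedback I am describing, nothing more.

```python
def simulate(periods=10, automation_rate=0.15, wage=1.0, workers=100.0):
    """Track consumption as automation removes a share of jobs each period."""
    history = []
    for _ in range(periods):
        income = workers * wage            # total wage income this period
        consumption = income               # workers are also the consumers
        history.append(consumption)
        workers *= (1 - automation_rate)   # automation eliminates jobs
    return history

demand = simulate()
# Each round of automation removes the purchasing power that would have
# bought the extra output, so demand shrinks geometrically.
```

In this toy economy, production capacity is untouched while the customer base withers: exactly the mismatch that would undermine mass-market business models well before full automation was reached.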

One of the things that concerns me the most about this scenario is the potential influence of consumer psychology. If at some point in the future consumers look out the window and see a landscape where jobs are relentlessly getting automated away, and if it appears that getting additional education or training provides little protection, there’s likely to be a significant negative impact on consumer sentiment and discretionary spending. If we someday get into a reinforcing cycle driven by fear of automation, a very dark scenario could ensue. It’s difficult to see how traditional policies like stimulus spending or tax cuts would be effective because they wouldn’t address consumers’ concerns about long-term income continuity.

Most economists will likely object to my arguments here as speculative and lacking in objective data. I think that if you look at issues like stagnating or declining wages for average workers, growing income inequality, increasing productivity, and consumption supported by debt rather than income, you can certainly find evidence that generally suggests we might be approaching that “tipping point” where structural unemployment is going to become a problem. However, it seems unlikely that an econometric analysis of past data is going to offer clear support for this theory–and if it ever does, it will be very late in the game.

I wonder about the wisdom of the extreme emphasis on quantitative data analysis that seems to characterize economics. Should we really steer the ship exclusively by focusing our telescope on the wake? There might be an iceberg ahead.

________________________________________
Martin Ford is the author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future.

Tuesday

Future Computer Technology

What can we expect in terms of future computer technology?

Computers have clearly been getting a lot better--faster, lighter, more portable--for a long time. In fact, they have been getting better at an accelerating rate.

In 1965, Gordon Moore, who went on to co-found Intel, made the observation now known as "Moore's Law." It says that the number of transistors on a computer chip will roughly double every two years. Moore's Law has held true for over forty years. Many people now use Moore's Law as a general rule of thumb to express the acceleration of computer technology in general; in other words, the computational capability of computers has been approximately doubling every two years.

That is a fantastic rate of increase: an exponential or geometric progression. To get an idea of what that might mean for future computer technology, let's imagine you start with a penny (one cent) in 1975. That's about when the first personal computers began to appear (the Apple II followed in 1977).

So, in 1975 you have 1 cent. Two years later in 1977, you have 2 cents. 1979: 4 cents, and so on.

Here's how it would go:

32 cents in 1985.

$3.60 in 1992.

$82 in 2001.

and $1,300 in 2009.

So we have $1,300 after starting with only a penny, and that demonstrates the difference between a machine like the Apple II and the most advanced computers available today.

Future Computer Technology - A Look Ahead

What's more interesting is if we take a look into the future:

$10,500 in 2015.

$84,000 in 2021.

$336,000 in 2025.

$2.6 million in 2031.

So, if Moore's Law continues to hold, we are going to see a fantastic increase in the power of computer technology by 2031. It's possible that the acceleration will slow down before then because, at some point, chip fabrication technologies will hit a fundamental limit. However, there may well be new technologies like quantum or optical computing by then, or the focus may simply shift to parallel processing so that thousands of cheap processors are linked together.
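The doubling arithmetic behind these figures is easy to reproduce. The sketch below assumes a strict two-year doubling starting from one cent in 1975, which is the rule-of-thumb version of Moore's Law used in this post; real chip progress is, of course, lumpier than this.

```python
def pennies(year, base_year=1975, doubling_years=2):
    """Value in cents, doubling every `doubling_years` years since base_year."""
    return 2 ** ((year - base_year) / doubling_years)

# Print the value of the 1975 penny for selected past and future years.
for year in (1985, 1992, 2001, 2009, 2015, 2021, 2025, 2031):
    print(year, f"${pennies(year) / 100:,.2f}")
```

Running this gives figures very close to the rounded ones above: about $3.62 for 1992, $81.92 for 2001, $1,310 for 2009, and roughly $2.7 million for 2031.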

Of course, there is also a problem with creating software to take advantage of this power. Historically, software has advanced much more slowly than hardware, so it's really software that is the bottleneck.

Software companies like Microsoft will have to come up with compelling applications to make use of all that power. I think there is a good chance that artificial intelligence is going to be one of the biggest uses of the computer technology of the future.

If that's the case, job automation technology is likely to leap forward dramatically, and computers will be capable of doing many of the routine jobs in the economy--even those held by people with college degrees. That is going to create some serious issues for the job market and for the economy as a whole. That's the main focus of my new book, The Lights in the Tunnel.

Friday

Best Jobs for the Future

Given the state of the current job market, a lot of people are wondering what career path will make the most sense in the future. The reality is that two primary forces are going to impact the future job market: automation technology and globalization.

Most people tend to focus on globalization as the primary threat. Recent items in the press have noted that people are specifically looking for jobs that can't be outsourced, while at the same time manufacturing jobs continue to migrate to low wage countries.

While globalization and outsourcing are getting most of the attention, the reality is that job automation probably represents a bigger long term threat to most workers than globalization. In addition, many people will be surprised to learn that the threat from job automation is not going to be limited to low skilled/low wage workers. While it is true that robots and machines may someday take over many routine jobs in supermarkets, warehouses and fast food restaurants, many people who sit at desks using computers may be impacted even sooner.

The Best Future Jobs will be Jobs that are Protected from both Automation and Globalization

The New York Times recently had a story entitled Beware, Humans. The Era of Automation Software Has Begun. Sophisticated software may soon be capable of performing many service jobs. In the future, we can expect that many of the jobs that are now being offshored will instead be handled by automation software.

As a result, choosing a career in the future is likely to be increasingly complex. For example, the assumption that getting a college degree will always lead to a better job may not hold true, because many college graduates end up in "knowledge worker" jobs--sitting at desks using computers, but still very often performing relatively routine and repetitive tasks (especially at the entry level). These types of jobs are likely to be highly vulnerable to both offshoring and automation at some point in the relatively near future.

Technological change is going to be abrupt and unpredictable, so choosing the best job for the future may be a challenge, and it may be necessary to be flexible. The Lights in the Tunnel includes a detailed look at the trends in automation and offshoring and gives some insight into which types of jobs are likely to be relatively safe, and which ones may be among the first automated.

Disadvantages of Globalization

Globalization, of course, has both disadvantages and advantages. For many average workers in developed countries, the disadvantages seem to loom very large. There can be no doubt that the migration of manufacturing to low wage countries and the direct outsourcing of service jobs are factors in stagnating and even declining wages in the West.

While most economists believe that the advantages gained from globalization outweigh the disadvantages, they base this largely on a historical analysis of trade in the past. One question that has to be asked is whether advancing technology is changing the rules so that the impacts from globalization in the future may turn out to be an entirely new ballgame.

Advancing Technology may Amplify the Disadvantages of Globalization

The disadvantages of globalization may, in fact, be exacerbated by advancing technology. An obvious example of this is service offshoring, where what would previously have been a local job can be performed remotely by an offshore worker. There is really no historical precedent for this because offshoring the job incurs no transportation costs and no delay in rendering the service. As a result, it's very possible that economists are underestimating the impact of outsourcing on the job market.

Another disadvantage of today's globalization is that it creates enormous trade imbalances that may well prove unsustainable. The reality is that we have globalized labor and capital, but we have not globalized consumption. Most of the workers in low wage countries who benefit from globalization do not earn enough to purchase the products they are producing. As a result, consumers in the U.S. and other developed countries are expected to continue driving the factories in China and other low wage countries, even as the jobs they rely on for income vaporize. This may well prove unsustainable, and the dramatically falling exports now being seen by businesses in China may be just the leading edge of the coming crisis.

However, it is probably safe to say that, in the coming decades, accelerating technology is likely to have a bigger impact on the job market and on the welfare of average workers than globalization--even though globalization will continue to be very significant.