Fifty years ago, a committee organized by the U.S. National Academy of Sciences concluded a study into automated language translation by saying that “There is no immediate or predictable prospect of useful machine translation.”
Forty years ago, the idea that trained doctors could ever be challenged by algorithms was similarly the stuff of sci-fi novels and movies.
Ten years ago, respected MIT and Harvard economists Frank Levy and Richard Murnane published their book The New Division of Labor, containing an optimistic chapter called “Why People Still Matter,” which patiently explained why software could never replace a human driver.
In all of these cases, the people making the forecasts weren’t idiots. They were doing what rational individuals ought to do: basing their conclusions on the available evidence, and then extrapolating from it to predict what would happen next. Yet in every one of these cases, the experts who declared some part of life automation-proof turned out to be wrong.
Today Google Translate, medical algorithms, and self-driving cars are all very much a reality. Almost every field of work—from lawyers and law enforcement officials to accountants and architects—is undergoing an “algorithmic turn” as greater parts of it (including decision-making) are handed over to automation.
“This is happening faster and faster,” says Erik Brynjolfsson, director of the MIT Center for Digital Business, who recently co-authored a book with principal research scientist Andrew McAfee entitled The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. “I think that is why people are becoming scared by the process, because they’re seeing that automation can affect a much, much broader set of jobs than ever before.”
The classic narrative of techno-replacement—harsh as it might have been—held that if you did a job straightforward enough to be replaced by a machine, you more or less deserved whatever happened. While blue collar jobs found themselves “disrupted” by new technology, white collar jobs typically enjoyed only the positive, “sustaining” qualities of improved technology: tools that made their work easier and more efficient.
Now it's not so simple.
Suddenly it’s not just “unskilled” jobs that are being displaced, but also the classic white collar careers that have historically been viewed as safe.
“In our book we distinguish between the First and Second Machine Ages,” says Brynjolfsson. “In some sense, technology has always been destroying jobs, as well as creating [them]. During the First Machine Age automation was largely focused on muscle work, while the real transition toward cognitive work started in the early days of computerization—and gained momentum in the 1990s. It’s only been in the past decade that it’s really begun catching us off guard.”
One of the big alterations to the old version of techno-replacement was the discovery by artificial intelligence (AI) researchers of what is called Moravec’s paradox. Moravec’s paradox states that, when it comes to AI, the seemingly hard problems (such as analyzing the stock market) turn out to be easy, while the apparently easy problems (such as replicating the facial recognition abilities of a four-year-old) are hard.
While advances in robotics are certainly happening (see the spread of quadrocopters and drones, for instance), the real advances in automation aren’t coming in jobs involving manual labor, but rather in accounting, bookkeeping, and other roles built around routine information processing.
As Brynjolfsson notes, throughout history technology has rendered some jobs obsolete, but has also had the effect of creating others. In contrast to the Industrial Revolution, however, one of the big questions surrounding today’s digital revolution is whether as many jobs are being created as are being destroyed.
Consider, for example, the case of Kodak versus Instagram. At the height of Kodak’s power, the company employed more than 140,000 people and carried a valuation of $28 billion. Today, Kodak is bankrupt, and Instagram has taken its place as the new face of digital photography. When Instagram was sold to Facebook for $1 billion in 2012, it employed just 13 people. Is it not reasonable to wonder where all those jobs went? And what happened to the wealth those middle-class jobs created?
“I think everyone was struck by how few employees Instagram had when it was sold,” says Brynjolfsson. “It shouldn’t, in theory, be bad news when technology lets us create more wealth with less work. It’s bad news if we don’t have the organizational institutions to take advantage of that in a way that lets us create shared prosperity.”
Brynjolfsson’s right. Pick up almost any technology book predicting the future published in the 1970s and '80s, and you’ll likely find a passage about how technology is going to bring about a 20-hour working week and retirement at age 50. The people making these predictions almost exclusively meant them as optimistic. Today, they are arguably coming true, but in a way that is far less voluntary than most people imagined. The reason you might have your work hours cut in half, or be laid off at 50, owes less to your ability to accumulate wealth in less time than to the fact that a bot can, and often will, do your job more cheaply and efficiently than you can.
The decline of middle-class jobs enlarges an already-existing wealth disparity between rich and poor that has grown rapidly during the digital age. Intuit’s TurboTax system, for instance, automates the job of tax preparation. As Brynjolfsson and McAfee point out in The Second Machine Age, Intuit’s CEO made $4 million last year, while its founder, Scott Cook, is a billionaire. The work its algorithms can do, on the other hand, would previously have been carried out by hundreds of thousands of human tax preparers—now rendered unnecessary.
Similar “winner-takes-all” dynamics are taking place in a number of industries. In his book Failing Law Schools, law professor Brian Tamanaha cites U.S. government statistics suggesting that through 2018 there will be only 25,000 new openings for young lawyers—despite the fact that law schools will produce around 45,000 graduates over the same period. The problem is that tools like e-discovery systems can now effectively carry out much of the work that previously would have gone to junior lawyers. It is entirely possible to imagine a future in which law firms stop hiring junior and trainee lawyers altogether, and pass their work over to artificial intelligence systems instead.
There are no easy answers to this, although Brynjolfsson suggests that we should respond by reimagining work and learning in a way that makes us less disposable. “Did you know that the word ‘job’ has only been around for about 300 years?” he says. “It arose in part as people looked for a way to structure tasks that fit around the machines they were working with. In many ways, humans became cogs in the machine themselves.”
This can be seen in everything from the kinds of routinized jobs that represent widespread employment, through to the industrial paradigm of teaching that makes up the school system—with prisoner-students pitted against warden-teachers.
“That may have been great in the factory era, but going forward it needs rethinking,” he continues. “Robots are great at following instructions. We need people who are creative, who have interpersonal skills. We should be encouraging that kind of transition. Today, machine intelligence has reached a point where humans are freed up to do the kind of unstructured tasks machines are particularly bad at.”
The idea that automation should be used for the jobs we don’t want to do—while leaving the things we enjoy—isn’t new. In his pamphlet "The Soul of Man Under Socialism," Oscar Wilde wrote that, “The fact is, that civilisation requires slaves … Unless there are slaves to do the ugly, horrible, uninteresting work, culture and contemplation become almost impossible. Human slavery is wrong, insecure, and demoralizing. On mechanical slavery, on the slavery of the machine, the future of the world depends.” There can be very few people who disagree with this point.
The problem, however, is the idea that “creative class” jobs are somehow safer than any other. A company like Narrative Science, for example, is aiming to replace journalists—although the stories its software currently produces are fairly formulaic. Will it always be this way? Possibly not. “I used to put limitations on what we do, assuming our stories would be specific to data-rich industries,” Automated Insights founder Robbie Allen told tech writer Steven Levy in an article about automated journalism. “Now I think ultimately the sky is the limit.”
Ultimately, betting against technology is a bad idea. To some degree, all of us need to take a page out of economist Theodore Levitt’s famous 1960 article “Marketing Myopia” and start “[plotting] the obsolescence of what now produces [our] livelihood.”
But techno-optimists and techno-skeptics are also more similar than they would like to believe, since both act as though technology’s forward march is autonomous and unstoppable, and we can either like it or lump it. That’s not true. How technology advances from here is an open question that needs to be grappled with at the policy level, along with the philosophical question of what it is we want to hold back from automation. Faster, more efficient, and cheaper aren’t the only metrics that matter, after all.
In the short term, companies will continue to need human workers to satisfy customers and succeed economically. Over the longer term, history has shown that we are bad at predicting how automation will affect our lives—although that isn’t stopping Erik Brynjolfsson and Andy McAfee from asking the questions.
“As big as the disruptions have been over the past 10 years, Andy and I think technology will have an even bigger impact in the next decade,” Brynjolfsson says.