Category Archives: Automation & Technology

As robots take our jobs, we need something else. I know what that is – George Monbiot.

It’s untenable to let salaried work define us. In the future, what we do for society unpaid should be at least as important.

Why bother designing robots when you can reduce human beings to machines? Last week, Amazon acquired a patent for a wristband that can track the hand movements of workers. If this technology is developed, it could grant companies almost total control over their workforce.

Last month the Guardian interviewed a young man called Aaron Callaway, who works nights in an Amazon warehouse. He has to place 250 items an hour into particular carts. His work, he says, is so repetitive, antisocial and alienating that: “I feel like I’ve lost who I was. My main interaction is with the robots.” And this is before the wristbands have been deployed.

I see the terrible story of Don Lane, the DPD driver who collapsed and died from diabetes, as another instance of the same dehumanisation. After being fined £150 by the company for taking a day off to see his doctor, this “self-employed contractor” (who worked full-time for the company and wore its uniform) felt he could no longer keep his hospital appointments. As the philosopher Byung-Chul Han argues, in the gig economy, “every individual is master and slave in one… class struggle has become an internal struggle with oneself.” Everything work offered during the social democratic era – economic security, a sense of belonging, social life, a political focus – has been stripped away: alienation is now almost complete. Digital Taylorism, splitting interesting jobs into tasks of mind-robbing monotony, threatens to degrade almost every form of labour. Workers are reduced to the crash-test dummies of the post-industrial age. The robots have arrived, and you are one of them.

So where do we find identity, meaning and purpose, a sense of autonomy, pride and utility? The answer, for many people, is volunteering. Over the past few weeks, I’ve spent a fair bit of time in the NHS, and I’ve realised that there are two national health systems in this country: the official one, performing daily miracles, and the voluntary network that supports it.

Everywhere I look, there are notices posted by people helping at the hospital, running support groups for other patients, raising money for research and equipment. Without this support, I suspect the official system would fall apart. And so would many of the patients. Some fascinating research papers suggest that positive interactions with other people promote physical healing, reduce physical pain, and minimise anxiety and stress for patients about to have an operation. Support groups save lives. So do those who raise money for treatment and research.

Last week I spoke to two remarkable volunteers. Jeanne Chattoe started fundraising for Against Breast Cancer after her sister was diagnosed with the disease. Until that point, she had lived a quiet life, bringing up her children and working in her sister’s luggage shop. She soon discovered powers she never knew she possessed. Before long, she started organising an annual fashion show that over 13 years raised almost £400,000. Then, lying awake one night, she had a great idea: why not decorate her home town pink once a year, recruiting the whole community to the cause? Witney in the Pink has now been running for 17 years, and all the shops participate: even the butchers dye their uniforms pink. The event raises at least £6,000 a year.

“It’s changed my whole life,” Jeanne told me. “I eat, live and breathe against breast cancer … I don’t know what I would have done without fundraising. Probably nothing. It’s given me a purpose.” She acquired so much expertise organising these events that in 2009 Against Breast Cancer appointed her chair of its trustees, a position she still holds today.

After his transplant, Kieran Sandwell donated his old heart to the British Heart Foundation. Then he began thinking about how he could support its work. He told me he had “been on the work treadmill where I’ve not enjoyed my job for years, wondering what I’m doing”. He set off to walk the entire coastline of the UK, to raise money and awareness. He now has 2,800 miles behind him and 2,000 ahead. “I’ve discovered that you can actually put your mind to anything … whatever I come across in my life, I can probably cope with it. Nothing fazes me now.”

Like Jeanne, he has unlocked unexpected powers. “I didn’t know I had in me the ability just to be able to talk to anyone.” His trek has also ignited a love of nature. “I seem to have created this fluffy bubble: what happens to me every day is wonderful… I want to try to show people that there’s a better life out there.” For Jeanne and Kieran, volunteering has given them what work once promised: meaning, purpose, place, community. This, surely, is where hope lies.

So here’s my outrageous proposal: replace careers advice with volunteering advice. I’ve argued before that much of the careers advice offered by schools and universities is worse than useless, shoving students headfirst into the machine, reinforcing the seductive power of life-destroying corporations. In fairness to the advisers, their job is becoming almost impossible anyway: the entire infrastructure of employment seems designed to eliminate fulfilling and fascinating work.

But while there is little chance of finding jobs that match students’ hopes and personalities and engage their capabilities, there is every chance of connecting them with good opportunities to volunteer. Perhaps it is time we saw volunteering as central to our identities and work as peripheral: something we have to do, but which no longer defines us. I would love to hear people reply, when asked what they do: “I volunteer at the food bank and run marathons. In my time off, I work for money.”

And there’s a side-effect. The world has been wrecked by people seeking status through their work. In many industries – fossil fuels, weapons manufacture, banking, advertising – your prestige rises with the harm you do. The greater your destruction of other people’s lives, the greater your contribution to shareholder value. But when you volunteer, the respect you gain rises with the good you do.

We should keep fighting for better jobs and better working conditions. But the battle against workplace technology is an unequal one. The real economic struggle now is for the redistribution of wealth generated by labour and machines, through universal basic income, the revival of the commons and other such policies. Until we achieve this, most people will have to take whatever work is on offer. But we cannot let it own us.

The Guardian

Universities in the Age of AI – Andrew Wachtel.

Over the next 50 years or so, as AI and machine learning become more powerful, human labor will be cannibalized by technologies that outperform people in nearly every job function. How should higher education prepare students for this eventuality?

BISHKEK – I was recently offered the presidency of a university in Kazakhstan that focuses primarily on business, economics, and law, and that teaches these subjects in a narrow, albeit intellectually rigorous, way. I am considering the job, but I have a few conditions.

What I have proposed is to transform the university into an institution where students continue to concentrate in these three disciplines, but must also complete a rigorous “core curriculum” in the humanities, social sciences, and natural sciences – including computer science and statistics. Students would also need to choose a minor in one of the humanities or social sciences.

There are many reasons for insisting on this transformation, but the most compelling one, from my perspective, is the need to prepare future graduates for a world in which artificial intelligence and AI-assisted technologies play an increasingly dominant role. To succeed in the workplace of tomorrow, students will need new skills.

Higher education must prepare students for this eventuality: assuming AI will transform the future of work within our students’ lifetimes, educators must consider what skills graduates will need when humans can no longer compete with robots.

It is not hard to predict that rote tasks will disappear first. This transition is already occurring in some rich countries, but will take longer in places like Kazakhstan. Once this trend picks up pace, however, populations will adjust accordingly. For centuries, communities grew as economic opportunities expanded; for example, farmers had bigger families as demand for products increased, requiring more labor to deliver goods to consumers.

But the world’s current population is unsustainable. As AI moves deeper into the workplace, jobs will disappear, employment will decline, and populations will shrink accordingly. That is good in principle – the planet is already bursting at the seams – but it will be difficult to manage in the short term, because population decline will not keep pace with job losses amid the robot revolution.

For this reason, the next generation of human labor – today’s university students – requires specialized training to thrive. At the same time, and perhaps more than ever before, they need the kind of education that allows them to think broadly and to make unusual and unexpected connections across many fields.

Clearly, tomorrow’s leaders will need an intimate familiarity with computers – from basic programming to neural networks – to understand how machines controlling productivity and analytic processes function. But graduates will also need experience in psychology, if only to grasp how a computer’s “brain” differs from their own. And workers of the future will require training in ethics, to help them navigate a world in which the value of human beings can no longer be taken for granted.

Educators preparing students for this future must start now. Business majors should study economic and political history to avoid becoming blind determinists. Economists must learn from engineering students, as it will be engineers building the future workforce. And law students should focus on the intersection of big data and human rights, so that they gain the insight that will be needed to defend people from forces that may seek to turn individuals into disposable parts.

Even students studying creative and leisure disciplines must learn differently. For one thing, in an AI-dominated world, people will need help managing their extra time. We won’t stop playing tennis just because robots start winning Wimbledon; but new organizational and communication skills will be required to help navigate changes in how humans create and play. Managing these industries will take new skills tailored to a fully AI world.

The future of work may look nothing like the scenarios I envision, or it may be far more disruptive; no one really knows. But higher education has a responsibility to prepare students for every possible scenario – even those that today appear to be barely plausible. The best strategy for educators in any field, and at any time, is to teach skills that make humans human, rather than training students to outcompete new technologies.

No matter where I work in education, preparing young people for their futures will always be my job. And today, that future looks to be dominated by machines. To succeed, educators – and the universities we inhabit – must evolve.

*

Andrew Wachtel is President of the American University of Central Asia.

Project Syndicate

Robots will take our jobs. We’d better plan now, before it’s too late – Larry Elliott.

The opening of the Amazon Go store in Seattle brings us one step closer to the end of work as we know it.

A new sort of convenience store opened in the basement of the headquarters of Amazon in Seattle in January. Customers walk in, scan their phones, pick what they want off the shelves and walk out again. At Amazon Go there are no checkouts and no cashiers.

Instead, it is what the tech giant calls “just walk out” shopping, made possible by a new generation of machines that can sense which customer is which and what they are picking off the shelves. Within a minute or two of the shopper leaving the store, a receipt pops up on their phone for items they have bought.

This is the shape of things to come in food retailing. Technological change is happening fast and it has economic, social and ethical ramifications. There is a downside to Amazon Go, even though consumers benefit from lower prices and don’t waste time in queues. The store is only open to shoppers who can download an app on their smartphone, which rules out those who rely on welfare food stamps. Constant surveillance means there’s no shoplifting, but it has a whiff of Big Brother about it.

Change is always disruptive but the upheaval likely as a result of the next wave of automation will be especially marked. Driverless cars, for instance, are possible because intelligent machines can sense and have conversations with each other. They can do things – or will eventually be able to do things – that were once the exclusive preserve of humans. That means higher growth but also the risk that the owners of the machines get richer and richer while those displaced get angrier and angrier.

The experience of past industrial revolutions suggests that resisting technological change is futile. Nor, given that automation offers some tangible benefits – in mobility for the elderly and in healthcare, for instance – is it the cleverest of responses.

A robot tax – a levy that firms would pay if machines were taking the place of humans – would slow down the pace of automation by making the machines more expensive but this too has costs, especially for a country such as Britain, which has a problem with low investment, low productivity and a shrunken industrial base. The UK has 33 robot units per 10,000 workers, compared with 93 in the US and 213 in Japan, which suggests the need for more automation not less. On the plus side, the UK has more small and medium-sized companies in artificial intelligence than Germany or France. Penalising these firms with a robot tax does not seem like a smart idea.

The big issue is not whether the robots are coming, because they are. It is not even whether they will boost growth, because they will. On some estimates the UK economy will be 10% bigger by 2030 as the result of artificial intelligence alone. The issue is not one of production but of distribution, of whether there is a Scandinavian-style solution to the challenges of the machine age.

In some ways, the debate that was taking place between the tech industry, politicians and academics in Davos last week was similar to that which surrounded globalisation in the early 1990s. Back then, it was accepted that free movement of goods, people and money around the world would create losers as well as winners, but provided the losers were adequately compensated – either through reskilling, better education, or a stronger social safety net – all would be well.

But the reskilling never happened. Governments did not increase their budgets for education, and in some cases cut them. Welfare safety nets were made less generous. Communities affected by deindustrialisation never really recovered. Writing recently in the McKinsey Quarterly, W Brian Arthur put it this way: “Offshoring in the last few decades has eaten up physical jobs and whole industries, jobs that were not replaced. The current transfer of jobs from the physical to the virtual economy is a different sort of offshoring, not to a foreign country but to a virtual one. If we follow recent history we can’t assume these jobs will be replaced either.”

The Centre for Cities suggests that the areas hardest hit by the hollowing out of manufacturing are going to be hardest hit by the next wave of automation as well. That’s because the factories and the pits were replaced by call centres and warehouses, where the scope for humans to be replaced by machines is most obvious.

But there are going to be middle-class casualties too: machines can replace radiologists, lawyers and journalists just as they have already replaced bank cashiers and will soon be replacing lorry drivers. Clearly, it is important to avoid repeating the mistakes of the past. Any response to the challenge posed by smart machines must be to invest more in education, training and skills. One suggestion made in Davos was that governments should consider tax incentives for investment in human, as well as physical, capital.

Still this won’t be sufficient. As the Institute for Public Policy Research has noted, new models of ownership are needed to ensure that the dividends of automation are broadly shared. One of its suggestions is a citizens’ wealth fund that would own a broad portfolio of assets on behalf of the public and would pay out a universal capital dividend. This could be financed either from the proceeds of asset sales or by companies paying corporation tax in the form of shares that would become more valuable due to the higher profits generated by automation.

But the dislocation will be considerable, and comes at a time when social fabrics are already frayed. To ensure that, as in the past, technological change leads to a net increase in jobs, the benefits will have to be spread around and the concept of what constitutes work rethought. That’s why one of the hardest working academics in Davos last week was Guy Standing of Soas University of London, who was on panel after panel making the case for a universal basic income, an idea that has its critics on both left and right, but whose time may well have come.

The Guardian


WTF? What’s the Future and Why It’s Up to Us – Tim O’Reilly.

ABOUT THE BOOK 

Renowned as ‘the Oracle of Silicon Valley’, Tim O’Reilly has spent three decades exploring the world-transforming power of information technology. 

Now, the leading thinker of the internet age turns his eye to the future – and asks the questions that will frame the next stage of the digital revolution: 

Will increased automation destroy jobs or create new opportunities? 

What will the company of tomorrow look like? 

Is a world dominated by algorithms to be welcomed or feared? 

How can we ensure that technology serves people, rather than the other way around? 

How can we all become better at mapping future trends?

Tim O’Reilly’s insights create an authoritative, compelling and often surprising portrait of the world we will soon inhabit, highlighting both the many pitfalls and the enormous opportunities that lie ahead.

ABOUT THE AUTHOR 

TIM O’REILLY is one of the world’s most influential tech analysts. As the founder of the publishing company O’Reilly Media, he became known for spotting technologies with world-shaking potential – from predicting the rise of the internet in the 1990s to coining and popularising terms like ‘Web 2.0’ and ‘Open Source’ in the 2000s. WTF? is his first book aimed at the general reader.

***

INTRODUCTION: THE WTF? ECONOMY 

THIS MORNING, I spoke out loud to a $150 device in my kitchen, told it to check if my flight was on time, and asked it to call a Lyft to take me to the airport. A car showed up a few minutes later, and my smartphone buzzed to let me know it had arrived. And in a few years, that car might very well be driving itself. 

Someone seeing this for the first time would have every excuse to say, “WTF?” 
At times, “WTF?” is an expression of astonishment. But many people reading the news about technologies like artificial intelligence and self-driving cars and drones feel a profound sense of unease and even dismay. They worry about whether their children will have jobs, or whether the robots will have taken them all. 

They are also saying “WTF?” but in a very different tone of voice. It is an expletive.

Astonishment: phones that give advice about the best restaurant nearby or the fastest route to work today; artificial intelligences that write news stories or advise doctors; 3-D printers that make replacement parts—for humans; gene editing that can cure disease or bring extinct species back to life; new forms of corporate organization that marshal thousands of on-demand workers so that consumers can summon services at the push of a button in an app.

Dismay: the fear that robots and AIs will take away jobs, reward their owners richly, and leave formerly middle-class workers part of a new underclass; tens of millions of jobs here in the United States that don’t pay people enough to live on; little-understood financial products and profit-seeking algorithms that can take down the entire world economy and drive millions of people from their homes; a surveillance society that tracks our every move and stores it in corporate and government databases.

Everything is amazing, everything is horrible, and it’s all moving too fast. We are heading pell-mell toward a world shaped by technology in ways that we don’t understand and have many reasons to fear. 

WTF? Google’s AlphaGo, an artificial intelligence program, beat the world’s best human Go player, an event that was widely predicted to be at least twenty years in the future—until it happened in 2016. If AlphaGo can happen twenty years early, what else might hit us even sooner than we expect?

For starters: An AI running on a $35 Raspberry Pi computer beat a top US Air Force fighter pilot trainer in combat simulation. The world’s largest hedge fund has announced that it wants an AI to make three-fourths of management decisions, including hiring and firing. 
Oxford University researchers estimate that up to 47% of human tasks, including many components of white-collar jobs, may be done by machines within as little as twenty years. WTF? 

Uber has put taxi drivers out of work by replacing them with ordinary people offering rides in their own cars, creating millions of part-time jobs worldwide. Yet Uber is intent on eventually replacing those on-demand drivers with completely automated vehicles. WTF? 

Without owning a single room, Airbnb has more rooms on offer than some of the largest hotel groups in the world. Airbnb has under 3,000 employees, while Hilton has 152,000. New forms of corporate organization are outcompeting businesses based on best practices that we’ve followed for the lifetimes of most business leaders. WTF? 

Social media algorithms may have affected the outcome of the 2016 US presidential election. WTF?

While new technologies are making some people very rich, incomes have stagnated for ordinary people, and for the first time, children in developed countries are on track to earn less than their parents.

What do AI, self-driving cars, on-demand services, and income inequality have in common? They are telling us, loud and clear, that we’re in for massive changes in work, business, and the economy.

But just because we can see that the future is going to be very different doesn’t mean that we know exactly how it’s going to unfold, or when. Perhaps “WTF?” really stands for “What’s the Future?” Where is technology taking us? Is it going to fill us with astonishment or dismay? And most important, what is our role in deciding that future? How do we make choices today that will result in a world we want to live in? 

I’ve spent my career as a technology evangelist, book publisher, conference producer, and investor wrestling with questions like these. My company, O’Reilly Media, works to identify important innovations, and by spreading knowledge about them, to amplify their impact and speed their adoption. And we’ve tried to sound a warning when a failure to understand how technology is changing the rules for business or society is leading us down the wrong path. 

In the process, we’ve watched numerous technology booms and busts, and seen companies go from seemingly unstoppable to irrelevant, while early-stage technologies that no one took seriously went on to change the world. 
If all you read are the headlines, you might have the mistaken idea that how highly investors value a company is the key to understanding which technologies really matter. We hear constantly that Uber is “worth” $68 billion, more than General Motors or Ford; Airbnb is “worth” $30 billion, more than Hilton Hotels and almost as much as Marriott. 

Those huge numbers can make the companies seem inevitable, with their success already achieved. But it is only when a business becomes profitably self-sustaining, rather than subsidized by investors, that we can be sure that it is here to stay. After all, after eight years Uber is still losing $2 billion every year in its race to get to worldwide scale. That dwarfs the losses of companies like Amazon (which lost $2.9 billion over its first five years before showing its first profits in 2001).

Is Uber losing money like Amazon, which went on to become a hugely successful company that transformed retailing, publishing, and enterprise computing, or like a dot-com company that was destined to fail? Is the enthusiasm of its investors a sign of a fundamental restructuring of the nature of work, or a sign of an investment mania like the one leading up to the dot-com bust in 2001? How do we tell the difference? 

Startups with a valuation of more than a billion dollars understandably get a lot of attention, even more so now that they have a name, unicorn, the term du jour in Silicon Valley. Fortune magazine started keeping a list of companies with that exalted status. Silicon Valley news site TechCrunch has a constantly updated “Unicorn Leaderboard.” But even when these companies succeed, they may not be the surest guide to the future. 

At O’Reilly Media, we learned to tune in to very different signals by watching the innovators who first brought us the Internet and the open source software that made it possible. They did what they did out of love and curiosity, not a desire to make a fortune. We saw that radically new industries don’t start when creative entrepreneurs meet venture capitalists. They start with people who are infatuated with seemingly impossible futures. Those who change the world are people who are chasing a very different kind of unicorn, far more important than the Silicon Valley billion-dollar valuation (though some of them will achieve that too). It is the breakthrough, once remarkable, that becomes so ubiquitous that eventually it is taken for granted. 

Tom Stoppard wrote eloquently about a unicorn of this sort in his play Rosencrantz & Guildenstern Are Dead: A man breaking his journey between one place and another at a third place of no name, character, population or significance, sees a unicorn cross his path and disappear …. “My God,” says a second man, “I must be dreaming, I thought I saw a unicorn.” At which point, a dimension is added that makes the experience as alarming as it will ever be. A third witness, you understand, adds no further dimension but only spreads it thinner, and a fourth thinner still, and the more witnesses there are the thinner it gets and the more reasonable it becomes until it is as thin as reality, the name we give to the common experience. 
The world today is full of things that once made us say “WTF?” but are already well on their way to being the stuff of daily life. The Linux operating system was a unicorn. It seemed downright impossible that a decentralized community of programmers could build a world-class operating system and give it away for free. Now billions of people rely on it. 

The World Wide Web was a unicorn, even though it didn’t make Tim Berners-Lee a billionaire. I remember showing the World Wide Web at a technology conference in 1993, clicking on a link, and saying, “That picture just came over the Internet all the way from the University of Hawaii.” People didn’t believe it. They thought we were making it up. Now everyone expects that you can click on a link to find out anything at any time. 

Google Maps was a unicorn. On the bus not long ago, I watched one old man show another how the little blue dot in Google Maps followed us along as the bus moved. The newcomer to the technology was amazed. Most of us now take it for granted that our phones know exactly where we are, and not only can give us turn-by-turn directions exactly to our destination—by car, by public transit, by bicycle, and on foot—but also can find restaurants or gas stations nearby or notify our friends where we are in real time. 

The original iPhone was a unicorn even before the introduction of the App Store a year later utterly transformed the smartphone market. Once you experienced the simplicity of swiping and touching the screen rather than a tiny keyboard, there was no going back. The original pre-smartphone cell phone itself was a unicorn. As were its predecessors, the telephone and telegraph, radio and television. 

We forget. We forget quickly. And we forget ever more quickly as the pace of innovation increases. AI-powered personal agents like Amazon’s Alexa, Apple’s Siri, the Google Assistant, and Microsoft Cortana are unicorns. Uber and Lyft too are unicorns, but not because of their valuation. Unicorns are the kinds of apps that make us say, “WTF?” in a good way. Can you still remember the first time you realized that you could get the answer to virtually any question with a quick Internet search, or that your phone could route you to any destination? How cool that was, before you started taking it for granted? And how quickly did you move from taking it for granted to complaining about it when it doesn’t work quite right? 
We are layering on new kinds of magic that are slowly fading into the ordinary. A whole generation is growing up that thinks nothing of summoning cars or groceries with a smartphone app, or buying something from Amazon and having it show up in a couple of hours, or talking to AI-based personal assistants on their devices and expecting to get results. 

It is this kind of unicorn that I’ve spent my career in technology pursuing.

So what makes a real unicorn of this amazing kind?

1. It seems unbelievable at first.
2. It changes the way the world works.
3. It results in an ecosystem of new services, jobs, business models, and industries.

We’ve talked about the “at first unbelievable” part. What about changing the world? In Who Do You Want Your Customers to Become? Michael Schrage writes: Successful innovators don’t ask customers and clients to do something different; they ask them to become someone different …. Successful innovators ask users to embrace—or at least tolerate—new values, new skills, new behaviors, new vocabulary, new ideas, new expectations, and new aspirations. They transform their customers. 

For example, Schrage points out that Apple (and now also Google, Microsoft, and Amazon) asks its “customers to become the sort of people who wouldn’t think twice about talking to their phone as a sentient servant.” Sure enough, there is a new generation of users who think nothing of saying things like: “Siri, make me a six p.m. reservation for two at Camino.” “Alexa, play ‘Ballad of a Thin Man.’” “Okay, Google, remind me to buy currants the next time I’m at Piedmont Grocery.”

Correctly recognizing human speech alone is hard, but listening and then performing complex actions in response—for millions of simultaneous users—requires incredible computing power provided by massive data centers. Those data centers support an ever-more-sophisticated digital infrastructure. For Google to remind me to buy currants the next time I’m at my local supermarket, it has to know where I am at all times, keep track of a particular location I’ve asked for, and bring up the reminder in that context. For Siri to make me a reservation at Camino, it needs to know that Camino is a restaurant in Oakland, and that it is open tonight, and it must allow conversations between machines, so that my phone can lay claim to a table from the restaurant’s reservation system via a service like OpenTable. 

And then it may call other services, either on my devices or in the cloud, to add the reservation to my calendar or to notify friends, so that yet another agent can remind all of us when it is time to leave for our dinner date. 
And then there are the alerts that I didn’t ask for, like Google’s warnings: “Leave now to get to the airport on time. 25 minute delay on the Bay Bridge.” or “There is traffic ahead. Faster route available.” 
All of these technologies are additive, and addictive. As they interconnect and layer on each other, they become increasingly powerful, increasingly magical. Once you become accustomed to each new superpower, life without it is like having your magic wand turn into a stick again. These services have been created by human programmers, but they will increasingly be enabled by artificial intelligence. 
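
To make the layering described above concrete, here is a deliberately toy sketch in Python of the two flows: a geofenced reminder and a voice-triggered reservation. Everything in it is a hypothetical stand-in: Place, ReservationService and handle_request are invented names, not Google’s or Apple’s actual APIs, and the “intent parsing” is a single string check rather than real speech understanding.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Place:
    name: str
    x: float  # toy map coordinates, not real latitude/longitude
    y: float

@dataclass
class Reminder:
    text: str
    place: Place
    radius: float = 0.5  # how close counts as "at" the place, in the same toy units

def near(current: Place, target: Place, radius: float) -> bool:
    """Geofence check: is the user within `radius` of the target place?"""
    return hypot(current.x - target.x, current.y - target.y) <= radius

def due_reminders(current: Place, reminders: list) -> list:
    """The 'remind me when I'm at the grocery' step: location plus stored context."""
    return [r.text for r in reminders if near(current, r.place, r.radius)]

class ReservationService:
    """Hypothetical stand-in for an OpenTable-style service the assistant calls."""
    def book(self, restaurant: str, time: str, party: int) -> dict:
        return {"restaurant": restaurant, "time": time, "party": party, "status": "confirmed"}

def handle_request(utterance: str) -> dict:
    """Crude 'intent parsing': in the real systems, this is where the AI lives."""
    if "reservation" in utterance.lower():
        booking = ReservationService().book("Camino", "6 p.m.", 2)
        # ...after which other services would be called in turn: add the booking to a
        # calendar, notify friends, schedule a "time to leave" alert from live traffic.
        return {"intent": "reserve", "result": booking}
    return {"intent": "unknown", "result": None}

if __name__ == "__main__":
    grocery = Place("Piedmont Grocery", 2.0, 3.0)
    reminders = [Reminder("buy currants", grocery)]
    print(due_reminders(Place("me", 2.2, 3.1), reminders))  # fires: we are near the store
    print(handle_request("Make me a six p.m. reservation for two at Camino"))
```

Each step in the sketch is ordinary plumbing: a distance check, a bit of stored context, a call out to another service. What makes the real systems feel magical is the layering of many such services at enormous scale and, increasingly, the artificial intelligence doing the listening, the parsing and the predicting.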

That’s a scary word to many people. But it is the next step in the progression of the unicorn from the astonishing to the ordinary. 
While the term artificial intelligence or AI suggests a truly autonomous intelligence, we are far, far from that eventuality. 
AI is still just a tool, still subject to human direction. The nature of that direction, and how we must exercise it, is a key subject of this book. AI and other unicorn technologies have the potential to make a better world, in the same way that the technologies of the first industrial revolution created wealth for society that was unimaginable two centuries ago. AI bears the same relationship to previous programming techniques that the internal combustion engine does to the steam engine. It is far more versatile and powerful, and over time we will find ever more uses for it. 

Will we use it to make a better world? Or will we use it to amplify the worst features of today’s world? So far, the “WTF?” of dismay seems to have the upper hand. “Everything is amazing,” and yet we are deeply afraid. Sixty-three percent of Americans believe jobs are less secure now than they were twenty to thirty years ago. By a two-to-one ratio, people think good jobs are difficult to find where they live. And many of them blame technology. There is a constant drumbeat of news that tells us that the future is one in which increasingly intelligent machines will take over more and more human work. 
The pain is already being felt. For the first time, life expectancy is actually declining in America, and what was once its rich industrial heartland has too often become a landscape of despair.  For everyone’s sake, we must choose a different path. Loss of jobs and economic disruption are not inevitable. 

There is a profound failure of imagination and will in much of today’s economy. For every Elon Musk—who wants to reinvent the world’s energy infrastructure, build revolutionary new forms of transport, and settle humans on Mars—there are far too many companies that are simply using technology to cut costs and boost their stock price, enriching those able to invest in financial markets at the expense of an ever-growing group that may never be able to do so. Policy makers seem helpless, assuming that the course of technology is inevitable, rather than something we must shape. 

And that gets me to the third characteristic of true unicorns: They create value. Not just financial value, but real-world value for society. Consider past marvels. Could we have moved goods as easily or as quickly without modern earthmoving equipment letting us bore tunnels through mountains or under cities? The superpower of humans + machines made it possible to build cities housing tens of millions of people, for a tiny fraction of our people to work producing the food that all the rest of us eat, and to create a host of other wonders that have made the modern world the most prosperous time in human history. 

Technology is going to take our jobs! Yes. It always has, and the pain and dislocation are real. But it is going to make new kinds of jobs possible. History tells us technology kills professions, but does not kill jobs. We will find things to work on that we couldn’t do before but now can accomplish with the help of today’s amazing technologies. 

Take, for example, laser eye surgery. I used to be legally blind without huge Coke-bottle glasses. Twelve years ago, my eyes were fixed by a surgeon who could never have done the job without the aid of a robot; with its help, she was able to do something that had previously been impossible. After more than forty years of wearing glasses so strong that I was legally blind without them, I could see clearly on my own. I kept saying to myself for months afterward, “I’m seeing with my own eyes!” But in order to remove my need for prosthetic vision, the surgeon ended up relying on prosthetics of her own, performing the surgery on my cornea with the aid of a computer-controlled laser.

During the actual surgery, apart from lifting the flap she had cut by hand in the surface of my cornea and smoothing it back into place after the laser was done, her job was to clamp open my eyes, hold my head, utter reassuring words, and tell me, sometimes with urgency, to keep looking at the red light. I asked what would happen if my eyes drifted and I didn’t stay focused on the light. “Oh, the laser would stop,” she said. “It only fires when your eyes are tracking the dot.” 

Surgery this sophisticated could never be done by an unaugmented human being. The human touch of my superb doctor was paired with the superhuman accuracy of complex machines, a twenty-first-century hybrid freeing me from assistive devices first invented eight centuries earlier in Italy. 

The revolution in sensors, computers, and control technologies is going to make many of the daily activities of the twentieth century seem quaint as, one by one, they are reinvented in the twenty-first. This is the true opportunity of technology: It extends human capability. 
In the debate about technology and the shape of the future, it’s easy to forget just how much technology already suffuses our lives, how much it has already changed us. As we get past that moment of amazement, and it fades into the new normal, we must put technology to work solving new problems. We must build something new, strange to our past selves but better, if we commit to making it so. We must keep asking: What will new technology let us do that was previously impossible? Will it help us build the kind of society we want to live in? This is the secret to reinventing the economy.

As Google chief economist Hal Varian said to me, “My grandfather wouldn’t recognize what I do as work.” What are the new jobs of the twenty-first century? Augmented reality—the overlay of computer-generated data and images on what we see—may give us a clue. It definitely meets the WTF? test. The first time a venture capitalist friend of mine saw one unreleased augmented reality platform in the lab, he said, “If LSD were a stock, I’d be shorting it.” That’s a unicorn. But what is most exciting to me about this technology is not the LSD factor, but how augmented reality can change the way we work. 

You can imagine how augmented reality could enable workers to be “upskilled.” I’m particularly fond of imagining how the model used by Partners in Health could be turbocharged by augmented reality and telepresence. The organization provides free healthcare to people in poverty using a model in which community health workers recruited from the population being served are trained and supported in providing primary care. Doctors can be brought in as needed, but the bulk of care is provided by ordinary people. Imagine a community health worker who is able to tap on Google Glass or some next-generation wearable, and say, “Doctor, you need to see this!” (Trust me. Glass will be back, when Google learns to focus on community health workers, not fashion models.) 

It’s easy to imagine how rethinking our entire healthcare system along these lines could reduce costs, improve both health outcomes and patient satisfaction, and create jobs. Imagine house calls coming back into fashion. Add in health monitoring by wearable sensors, health advice from an AI made as available as Siri, the Google Assistant, or Microsoft Cortana, plus an Uber-style on-demand service, and you can start to see the outlines of one small segment of the next economy being brought to us by technology. 

This is only one example of how we might reinvent familiar human activities, creating new marvels that, if we are lucky, will eventually fade into the texture of everyday life, just like wonders of a previous age such as airplanes and skyscrapers, elevators, automobiles, refrigerators, and washing machines.
***

Despite their possible wonders, many of the futures we face are fraught with unknown risks. 

I am a classicist by training, and the fall of Rome is always before me. The first volume of Gibbon’s Decline and Fall of the Roman Empire was published in 1776, the same year as the American Revolution. 
Despite Silicon Valley’s dreams of a future singularity, an unknowable fusion of minds and machines that will mark the end of history as we know it, what history teaches us is that economies and nations, not just companies, can fail. Great civilizations do collapse. Technology can go backward. After the fall of Rome, the ability to make monumental structures out of concrete was lost for nearly a thousand years. It could happen to us. 

We are increasingly facing what planners call “wicked problems”—problems that are “difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often difficult to recognize.” Even long-accepted technologies turn out to have unforeseen downsides. The automobile was a unicorn. It afforded ordinary people enormous freedom of movement, led to an infrastructure for transporting goods that spread prosperity, and enabled a consumer economy where goods could be produced far away from where they are consumed. 

Yet the roads we built to enable the automobile carved up and hollowed out cities, led to more sedentary lifestyles, and contributed mightily to the overpowering threat of climate change. Ditto cheap air travel, container shipping, the universal electric grid. All of these were enormous engines of prosperity that brought with them unintended consequences that only came to light over many decades of painful experience, by which time any solution seems impossible to attempt because the disruption required to reverse course would be so massive. We face a similar set of paradoxes today. 

The magical technologies of today—and choices we’ve already made, decades ago, about what we value as a society—are leading us down a path with complex contingencies, unseen dangers, and decisions that we don’t even know we are making. 

AI and robotics in particular are at the heart of a set of wicked problems that are setting off alarm bells among business and labor leaders, policy makers and academics. What happens to all those people who drive for a living when the cars start driving themselves? AIs are flying planes, advising doctors on the best treatments, writing sports and financial news, and telling us all, in real time, the fastest way to get to work. They are also telling human workers when to show up and when to go home, based on real-time measurement of demand. 

Computers used to work for humans; increasingly it’s now humans working for computers. The algorithm is the new shift boss. What is the future of business when technology-enabled networks and marketplaces let people choose when and how much they want to work? What is the future of education when on-demand learning outperforms traditional universities in keeping skills up to date? What is the future of media and public discourse when algorithms decide what we will watch and read, making their choice based on what will make the most profit for their owners? What is the future of the economy when more and more work can be done by intelligent machines instead of people, or only done by people in partnership with those machines? What happens to workers and their families? And what happens to the companies that depend on consumer purchasing power to buy their products? 

There are dire consequences to treating human labor simply as a cost to be eliminated. According to the McKinsey Global Institute, 540 to 580 million people—65 to 70% of households in twenty-five advanced economies—had incomes that had fallen or were flat between 2005 and 2014. Between 1993 and 2005, fewer than 10 million people—less than 2%—had the same experience. 

Over the past few decades, companies have made a deliberate choice to reward their management and “superstars” incredibly well, while treating ordinary workers as a cost to be minimized or cut. 
Top US CEOs now earn 373x the income of the average worker, up from 42x in 1980.
As a result of the choices we’ve made as a society about how to share the benefits of economic growth and technological productivity gains, the gulf between the top and the bottom has widened enormously, and the middle has largely disappeared. Recently published research by Stanford economist Raj Chetty shows that for children born in 1940, the chance that they’d earn more than their parents was 92%; for children born in 1990, that chance has fallen to 50%.  

Businesses have delayed the effects of declining wages on the consumer economy by encouraging people to borrow—in the United States, household debt is over $12 trillion (80% of gross domestic product, or GDP, in mid-2016) and student debt alone is $1.2 trillion (with more than seven million borrowers in default). 

We’ve also used government transfers to reduce the gap between human needs and what our economy actually delivers. But of course, higher government transfers must be paid for through higher taxes or through higher government debt, either of which political gridlock has made unpalatable. This gridlock is, of course, a recipe for disaster. Meanwhile, in hopes that “the market” will deliver jobs, central banks have pushed ever more money into the system, hoping that somehow this will unlock business investment. But instead, corporate profits have reached highs not seen since the 1920s, corporate investment has shrunk, and more than $30 trillion of cash is sitting on the sidelines. 

The magic of the market is not working. We are at a very dangerous moment in history. The concentration of wealth and power in the hands of a global elite is eroding the power and sovereignty of nation-states while globe-spanning technology platforms are enabling algorithmic control of firms, institutions, and societies, shaping what billions of people see and understand and how the economic pie is divided. At the same time, income inequality and the pace of technology change are leading to a populist backlash featuring opposition to science, distrust of our governing institutions, and fear of the future, making it ever more difficult to solve the problems we have created. 

That has all the hallmarks of a classic wicked problem. Wicked problems are closely related to an idea from evolutionary biology, that there is a “fitness landscape” for any organism. Much like a physical landscape, a fitness landscape has peaks and valleys. The challenge is that you can only get from one peak—a so-called local maximum—to another by going back down. In evolutionary biology, a local maximum may mean that you become one of the long-lived stable species, unchanged for millions of years, or it may mean that you become extinct because you’re unable to respond to changed conditions. 
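
As a minimal numerical illustration of that point (my own toy example, not from the book), a greedy hill-climber on a one-dimensional fitness curve stops at whichever peak it starts nearest to, because every route to the higher peak begins with a step downhill.

```python
import math

def fitness(x: float) -> float:
    # A toy landscape with two peaks: a modest one near x = 2 and a higher one near x = 8.
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-(x - 8) ** 2)

def hill_climb(x: float, step: float = 0.1, iters: int = 1000) -> float:
    """Greedy search: move only while fitness improves, so it halts at the nearest peak."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=fitness)
        if best == x:  # no uphill neighbour left: a local maximum
            break
        x = best
    return x

if __name__ == "__main__":
    near_small_peak = hill_climb(1.0)  # starts in the basin of the lower peak
    near_big_peak = hill_climb(7.0)    # starts in the basin of the higher peak
    print(round(near_small_peak, 1), round(fitness(near_small_peak), 2))  # ~2.0, ~1.0
    print(round(near_big_peak, 1), round(fitness(near_big_peak), 2))      # ~8.0, ~2.0
```

The climber that starts near the lower peak never reaches the higher one, even though the higher one is twice as good; escaping a local maximum means accepting a temporary loss, which is exactly what entrenched systems find hardest to do.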

And in our economy, conditions are changing rapidly. Over the past few decades, the digital revolution has transformed media, entertainment, advertising, and retail, upending centuries-old companies and business models. Now it is restructuring every business, every job, and every sector of society. No company, no job—and ultimately, no government and no economy—is immune to disruption. Computers will manage our money, supervise our children, and have our lives in their “hands” as they drive our automated cars. 

The biggest changes are still ahead, and every industry and every organization will have to transform itself in the next few years, in multiple ways, or fade away. 
We need to ask ourselves whether the fundamental social safety nets of the developed world will survive the transition, and more important, what we will replace them with. Andy McAfee, coauthor of The Second Machine Age, put his finger on the consequence of failing to do so while talking with me over breakfast about the risks of AI taking over from humans: “The people will rise up before the machines do.”

This book provides a view of one small piece of this complex puzzle, the role of technology innovation in the economy, and in particular the role of WTF? technologies such as AI and on-demand services. I lay out the difficult choices we face as technology opens new doors of possibility while closing doors that once seemed the sure path to prosperity. 

But more important, I try to provide tools for thinking about the future, drawn from decades on the frontiers of the technology industry, observing and predicting its changes. The book is US-centric and technology-centric in its narrative; it is not an overview of all of the forces shaping the economy of the future, many of which are centered outside the United States or are playing out differently in other parts of the world. 

In No Ordinary Disruption, McKinsey’s Richard Dobbs, James Manyika, and Jonathan Woetzel point out quite correctly that technology is only one of four major disruptive forces shaping the world to come. 

Demographics (in particular, changes in longevity and the birth rate that have radically shifted the mix of ages in the global population), globalization, and urbanization may play at least as large a role as technology. And even that list fails to take into account catastrophic war, plague, or environmental disruption. 
These omissions are not based on a conviction that Silicon Valley’s part of the total technology innovation economy, or the United States, is more important than the rest; it is simply that the book is based on my personal and business experience, which is rooted in this field and in this one country. 

The book is divided into four parts. In the first part, I’ll share some of the techniques that my company has used to make sense of and predict innovation waves such as the commercialization of the Internet, the rise of open source software, the key drivers behind the renaissance of the web after the dot-com bust and the shift to cloud computing and big data, the Maker movement, and much more. 

I hope to persuade you that understanding the future requires discarding the way you think about the present, giving up ideas that seem natural and even inevitable. 
In the second and third parts, I’ll apply those same techniques to provide a framework for thinking about how technologies such as on-demand services, networks and platforms, and artificial intelligence are changing the nature of business, education, government, financial markets, and the economy as a whole. I’ll talk about the rise of great world-spanning digital platforms ruled by algorithm, and the way that they are reshaping our society. I’ll examine what we can learn about these platforms and the algorithms that rule them from Uber and Lyft, Airbnb, Amazon, Apple, Google, and Facebook. And I’ll talk about the one master algorithm we so take for granted that it has become invisible to us. I’ll try to demystify algorithms and AI, and show how they are not just present in the latest technology platforms but already shape business and our economy far more broadly than most of us understand. 

And I’ll make the case that many of the algorithmic systems that we have put in place to guide our companies and our economy have been designed to disregard the humans and reward the machines. 
In the fourth part of the book, I’ll examine the choices we have to make as a society. Whether we experience the WTF? of astonishment or the WTF? of dismay is not foreordained. It is up to us. It’s easy to blame technology for the problems that occur in periods of great economic transition. But both the problems and the solutions are the result of human choices. During the industrial revolution, the fruits of automation were first used solely to enrich the owners of the machines. 

Workers were often treated as cogs in the machine, to be used up and thrown away. But Victorian England figured out how to do without child labor and to reduce working hours, and its society became more prosperous.

We saw the same thing here in the United States during the twentieth century. We look back now on the good middle-class jobs of the postwar era as something of an anomaly. But they didn’t just happen by chance. It took generations of struggle on the part of workers and activists, and growing wisdom on the part of capitalists, policy makers, political leaders, and the voting public. In the end we made choices as a society to share the fruits of productivity more widely. We also made choices to invest in the future. That golden age of postwar productivity was the result of massive investments in roads and bridges, universal power, water, sanitation, and communications. 

After World War II, we committed enormous resources to rebuild the lands destroyed by war, but we also invested in basic research. We invested in new industries: aerospace, chemicals, computers, and telecommunications. 
We invested in education, so that children could be prepared for the world they were about to inherit. 

The future comes in fits and starts, and it is often when times are darkest that the brightest futures are being born. Out of the ashes of World War II we forged a prosperous world. By choice and hard work, not by destiny. The Great War of a generation earlier had only amplified the cycle of dismay. What was the difference? 
After World War I, we punished the losers. After World War II, we invested in them and raised them up again. After World War I, the United States beggared its returning veterans. After World War II, we sent them to college. 

Wartime technologies such as digital computing were put into the public domain so that they could be transformed into the stuff of the future. The rich taxed themselves to finance the public good. 
In the 1980s, though, the idea that “greed is good” took hold in the United States and we turned away from prosperity. We accepted the idea that what was good for financial markets was good for everyone and structured our economy to drive stock prices ever higher, convincing ourselves that “the market” of stocks, bonds, and derivatives was the same as Adam Smith’s market of real goods and services exchanged by ordinary people. 

We hollowed out the real economy, putting people out of work and capping their wages in service to corporate profits that went to a smaller and smaller slice of society. We made the wrong choice forty years ago. We don’t need to stick with it. 
The rise of a billion people out of poverty in developing economies around the world at the same time that the incomes of ordinary people in most developed economies have been going backward should tell us that we took a wrong turn somewhere. 

The WTF? technologies of the twenty-first century have the potential to turbocharge the productivity of all our industries. But making what we do now more productive is just the beginning. We must share the fruits of that productivity, and use them wisely. If we let machines put us out of work, it will be because of a failure of imagination and a lack of will to make a better future. 
***

PART I – USING THE RIGHT MAPS


The map is not the territory. —Alfred Korzybski 


1 SEEING THE FUTURE IN THE PRESENT 

IN THE MEDIA, I’m often pegged as a futurist. I don’t think of myself that way. I think of myself as a mapmaker. I draw a map of the present that makes it easier to see the possibilities of the future. Maps aren’t just representations of physical locations and routes. They are any system that helps us see where we are and where we are trying to go. 

One of my favorite quotes is from Edwin Schlossberg: “The skill of writing is to create a context in which other people can think.”
This book is a map. We use maps—simplified abstractions of an underlying reality, which they represent—not just in trying to get from one place to another but in every aspect of our lives. When we walk through our darkened home without the need to turn on the light, that is because we have internalized a mental map of the space, the layout of the rooms, the location of every chair and table. 

Similarly, when an entrepreneur or venture capitalist goes to work each day, he or she has a mental map of the technology and business landscape. We divide the world into categories: friend or acquaintance, ally or competitor, important or unimportant, urgent or trivial, future or past. For each category, we have a mental map. But as we’re reminded by the sad stories of people who religiously follow their GPS off a no-longer-existent bridge, maps can be wrong. In business and in technology, we often fail to see clearly what is ahead because we are navigating using old maps and sometimes even bad maps—maps that leave out critical details about our environment or perhaps even actively misrepresent it. Most often, in fast-moving fields like science and technology, maps are wrong simply because so much is unknown. 

Each entrepreneur, each inventor, is also an explorer, trying to make sense of what’s possible, what works and what doesn’t, and how to move forward. Think of the entrepreneurs working to develop the US transcontinental railroad in the mid-nineteenth century. The idea was first proposed in 1832, but it wasn’t even clear that the project was feasible until the 1850s, when the US House of Representatives provided the funding for an extensive series of surveys of the American West, a precursor to any actual construction. Three years of exploration from 1853 to 1855 resulted in the Pacific Railroad Surveys, a twelve-volume collection of data on 400,000 square miles of the American West. 

But all that data did not make the path forward entirely clear. There was fierce debate about the best route, debate that was not just about the geophysical merits of northern versus southern routes but also about the contested extension of slavery. Even when the intended route was decided on and construction began in 1863, there were unexpected problems—a grade steeper than previously reported that was too difficult for a locomotive, weather conditions that made certain routes impassable during the winter. You couldn’t just draw lines on the map and expect everything to work perfectly. 

The map had to be refined and redrawn with more and more layers of essential data added until it was clear enough to act on. Explorers and surveyors went down many false paths before deciding on the final route.

Creating the right map is the first challenge we face in making sense of today’s WTF? technologies. Before we can understand how to deal with AI, on-demand applications, and the disappearance of middle-class jobs, and how these things can come together into a future we want to live in, we have to make sure we aren’t blinded by old ideas. We have to see patterns that cross old boundaries. The map we follow into the future is like a picture puzzle with many of the pieces missing. You can see the rough outline of one pattern over here, and another there, but there are great gaps and you can’t quite make the connections. And then one day someone pours out another set of pieces on the table, and suddenly the pattern pops into focus. 

The difference between a map of an unknown territory and a picture puzzle is that no one knows the full picture in advance. It doesn’t exist until we see it—it’s a puzzle whose pattern we make up together as we go, invented as much as it is discovered. Finding our way into the future is a collaborative act, with each explorer filling in critical pieces that allow others to go forward. 


LISTENING FOR THE RHYMES 

Mark Twain is reputed to have said, “History doesn’t repeat itself, but it often rhymes.” 

Study history and notice its patterns. 
This is the first lesson I learned in how to think about the future. The story of how the term open source software came to be developed, refined, and adopted in early 1998—what it helped us to understand about the changing nature of software, how that new understanding changed the course of the industry, and what it predicted about the world to come—shows how the mental maps we use limit our thinking, and how revising the map can transform the choices we make. 

Before I delve into what is now ancient history, I need you to roll back your mind to 1998. Software was distributed in shrink-wrapped boxes, with new releases coming at best annually, often every two or three years. Only 42% of US households had a personal computer, versus the 80% who own a smartphone today. Only 20% of the US population had a mobile phone of any kind. The Internet was exciting investors—but it was still tiny, with only 147 million users worldwide, versus 3.4 billion today. More than half of all US Internet users had access through AOL. 

Amazon and eBay had been launched three years earlier, but Google was only just founded in September of that year. Microsoft had made Bill Gates, its founder and CEO, the richest man in the world. It was the defining company of the technology industry, with a near-monopoly position in personal computer software that it had leveraged to destroy competitor after competitor. 

The US Justice Department filed an antitrust suit against the company in May of that year, just as it had done nearly thirty years earlier against IBM. 

In contrast to the proprietary software that made Microsoft so successful, open source software is distributed under a license that allows anyone to freely study, modify, and build on it. Examples of open source software include the Linux and Android operating systems; web browsers like Chrome and Firefox; popular programming languages like Python, PHP, and JavaScript; modern big data tools like Hadoop and Spark; and cutting-edge artificial intelligence toolkits like Google’s TensorFlow, Facebook’s Torch, or Microsoft’s CNTK. 

In the early days of computers, most software was open source, though not by that name. Some basic operating software came with a computer, but much of the code that actually made a computer useful was custom software written to solve specific problems. The software written by scientists and researchers in particular was often shared. 

During the late 1970s and 1980s, though, companies had realized that controlling access to software gave them commercial advantage and had begun to close off access using restrictive licenses. In 1985, Richard Stallman, a programmer at the Massachusetts Institute of Technology, published The GNU Manifesto, laying out the principles of what he called “free software”—not free as in price, but free as in freedom: the freedom to study, to redistribute, and to modify software without permission.  

Stallman’s ambitious goal was to build a completely free version of AT&T’s Unix operating system, originally developed at Bell Labs, the research arm of AT&T. At the time Unix was first developed, in the early 1970s, AT&T was a legal monopoly with enormous profits from regulated telephone services. As a result, AT&T was not allowed to compete in the computer industry, then dominated by IBM, and in accord with its 1956 consent decree with the Justice Department had licensed Unix to computer science research groups on generous terms. 

Computer programmers at universities and companies all over the world had responded by contributing key elements to the operating system. But after the decisive consent decree of 1982, in which AT&T agreed to divest its local telephone operations into seven smaller regional companies (“the Baby Bells”) in exchange for being allowed to compete in the computer market, AT&T tried to make Unix proprietary. They sued the University of California, Berkeley, which had built an alternate version of Unix (the Berkeley Software Distribution, or BSD), and effectively tried to shut down the collaborative barn raising that had helped to create the operating system in the first place. 

While Berkeley Unix was stalled by AT&T’s legal attacks, Stallman’s GNU Project (named for the meaningless recursive acronym “Gnu’s Not Unix”) had duplicated all of the key elements of Unix except the kernel, the central code that acts as a kind of traffic cop for all the other software. 
That kernel was supplied by a Finnish computer science student named Linus Torvalds, who in 1991 began writing a minimalist Unix-like kernel designed to be portable to many different computer architectures. He called this operating system Linux. 

Over the next few years, there was a flurry of commercial activity as entrepreneurs seized on the possibilities of a completely free operating system combining Torvalds’s kernel with the Free Software Foundation’s re-creation of the rest of the Unix operating system. The target was no longer AT&T, but rather Microsoft. 
In the early days of the PC industry, IBM and a growing number of personal computer “clone” vendors like Dell and Gateway provided the hardware, Microsoft provided the operating system, and a host of independent software companies provided the “killer apps”—word processing, spreadsheets, databases, and graphics programs—that drove adoption of the new platform. 

Microsoft’s DOS (Disk Operating System) was a key part of the ecosystem, but it was far from in control. That changed with the introduction of Microsoft Windows. Its extensive Application Programming Interfaces (APIs) made application development much easier but locked developers into Microsoft’s platform. Competing operating systems for the PC like IBM’s OS/2 were unable to break the stranglehold. And soon Microsoft used its dominance of the operating system to privilege its own applications—Microsoft Word, Excel, PowerPoint, Access, and, later, Internet Explorer, their web browser (now Microsoft Edge)—by making bundling deals with large buyers. 

The independent software industry for the personal computer was slowly dying, as Microsoft took over one application category after another. 

This is the rhyming pattern that I noticed: The personal computer industry had begun with an explosion of innovation that broke IBM’s monopoly on the first generation of computing, but had ended in another “winner takes all” monopoly. Look for repeating patterns and ask yourself what the next iteration might be. Now everyone was asking whether a desktop version of Linux could change the game. Not only startups but also big companies like IBM, trying to claw their way back to the top of the heap, placed huge bets that they could. 

But there was far more to the Linux story than just competing with Microsoft. It was rewriting the rules of the software industry in ways that no one expected. It had become the platform on which many of the world’s great websites—at the time, most notably Amazon and Google—were being built. 

But it was also reshaping the very way that software was being written. In May 1997, at the Linux Kongress in Würzburg, Germany, hacker Eric Raymond delivered a paper, called “The Cathedral and the Bazaar,” that electrified the Linux community. It laid out a theory of software development drawn from reflections on Linux and on Eric’s own experiences with what later came to be called open source software development. 

Eric wrote: “Who would have thought even five years ago that a world-class operating system could coalesce as if by magic out of part-time hacking by several thousand developers scattered all over the planet, connected only by the tenuous strands of the Internet?”

The Linux community, he continued, “seemed to resemble a great babbling bazaar of differing agendas and approaches (aptly symbolized by the Linux archive sites, who’d take submissions from anyone) out of which a coherent and stable system could seemingly emerge only by a succession of miracles.” 

Eric laid out a series of principles that have, over the past decades, become part of the software development gospel: that software should be released early and often, in an unfinished state rather than waiting to be perfected; that users should be treated as “co-developers”; and that “given enough eyeballs, all bugs are shallow.” 

Today, whether programmers develop open source software or proprietary software, they use tools and approaches that were pioneered by the open source community. But more important, anyone who uses today’s Internet software has experienced these principles at work. When you go to a site like Amazon, Facebook, or Google, you are a participant in the development process in a way that was unknown in the PC era. You are not a “co-developer” in the way that Eric Raymond imagined—you are not another hacker contributing feature suggestions and code. But you are a “beta tester”—someone who tries out continually evolving, unfinished software and gives feedback—at a scale never before imagined. Internet software developers constantly update their applications, testing new features on millions of users, measuring their impact, and learning as they go. 
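To make the “beta tester at scale” idea concrete, here is a minimal sketch in Python of the kind of experiment loop described above. It is my own illustration rather than code from the book or from any real service: the function name assign_bucket, the experiment label, and the conversion rates are all invented for the example.

import hashlib
import random
from collections import defaultdict

def assign_bucket(user_id, experiment, variants=("control", "new_feature")):
    # Hash the user and experiment name into a stable bucket, so each user
    # always sees the same version of the feature.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Simulate feedback from a large user base; in a real service these would be
# logged events (did the user complete the action?), not random draws.
random.seed(1)
outcomes = defaultdict(lambda: [0, 0])  # bucket -> [successes, trials]
for i in range(100_000):
    bucket = assign_bucket(f"user-{i}", "checkout-redesign")
    p = 0.102 if bucket == "new_feature" else 0.100  # assumed effect, purely illustrative
    outcomes[bucket][0] += random.random() < p
    outcomes[bucket][1] += 1

for bucket, (wins, n) in sorted(outcomes.items()):
    print(f"{bucket:12s} conversion rate {wins / n:.4f} over {n:,} users")

The point is not the statistics but the feedback loop: every visit quietly contributes a data point, and the software is revised in response.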

Eric saw that something was changing in the way software was being developed, but in 1997, when he first delivered “The Cathedral and the Bazaar,” it wasn’t yet clear that the principles he articulated would spread far beyond free software, beyond software development itself, shaping content sites like Wikipedia and eventually enabling a revolution in which consumers would become co-creators of services like on-demand transportation (Uber and Lyft) and lodging (Airbnb). 

I was invited to give a talk at the same conference in Würzburg. My talk, titled “Hardware, Software, and Infoware,” was very different. I was fascinated not just with Linux, but with Amazon. Amazon had been built on top of various kinds of free software, including Linux, but it seemed to me to be fundamentally different in character from the kinds of software we’d seen in previous eras of computing. Today it’s obvious to everyone that websites are applications and that the web has become a platform, but in 1997 most people thought of the web browser as the application. If they knew a little bit more about the architecture of the web, they might think of the web server and associated code and data as the application. 

The content was something managed by the browser, in the same way that Microsoft Word manages a document or that Excel lets you create a spreadsheet. By contrast, I was convinced that the content itself was an essential part of the application, and that the dynamic nature of that content was leading to an entirely new architectural design pattern for a next stage beyond software, which at the time I called “infoware.” 
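As a rough illustration of what “infoware” means in practice, here is a minimal sketch, using only Python’s standard library, of a page whose HTML is generated from data at request time. The catalogue contents and handler name are invented for the example; this is my sketch of the general pattern, not O’Reilly’s code.

from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for a catalogue database; the data, not the browser, is the
# heart of the application.
CATALOG = {
    "/books/programming-perl": {"title": "Programming Perl", "price": "$39.95"},
    "/books/cathedral-bazaar": {"title": "The Cathedral and the Bazaar", "price": "$16.95"},
}

class InfowareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        item = CATALOG.get(self.path)
        if item is None:
            self.send_error(404, "No such item")
            return
        # The page is assembled from data on every request rather than
        # existing as a fixed document on disk.
        body = (f"<html><body><h1>{item['title']}</h1>"
                f"<p>Price: {item['price']}</p></body></html>").encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), InfowareHandler).serve_forever()

Change the data and the “application” changes with it: that shift, from software as a fixed artifact to software as a live wrapper around content, is what the term was meant to capture.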

Where Eric was focused on the success of the Linux operating system, and saw it as an alternative to Microsoft Windows, I was particularly fascinated by the success of the Perl programming language in enabling this new paradigm on the web. Perl was originally created by Larry Wall in 1987 and distributed for free over early computer networks. I had published Larry’s book, Programming Perl, in 1991, and was preparing to launch the Perl Conference in the summer of 1997. 

I had been inspired to start the Perl Conference by the chance conjunction of comments by two friends. Early in 1997, Carla Bayha, the computer book buyer at the Borders bookstore chain, had told me that the second edition of Programming Perl, published in 1996, was one of the top 100 books in any category at Borders that year. It struck me as curious that despite this fact, there was virtually nothing written about Perl in any of the computer trade papers. Because there was no company behind Perl, it was virtually invisible to the pundits who followed the industry. 

And then Andrew Schulman, the author of a book called Unauthorized Windows 95, told me something I found equally curious. At that time, Microsoft was airing a series of television commercials about the way that their new technology called ActiveX would “activate the Internet.” The software demos in these ads were actually mostly done with Perl, according to Andrew. It was clear to me that Perl, not ActiveX, was actually at the heart of the way dynamic web content was being delivered.

……

from

WTF?: What’s the Future and Why It’s Up to Us

by Tim O’Reilly. 

get it at Amazon.com 

Automation is more complex than people think. Here’s why – Viktor Weber.

Viktor Weber, Founder & Director, Future Real Estate Institute.

Automation is a topic on which most people have an opinion. The level of knowledge on the subject varies greatly, as does the amount of fear that people feel towards the technological revolution that is taking place.

I get the impression that even informed writers — including myself — in this field often take, for numerous reasons, convenient short-cuts when it comes to writing and talking about automation.

It is therefore time to address the abundance of factors that are influencing, and will continue to influence, how humanity moves forward with automation. They form an intertwined and complex network that allows for multiple outcomes. This article will shine a light on a selection of these parameters.

Medium.com

Will robots bring about the end of work? – Toby Walsh.

Hal Varian, chief economist at Google, has a simple way to predict the future. The future is simply what rich people have today. The rich have chauffeurs. In the future, we will have driverless cars that chauffeur us all around. The rich have private bankers. In the future, we will all have robo-bankers.

One thing we imagine the rich have today is a life of leisure. So will our future be one in which we too have lives of leisure, while the machines do the sweating? Will we be able to spend our time on more important things than simply feeding and housing ourselves?

Let’s turn to another chief economist. Andy Haldane is chief economist at the Bank of England. In November 2015, he predicted that 15 million jobs in the UK, roughly half of all jobs, were under threat from automation. You’d hope he knew what he was talking about.

And he’s not the only one making dire predictions. Politicians. Bankers. Industrialists. They’re all saying a similar thing.

“We need urgently to face the challenge of automation, robotics that could make so much of contemporary work redundant”, said Jeremy Corbyn at the Labour Party Conference in September 2017.

“World Bank data has predicted that the proportion of jobs threatened by automation in India is 69 percent, 77 percent in China and as high as 85 percent in Ethiopia”, according to World Bank president Jim Yong Kim in 2016.

It really does sound like we might be facing the end of work as we know it.

Many of these fears can be traced back to a 2013 study from the University of Oxford. This made a much-quoted prediction that 47% of jobs in the US were under threat of automation in the next two decades. Other more recent and detailed studies have made similar dramatic predictions.

Now, there’s a lot to criticize in the Oxford study. From a technical perspective, some of the report’s predictions are clearly wrong. The report gives a 94% probability that the job of bicycle repair person will be automated in the next two decades. And, as someone trying to build that future, I can reassure any bicycle repair person that there is zero chance that we will automate even small parts of your job anytime soon. The truth of the matter is that no one has any real idea of the number of jobs at risk.

Even if we have as many as 47% of jobs automated, this won’t translate into 47% unemployment. One reason is that we might just work a shorter week. That was the case in the Industrial Revolution. Before the Industrial Revolution, many worked 60 hours per week. After the Industrial Revolution, work reduced to around 40 hours per week. The same could happen with the unfolding AI Revolution.

Another reason that 47% automation won’t translate into 47% unemployment is that all technologies create new jobs as well as destroy them. That’s been the case in the past, and we have no reason to suppose that it won’t be the case in the future. There is, however, no fundamental law of economics that requires the same number of jobs to be created as destroyed. In the past, more jobs were created than destroyed but it doesn’t have to be so in the future.

In the Industrial Revolution, machines took over many of the physical tasks we used to do. But we humans were still left with all the cognitive tasks. This time, as machines start to take on many of the cognitive tasks too, there’s the worrying question: what is left for us humans?

Some of my colleagues suggest there will be plenty of new jobs like robot repair person. I am entirely unconvinced by such claims. The thousands of people who used to paint and weld in most of our car factories got replaced by only a couple of robot repair people.

No, the new jobs will have to be ones where either humans excel or where we choose not to have machines. But here’s the contradiction. In fifty to a hundred years’ time, machines will be superhuman. So it’s hard to imagine any job in which humans will remain better than machines. This means the only jobs left will be those where we prefer humans to do them.

The AI Revolution, then, will be about rediscovering the things that make us human. Technically, machines will have become amazing artists. They will be able to write music to rival Bach and produce paintings to match Picasso. But we’ll still prefer works produced by human artists.

These works will speak to the human experience. We will appreciate a human artist who speaks about love because we have this in common. No machine will truly experience love like we do.

As well as the artistic, there will be a re-appreciation of the artisan. Indeed, we see the beginnings of this already in hipster culture. We will appreciate more and more those things made by the human hand. Mass-produced goods made by machine will become cheap. But items made by hand will be rare and increasingly valuable.

Finally as social animals, we will also increasingly appreciate and value social interactions with other humans. So the most important human traits will be our social and emotional intelligence, as well as our artistic and artisan skills. The irony is that our technological future will not be about technology but all about our humanity.

***

Toby Walsh is Professor of Artificial Intelligence at the University of New South Wales, in Sydney, Australia.

His new book, "Android Dreams: The Past, Present and Future of Artificial Intelligence", was published in the UK by Hurst Publishers in September 2017.

The Guardian

No, wealth isn’t created at the top. It is merely devoured there – Rutger Bregman. 

This piece is about one of the biggest taboos of our times. About a truth that is seldom acknowledged, and yet, on reflection, cannot be denied. The truth that we are living in an inverse welfare state.

These days, politicians from the left to the right assume that most wealth is created at the top. By the visionaries, by the job creators, and by the people who have “made it”. By the go-getters oozing talent and entrepreneurialism who are helping to advance the whole world.

Now, we may disagree about the extent to which success deserves to be rewarded – the philosophy of the left is that the strongest shoulders should bear the heaviest burden, while the right fears high taxes will blunt enterprise – but across the spectrum virtually all agree that wealth is created primarily at the top.

So entrenched is this assumption that it’s even embedded in our language. When economists talk about “productivity”, what they really mean is the size of your paycheck. And when we use terms like “welfare state”, “redistribution” and “solidarity”, we’re implicitly subscribing to the view that there are two strata: the makers and the takers, the producers and the couch potatoes, the hardworking citizens – and everybody else.

In reality, it is precisely the other way around. In reality, it is the waste collectors, the nurses, and the cleaners whose shoulders are supporting the apex of the pyramid. They are the true mechanism of social solidarity. Meanwhile, a growing share of those we hail as “successful” and “innovative” are earning their wealth at the expense of others. The people getting the biggest handouts are not down around the bottom, but at the very top. Yet their perilous dependence on others goes unseen. Almost no one talks about it. Even for politicians on the left, it’s a non-issue.

To understand why, we need to recognise that there are two ways of making money. The first is what most of us do: work. That means tapping into our knowledge and know-how (our “human capital” in economic terms) to create something new, whether that’s a takeout app, a wedding cake, a stylish updo, or a perfectly poured pint. To work is to create. Ergo, to work is to create new wealth.

But there is also a second way to make money. That’s the rentier way: by leveraging control over something that already exists, such as land, knowledge, or money, to increase your wealth. You produce nothing, yet profit nonetheless. By definition, the rentier makes his living at others’ expense, using his power to claim economic benefit.

For those who know their history, the term “rentier” conjures associations with heirs to estates, such as the 19th century’s large class of useless rentiers, well-described by the French economist Thomas Piketty. These days, that class is making a comeback. (Ironically, however, conservative politicians adamantly defend the rentier’s right to lounge around, deeming inheritance tax to be the height of unfairness.) But there are also other ways of rent-seeking. From Wall Street to Silicon Valley, from big pharma to the lobby machines in Washington and Westminster, zoom in and you’ll see rentiers everywhere.

There is no longer a sharp dividing line between working and rentiering. In fact, the modern-day rentier often works damn hard. Countless people in the financial sector, for example, apply great ingenuity and effort to amass “rent” on their wealth. Even the big innovations of our age – businesses like Facebook and Uber – are interested mainly in expanding the rentier economy. The problem with most rich people, therefore, is not that they are couch potatoes. Many a CEO toils 80 hours a week to multiply his allowance. It’s hardly surprising, then, that they feel wholly entitled to their wealth.

It may take quite a mental leap to see our economy as a system that shows solidarity with the rich rather than the poor. So I’ll start with the clearest illustration of modern freeloaders at the top: bankers. Studies conducted by the International Monetary Fund and the Bank for International Settlements – not exactly leftist thinktanks – have revealed that much of the financial sector has become downright parasitic: instead of creating wealth, it gobbles wealth up whole.

Don’t get me wrong. Banks can help to gauge risks and get money where it is needed, both of which are vital to a well-functioning economy. But consider this: economists tell us that the optimum level of total private-sector debt is 100% of GDP. Beyond that benchmark, further growth of the financial sector brings less wealth, not more. So here’s the bad news. In the United Kingdom, private-sector debt now stands at 157.5% of GDP. In the United States the figure is 188.8%.

In other words, a big part of the modern banking sector is essentially a giant tapeworm gorging on a sick body. It’s not creating anything new, merely sucking others dry. Bankers have found a hundred and one ways to accomplish this. The basic mechanism, however, is always the same: offer loans like it’s going out of style, which in turn inflates the price of things like houses and shares, then earn a tidy percentage off those overblown prices (in the form of interest, commissions, brokerage fees, or what have you), and if the shit hits the fan, let Uncle Sam mop it up.

The financial innovation concocted by all the math whizzes working in modern banking (instead of at universities or companies that contribute to real prosperity) basically boils down to maximising the total amount of debt. And debt, of course, is a means of earning rent. So for those who believe that pay ought to be proportionate to the value of work, the conclusion we have to draw is that many bankers should be earning a negative salary; a fine, if you will, for destroying more wealth than they create.

Bankers are the most obvious class of closet freeloaders, but they are certainly not alone. Many a lawyer and accountant wields a similar revenue model. Take tax evasion. Untold hardworking, academically degreed professionals make a good living at the expense of the populations of other countries. Or take the tide of privatisations over the past three decades, which have been all but carte blanche for rentiers. One of the richest people in the world, Carlos Slim, made his fortune by obtaining a monopoly of the Mexican telecom market and then hiking prices sky high. The same goes for the Russian oligarchs who rose after the Berlin Wall fell, who bought up valuable state-owned assets for a song to live off the rent.

But here comes the rub. Most rentiers are not as easily identified as the greedy banker or manager. Many are disguised. On the face of it, they look like industrious folks, because for part of the time they really are doing something worthwhile. Precisely that makes us overlook their massive rent-seeking.

Take the pharmaceutical industry. Companies like GlaxoSmithKline and Pfizer regularly unveil new drugs, yet most real medical breakthroughs are made quietly at government-subsidised labs. Private companies mostly manufacture medications that resemble what we’ve already got. They get these patented and, with a hefty dose of marketing, a legion of lawyers, and a strong lobby, can live off the profits for years. In other words, the vast revenues of the pharmaceutical industry are the result of a tiny pinch of innovation and fistfuls of rent.

Even paragons of modern progress like Apple, Amazon, Google, Facebook, Uber and Airbnb are woven from the fabric of rentierism. Firstly, because they owe their existence to government discoveries and inventions (every sliver of fundamental technology in the iPhone, from the internet to batteries and from touchscreens to voice recognition, was invented by researchers on the government payroll). And second, because they tie themselves into knots to avoid paying taxes, retaining countless bankers, lawyers, and lobbyists for this very purpose.

Even more important, many of these companies function as “natural monopolies”, operating in a positive feedback loop of increasing growth and value as more and more people contribute free content to their platforms. Companies like this are incredibly difficult to compete with, because as they grow bigger, they only get stronger.
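A toy simulation makes that feedback loop easier to see. The sketch below is my own illustration, not Bregman’s: it assumes each new user joins the platform whose existing network exerts the stronger pull, with attractiveness growing faster than linearly in the number of users, so that a small early lead compounds.

import random

random.seed(0)
users = {"PlatformA": 55, "PlatformB": 45}  # assumed small initial edge

for _ in range(10_000):
    # Attractiveness grows roughly with the square of the existing user base
    # (each user values all the other users they can reach).
    weight_a = users["PlatformA"] ** 2
    weight_b = users["PlatformB"] ** 2
    pick_a = random.random() < weight_a / (weight_a + weight_b)
    users["PlatformA" if pick_a else "PlatformB"] += 1

total = sum(users.values())
for name, count in users.items():
    print(f"{name}: {count:,} users ({count / total:.1%})")

In almost every run the early leader ends up with the overwhelming majority of the new arrivals, which is why such platforms only become harder to compete with as they grow.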

Aptly characterising this “platform capitalism” in an article, Tom Goodwin writes: “Uber, the world’s largest taxi company, owns no vehicles. Facebook, the world’s most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world’s largest accommodation provider, owns no real estate.”

So what do these companies own? A platform. A platform that lots and lots of people want to use. Why? First and foremost, because they’re cool and they’re fun – and in that respect, they do offer something of value. However, the main reason why we’re all happy to hand over free content to Facebook is because all of our friends are on Facebook too, because their friends are on Facebook … because their friends are on Facebook.

Most of Mark Zuckerberg’s income is just rent collected off the millions of picture and video posts that we give away daily for free. And sure, we have fun doing it. But we also have no alternative – after all, everybody is on Facebook these days. Zuckerberg has a website that advertisers are clamouring to get onto, and that doesn’t come cheap. Don’t be fooled by endearing pilots with free internet in Zambia. Stripped down to essentials, it’s an ordinary ad agency. In fact, in 2015 Google and Facebook pocketed an astounding 64% of all online ad revenue in the US.

But don’t Google and Facebook make anything useful at all? Sure they do. The irony, however, is that their best innovations only make the rentier economy even bigger. They employ scores of programmers to create new algorithms so that we’ll all click on more and more ads. Uber has usurped the whole taxi sector just as Airbnb has upended the hotel industry and Amazon has overrun the book trade. The bigger such platforms grow the more powerful they become, enabling the lords of these digital feudalities to demand more and more rent.

Think back a minute to the definition of a rentier: someone who uses their control over something that already exists in order to increase their own wealth. The feudal lord of medieval times did that by building a tollgate along a road and making everybody who passed by pay. Today’s tech giants are doing basically the same thing, but transposed to the digital highway. Using technology funded by taxpayers, they build tollgates between you and other people’s free content and all the while pay almost no tax on their earnings.

This is the so-called innovation that has Silicon Valley gurus in raptures: ever bigger platforms that claim ever bigger handouts. So why do we accept this? Why does most of the population work itself to the bone to support these rentiers?

I think there are two answers. Firstly, the modern rentier knows to keep a low profile. There was a time when everybody knew who was freeloading. The king, the church, and the aristocrats controlled almost all the land and made peasants pay dearly to farm it. But in the modern economy, making rentierism work is a great deal more complicated. How many people can explain a credit default swap, or a collateralized debt obligation?  Or the revenue model behind those cute Google Doodles? And don’t the folks on Wall Street and in Silicon Valley work themselves to the bone, too? Well then, they must be doing something useful, right?

Maybe not. The typical workday of Goldman Sachs’ CEO may be worlds away from that of King Louis XIV, but their revenue models both essentially revolve around obtaining the biggest possible handouts. “The world’s most powerful investment bank,” wrote the journalist Matt Taibbi about Goldman Sachs, “is a great vampire squid wrapped around the face of humanity, relentlessly jamming its blood funnel into anything that smells like money.”

But far from squids and vampires, the average rich freeloader manages to masquerade quite successfully as a decent hard worker. He goes to great lengths to present himself as a “job creator” and an “investor” who “earns” his income by virtue of his high “productivity”. Most economists, journalists, and politicians from left to right are quite happy to swallow this story. Time and again language is twisted around to cloak funneling and exploitation as creation and generation.

However, it would be wrong to think that all this is part of some ingenious conspiracy. Many modern rentiers have convinced even themselves that they are bona fide value creators. When current Goldman Sachs CEO Lloyd Blankfein was asked about the purpose of his job, his straight-faced answer was that he is “doing God’s work”. The Sun King would have approved.

The second thing that keeps rentiers safe is even more insidious. We’re all wannabe rentiers. They have made millions of people complicit in their revenue model. Consider this: What are our financial sector’s two biggest cash cows? Answer: the housing market and pensions. Both are markets in which many of us are deeply invested.

Recent decades have seen more and more people contract debts to buy a home, and naturally it’s in their interest if house prices continue to scale new heights (read: burst bubble upon bubble). The same goes for pensions. Over the past few decades we’ve all scrimped and saved up a mountainous pension piggy bank. Now pension funds are under immense pressure to ally with the biggest exploiters in order to ensure they pay out enough to please their investors.

The fact of the matter is that feudalism has been democratised. To a lesser or greater extent, we are all dependent on handouts. En masse, we have been made complicit in this exploitation by the rentier elite, resulting in a political covenant between the rich rent-seekers and the homeowners and retirees.

Don’t get me wrong, most homeowners and retirees are not benefiting from this situation. On the contrary, the banks are bleeding them far beyond the extent to which they themselves profit from their houses and pensions. Still, it’s hard to point fingers at a kleptomaniac when you have sticky fingers too.

So why is this happening? The answer can be summed up in three little words: Because it can.

Rentierism is, in essence, a question of power. That the Sun King Louis XIV was able to exploit millions was purely because he had the biggest army in Europe. It’s no different for the modern rentier. He’s got the law, politicians and journalists squarely in his court. That’s why bankers get fined peanuts for preposterous fraud, while a mother on government assistance gets penalised within an inch of her life if she checks the wrong box.

The biggest tragedy of all, however, is that the rentier economy is gobbling up society’s best and brightest. Where once upon a time Ivy League graduates chose careers in science, public service or education, these days they are more likely to opt for banks, law firms, or trumped-up ad agencies like Google and Facebook. When you think about it, it’s insane. We are forking over billions in taxes to help our brightest minds on and up the corporate ladder so they can learn how to score ever more outrageous handouts.

One thing is certain: countries where rentiers gain the upper hand gradually fall into decline. Just look at the Roman Empire. Or Venice in the 15th century. Look at the Dutch Republic in the 18th century. Like a parasite stunts a child’s growth, so the rentier drains a country of its vitality.

What innovation remains in a rentier economy is mostly just concerned with further bolstering that very same economy. This may explain why the big dreams of the 1970s, like flying cars, curing cancer, and colonising Mars, have yet to be realised, while bankers and ad-makers have at their fingertips technologies a thousand times more powerful.

Yet it doesn’t have to be this way. Tollgates can be torn down, financial products can be banned, tax havens dismantled, lobbies tamed, and patents rejected. Higher taxes on the ultra-rich can make rentierism less attractive, precisely because society’s biggest freeloaders are at the very top of the pyramid. And we can more fairly distribute our earnings on land, oil, and innovation through a system of, say, employee shares, or a universal basic income. 

But such a revolution will require a wholly different narrative about the origins of our wealth. It will require ditching the old-fashioned notion of “solidarity” as charity extended to a miserable underclass borne aloft on the salaried shoulders of society’s strongest. All we need to do is to give real hard-working people what they deserve.

And, yes, by that I mean the waste collectors, the nurses, the cleaners – theirs are the shoulders that carry us all.

The Guardian