WTF?: What’s the Future and Why It’s Up to Us – Tim O’Reilly

ABOUT THE BOOK 

Renowned as ‘the Oracle of Silicon Valley’, Tim O’Reilly has spent three decades exploring the world-transforming power of information technology. 

Now, the leading thinker of the internet age turns his eye to the future – and asks the questions that will frame the next stage of the digital revolution: 

Will increased automation destroy jobs or create new opportunities? 

What will the company of tomorrow look like? 

Is a world dominated by algorithms to be welcomed or feared? 

How can we ensure that technology serves people, rather than the other way around? 

How can we all become better at mapping future trends? 
Tim O’Reilly’s insights create an authoritative, compelling and often surprising portrait of the world we will soon inhabit, highlighting both the many pitfalls and the enormous opportunities that lie ahead. 

ABOUT THE AUTHOR 

TIM O’REILLY is one of the world’s most influential tech analysts. As the founder of the publishing company O’Reilly Media, he became known for spotting technologies with world-shaking potential – from predicting the rise of the internet in the 1990s to coining and popularising terms like ‘Web 2.0’ and ‘Open Source’ in the 2000s. WTF? is his first book aimed at the general reader.

***

INTRODUCTION: THE WTF? ECONOMY 

THIS MORNING, I spoke out loud to a $150 device in my kitchen, told it to check if my flight was on time, and asked it to call a Lyft to take me to the airport. A car showed up a few minutes later, and my smartphone buzzed to let me know it had arrived. And in a few years, that car might very well be driving itself. 

Someone seeing this for the first time would have every excuse to say, “WTF?” 
At times, “WTF?” is an expression of astonishment. But many people reading the news about technologies like artificial intelligence and self-driving cars and drones feel a profound sense of unease and even dismay. They worry about whether their children will have jobs, or whether the robots will have taken them all. 

They are also saying “WTF?” but in a very different tone of voice. It is an expletive. 
Astonishment: phones that give advice about the best restaurant nearby or the fastest route to work today; artificial intelligences that write news stories or advise doctors; 3-D printers that make replacement parts—for humans; gene editing that can cure disease or bring extinct species back to life; new forms of corporate organization that marshal thousands of on-demand workers so that consumers can summon services at the push of a button in an app. 

Dismay: the fear that robots and AIs will take away jobs, reward their owners richly, and leave formerly middle-class workers part of a new underclass; tens of millions of jobs here in the United States that don’t pay people enough to live on; little-understood financial products and profit-seeking algorithms that can take down the entire world economy and drive millions of people from their homes; a surveillance society that tracks our every move and stores it in corporate and government databases.

Everything is amazing, everything is horrible, and it’s all moving too fast. We are heading pell-mell toward a world shaped by technology in ways that we don’t understand and have many reasons to fear. 

WTF? Google AlphaGo, an artificial intelligence program, beat the world’s best human Go player, an event that was widely predicted to be at least twenty years in the future—until it happened in 2016. If AlphaGo can happen twenty years early, what else might hit us even sooner than we expect? 

For starters: An AI running on a $35 Raspberry Pi computer beat a top US Air Force fighter pilot trainer in combat simulation. The world’s largest hedge fund has announced that it wants an AI to make three-fourths of management decisions, including hiring and firing. 
Oxford University researchers estimate that up to 47% of human tasks, including many components of white-collar jobs, may be done by machines within as little as twenty years. WTF? 

Uber has put taxi drivers out of work by replacing them with ordinary people offering rides in their own cars, creating millions of part-time jobs worldwide. Yet Uber is intent on eventually replacing those on-demand drivers with completely automated vehicles. WTF? 

Without owning a single room, Airbnb has more rooms on offer than some of the largest hotel groups in the world. Airbnb has under 3,000 employees, while Hilton has 152,000. New forms of corporate organization are outcompeting businesses based on best practices that we’ve followed for the lifetimes of most business leaders. WTF? 

Social media algorithms may have affected the outcome of the 2016 US presidential election. WTF? 
While new technologies are making some people very rich, incomes have stagnated for ordinary people, and for the first time, children in developed countries are on track to earn less than their parents. 
What do AI, self-driving cars, on-demand services, and income inequality have in common? They are telling us, loud and clear, that we’re in for massive changes in work, business, and the economy. 

But just because we can see that the future is going to be very different doesn’t mean that we know exactly how it’s going to unfold, or when. Perhaps “WTF?” really stands for “What’s the Future?” Where is technology taking us? Is it going to fill us with astonishment or dismay? And most important, what is our role in deciding that future? How do we make choices today that will result in a world we want to live in? 

I’ve spent my career as a technology evangelist, book publisher, conference producer, and investor wrestling with questions like these. My company, O’Reilly Media, works to identify important innovations, and by spreading knowledge about them, to amplify their impact and speed their adoption. And we’ve tried to sound a warning when a failure to understand how technology is changing the rules for business or society is leading us down the wrong path. 

In the process, we’ve watched numerous technology booms and busts, and seen companies go from seemingly unstoppable to irrelevant, while early-stage technologies that no one took seriously went on to change the world. 
If all you read are the headlines, you might have the mistaken idea that how highly investors value a company is the key to understanding which technologies really matter. We hear constantly that Uber is “worth” $68 billion, more than General Motors or Ford; Airbnb is “worth” $30 billion, more than Hilton Hotels and almost as much as Marriott. 

Those huge numbers can make the companies seem inevitable, with their success already achieved. But it is only when a business becomes profitably self-sustaining, rather than subsidized by investors, that we can be sure that it is here to stay. After all, after eight years Uber is still losing $2 billion every year in its race to get to worldwide scale. That’s an amount that dwarfs the losses of companies like Amazon (which lost $2.9 billion over its first five years before showing its first profits in 2001). 

Is Uber losing money like Amazon, which went on to become a hugely successful company that transformed retailing, publishing, and enterprise computing, or like a dot-com company that was destined to fail? Is the enthusiasm of its investors a sign of a fundamental restructuring of the nature of work, or a sign of an investment mania like the one leading up to the dot-com bust in 2001? How do we tell the difference? 

Startups with a valuation of more than a billion dollars understandably get a lot of attention, even more so now that they have a name, unicorn, the term du jour in Silicon Valley. Fortune magazine started keeping a list of companies with that exalted status. Silicon Valley news site TechCrunch has a constantly updated “Unicorn Leaderboard.” But even when these companies succeed, they may not be the surest guide to the future. 

At O’Reilly Media, we learned to tune in to very different signals by watching the innovators who first brought us the Internet and the open source software that made it possible. They did what they did out of love and curiosity, not a desire to make a fortune. We saw that radically new industries don’t start when creative entrepreneurs meet venture capitalists. They start with people who are infatuated with seemingly impossible futures. Those who change the world are people who are chasing a very different kind of unicorn, far more important than the Silicon Valley billion-dollar valuation (though some of them will achieve that too). It is the breakthrough, once remarkable, that becomes so ubiquitous that eventually it is taken for granted. 

Tom Stoppard wrote eloquently about a unicorn of this sort in his play Rosencrantz & Guildenstern Are Dead: A man breaking his journey between one place and another at a third place of no name, character, population or significance, sees a unicorn cross his path and disappear …. “My God,” says a second man, “I must be dreaming, I thought I saw a unicorn.” At which point, a dimension is added that makes the experience as alarming as it will ever be. A third witness, you understand, adds no further dimension but only spreads it thinner, and a fourth thinner still, and the more witnesses there are the thinner it gets and the more reasonable it becomes until it is as thin as reality, the name we give to the common experience. 
The world today is full of things that once made us say “WTF?” but are already well on their way to being the stuff of daily life. The Linux operating system was a unicorn. It seemed downright impossible that a decentralized community of programmers could build a world-class operating system and give it away for free. Now billions of people rely on it. 

The World Wide Web was a unicorn, even though it didn’t make Tim Berners-Lee a billionaire. I remember showing the World Wide Web at a technology conference in 1993, clicking on a link, and saying, “That picture just came over the Internet all the way from the University of Hawaii.” People didn’t believe it. They thought we were making it up. Now everyone expects that you can click on a link to find out anything at any time. 

Google Maps was a unicorn. On the bus not long ago, I watched one old man show another how the little blue dot in Google Maps followed us along as the bus moved. The newcomer to the technology was amazed. Most of us now take it for granted that our phones know exactly where we are, and not only can give us turn-by-turn directions exactly to our destination—by car, by public transit, by bicycle, and on foot—but also can find restaurants or gas stations nearby or notify our friends where we are in real time. 

The original iPhone was a unicorn even before the introduction of the App Store a year later utterly transformed the smartphone market. Once you experienced the simplicity of swiping and touching the screen rather than a tiny keyboard, there was no going back. The original pre-smartphone cell phone itself was a unicorn. As were its predecessors, the telephone and telegraph, radio and television. 

We forget. We forget quickly. And we forget ever more quickly as the pace of innovation increases. AI-powered personal agents like Amazon’s Alexa, Apple’s Siri, the Google Assistant, and Microsoft Cortana are unicorns. Uber and Lyft too are unicorns, but not because of their valuation. Unicorns are the kinds of apps that make us say, “WTF?” in a good way. Can you still remember the first time you realized that you could get the answer to virtually any question with a quick Internet search, or that your phone could route you to any destination? How cool that was, before you started taking it for granted? And how quickly did you move from taking it for granted to complaining about it when it doesn’t work quite right? 
We are layering on new kinds of magic that are slowly fading into the ordinary. A whole generation is growing up that thinks nothing of summoning cars or groceries with a smartphone app, or buying something from Amazon and having it show up in a couple of hours, or talking to AI-based personal assistants on their devices and expecting to get results. 

It is this kind of unicorn that I’ve spent my career in technology pursuing. 
So what makes a real unicorn of this amazing kind?

1. It seems unbelievable at first.
2. It changes the way the world works.
3. It results in an ecosystem of new services, jobs, business models, and industries. 

We’ve talked about the “at first unbelievable” part. What about changing the world? In Who Do You Want Your Customers to Become? Michael Schrage writes: Successful innovators don’t ask customers and clients to do something different; they ask them to become someone different …. Successful innovators ask users to embrace—or at least tolerate—new values, new skills, new behaviors, new vocabulary, new ideas, new expectations, and new aspirations. They transform their customers. 

For example, Schrage points out that Apple (and now also Google and Microsoft and Amazon) asks its “customers to become the sort of people who wouldn’t think twice about talking to their phone as a sentient servant.” Sure enough, there is a new generation of users who think nothing of saying things like: “Siri, make me a six p.m. reservation for two at Camino.” “Alexa, play ‘Ballad of a Thin Man.’” “Okay, Google, remind me to buy currants the next time I’m at Piedmont Grocery.” 

Correctly recognizing human speech alone is hard, but listening and then performing complex actions in response—for millions of simultaneous users—requires incredible computing power provided by massive data centers. Those data centers support an ever-more-sophisticated digital infrastructure. For Google to remind me to buy currants the next time I’m at my local supermarket, it has to know where I am at all times, keep track of a particular location I’ve asked for, and bring up the reminder in that context. For Siri to make me a reservation at Camino, it needs to know that Camino is a restaurant in Oakland, and that it is open tonight, and it must allow conversations between machines, so that my phone can lay claim to a table from the restaurant’s reservation system via a service like OpenTable. 

And then it may call other services, either on my devices or in the cloud, to add the reservation to my calendar or to notify friends, so that yet another agent can remind all of us when it is time to leave for our dinner date. 
And then there are the alerts that I didn’t ask for, like Google’s warnings: “Leave now to get to the airport on time. 25 minute delay on the Bay Bridge.” or “There is traffic ahead. Faster route available.” 
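
To make concrete how much machinery sits behind even one of these conveniences, here is a minimal sketch in Python of a single small piece: deciding whether a saved, location-tagged reminder should fire when the phone reports a new position fix. Everything here (the class, the coordinates, the 200-meter radius) is invented for illustration; it is not how Google or Apple actually build these services, which run across fleets of devices and massive data centers.

```python
# Hypothetical sketch of a location-triggered reminder ("buy currants the next
# time I'm at Piedmont Grocery"). Names, coordinates, and the radius are invented.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt


@dataclass
class Reminder:
    text: str          # e.g., "buy currants"
    place_lat: float   # latitude of the saved place
    place_lon: float   # longitude of the saved place


def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))


def due_reminders(current_lat, current_lon, reminders, radius_km=0.2):
    """Return reminders whose saved place is within radius_km of the user."""
    return [r for r in reminders
            if distance_km(current_lat, current_lon, r.place_lat, r.place_lon) <= radius_km]


# Each new location fix from the phone triggers a check like this one.
saved = [Reminder("buy currants", 37.8272, -122.2442)]   # coordinates are illustrative
for r in due_reminders(37.8273, -122.2441, saved):
    print("Reminder:", r.text)
```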
All of these technologies are additive, and addictive. As they interconnect and layer on each other, they become increasingly powerful, increasingly magical. Once you become accustomed to each new superpower, life without it is like having your magic wand turn into a stick again. These services have been created by human programmers, but they will increasingly be enabled by artificial intelligence. 

That’s a scary word to many people. But it is the next step in the progression of the unicorn from the astonishing to the ordinary. 
While the term artificial intelligence or AI suggests a truly autonomous intelligence, we are far, far from that eventuality. 
AI is still just a tool, still subject to human direction. The nature of that direction, and how we must exercise it, is a key subject of this book. AI and other unicorn technologies have the potential to make a better world, in the same way that the technologies of the first industrial revolution created wealth for society that was unimaginable two centuries ago. AI bears the same relationship to previous programming techniques that the internal combustion engine does to the steam engine. It is far more versatile and powerful, and over time we will find ever more uses for it. 

Will we use it to make a better world? Or will we use it to amplify the worst features of today’s world? So far, the “WTF?” of dismay seems to have the upper hand. “Everything is amazing,” and yet we are deeply afraid. Sixty-three percent of Americans believe jobs are less secure now than they were twenty to thirty years ago. By a two-to-one ratio, people think good jobs are difficult to find where they live. And many of them blame technology. There is a constant drumbeat of news that tells us that the future is one in which increasingly intelligent machines will take over more and more human work. 
The pain is already being felt. For the first time, life expectancy is actually declining in America, and what was once its rich industrial heartland has too often become a landscape of despair.  For everyone’s sake, we must choose a different path. Loss of jobs and economic disruption are not inevitable. 

There is a profound failure of imagination and will in much of today’s economy. For every Elon Musk—who wants to reinvent the world’s energy infrastructure, build revolutionary new forms of transport, and settle humans on Mars—there are far too many companies that are simply using technology to cut costs and boost their stock price, enriching those able to invest in financial markets at the expense of an ever-growing group that may never be able to do so. Policy makers seem helpless, assuming that the course of technology is inevitable, rather than something we must shape. 

And that gets me to the third characteristic of true unicorns: They create value. Not just financial value, but real-world value for society. Consider past marvels. Could we have moved goods as easily or as quickly without modern earthmoving equipment letting us bore tunnels through mountains or under cities? The superpower of humans + machines made it possible to build cities housing tens of millions of people, for a tiny fraction of our people to work producing the food that all the rest of us eat, and to create a host of other wonders that have made the modern world the most prosperous time in human history. 

Technology is going to take our jobs! Yes. It always has, and the pain and dislocation are real. But it is going to make new kinds of jobs possible. History tells us technology kills professions, but does not kill jobs. We will find things to work on that we couldn’t do before but now can accomplish with the help of today’s amazing technologies. 

Take, for example, laser eye surgery. I used to be legally blind without huge Coke-bottle glasses. Twelve years ago, my eyes were fixed by a surgeon who, with the aid of a robot, was able to do something that had previously been impossible. After more than forty years of wearing glasses so strong that I was legally blind without them, I could see clearly on my own. I kept saying to myself for months afterward, “I’m seeing with my own eyes!” But in order to remove my need for prosthetic vision, the surgeon ended up relying on prosthetics of her own, performing the surgery on my cornea with the aid of a computer-controlled laser. 

During the actual surgery, apart from lifting the flap she had cut by hand in the surface of my cornea and smoothing it back into place after the laser was done, her job was to clamp open my eyes, hold my head, utter reassuring words, and tell me, sometimes with urgency, to keep looking at the red light. I asked what would happen if my eyes drifted and I didn’t stay focused on the light. “Oh, the laser would stop,” she said. “It only fires when your eyes are tracking the dot.” 

Surgery this sophisticated could never be done by an unaugmented human being. The human touch of my superb doctor was paired with the superhuman accuracy of complex machines, a twenty-first-century hybrid freeing me from assistive devices first invented eight centuries earlier in Italy. 

The revolution in sensors, computers, and control technologies is going to make many of the daily activities of the twentieth century seem quaint as, one by one, they are reinvented in the twenty-first. This is the true opportunity of technology: It extends human capability. 
In the debate about technology and the shape of the future, it’s easy to forget just how much technology already suffuses our lives, how much it has already changed us. As we get past that moment of amazement, and it fades into the new normal, we must put technology to work solving new problems. We must commit to building something new, strange to our past selves, but better, if we commit to making it so. We must keep asking: What will new technology let us do that was previously impossible? Will it help us build the kind of society we want to live in? This is the secret to reinventing the economy. 

As Google chief economist Hal Varian said to me, “My grandfather wouldn’t recognize what I do as work.” What are the new jobs of the twenty-first century? Augmented reality—the overlay of computer-generated data and images on what we see—may give us a clue. It definitely meets the WTF? test. The first time a venture capitalist friend of mine saw one unreleased augmented reality platform in the lab, he said, “If LSD were a stock, I’d be shorting it.” That’s a unicorn. But what is most exciting to me about this technology is not the LSD factor, but how augmented reality can change the way we work. 

You can imagine how augmented reality could enable workers to be “upskilled.” I’m particularly fond of imagining how the model used by Partners in Health could be turbocharged by augmented reality and telepresence. The organization provides free healthcare to people in poverty using a model in which community health workers recruited from the population being served are trained and supported in providing primary care. Doctors can be brought in as needed, but the bulk of care is provided by ordinary people. Imagine a community health worker who is able to tap on Google Glass or some next-generation wearable, and say, “Doctor, you need to see this!” (Trust me. Glass will be back, when Google learns to focus on community health workers, not fashion models.) 

It’s easy to imagine how rethinking our entire healthcare system along these lines could reduce costs, improve both health outcomes and patient satisfaction, and create jobs. Imagine house calls coming back into fashion. Add in health monitoring by wearable sensors, health advice from an AI made as available as Siri, the Google Assistant, or Microsoft Cortana, plus an Uber-style on-demand service, and you can start to see the outlines of one small segment of the next economy being brought to us by technology. 

This is only one example of how we might reinvent familiar human activities, creating new marvels that, if we are lucky, will eventually fade into the texture of everyday life, just like wonders of a previous age such as airplanes and skyscrapers, elevators, automobiles, refrigerators, and washing machines.
***

Despite their possible wonders, many of the futures we face are fraught with unknown risks. 

I am a classicist by training, and the fall of Rome is always before me. The first volume of Gibbon’s Decline and Fall of the Roman Empire was published in 1776, the same year as the American Revolution. 
Despite Silicon Valley’s dreams of a future singularity, an unknowable fusion of minds and machines that will mark the end of history as we know it, what history teaches us is that economies and nations, not just companies, can fail. Great civilizations do collapse. Technology can go backward. After the fall of Rome, the ability to make monumental structures out of concrete was lost for nearly a thousand years. It could happen to us. 

We are increasingly facing what planners call “wicked problems”—problems that are “difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often difficult to recognize.” Even long-accepted technologies turn out to have unforeseen downsides. The automobile was a unicorn. It afforded ordinary people enormous freedom of movement, led to an infrastructure for transporting goods that spread prosperity, and enabled a consumer economy where goods could be produced far away from where they are consumed. 

Yet the roads we built to enable the automobile carved up and hollowed out cities, led to more sedentary lifestyles, and contributed mightily to the overpowering threat of climate change. Ditto cheap air travel, container shipping, the universal electric grid. All of these were enormous engines of prosperity that brought with them unintended consequences that only came to light over many decades of painful experience, by which time any solution seems impossible to attempt because the disruption required to reverse course would be so massive. We face a similar set of paradoxes today. 

The magical technologies of today—and choices we’ve already made, decades ago, about what we value as a society—are leading us down a path with complex contingencies, unseen dangers, and decisions that we don’t even know we are making. 

AI and robotics in particular are at the heart of a set of wicked problems that are setting off alarm bells among business and labor leaders, policy makers and academics. What happens to all those people who drive for a living when the cars start driving themselves? AIs are flying planes, advising doctors on the best treatments, writing sports and financial news, and telling us all, in real time, the fastest way to get to work. They are also telling human workers when to show up and when to go home, based on real-time measurement of demand. 

Computers used to work for humans; increasingly it’s now humans working for computers. The algorithm is the new shift boss. What is the future of business when technology-enabled networks and marketplaces let people choose when and how much they want to work? What is the future of education when on-demand learning outperforms traditional universities in keeping skills up to date? What is the future of media and public discourse when algorithms decide what we will watch and read, making their choice based on what will make the most profit for their owners? What is the future of the economy when more and more work can be done by intelligent machines instead of people, or only done by people in partnership with those machines? What happens to workers and their families? And what happens to the companies that depend on consumer purchasing power to buy their products? 

There are dire consequences to treating human labor simply as a cost to be eliminated. According to the McKinsey Global Institute, 540 to 580 million people—65 to 70% of households in twenty-five advanced economies—had incomes that had fallen or were flat between 2005 and 2014. Between 1993 and 2005, fewer than 10 million people—less than 2%—had the same experience. 

Over the past few decades, companies have made a deliberate choice to reward their management and “superstars” incredibly well, while treating ordinary workers as a cost to be minimized or cut. 
Top US CEOs now earn 373x the income of the average worker, up from 42x in 1980.
As a result of the choices we’ve made as a society about how to share the benefits of economic growth and technological productivity gains, the gulf between the top and the bottom has widened enormously, and the middle has largely disappeared. Recently published research by Stanford economist Raj Chetty shows that for children born in 1940, the chance that they’d earn more than their parents was 92%; for children born in 1990, that chance has fallen to 50%.  

Businesses have delayed the effects of declining wages on the consumer economy by encouraging people to borrow—in the United States, household debt is over $12 trillion (80% of gross domestic product, or GDP, in mid-2016) and student debt alone is $1.2 trillion (with more than seven million borrowers in default). 

We’ve also used government transfers to reduce the gap between human needs and what our economy actually delivers. But of course, higher government transfers must be paid for through higher taxes or through higher government debt, either of which political gridlock has made unpalatable. This gridlock is, of course, a recipe for disaster. Meanwhile, in hopes that “the market” will deliver jobs, central banks have pushed ever more money into the system, hoping that somehow this will unlock business investment. But instead, corporate profits have reached highs not seen since the 1920s, corporate investment has shrunk, and more than $30 trillion of cash is sitting on the sidelines. 

The magic of the market is not working. We are at a very dangerous moment in history. The concentration of wealth and power in the hands of a global elite is eroding the power and sovereignty of nation-states while globe-spanning technology platforms are enabling algorithmic control of firms, institutions, and societies, shaping what billions of people see and understand and how the economic pie is divided. At the same time, income inequality and the pace of technology change are leading to a populist backlash featuring opposition to science, distrust of our governing institutions, and fear of the future, making it ever more difficult to solve the problems we have created. 

That has all the hallmarks of a classic wicked problem. Wicked problems are closely related to an idea from evolutionary biology, that there is a “fitness landscape” for any organism. Much like a physical landscape, a fitness landscape has peaks and valleys. The challenge is that you can only get from one peak—a so-called local maximum—to another by going back down. In evolutionary biology, a local maximum may mean that you become one of the long-lived stable species, unchanged for millions of years, or it may mean that you become extinct because you’re unable to respond to changed conditions. 
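
A tiny sketch, using an invented one-dimensional landscape, shows why a local maximum is a trap: a greedy climber that only moves uphill stops on the nearer, lower peak, and reaching the higher peak would require first going back down.

```python
# Greedy hill climbing on a made-up "fitness landscape" with two peaks:
# a local maximum near x = 2 and a higher global maximum near x = 8.
import math


def fitness(x):
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-((x - 8) ** 2) / 4)


def hill_climb(x, step=0.1, max_iters=1000):
    """Move to a neighboring point only if it improves fitness; stop otherwise."""
    for _ in range(max_iters):
        best = max((x - step, x, x + step), key=fitness)
        if best == x:      # no uphill neighbor: we are sitting on a peak
            return x
        x = best
    return x


start = 1.0
peak = hill_climb(start)
print(f"Started at {start}, stuck at x = {peak:.1f} with fitness {fitness(peak):.2f}")
# Ends near x = 2.0 (fitness about 1.0) even though the peak near x = 8
# (fitness about 2.0) is far better; getting there means descending first.
```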

And in our economy, conditions are changing rapidly. Over the past few decades, the digital revolution has transformed media, entertainment, advertising, and retail, upending centuries-old companies and business models. Now it is restructuring every business, every job, and every sector of society. No company, no job—and ultimately, no government and no economy—is immune to disruption. Computers will manage our money, supervise our children, and have our lives in their “hands” as they drive our automated cars. 

The biggest changes are still ahead, and every industry and every organization will have to transform itself in the next few years, in multiple ways, or fade away. 
We need to ask ourselves whether the fundamental social safety nets of the developed world will survive the transition, and more important, what we will replace them with. Andy McAfee, coauthor of The Second Machine Age, put his finger on the consequence of failing to do so while talking with me over breakfast about the risks of AI taking over from humans: “The people will rise up before the machines do.”

This book provides a view of one small piece of this complex puzzle, the role of technology innovation in the economy, and in particular the role of WTF? technologies such as AI and on-demand services. I lay out the difficult choices we face as technology opens new doors of possibility while closing doors that once seemed the sure path to prosperity. 

But more important, I try to provide tools for thinking about the future, drawn from decades on the frontiers of the technology industry, observing and predicting its changes. The book is US-centric and technology-centric in its narrative; it is not an overview of all of the forces shaping the economy of the future, many of which are centered outside the United States or are playing out differently in other parts of the world. 

In No Ordinary Disruption, McKinsey’s Richard Dobbs, James Manyika, and Jonathan Woetzel point out quite correctly that technology is only one of four major disruptive forces shaping the world to come. 

Demographics (in particular, changes in longevity and the birth rate that have radically shifted the mix of ages in the global population), globalization, and urbanization may play at least as large a role as technology. And even that list fails to take into account catastrophic war, plague, or environmental disruption. 
These omissions are not based on a conviction that Silicon Valley’s part of the total technology innovation economy, or the United States, is more important than the rest; it is simply that the book is based on my personal and business experience, which is rooted in this field and in this one country. 

The book is divided into four parts. In the first part, I’ll share some of the techniques that my company has used to make sense of and predict innovation waves such as the commercialization of the Internet, the rise of open source software, the key drivers behind the renaissance of the web after the dot-com bust and the shift to cloud computing and big data, the Maker movement, and much more. 

I hope to persuade you that understanding the future requires discarding the way you think about the present, giving up ideas that seem natural and even inevitable. 
In the second and third parts, I’ll apply those same techniques to provide a framework for thinking about how technologies such as on-demand services, networks and platforms, and artificial intelligence are changing the nature of business, education, government, financial markets, and the economy as a whole. I’ll talk about the rise of great world-spanning digital platforms ruled by algorithm, and the way that they are reshaping our society. I’ll examine what we can learn about these platforms and the algorithms that rule them from Uber and Lyft, Airbnb, Amazon, Apple, Google, and Facebook. And I’ll talk about the one master algorithm we so take for granted that it has become invisible to us. I’ll try to demystify algorithms and AI, and show how they are not just present in the latest technology platforms but already shape business and our economy far more broadly than most of us understand. 

And I’ll make the case that many of the algorithmic systems that we have put in place to guide our companies and our economy have been designed to disregard the humans and reward the machines. 
In the fourth part of the book, I’ll examine the choices we have to make as a society. Whether we experience the WTF? of astonishment or the WTF? of dismay is not foreordained. It is up to us. It’s easy to blame technology for the problems that occur in periods of great economic transition. But both the problems and the solutions are the result of human choices. During the industrial revolution, the fruits of automation were first used solely to enrich the owners of the machines. 

Workers were often treated as cogs in the machine, to be used up and thrown away. But Victorian England figured out how to do without child labor and with reduced working hours, and its society became more prosperous. 

We saw the same thing here in the United States during the twentieth century. We look back now on the good middle-class jobs of the postwar era as something of an anomaly. But they didn’t just happen by chance. It took generations of struggle on the part of workers and activists, and growing wisdom on the part of capitalists, policy makers, political leaders, and the voting public. In the end we made choices as a society to share the fruits of productivity more widely. We also made choices to invest in the future. That golden age of postwar productivity was the result of massive investments in roads and bridges, universal power, water, sanitation, and communications. 

After World War II, we committed enormous resources to rebuild the lands destroyed by war, but we also invested in basic research. We invested in new industries: aerospace, chemicals, computers, and telecommunications. 
We invested in education, so that children could be prepared for the world they were about to inherit. 

The future comes in fits and starts, and it is often when times are darkest that the brightest futures are being born. Out of the ashes of World War II we forged a prosperous world. By choice and hard work, not by destiny. The Great War of a generation earlier had only amplified the cycle of dismay. What was the difference? 
After World War I, we punished the losers. After World War II, we invested in them and raised them up again. After World War I, the United States beggared its returning veterans. After World War II, we sent them to college. 

Wartime technologies such as digital computing were put into the public domain so that they could be transformed into the stuff of the future. The rich taxed themselves to finance the public good. 
In the 1980s, though, the idea that “greed is good” took hold in the United States and we turned away from prosperity. We accepted the idea that what was good for financial markets was good for everyone and structured our economy to drive stock prices ever higher, convincing ourselves that “the market” of stocks, bonds, and derivatives was the same as Adam Smith’s market of real goods and services exchanged by ordinary people. 

We hollowed out the real economy, putting people out of work and capping their wages in service to corporate profits that went to a smaller and smaller slice of society. We made the wrong choice forty years ago. We don’t need to stick with it. 
The rise of a billion people out of poverty in developing economies around the world at the same time that the incomes of ordinary people in most developed economies have been going backward should tell us that we took a wrong turn somewhere. 

The WTF? technologies of the twenty-first century have the potential to turbocharge the productivity of all our industries. But making what we do now more productive is just the beginning. We must share the fruits of that productivity, and use them wisely. If we let machines put us out of work, it will be because of a failure of imagination and a lack of will to make a better future. 
***

PART I  – 
USING THE RIGHT MAPS 


The map is not the territory. —Alfred Korzybski 


1 SEEING THE FUTURE IN THE PRESENT 

IN THE MEDIA, I’m often pegged as a futurist. I don’t think of myself that way. I think of myself as a mapmaker. I draw a map of the present that makes it easier to see the possibilities of the future. Maps aren’t just representations of physical locations and routes. They are any system that helps us see where we are and where we are trying to go. 

One of my favorite quotes is from Edwin Schlossberg: “The skill of writing is to create a context in which other people can think.”
This book is a map. We use maps—simplified abstractions of an underlying reality, which they represent—not just in trying to get from one place to another but in every aspect of our lives. When we walk through our darkened home without the need to turn on the light, that is because we have internalized a mental map of the space, the layout of the rooms, the location of every chair and table. 

Similarly, when an entrepreneur or venture capitalist goes to work each day, he or she has a mental map of the technology and business landscape. We sort the world into categories: friend or acquaintance, ally or competitor, important or unimportant, urgent or trivial, future or past. For each category, we have a mental map. But as we’re reminded by the sad stories of people who religiously follow their GPS off a no-longer-existent bridge, maps can be wrong. In business and in technology, we often fail to see clearly what is ahead because we are navigating using old maps and sometimes even bad maps—maps that leave out critical details about our environment or perhaps even actively misrepresent it. Most often, in fast-moving fields like science and technology, maps are wrong simply because so much is unknown. 

Each entrepreneur, each inventor, is also an explorer, trying to make sense of what’s possible, what works and what doesn’t, and how to move forward. Think of the entrepreneurs working to develop the US transcontinental railroad in the mid-nineteenth century. The idea was first proposed in 1832, but it wasn’t even clear that the project was feasible until the 1850s, when the US House of Representatives provided the funding for an extensive series of surveys of the American West, a precursor to any actual construction. Three years of exploration from 1853 to 1855 resulted in the Pacific Railroad Surveys, a twelve-volume collection of data on 400,000 square miles of the American West. 

But all that data did not make the path forward entirely clear. There was fierce debate about the best route, debate that was not just about the geophysical merits of northern versus southern routes but also about the contested extension of slavery. Even when the intended route was decided on and construction began in 1863, there were unexpected problems—a grade steeper than previously reported that was too difficult for a locomotive, weather conditions that made certain routes impassable during the winter. You couldn’t just draw lines on the map and expect everything to work perfectly. 

The map had to be refined and redrawn with more and more layers of essential data added until it was clear enough to act on. Explorers and surveyors went down many false paths before deciding on the final route.

Creating the right map is the first challenge we face in making sense of today’s WTF? technologies. Before we can understand how to deal with AI, on-demand applications, and the disappearance of middle-class jobs, and how these things can come together into a future we want to live in, we have to make sure we aren’t blinded by old ideas. We have to see patterns that cross old boundaries. The map we follow into the future is like a picture puzzle with many of the pieces missing. You can see the rough outline of one pattern over here, and another there, but there are great gaps and you can’t quite make the connections. And then one day someone pours out another set of pieces on the table, and suddenly the pattern pops into focus. 

The difference between a map of an unknown territory and a picture puzzle is that no one knows the full picture in advance. It doesn’t exist until we see it—it’s a puzzle whose pattern we make up together as we go, invented as much as it is discovered. Finding our way into the future is a collaborative act, with each explorer filling in critical pieces that allow others to go forward. 


LISTENING FOR THE RHYMES 

Mark Twain is reputed to have said, “History doesn’t repeat itself, but it often rhymes.” 

Study history and notice its patterns. 
This is the first lesson I learned in how to think about the future. The story of how the term open source software came to be developed, refined, and adopted in early 1998—what it helped us to understand about the changing nature of software, how that new understanding changed the course of the industry, and what it predicted about the world to come—shows how the mental maps we use limit our thinking, and how revising the map can transform the choices we make. 

Before I delve into what is now ancient history, I need you to roll back your mind to 1998. Software was distributed in shrink-wrapped boxes, with new releases coming at best annually, often every two or three years. Only 42% of US households had a personal computer, versus the 80% who own a smartphone today. Only 20% of the US population had a mobile phone of any kind. The Internet was exciting investors—but it was still tiny, with only 147 million users worldwide, versus 3.4 billion today. More than half of all US Internet users had access through AOL. 

Amazon and eBay had been launched three years earlier, but Google was only just founded in September of that year. Microsoft had made Bill Gates, its founder and CEO, the richest man in the world. It was the defining company of the technology industry, with a near-monopoly position in personal computer software that it had leveraged to destroy competitor after competitor. 

The US Justice Department filed an antitrust suit against the company in May of that year, just as it had done nearly thirty years earlier against IBM. 

In contrast to the proprietary software that made Microsoft so successful, open source software is distributed under a license that allows anyone to freely study, modify, and build on it. Examples of open source software include the Linux and Android operating systems; web browsers like Chrome and Firefox; popular programming languages like Python, PHP, and JavaScript; modern big data tools like Hadoop and Spark; and cutting-edge artificial intelligence toolkits like Google’s TensorFlow, Facebook’s Torch, or Microsoft’s CNTK. 

In the early days of computers, most software was open source, though not by that name. Some basic operating software came with a computer, but much of the code that actually made a computer useful was custom software written to solve specific problems. The software written by scientists and researchers in particular was often shared. 

During the late 1970s and 1980s, though, companies had realized that controlling access to software gave them commercial advantage and had begun to close off access using restrictive licenses. In 1985, Richard Stallman, a programmer at the Massachusetts Institute of Technology, published The GNU Manifesto, laying out the principles of what he called “free software”—not free as in price, but free as in freedom: the freedom to study, to redistribute, and to modify software without permission.  

Stallman’s ambitious goal was to build a completely free version of AT&T’s Unix operating system, originally developed at Bell Labs, the research arm of AT&T. At the time Unix was first developed, in the early 1970s, AT&T was a legal monopoly with enormous profits from regulated telephone services. As a result, AT&T was not allowed to compete in the computer industry, then dominated by IBM, and in accord with its 1956 consent decree with the Justice Department had licensed Unix to computer science research groups on generous terms.  

Computer programmers at universities and companies all over the world had responded by contributing key elements to the operating system. But after the decisive consent decree of 1982, in which AT&T agreed to be broken up into seven smaller companies (“the Baby Bells”) in exchange for being allowed to compete in the computer market, AT&T tried to make Unix proprietary. They sued the University of California, Berkeley, which had built an alternate version of Unix (the Berkeley Software Distribution, or BSD), and effectively tried to shut down the collaborative barn raising that had helped to create the operating system in the first place. 

While Berkeley Unix was stalled by AT&T’s legal attacks, Stallman’s GNU Project (named for the meaningless recursive acronym “GNU’s Not Unix”) had duplicated all of the key elements of Unix except the kernel, the central code that acts as a kind of traffic cop for all the other software. 
That kernel was supplied by a Finnish computer science student named Linus Torvalds, whose hobby project, begun in 1991, was a minimalist Unix-like operating system that would eventually be portable to many different computer architectures. He called this operating system Linux. 

Over the next few years, there was a flurry of commercial activity as entrepreneurs seized on the possibilities of a completely free operating system combining Torvalds’s kernel with the Free Software Foundation’s re-creation of the rest of the Unix operating system. The target was no longer AT&T, but rather Microsoft. 
In the early days of the PC industry, IBM and a growing number of personal computer “clone” vendors like Dell and Gateway provided the hardware, Microsoft provided the operating system, and a host of independent software companies provided the “killer apps”—word processing, spreadsheets, databases, and graphics programs—that drove adoption of the new platform. 

Microsoft’s DOS (Disk Operating System) was a key part of the ecosystem, but it was far from in control. That changed with the introduction of Microsoft Windows. Its extensive Application Programming Interfaces (APIs) made application development much easier but locked developers into Microsoft’s platform. Competing operating systems for the PC like IBM’s OS/2 were unable to break the stranglehold. And soon Microsoft used its dominance of the operating system to privilege its own applications—Microsoft Word, Excel, PowerPoint, Access, and, later, Internet Explorer, its web browser (since superseded by Microsoft Edge)—by making bundling deals with large buyers. 

The independent software industry for the personal computer was slowly dying, as Microsoft took over one application category after another. 

This is the rhyming pattern that I noticed: The personal computer industry had begun with an explosion of innovation that broke IBM’s monopoly on the first generation of computing, but had ended in another “winner takes all” monopoly. Look for repeating patterns and ask yourself what the next iteration might be. Now everyone was asking whether a desktop version of Linux could change the game. Not only startups but also big companies like IBM, trying to claw their way back to the top of the heap, placed huge bets that they could. 

But there was far more to the Linux story than just competing with Microsoft. It was rewriting the rules of the software industry in ways that no one expected. It had become the platform on which many of the world’s great websites—at the time, most notably Amazon and Google—were being built. 

But it was also reshaping the very way that software was being written. In May 1997, at the Linux Kongress in Würzburg, Germany, hacker Eric Raymond delivered a paper called “The Cathedral and the Bazaar” that electrified the Linux community. It laid out a theory of software development drawn from reflections on Linux and on Eric’s own experiences with what later came to be called open source software development. 

Eric wrote: “Who would have thought even five years ago that a world-class operating system could coalesce as if by magic out of part-time hacking by several thousand developers scattered all over the planet, connected only by the tenuous strands of the Internet?”

The Linux community, he continued, “seemed to resemble a great babbling bazaar of differing agendas and approaches (aptly symbolized by the Linux archive sites, who’d take submissions from anyone) out of which a coherent and stable system could seemingly emerge only by a succession of miracles.” 

Eric laid out a series of principles that have, over the past decades, become part of the software development gospel: that software should be released early and often, in an unfinished state rather than waiting to be perfected; that users should be treated as “co-developers”; and that “given enough eyeballs, all bugs are shallow.” 

Today, whether programmers develop open source software or proprietary software, they use tools and approaches that were pioneered by the open source community. But more important, anyone who uses today’s Internet software has experienced these principles at work. When you go to a site like Amazon, Facebook, or Google, you are a participant in the development process in a way that was unknown in the PC era. You are not a “co-developer” in the way that Eric Raymond imagined—you are not another hacker contributing feature suggestions and code. But you are a “beta tester”—someone who tries out continually evolving, unfinished software and gives feedback—at a scale never before imagined. Internet software developers constantly update their applications, testing new features on millions of users, measuring their impact, and learning as they go. 
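
One common mechanism behind that kind of continuous experimentation is deterministic bucketing: hash a user ID together with an experiment name so that each user always lands in the same variant while the overall traffic split stays stable. The sketch below is a generic illustration with invented names, not any particular company’s system.

```python
# Deterministic assignment of users to an experiment variant by hashing.
import hashlib


def assign_variant(user_id: str, experiment: str, treatment_share: float = 0.1) -> str:
    """Bucket a user into 'treatment' or 'control'; the same inputs always
    produce the same answer, so a user never flips between variants."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # roughly uniform in [0, 1]
    return "treatment" if bucket < treatment_share else "control"


# Example: expose 10% of users to a hypothetical "new-recommendations" feature,
# then compare engagement between the two groups.
for uid in ("user-1", "user-2", "user-3"):
    print(uid, assign_variant(uid, "new-recommendations"))
```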

Eric saw that something was changing in the way software was being developed, but in 1997, when he first delivered “The Cathedral and the Bazaar,” it wasn’t yet clear that the principles he articulated would spread far beyond free software, beyond software development itself, shaping content sites like Wikipedia and eventually enabling a revolution in which consumers would become co-creators of services like on-demand transportation (Uber and Lyft) and lodging (Airbnb). 

I was invited to give a talk at the same conference in Würzburg. My talk, titled “Hardware, Software, and Infoware,” was very different. I was fascinated not just with Linux, but with Amazon. Amazon had been built on top of various kinds of free software, including Linux, but it seemed to me to be fundamentally different in character from the kinds of software we’d seen in previous eras of computing. Today it’s obvious to everyone that websites are applications and that the web has become a platform, but in 1997 most people thought of the web browser as the application. If they knew a little bit more about the architecture of the web, they might think of the web server and associated code and data as the application. 

The content was something managed by the browser, in the same way that Microsoft Word manages a document or that Excel lets you create a spreadsheet. By contrast, I was convinced that the content itself was an essential part of the application, and that the dynamic nature of that content was leading to an entirely new architectural design pattern for a next stage beyond software, which at the time I called “infoware.” 

Where Eric was focused on the success of the Linux operating system, and saw it as an alternative to Microsoft Windows, I was particularly fascinated by the success of the Perl programming language in enabling this new paradigm on the web. Perl was originally created by Larry Wall in 1987 and distributed for free over early computer networks. I had published Larry’s book, Programming Perl, in 1991, and was preparing to launch the Perl Conference in the summer of 1997. 

I had been inspired to start the Perl Conference by the chance conjunction of comments by two friends. Early in 1997, Carla Bayha, the computer book buyer at the Borders bookstore chain, had told me that the second edition of Programming Perl, published in 1996, was one of the top 100 books in any category at Borders that year. It struck me as curious that despite this fact, there was virtually nothing written about Perl in any of the computer trade papers. Because there was no company behind Perl, it was virtually invisible to the pundits who followed the industry. 

And then Andrew Schulman, the author of a book called Unauthorized Windows 95, told me something I found equally curious. At that time, Microsoft was airing a series of television commercials about the way that its new technology called ActiveX would “activate the Internet.” The software demos in these ads were actually mostly done with Perl, according to Andrew. It was clear to me that Perl, not ActiveX, was actually at the heart of the way dynamic web content was being delivered.

……

from

WTF?: What’s the Future and Why It’s Up to Us

by Tim O’Reilly. 

get it at Amazon.com 
