Category Archives: Automation & Technology

Designers push limits with first 3D-printed concrete housing project – Colleen Hawkes
Chinese Company Constructs the World’s Tallest 3D Printed Building – Rory Stott

3D printing of concrete is a potential game changer in the building industry. Besides the ability to construct almost any shape, it also enables architects to design very fine concrete structures.

Work will start this year on the world’s first 3D-printed concrete house to be built and occupied.

It will be a single-storey house and it’s expected to be ready for occupation in the first half of 2019.

Eventually, there will be five futuristic houses in the Project Milestone development, which is in the city of Eindhoven in the Netherlands.

The other four houses in the development will be multi-storey. All five will be subject to regular building regulations, and all will meet the demands of current-day occupants concerning comfort, layout, quality and pricing.

Project Milestone is a joint venture between the municipality of Eindhoven, Eindhoven University of Technology, contractor Van Wijnen, real estate manager Vesteda, materials company Saint-Gobain Weber Beamix and engineering firm Witteveen+Bos.

Vesteda, the prospective buyer, will let the houses to tenants.

The design of the houses is based on erratic blocks in a green landscape. The university says the irregular shape of the buildings can be realised thanks to one of the key features of 3D-printing: the ability to construct almost any shape.

“The design aims at a high level of quality and sustainability. For example, the houses will not have a natural gas connection, which is quite rare in the Netherlands.”

During the project, research on concrete printing will continue to produce new innovations. The five houses will be built consecutively, so that these innovations and all lessons learnt can be applied to the next house.

The building elements of the first house will all be printed by the concrete printer at the university. The intention is to gradually shift the whole construction process to the building site. The last house will be fully realised on site, including the print work.

Eindhoven is a hot spot for 3D-concrete printing, with the research group of concrete technology professor Theo Salet and its concrete printer as pivotal elements. The group recently printed the world’s first 3D-printed concrete bridge for cyclists in the village of Gemert.

The university says 3D printing of concrete is a potential “game changer” in the building industry.

“Besides the ability to construct almost any shape, it also enables architects to design very fine concrete structures.

“Another new possibility is to print all kinds, qualities and colours of concrete, all in a single product. This enables integration of all sorts of functions in one and the same building element.”

The university says the process makes it easy to incorporate individual wishes for every single house, at minimum extra costs.

“Another important advantage is sustainability, as much less concrete is needed and hence much less cement, which reduces the CO2 emissions originating from cement production.”

Chinese Company Constructs the World’s Tallest 3D Printed Building

Rory Stott

Once again, Chinese company WinSun Decoration Design Engineering Co has expanded the capabilities of 3D printing. After constructing ten houses in under twenty-four hours last year, they are now back with both the world’s tallest 3D printed building (a five-story apartment block) and a 1,100 square meter mansion with internal and external decoration to boot.

On display in Suzhou Industrial Park in Jiangsu province, the two buildings represent new frontiers for 3D printed construction, finally demonstrating its potential for creating more traditional building typologies and therefore its suitability for use by mainstream developers.

The buildings were created using the same 6.6-meter-tall, 10-meter-wide printer, which builds up layers of an “ink” made from a mixture of glass fibre, steel, cement, hardening agents and recycled construction waste. With this technology, WinSun is able to print out large sections of a building, which are then assembled much like prefabricated concrete designs to create the final building.

In a press conference attended by the online 3D printing newspaper 3Ders.org, Ma Rongquan, chief engineer of China Construction No. 8 Engineering Bureau, explained: “These two houses are in full compliance with the relevant national standards. It is safe, reliable, and features a good integration of architecture and decoration. But as there is no specific national standard for 3D printing architecture, we need to revise and improve a standard for the future.”

The company also announced a raft of signed contracts and future plans that will expand 3D printed buildings beyond the realm of construction experiments: the $160,000 mansion is the prototype for a set of ten that has been ordered by Taiwanese real estate company Tomson Group; a third house on display at Suzhou Industrial Park was just one of 20,000 ordered by the Egyptian Government; and they also announced the creation of Winsun Global, a collaboration with an American firm that will seek to expand into countries around the world with the aim of providing cheap and efficient homes for low-income families, particularly in Africa and the Middle East.

The Venus Project. The Best That Money Can’t Buy: Beyond Politics, Poverty, and War – Jacque Fresco.

Few technological achievements are as impressive as the ability to see our own planet from outer space. The beautiful sphere suspended against the black void of space makes plain the bond that the billions of us on Earth have in common. This global consciousness inspires space travelers who then provide emotional and spiritual observations. Their views from outer space awaken them to a grand realization that all who share our planet make up a single community. They think this viewpoint will help unite the nations of the world in order to build a peaceful future for the present generation and the ones that follow.


Many poets, philosophers, and writers have criticized the artificial borders that separate people preoccupied with the notion of nationhood. Despite the visions and hopes of astronauts, poets, writers, and visionaries, the reality is that nations are continuously at war with one another, and poverty and hunger prevail in many places throughout the world, including the United States.

So far, no astronaut arriving back on Earth with this new social consciousness has proposed to transcend the world’s limitations with a world where no national boundaries exist. Each remains loyal to his/her particular nation-state, and doesn’t venture beyond patriotism, “my country, right or wrong,” because doing so may risk their positions.

Most problems we face in the world today are of our own making. We must accept that the future depends upon us. Interventions by mythical or divine characters in white robes descending from the clouds, or by visitors from other worlds, are illusions that cannot solve the problems of our modern world.

The future of the world is our responsibility and depends upon decisions we make today. We are our own salvation or damnation. The shape and solutions of the future depend totally on the collective effort of all people working together.

Science and technology race into the future revealing new horizons in all areas. New discoveries and inventions appear at a rate never seen before in history and the rate of change will continue to increase in the years to come. Unfortunately, books and articles attempting to describe the future have one foot rooted in the past, and interpret the future through today’s concepts of technology. Most people are comfortable and less threatened with this perspective on change. But they often react negatively to proposals suggesting changes in the way they live. For this reason, when speaking of the future, very few explore or discuss changes in our social structure, much less our values. People are used to the structures and values of earlier times when stresses and levels of understanding were different. An author who wants to publish steers clear of such emotional and controversial issues. But we feel it is time to step out of that box. In this book we will freely explore a new future, one that is realistically attainable and not the doom and gloom so often presented today.

Few can envision a social structure that enables a Utopian lifestyle as compared to today’s standards, or that this lifestyle could be made available without the sweat of one’s brow.

Yet thanks to our labor-saving machines and other technological advances, the lifestyle of a middle class person today far exceeds anything that even kings of the past could have experienced.

Since the beginning of the machine age, humankind has had a love/hate relationship with its mechanical devices. We may like what the machines do for us, but we don’t like what they do to us. They take away our means of making a living, and sometimes our sense of purpose which derives from thousands of years in which hand labor was the primary means of meeting human needs.

Many fear that machines are becoming more and more complex and sophisticated. As dependence on them grows, we give up much of our own independence and come to resemble them as passionless, unfeeling automatons whose sole purpose is work, work, work. Some fear that these mechanical children may develop minds and wills of their own and enslave humanity.

Many worry about conformity and that our values and behaviors will change so that we lose the very qualities which make us human. The purpose of this book is to explore visions and possibilities for the future that will nurture human growth and achievement, and make that the primary goal of society. We will discuss the many options and roles individuals will play in this cybernated age in which our world is rebuilt by prodigious machines and governed by computers.

Most writers of the twentieth century who presented a vision of the future were blinded by national ego or self-centeredness, and didn’t grasp the significance and meaning of the methods of science as they might be applied to the social system. Although it may appear that the focus of this book is the technology of the future, our major concern is the effect a totally cybernated world would have on humanity and on the individual. Of course no one can predict the future with precision. There are simply too many variables. New inventions, natural and man-made disasters, and new uncontrollable diseases could radically alter the course of civilization. While we cannot predict the future, we will most surely live it. Every action and decision we take, or don’t, ripples into the future. For the first time we have the capability, the technology, and the knowledge to direct those ripples.

When applied in a humane manner, the coming cybernated age could see the merging of technology and cybernetics into a workable synergy for all people. It could achieve a world free of hunger, war, and poverty, a world humanity has failed to achieve throughout history. But if civilization continues on its present course, we will simply repeat the same mistakes all over again.

If we apply what we already know to enhance life on Earth, we can protect the environment and the symbiotic processes of living systems. It is now mandatory that we intelligently rearrange human affairs so as to live within the limits of available resources. The proposals of this book show limitless untapped potentials in the future application of new technologies where our health, intellect, and well being are involved. These are potentials not only in a material sense, but they also involve a deep concern for one another. Only in this way can science and technology support a meaningful and humane civilization.

Many of us who think seriously about the future of human civilization are familiar with stark scenarios of this new millennium, a world of growing chaos, disorder, soaring populations, and dwindling natural resources. Emaciated children cry out from decayed cities and villages with mouths agape and bellies swollen from malnutrition and disease. In more affluent areas urban sprawl, air and water pollution, and escalating crime take a toll on the quality of life even for those who consider themselves removed from these conditions. Even the very wealthy are at a tremendous disadvantage because they fail to grasp the damage from technology applied without social concern.

Given the advances in science and technology over the last two hundred years, one may well ask, “Does it have to be this way?” There is no question that the application of science and technology can carry us with confidence and assurance into the future. What is needed is a change in our direction and purpose.

Our main problem is a lack of understanding of what it means to be human and that we are not separate from nature. Our values, beliefs, and behaviors are as much a part of natural law as any other process. We are all an integral part of the chain of life.

In this book we present an alternative vision of a sustainable new world civilization unlike any social system that has gone before. Although this vision is highly compressed, it is based upon decades of study and experimental research.

We call for a straightforward redesign of our culture in which the age-old problems of war, poverty, hunger, debt, and unnecessary suffering are viewed not only as avoidable, but also as totally unacceptable. Anything less results in a continuation of the same catalog of problems inherent in the present system.

A DESIGN FOR THE FUTURE

The future is fluid. Each act, each decision, and each development creates new possibilities and eliminates others. The future is ours to direct. In the past, change came so slowly that generations saw minimal difference in the daily business of surviving. Social structures and cultural norms remained static for centuries. In the last fifty to a hundred years, technology and social change accelerated to such an extent that governments and corporations now consider change management a core process.

Hundreds of books address technological change, business process management, human productivity, and environmental issues. Universities offer advanced degrees in public and environmental affairs. Almost all overlook the major element in these systems: human beings and their social structures and culture. Technology, policy, and automation count for nothing until humans accept them and apply them to their daily lives. This book offers a blueprint to consciously fuse these elements into a sustainable future for all, as well as provides for fundamental changes in the way we regard ourselves, one another, and our world. This can be accomplished with technology and cybernetics being applied with human and environmental concern to secure, protect, and encourage a more humane world for all.

How can such a prodigious task be accomplished? First, we must survey and inventory all of our available planetary resources. Discussions about what is scarce and what is plentiful are just talk until we actually measure our resources. We must first baseline what there is around the world. This information must be compiled so we know the parameters for humanizing social and technological development.

This can be accomplished using computers to assist in defining the most humane and appropriate ways to manage environmental and human affairs. This is basically the function of government. With computers processing trillions of bits of information per second, existing technologies far exceed the human capacity for arriving at equitable and sustainable decisions concerning the development and distribution of physical resources. With this potential, we can eventually surpass the practice of political decisions being made on the basis of power and advantage.

Eventually, with artificial intelligence, money may become irrelevant, particularly in a high-energy civilization in which material abundance eliminates the mindset of scarcity. We have arrived at a time when the methods of science and technology can provide abundance for all. It is no longer necessary to consciously withhold efficiency through planned obsolescence, or to utilize an old and obsolete monetary system.

Although many of us consider ourselves forward thinkers, we still cling tenaciously to the old values of the monetary system. We accept, without sufficient consideration, a system that breeds inefficiencies and actually encourages the creation of shortages. For example, while many concerns about environmental destruction and the misuse of technology are justified, many environmentalists draw bleak scenarios about the future based on present day methods and shortages. They view environmental destruction from the point of view that existing technologies are wasteful and used irresponsibly. They are accustomed to outmoded concepts and the economic imperatives of sales turnover and customer appeal.

Although we recognize that technological development has been misdirected, the benefits far outweigh the negatives.

Only the most diehard environmental activist would turn his back on the many elevating advances made in areas like medicine, communications, power generation, and food production. If human civilization is to endure, it must outgrow our conspicuous waste of time, effort, and natural resources. One area in which we see this is architecture. Resource conservation must be incorporated into our structures.

With a conscious and intelligent application of today’s science and technology, we can recreate the wetlands and encourage the symbiotic process between and among the elements of nature. This was not achievable in earlier times. While many urban centers grapple with retrofitting new, more efficient technologies into their existing infrastructures, these efforts fall far short of the potentials of technology. Not only must we rebuild our thought patterns, but much of our physical infrastructure, including buildings, waterways, power systems, production and distribution processes, and transportation systems must be reconstructed from the ground up. Only then can our technology overcome resource deficiencies and provide universal abundance.

If we are genuinely concerned about the environment and our fellow human beings, and want to end territorial disputes, war, crime, poverty, hunger, and the other problems that confront us today, then the intelligent use of science and technology is the tool with which to achieve a new direction, an approach that will serve all people, not just a select few.

The purpose of this technology is to free people from repetitive and boring jobs and allow them to experience the fullness of human relationships, denied to so many for so long. This will call for a basic adjustment in the way we think about what makes us human.

Our times demand the declaration of the world’s resources as the common heritage of all people.

In a hundred years, historians may look back on our present civilization as a transition period from the dark ages of ignorance, superstition, and social insufficiency just as we view the world of one hundred years ago. If we arrive at a saner world in which the maximum human potential is cultivated in every person, our descendants will not understand why our world produced only one Louis Pasteur, one Edison, one Tesla, or one Salk, and why great achievements in our age were the products of a relative few.

In looking forward to this new millennium, and back at the dimmest memories of human civilization, we see that the thoughts, dreams, and visions of humanity are limited by a perception of scarcity.

We are products of a culture of deficiency which expects each confrontation and most activities to end with a winner and a loser. Funding restricts even technological development, which has the best potential to liberate humanity from its past insufficiencies. We can no longer afford the luxury of such primitive thinking.

There are other ways of looking at our lives and the world.

Either we learn to live together in full cooperation or we will cause our own extinction.

To fully understand and appreciate this coming age, we must understand the relationship between creation and creator: the machine and, as of this writing, that most marvelous of mechanisms, the human being.

CHANGING VALUES IN AN EMERGING CULTURE

Any attempt to depict the future direction of civilization must include a description of the probable evolution of our culture without embellishment, propaganda, or national interest. We must reexamine our traditional habits of thought if we wish to avoid the consequences that will occur if we do not prepare for the future. It is unfortunate that most of us envision this future within our present social framework, using values and traditions that come from the past. Superficial changes perpetuate the problems of today. The challenges we face now cannot be addressed with antiquated notions and values that are no longer relevant.

Imagine a new planet with the same carrying capacity as Earth, and that you are free to design a new direction for the society on this planet. You can choose any shape or form. The only limitation imposed upon you is that your social design must correspond to the carrying capacity of that planet. This new planet has more than adequate arable land, clean air and water, and an abundance of untapped resources. This is your planet. You can rearrange the entire social order to correspond to whatever you consider the best of all possible worlds. Not only does this include environmental modification, but also human factors, interpersonal relationships, and the structuring of education.

This need not be complicated. It can be an uncluttered approach, not burdened by any past or traditional considerations, religious or otherwise. This is a prodigious project calling for many disciplines, determining the way inhabitants of your planet conduct their lives, keeping in mind for whom and for what ends this social order is being designed. Feel free to transcend present realities and reach for new and inventive ideas to shape your world of the future. An exciting exercise, isn’t it? What we propose is nothing more, and nothing less, than applying that exercise to our planet.

To prepare for the future we must be willing to test new concepts. This means we must acquire enough information to evaluate these concepts, and not be like travelers in a foreign land who compare everything with their own hometown. To understand people of another place, we must set aside our usual expectations of behavior, and not judge by the values to which we are accustomed.

If you believe today’s values and virtues are absolute and ultimate for all times and all civilizations, then you may find our projection of the future shocking and unacceptable. We must feel and think as freshly as possible about the limitless possibilities of life patterns humankind may explore for attaining even higher levels of intelligence and fulfillment in the future.

Although individuals like Plato, Edward Bellamy, H.G. Wells, Karl Marx, and Howard Scott have all made attempts to plan a new civilization, the established social order considered them impractical dreamers with Utopian designs that ran contrary to human nature. Against these social pioneers was a status quo of vested interests comfortable with the way things were. The populace at large, because of years of indoctrination, went along for the ride without thinking. Vested interests were unappointed guardians of the status quo. The outlook and philosophy of the leaders were consistent with their positions of advantage.

Despite advances achieved through objective scientific investigation, and the breaking down of longstanding fears and superstitions, the world is still not a reasonable place. Many attempts to make it so have failed because of selfish individual and national interests. Deeply rooted cultural norms that assume someone must lose for someone else to gain (scarcity at its most basic) still dictate most of our decisions. For example, we still cling to the concept of competition and accept inadequate compensation for people’s efforts (i.e. the minimum wage), when such concepts no longer apply to our capabilities and resources, never mind their effect on human dignity and any possible elevation of the human condition.

At this turning point in our civilization, we find problems complicated by the fact that many of us still wait for someone, a messiah perhaps, the elusive “they”, or an extraterrestrial to save us. The irony of this is that, as we wait for someone to do it for us, we give up our freedom of choice and movement. We react rather than act toward events and issues.

The future is our responsibility, but change will not take place until the majority loses confidence in their dictators’ and elected officials’ ability to solve problems. It will likely take an economic catastrophe resulting in enormous human suffering to bring about true social change. Unfortunately, this does not guarantee that the change will be beneficial.

In times of conflict between nations, we still default to answering perceived threats with threats, developing weapons of mass destruction, and training people to use them against others whom we regard as enemies. Many social reformers try to solve problems of crime within the framework of the monetary system by building more prisons and enacting new laws.

There is gun legislation and a “three strikes and you’re out” provision in an attempt to govern crime and violence. This accomplishes little, yet requests for funding to build more prisons and hire more policemen fare far better in legislatures and voting referendums than do pleas for education or aid to the poor. Somehow, in an era of plenty, we approve punishment as an answer to all problems. One symptom of insanity is repeating the same mistake over and over again and expecting a different outcome. Our society is, in this sense, truly insane.

The Manhattan Project developed the first atomic device to be used against human populations, and launched the most intensive and dangerous weapons build-up in history. The Manhattan Project was also one of the largest and best financed projects ever undertaken. If we are willing to spend that amount of money, resources, and human lives in time of war, why don’t we commit equal resources to improving lives and anticipating the humane needs of the future? The same energies that went into the Manhattan Project could be used to improve and update our way of life, and to achieve and maintain the optimal symbiotic relationship between nature and humankind.

If our system continues without modification involving environmental and social concern, we will face an economic and social breakdown of our outdated monetary and political system. When this occurs, the established government will likely enact a state of emergency or martial law in an attempt to prevent total chaos. I do not advocate this, but without the suffering of millions it may be nearly impossible to shake our complacency about the current ways of life.

OUT OF THE DARK AGES

Scientists in the space program face different challenges. For example, space scientists must develop new ways of eating in outer space. Astronauts’ clothing must withstand the vacuum of outer space, enormous temperature differentials, and radiation, yet remain lightweight and highly flexible. This new clothing design even calls for the development of self-repairing systems. Their challenge is to conceive of common items in completely new ways. In space, for example, clothing no longer functions as just body covering and adornment; it becomes a mini-habitat.

The space age is a good example of the search for newer and better ways of doing things. As scientists probe the limits of our universe, they must generate newer techniques and technologies for unexplored frontiers and never before encountered environments. If scientists cling to the concepts of their earlier training, their explorations will fail. Had our ancestors refused to accept new ideas, the physical sciences would have progressed little beyond the covered wagon.

Many young engineers, scientists, and architects face this dilemma. Bold and creative, they exit institutions of higher learning and step out into the world eager for change. They set out with great enthusiasm but are often beaten back and slowed by the established institutions and self-appointed guardians of tradition. Occasionally, some break away from traditional concepts and become innovators. They meet such tremendous resistance from antiquated building codes and other restrictions that their daring concepts are soon reduced to mediocrity.

Many of the dominant values shaping our present society are medieval. The idea that we live in an enlightened age, or an age of reason, has little basis in fact. We are overwhelmed with valid information concerning ourselves and our planet, but have no inkling of how to apply it. Most of our customs and modes of behavior have been handed down to us from the Dark Ages.

It was difficult for early forms of life to crawl out of the primordial slime without dragging some of it with them. So it is with entrenched value systems. The most appropriate place for traditional concepts is a museum or in books about the history of civilization.

The twenty-first century will reveal what most people never suspected: that the majority of us have the potential of people like Leonardo da Vinci, Alexander Graham Bell and Madame Curie, if we are raised in an environment that encourages genuine individuality and creativity. This includes all the other characteristics thought of as the special and privileged heredity of great men and women. Even in today’s so-called democratic society, fewer than four percent of the world’s people have supplied us with the scientific and artistic advances that sustain social systems.

SHAPING HUMAN VALUES

Humans of the future, though similar in appearance, will differ considerably in their outlook, values, and mindset. Social orders of the past that have continued into the twenty-first century consistently seek to generate loyalty and conformity to established institutions as the only means to sustain a workable society. Countless laws, often passed after a misdeed has occurred, have been enacted in an attempt to govern the conduct of people. Those who do not conform are ostracized or imprisoned.

In the past, many social reformers, and those called agitators by their detractors, were not generally angry, maladjusted individuals. They were often people with a sensitivity and concern for the needs of others who envisioned a better life for all. Among these were abolitionists, advocates for women’s suffrage and child labor laws, those who practiced non-violent resistance to oppression, and so-called “freedom fighters”.

Today we accept without question the achievements of these reformers who faced violent opposition, imprisonment, ridicule, and even death from vested interests and the established order. Unfortunately most people are unaware of the identities of those individuals who helped pave the way toward social enlightenment.

Many of our parks have statues of warriors and statesmen, but few have any monuments to the great social innovators. Perhaps when the history of the human race is finally written, it will be from the viewpoint of individuals in an alien and primitive culture who sought change in a world that had great tenacity to maintain things as they were.

Conformity in a population makes control of society much easier for its leaders. Our leaders pay lip service to the freedoms that democracy provides, while actually supporting an economic structure that imprisons its citizens under more and more debt. They claim that all have the opportunity to rise to the top through individual initiative and incentive. To appease those who work hard but do not achieve the good life, religion is there to assure them that if not in this life, they will obtain it in the next.

Our habits of thought and conduct show the effectiveness of constant and unrelenting propaganda on radio, on television, in publications, and in most other media. The propaganda is so effective that the average citizen is not insulted when categorized as a consumer as if a citizen’s only worth to society was as a user of goods. These patterns are gradually being modified and challenged by the Internet and the World Wide Web.

Most people expect that our televisions, computers, communications systems, methods of production and delivery of services, and even our concepts of work and reward, will continue to improve without any disruption or distress within our present value systems. But this is not necessarily so.

Our dominant values that emphasize competition and scarcity limit continued progress.

The most disruptive period in a transition from an established social order to an emergent system comes when people are not prepared emotionally or intellectually to adjust to change. People cannot simply erase all the beliefs and habits acquired in the past, which constitute their self-identity. Sudden changes in values without some preparation will cause many to lose their sense of identity and purpose, isolating them from a society they feel has passed them by. Another factor limiting the evaluation of alternative social proposals is a lack of understanding of basic scientific principles and the factors which shape culture and behavior.

The conflicts today between human beings are about opposing values. If we manage to arrive at a saner future, conflicts will be about problems common to all humans. In a vibrant and emergent culture, instead of conflicts between nations, the challenges will be overcoming scarcity, reclaiming damaged environments, creating innovative technologies, increasing agricultural yield, improving communications, building communications between nations, sharing technologies, and living a meaningful life.

WORK AND THE NEW LEISURE

From early civilizations to the present, most humans have had to work to earn a living. Most of our attitudes about work are a carry-over from these earlier times. In the past, and still in many low-energy cultures, it was necessary to fetch water and carry it to one’s dwelling place. People gathered wood to make fires for heating and cooking, and fuel to burn in their lamps. It would have been very difficult, and still is for some, to imagine a time when water would rush forth in your own dwelling at the turn of a handle; to press a button for instant light would have seemed to be magic. People of ancient times probably wondered what they would do with their time if they did not have to engage in these burdensome tasks that were so necessary to sustain their lives. In most developed countries, tasks that were once so vital to people’s very survival are no longer necessary, thanks to modern technology.

Today people attend schools to acquire marketable skills that enable them to earn a living in the “work-a-day” world. Recently, the belief that one must work to earn a living has been challenged. Working for a living to supply the necessities of life may soon be irrelevant as modern technology can provide most of these needs. As a result, many jobs have gone the way of the iceman and the elevator operator. Perhaps we have a semantic problem with the word “work.” The idea of “freedom from work” should include the elimination of repetitive and boring tasks that hold back our intellectual growth. Most jobs, from blue-collar assembly worker to professional, entail repetitious and uninteresting tasks. Human beings possess an untapped potential that they will finally be able to explore once they are free of the burden of having to work to earn a living.

At present there are no plans in government or industry to make economic adjustments to deal with the displacement of people by automated technology. It is no longer just the repetitious work of laborers that cybernation is able to phase out, but also many other vocations and professions. Engineers, technicians, scientists, doctors, architects, artists, and actors will all have their roles altered, sometimes drastically. Therefore, it is imperative that we explore alternatives so as to improve our social constructs, beliefs, and quality of life to secure and sustain a future for all.

LANGUAGE OF RELEVANCE

Of the many entrenched barriers to positive change, communication is one of the most intractable. Language has evolved over centuries through ages of scarcity, superstition, and social insufficiency, and it is continuing to evolve. However, language often contains ambiguity and uncertainty when important issues are at stake, and fails to use a precise and universally intelligible means of conveying knowledge. It is difficult for the average person, or even those considered above average, including leaders of nations, to share ideas with others whose worldview may be at considerable variance with their own. Also, because of semantic differences and different experiences, words have various shades of meaning.

What would happen if we made contact with an alien civilization, when we have such difficulty making contact with our fellow human beings? We are not ready for such an encounter. We haven’t yet learned to resolve international differences by peaceful methods, so peace is simply a pause between wars.

Even in the United States, supposedly the most technologically advanced country in the world, we lack a unified, definitively stated direction. Our policies and goals are fragmented and contradictory. The Democrats cannot communicate meaningfully with the Republicans. Elsewhere, the Israelis oppose the Arabs, the Irish Catholics clash with the Irish Protestants, the Serbs with the Muslims. Everywhere there is interracial and interpersonal disharmony, an inability of husbands and wives to communicate with each other or their children, labor and management strife, and communists differing with capitalists. How then can we hope to establish any meaningful communication with an alien civilization, with beings possessing intelligence, social coherence, and technologies far in advance of our own?

The aliens might well wonder whether there really is intelligent life on Earth.

Most world leaders seek to achieve greater communication and understanding among the nations of the world. Unfortunately, their efforts have met with little success. One reason is that each comes to the table determined to achieve the optimal advantage for their own nation. We talk a lot about global development and global cooperation. But the “global” in each case reflects the individual nation’s interests and not those of all people.

In addition, we are trapped within old ways of looking at our world. While most agree change is necessary, many limit change if it threatens their advantage, just as on a personal basis they seek change in others, but not in themselves.

Many of us lack the skills to communicate logically when we are emotionally invested in an outcome. If a person or group has difficulty communicating a point in question, rather than seek clarification, they will often raise their voices. If this does not produce the desired effect, they may include profanity or intimidating language. If this doesn’t work, they may resort to physical violence, punishment, or deprivation as a means of achieving the desired behavior. In some instances, deprivation of the means of earning a living has been, and continues to be, used.

These tactics have never produced a heightened level of understanding. In fact, many of these attempts to control behavior actually increase violence and drive the parties farther apart. It will be difficult for a future historian to understand why the language of science and technology was not incorporated into everyday communication.

THE LANGUAGE OF SCIENCE

Ambiguity may help lawyers, preachers, and politicians, but it doesn’t work in building bridges, dams, power projects, flying machines, or in space travel. For these activities we need the language of science. Despite a maze of ambiguity in normal conversation, the more serviceable language of science is coming into use throughout the world, particularly in technologically advanced countries. If communication is to improve, we need a language that correlates highly with the environment and human needs. We already have such a language in the scientific and technological communities and it’s easily understood by many.

In other words, it is already possible to use a coherent means of communication without ambivalence. If we apply the same methods used in the physical sciences to psychology, sociology, and the humanities, a lot of unnecessary conflict could be resolved. In engineering, mathematics, chemistry, and other technical fields, we have the nearest thing to a universal descriptive language that requires little in the way of individual interpretation. For instance, if a blueprint for an automobile is used in any technologically developed society anywhere in the world, the finished product would be the same as that in other areas receiving the same blueprint, regardless of their political or religious beliefs.

The language used by the average person is inadequate for resolving conflict, but the language of science is relatively free of the ambiguities and conflicts prevalent in our everyday, emotionally driven language. It is deliberately designed, as opposed to evolving haphazardly through centuries of cultural change, to state problems in terms that are verifiable and readily understood by most.

Most technical strides would have been unattainable without this type of improved communication. Without a common descriptive language, we would have been unable to prevent disease, increase crop yields, talk over thousands of miles, or build bridges, dams, transportation systems, and the many other technological marvels of this computerized age. Unfortunately, the same is not true of conversational language. Attempts to discuss or evaluate newer concepts in social design are greatly limited by our habit of comparing newer concepts to existing systems and beliefs.

IT’S A SEMANTIC JUNGLE OUT THERE

Utopian ideals have existed for as long as humans have dealt with problems and reflected upon a world free of them. The writers of scriptural references to Eden, Plato’s Republic, H.G. Wells’ The Shape of Things to Come, and such concepts as socialism, communism, democracy, and the ultimate expression of bliss, Heaven, have all shared this Utopian dream. All attempts at creating such a world have fallen far short of their vision, because the dreamers and visionaries who projected their Utopian concepts did so mostly within the framework and values of their existing culture. The language they used was limited and subject to a wide range of individual interpretation.

*

from The Best That Money Can’t Buy: Beyond Politics, Poverty, and War by Jacque Fresco



Ten million British jobs could be gone in 15 years. No one knows what happens next – John Harris.

The reality of automation is becoming clear, and it’s terrifying. So why is there so little thinking among politicians about those who will be affected?

Plenty of people may not have heard of the retail firm Shop Direct. Its roots go back to the distant heyday of catalogue shopping, and two giants of that era, Littlewoods and Great Universal Stores. Now it is the parent company behind the online fashion brand Very and the reinvented Littlewoods.com. All this may sound innocuous enough. But in two areas of Greater Manchester, Shop Direct is newly notorious.

Until now, what the modern corporate vernacular calls “fulfilment” (in other words, packing up people’s orders and seeing to returns) has been dealt with at three Shop Direct sites: in Chadderton and Shaw, near Oldham, and in Little Hulton, three miles south of Bolton. But the company now has plans to transfer all such tasks to a “fully automated”, 500,000 sq ft “distribution and returns centre” located in a logistics park in the east Midlands. The compulsory consultation period begins tomorrow, and the shopworkers’ union Usdaw and local politicians are up in arms: if it happens in full, the move will entail the loss of 1,177 full-time posts and 815 roles currently performed by agency workers; on the new site there will be jobs for only about 500 people.
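A quick back-of-envelope sum, using only the figures above, shows the scale of the shift: 1,177 full-time posts plus 815 agency roles makes 1,992 roles affected; set against the roughly 500 jobs at the new site, that is a net loss of about 1,500 positions, roughly three in every four.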

At a time when apparently low unemployment figures blind people to the fragility and insecurity of so much work, the story is a compelling straw in the wind: probably the starkest example I have yet seen of this era of automation, and the disruption and pain it threatens.

Every time a self-service checkout trills and beeps and issues the deathly cry “unexpected item in the bagging area”, it points to the future. So does the increasing hollowing-out of British high streets. But much of the looming wave of automation in retail and so-called logistics is hidden in the increasingly vast distribution centres, and the research and development departments of online giants. Inevitably, no one employed by these companies is comfortable talking about an imminent decline in human employment across the sectors in which they operate, and the accelerated growth of individual firms (exemplified by Amazon, which now employs 500,000 people around the world, up from 20,000 a decade ago) gives them plenty of cover. But the evidence is plain: the buying, selling, and handling of goods is rapidly becoming less and less labour-intensive.

Carl Benedikt Frey, an automation specialist at the Oxford Martin School, says: “Retail is one industry in which employment is likely to vanish, as it has done in manufacturing, mining and agriculture.” In 2017 an exhaustive report he co-authored showed that 80% of jobs in “transportation, warehousing and logistics” are now susceptible to automation.

In both these connected fields, the replacement of human labour by technology is driven by the e-commerce that shifts retailing from town and city centres to the vast spaces where machines can take over all the picking and packing; and in the UK, shopping habits have obviously moved hugely in that direction. The extent to which this makes jobs vulnerable is obvious: the share of total British employment in the wholesale and retail trade towers over all other sectors of the economy, accounting for about 15% of UK workers, as against only 7.5% in manufacturing.

Across the economy as a whole, a report last year by PricewaterhouseCoopers said that the jobs of more than 10 million workers will be at high risk from automation over the next 15 years.
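That headline figure is easy to sanity-check: PwC’s estimate is usually reported as roughly 30% of UK jobs at high risk, and against a UK workforce of around 32 million, 0.3 × 32 million comes to about 9.6 million workers, in line with the “more than 10 million” cited. (The 30% share and the 32 million workforce are my illustrative assumptions here, not figures from the article.)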

Technology once considered expensive and exotic is already here. Around the world, Amazon uses about 100,000 “robot drives” to move goods around its distribution centres. The warehouse within the new John Lewis “campus” near Milton Keynes has been described as a “deserted hall of clacks and hums”. It employs 860 robots and keeps the human element of shifting stuff around to an absolute minimum. Last year Credit Suisse looked at the patents the online supermarket Ocado had recently filed and concluded that the company (which, among other things, is working on driverless delivery vehicles) was pushing into “an automated future” in which half of its staff could be gone within a decade (even now, it operates a much less labour-intensive service than its rivals).

And so to the really terrifying part. Given that it will be disproportionately concentrated on low wage, low skill employment, this wave of automation inevitably threatens to worsen inequality. There are whole swathes of Britain where, as employment in heavy industry receded, up went retail parks and distribution centres. In such places as south Wales and South Yorkshire, the result has been an almost surreal, circular economy in which people work for big retail businesses to earn money to spend in other retail businesses. What, you can only wonder, will happen next?

If we are to avoid the worst, radical plans will be necessary. The insecurity that automation brings will soon demand the comprehensive replacement of a benefits system that does almost nothing to encourage people to acquire new skills, and works on the expectation that it can shove anyone who’s jobless into exactly the kind of work that’s under threat. In many places, the disruption of the job market will make the issue of dependable and secure housing (if not questions of basic subsistence) even more urgent. In both schools and adult education, the system will somehow have to pull as many people as it can away from the most susceptible parts of the economy, and maximise the numbers skilled enough to work in its higher tiers, which, given the current awful state of technology and computing teaching, looks like a very big ask.

On the political right, there is no serious interest in anything resembling this kind of agenda: one of the many tragedies of Tory austerity is that it blitzed the chances of any meaningful responses to automation just as its reality became clear; and Conservatives still seem either asleep, or chained to the kind of laissez-faire Thatcherism that offers nothing beyond the usual insistence that the market is not to be bucked. Some of them should read an overlooked report by the independent Future of Work commission convened last year by the Labour deputy leader, Tom Watson. It was spun with the Panglossian claim that “robots can set us free”, though a lot of its analysis was spot on. It contained bracing predictions about the future of such “low skill, low pay sectors” as transport, storage and retail, and warned that “Britain’s former industrial heartlands in the Midlands and north are more vulnerable because of a concentration of jobs in risk sectors. So, without policy intervention, the technological revolution is set to exacerbate regional inequalities.”

There are signs that such things are now on the radar of senior Labour politicians, as with Jeremy Corbyn’s acknowledgment last year that automation “could make so much of contemporary work redundant”. But beyond rather dreamy talk of “putting the ownership and control of the robots in the hands of those who work with them” and commendable plans for a national education service, not nearly enough flesh has been put on these bones, perhaps because the conversation about automation is often still stuck in the utopian wilds of a work-free, “post-capitalist” future.

What does the left have to say to all those Shop Direct workers in Shaw, Chadderton and Little Hulton? How will it allay the fears of the thousands of other people likely to be faced with the same experience?

Put another way, if the future is already here, what now?

Raising the Floor: How a Universal Basic Income Can Renew Our Economy – Andy Stern.

Sixteen years into the twenty-first century, we are trying to find solutions to its unique problems, especially those that are challenging the way we work, earn a living, and support our families, with ideas and methods that worked in the twentieth.


Most Americans, working or not, have lived through a very tough period, especially since the financial crash of 2008. What I have found from speaking with thousands of people from every economic stratum is that they often blame themselves for not finding a permanent or good-paying job; for getting laid off or working inconsistent hours; for taking multiple low-wage jobs or contingent work just to make ends meet; for, especially in the case of recent college graduates, needing to move back into their parents’ house; for not building a nest egg or enough savings to retire; for working tirelessly so that their kids could go to college, and now their children can’t get a job.

What I want to say to each and every one is: This should not be about personal blame because the changes that are causing this jobless, wage-less recovery are structural. You worked hard. You played by the rules. You did exactly what you were supposed to do to fulfill your part of America’s social contract.

There is hope for our economy and future, but only if we come to terms with how the current explosion in technology is likely to create a shortage of jobs, a surplus of labor, and a bigger and bigger gap between the rich and poor over the next twenty years.

When I left my job as president of the Service Employees International Union (SEIU) in 2010, I undertook a five year journey to better understand the way technology is changing the economy and workplace, and to find a way to revive the American Dream. I have structured this book around many of the people I met on this journey, their assessment of the problem, my observations about whether I think they are on the money or just plain wrong, and then the solution of a universal basic income. That solution is a work in progress. I invite you to join me in debating it, refining it, and building a constituency for it, so that we can help America fulfill its historical promise to future generations of our children.

Andy Stern, Washington, DC, June 2016

Can we invent a better future?

“There’s something happening here. What it is ain’t exactly clear.” – Buffalo Springfield

CAMBRIDGE, MA. NOVEMBER 17, 2014

I am walking around one of the most out-of-this-world places on earth, the MIT Media Lab in Cambridge, Massachusetts. There is a huge amount of brainpower here: more than twenty groups of MIT faculty, students, and researchers working on 350 projects that range from smart prostheses and sociable robots to advanced sensor networks and electronic ink. The Media Lab is famous for its emphasis on creative collaboration between the best and the brightest in disparate fields: scientists and engineers here work alongside artists, designers, philosophers, neurobiologists, and communications experts. Their mission is “to go beyond known boundaries and disciplines” and “beyond the obvious to the questions not yet asked, questions whose answers could radically improve the way people live, learn, express themselves, work, and play.”

Their motto, “inventing a better future”, conveys a forward-looking confidence that’s been lacking in our nation since the 2008 financial crisis plunged us into a recession followed by a slow, anxiety-inducing recovery.

On this cloudy November day, it seems that all the sunlight in Cambridge is streaming through the glass and metallic screens that cloak the Media Lab, rendering it a luminous bubble, or a glowing alternative universe. Architect Fumihiko Maki designed the building around a central atrium that rises six floors, “a kind of vertical street,” he called it, with spacious labs branching off on each floor. Walking up the atrium, you look through the glass walls and see groups of (mainly) young geniuses at work.

Or are they playing? I am struck by how casual and unhurried they seem. Whether they are lounging on couches, gathered around a computer screen, or drawing equations on a wall, these inventors of the future seem to be having a whole lot of fun. That’s not how the thirty people who accompanied me here, leaders in the labor movement and the foundation world, would characterize their own workplaces. They have been grappling with growing income inequality, stagnant wages, and increasing poverty in the communities they serve, and also with political gridlock on Capitol Hill. It’s been harder for them to get funding and resources for the important work they do.

They have come here, as I have, to get a glimpse of how MIT’s wizards and their technologies will impact the millions of middle and lower income Americans whose lives are already being disrupted and diminished in the new digital economy. Will these emerging technologies create jobs or destroy them? Will they give lower and middle income families more or less access to the American Dream? Will they make my generation’s definition of a “job” obsolete for my kids and grandkids?

For the past five years, I’ve been on a personal journey to understand an issue that should be at the heart of our nation’s economic and social policies: the future of work. I have been interviewing CEOs, labor leaders, futurists, politicians, entrepreneurs, and historians to find answers to the following questions: After decades of globalization and technology-driven growth, what will America’s workplaces look like in twenty years? Which job categories will be gone forever in the age of robotics and artificial intelligence? Which new ones, if any, will take their place?

The MIT Media Lab is one stop on that journey; since the early 1990s, it has been at the forefront of wireless communication, 3D printing, and digital computing. Looking around at my colleagues, I think: People like us, labor organizers, community activists, people at the helm of small foundations that work for social and economic justice, don’t usually visit places like this. We spend our time in factories and on farms, in fast food restaurants and in hospitals, advocating for higher wages and better working conditions. While we refer to our organizations by acronyms, SEIU (Service Employees International Union), OSF (Open Society Foundations), and NDWA (National Domestic Workers Alliance), there is one acronym most of us would never use to describe ourselves personally: STEM. Most of today’s discussion will involve science, technology, engineering, and mathematics, the STEM subjects, and it will go way over our heads. Instead, we’ll be filtering what we see through our “progressive” justice, engagement, and empowerment lenses, and through how we experience technology in our own lives.

Personally, I am of two minds about technology. On the one hand, I want it to work well and make my life easier and more enjoyable. On the other, I’m afraid of the consequences if all of the futuristic promises of technology come to fruition.

As I wait for the first session to begin, I take out my iPhone and begin reading about the Media Lab’s CE 2.0 project. CE stands for consumer electronics, and CE 2.0 is “a collaboration with member companies to formulate the principles for a new generation of consumer electronics that are highly connected, seamlessly interoperable, situation-aware, and radically simpler to use,” according to the Media Lab’s website.

CE 2.0 sounds really, really great to me. Then I realize that I’m reading about it on the same iPhone that keeps dropping conference calls with my partners and clients in other parts of the city whenever I’m in my New York apartment. So how can I ever expect CE 2.0 to live up to the Media Lab’s hype? And then I find myself thinking: What if it does? What if CE 2.0 exceeds all the hype and disrupts a whole bunch of industries? Which jobs will become obsolete as a result of this new generation of consumer electronics? Electricians? The people who make batteries, plugs, and electrical wiring? I keep going back and forth between the promise and the hype and everything in between. Even though CE 2.0 is basically an abstraction to me, it conjures up all sorts of expectations and fears. And I think that many of my friends and colleagues have similar longings, doubts, and fears when it comes to technology.

All of the projects at the MIT Media Lab are supported by corporations. Twitter, for instance, has committed $10 million to the Laboratory for Social Machines, which is developing technologies that “make sense of semantic and social patterns across the broad span of public mass media, social media, data streams, and digital content.” Google Education is funding the Center for Mobile Learning, which seeks to innovate education through mobile computing. The corporations have no say in the direction of the research, or ownership of what the MIT researchers patent or produce; they simply have a front-row seat as the researchers take the emerging technologies wherever their curiosity and the technology take them.

Clearly, there is a countercultural ethos to the Media Lab. Its nine governing principles are: “Resilience over strength. Pull over push. Risk over safety. Systems over objects. Compasses over maps. Practice over theory. Disobedience over compliance. Emergence over authority. Learning over education.” For me, that’s a welcome invitation to imagine, explore new frontiers, and dream.

Before we tour the various labs, Peter Cohen, the Media Lab’s Director of Development, tells us that there is an artistic or design component to most of the Media Lab’s projects. “Much of our work is displayed in museums,” he says. “And some are performed in concert halls.” One I particularly like is the brainchild of Tod Machover, who heads the Hyperinstruments/Opera of the Future group. Machover, who co-created the popular Guitar Hero and Rock Band music video games, is composing a series of urban symphonies that attempt to capture the spirit of cities around the world. Using technology he’s developed that can collect and translate sounds into music, he enlists people who live, work, and make use of each city to help create a collective musical portrait of their town. To date, he’s captured the spirits of Toronto, Edinburgh, Perth, and Lucerne through his new technology. Now he is turning his attention to Detroit. I love this idea of getting factory workers, teachers, taxi cab drivers, police officers, and other people who live and work in Detroit involved in the creation of an urban symphony that can be performed so that the entire city can enjoy and take pride in it.

We head to the Biomechatronics Lab on the second floor. Luke Mooney, our guide to this lab, is pursuing a PhD in mechanical engineering at MIT. Only twenty-four, he has already designed and developed an energy-efficient powered knee prosthesis. He shows us the prototype, a gleaming exoskeleton enveloping the knee of a sleek mannequin. Mooney created the prosthesis with an expert team of biophysicists, neuroscientists, and mechanical, biomedical, and tissue engineers. It will reduce the “metabolic cost of walking,” he tells us, making it easier for a sixty-four-year-old with worn-out knees and a regularly sore back like me to maybe run again and lift far more weight than I could ever have dreamed of lifting.

Looking around, I’m struck by the mess: coffee cups and Red Bull cans, plaster molds of ankles, knees, and feet, and discarded tools and motors lying all over the place, like the morning after a month of all-nighters.

The founder of the Biomechatronics Lab, Hugh Herr, is out of town this day, but his life mission clearly animates the Lab. When he was seventeen, a rock-climbing accident resulted in the amputation of both his legs below the knees. Frustrated with the prosthetic devices on the market, he got a master’s degree in mechanical engineering and a PhD in biophysics, and used that knowledge to design a prosthesis that enabled him to compete as an elite rock climber again. In 2013, after the Boston Marathon bombings, he designed a special prosthesis for one of the victims: ballroom dancer Adrianne Haslet-Davis, who had lost her lower left leg in the blast. Seven months later, at a TED talk Herr was giving, Haslet-Davis walked out on the stage with her partner and danced a rumba. “In 3.5 seconds, the criminals and cowards took Adrianne off the dance floor,” Herr said. “In 200 days, we put her back.”

At our next stop, the Personal Robotics Lab, Indian-born researcher Palash Nandy tells us the key to his human-friendly robots: their eyes. By manipulating a robot’s eyes and eyebrows, Nandy and his colleagues can make the robot appear sad, mad, confused, excited, attentive, or bored. Hospitals are beginning to deploy human-friendly robots as helpmates to terminally ill kids. “Unlike the staff and other patients, who are constantly changing,” Nandy says, “the robot is always there for the child, asking him how he’s doing, which reduces stress.”

With the help of sophisticated sensors, the Personal Robotics Lab is building robots that are increasingly responsive to the emotional states of humans. Says Nandy: “Leo the Robot might not understand what you need or mean by the words you say, but he can pick up the emotional tone of your voice.” In a video he shows us, a researcher warns Leo, “Cookie Monster is bad. He wants to eat your cookies.” In response, Leo narrows his eyes, as if to say: “I get your message. I’ll keep my distance from that greedy Cookie Monster.”

Nandy also sings the praises of a robot who helps children learn French, and one that’s been programmed to help keep adults motivated as they lose weight.

My colleagues are full of questions and also objections:

“Can’t people do most of these tasks as well or better than the robots?”

“If every child grows up with their own personal robot friend, how will they ever learn to negotiate a real human relationship?”

“If you can create a robot friend, can’t you also create a robot torturer? Ever thought of that?”

“Yeah,” Nandy says, seeming to make light of the question. “But it’s hard to imagine evil robots when I’m around robots that say ‘I love you’ all day long.”

As awestruck and exhilarated as we are by what we see, my colleagues and I are getting frustrated by the long pauses and glib answers that greet so many of our concerns about the long-term impact of the technologies being developed here on the job market, human relationships, and our political rights and freedoms. As they invent the future, are these brilliant and passionate innovators alert to the societal risks and ramifications of what they’re doing?

On our way into the Mediated Matter Lab, we encounter a chaise longue and a grouping of truly stunning bowls and sculptures that have been created using 3D printers. Markus Kayser, our guide through the Mediated Matter Lab, is a thirty-one-year-old grad student from northern Germany. A few years ago he received considerable acclaim and media attention for a device he created called the Solar Sinter.

Kayser shows us a video of a bearded hipster, himself, carrying a metallic suitcase over a sand dune in the Egyptian Sahara. It’s like a scene in a Buster Keaton movie. He stops and pulls several photovoltaic batteries and four large lenses from the suitcase. Then he focuses the lenses onto a bed of sand, concentrating the sun’s heat to 1,600 degrees Celsius. Within seconds, the concentrated heat of the sun has melted the sand and transformed it into glass. What happens next on the video gives us a glimpse into the future of manufacturing. Kayser takes his laptop out of the suitcase and spends a few minutes designing a bowl on the computer. Then, with a makeshift 3D printer powered by solar energy, he prints out the bowl in layers of plywood. He places the plywood prototype of the bowl on a small patch of desert. Then he focuses the lenses of the battery-charged Solar Sinter on the sand. And then, layer after layer, he melts the sand into glass until he’s manufactured a glass bowl out of the desert’s abundant supplies of sun and sand.

“My whole goal is to explore hybrid solutions that link technology and natural energy to test new scenarios for production,” Kayser tells us. But the glass bowl only hints at the possibilities. Engineers from NASA and the US Army have already talked to him about the potential of using his technology to build emergency shelters after hurricanes and housing in hazardous environments, for example, in the desert regions of Iran and Iraq.

“How about detention centers for alleged terrorists?” one of my colleagues asks slyly. “I bet the Army is licking its chops to build a glass Guantanamo in the desert only miles from the Syrian border.”

Another asks: “Has anyone talked to you about using this technology to create urban housing for the poor?”

Kayser pauses before shaking his head no. The questions that consume our group most, namely how we can use this powerful technology for good rather than evil, and to remedy the world’s inequities and suffering, do not seem of consequence to Kayser. This is by design: the Media Lab encourages the researchers to follow the technology wherever it leads them, without the pressure of developing a big-bucks commercial product, or an application that will save the world. If they focus on the end results and specific commercial and social outcomes, they will be less attuned to the technology, to the materials, and to nature itself, which would impede their creative process. I understand that perspective, but it also worries me.

As I watch Kayser and his Solar Sinter turn sand into glass, another image comes to mind: Nearly 4,700 years ago, in the same Egyptian desert where Kayser made his video, more than 30,000 slaves, some of them probably my ancestors, and citizen-volunteers spent seven years quarrying, cutting, and transporting thousands of tons of stone to create Pharaoh Khufu’s Great Pyramid, one of the Seven Wonders of the Ancient World. That’s a lot of labor compared with what it will take to build a modern-day community of glass houses in the vicinity of the Pyramid, or in Palm Springs, using the next iteration of the Solar Sinter. I’m concerned about the Solar Sinter’s impact on construction jobs.

Employment issues are at best a distant concern for the wizards who are inventing the future. Press them and they’ll say that technological disruption always produces new jobs and industries: Isn’t that what happened after Gutenberg invented the printing press and Ford automated the assembly line?

It was. But, as I reflect on this day, I remember a conversation I had with Steven Berkenfeld, an investment banker at Barclays Capital. Berkenfeld has a unique and important perspective on the relationship between technology and jobs. Day in, day out, he is pitched proposals by entrepreneurs looking to take their companies public. Most of the companies are developing technologies that will help businesses become more productive and efficient. That means fewer and fewer workers, according to Berkenfeld.

“Every company is trying to do more with less,” he explained. “Industry by industry, and sector by sector, every company is looking to drive out labor.” And very few policy makers are aware of the repercussions. “They convince themselves that technology will create new jobs when, in fact, it will displace them by the millions, spreading pain and suffering throughout the country.

“When you look at the future from that perspective, the single most important decisions we need to make are: How do we help people continue to make a living, and how do we keep them engaged?”

At the end of our visit to the Media Lab, my job, as convener of the group, is to summarize some of the day’s lessons. I begin with an observation: “It’s amazing how the only thing that doesn’t work here is when these genius researchers try to project their PowerPoints onto the screen.” The twenty or so people who remain in our group laugh knowingly. Just like us, the wizards at MIT can’t seem to present a PowerPoint without encountering an embarrassing technological glitch.

I continue by quoting a line from a song by Buffalo Springfield, the 1960s American-Canadian rock band: “There’s something happening here. What it is ain’t exactly clear.”

That’s how I feel about our day at MIT; it has given us a preview of the future of work, which will be amazing if we can grapple with the critical ethical and social justice questions it elicits. “I’ve spent my whole life in the labor movement chasing the future,” I tell my colleagues. “Now I’d like to catch up to it, or maybe even jump ahead of it, so I can see the future coming toward me.”

Toward that end, I ask everyone in the group to answer “yes,” “no,” or “abstain” to two hypotheses.

Hypothesis number one: “The role of technology in the future of work will be so significant that current conceptions of a job may no longer reflect the relationship to work for most people. Even the idea of jobs as the best and most stable source of income will come into question.”

Hypothesis number two: “The very real prospect in the United States is that twenty years from now most people will not receive a singular income from a single employer in a traditional employee-employer relationship. For some, such as those with substantial education, this might mean freedom. For others, those with a substandard education and a criminal record, the resulting structural inequality will likely increase vulnerability.”

There are a number of groans (“Jesus, Andy, can you get any more long-winded or rhetorical?”) but each member of the group writes their answers on a piece of paper, which I collect and tally. The first hypothesis gets eighteen yeses and two abstentions. The second gets sixteen yeses, three noes, and one abstention.

I am genuinely surprised by these results. Six months ago, at our last meeting of the OSF Future of Work inquiry, my colleagues had a much more varied response to these hypotheses. At least half of them did not agree with my premise that technology would have a disruptive impact on jobs, the workplace, and employer-employee relationships, and some of them disputed the premise quite angrily. (“What do you think we are, Andy? Psychics?”) Today’s tally reflects their acknowledgment that something is happening at MIT and across the United States that will fundamentally change the way Americans live and work. What it is ain’t exactly clear, but it merits our serious and immediate attention.

As they go about inventing the future, the scientists and researchers at the Media Lab aren’t thinking about the consequences of their work on the millions of Americans who are laboring in factories, building our homes, guarding our streets, investing our money, computing our taxes, teaching our children and teenagers, staffing our hospitals, driving our buses, and taking care of our elders and disabled veterans.

They aren’t thinking about the millions of parents who scrimped and saved to send their kids to college, because our country told them that college was the gateway to success, only to see those same kids underemployed or jobless when they graduate and move back home.

They aren’t thinking about the dwindling fortunes of the millions of middle class Americans who spent the money they earned on products and services that made our nation’s economy and lifestyle the envy of the world.

They aren’t thinking about the forty-seven million Americans who live in poverty, including a fifth of the nation’s children.

Nor should they. That is my job, our job together, and the purpose of this book.

But I’m getting ahead of myself. The reason I am at the MIT Media Lab stems from a combination of personal and professional factors behind what seemed to many to be my abrupt decision to step down as head of America’s most successful union, the Service Employees International Union, or SEIU. To better understand where I am coming from and going with this book, you need to understand my personal journey.

*

from

Raising the Floor: How a Universal Basic Income Can Renew Our Economy and Rebuild the American Dream

by Andy Stern

get it at Amazon.com

As robots take our jobs, we need something else. I know what that is – George Monbiot.

It’s untenable to let salaried work define us. In the future, what we do for society unpaid should be at least as important.

Why bother designing robots when you can reduce human beings to machines? Last week, Amazon acquired a patent for a wristband that can track the hand movements of workers. If this technology is developed, it could grant companies almost total control over their workforce.

Last month the Guardian interviewed a young man called Aaron Callaway, who works nights in an Amazon warehouse. He has to place 250 items an hour into particular carts. His work, he says, is so repetitive, antisocial and alienating that: “I feel like I’ve lost who I was. My main interaction is with the robots.” And this is before the wristbands have been deployed.

I see the terrible story of Don Lane, the DPD driver who collapsed and died from diabetes, as another instance of the same dehumanisation. After being fined £150 by the company for taking a day off to see his doctor, this “self-employed contractor” (who worked full-time for the company and wore its uniform) felt he could no longer keep his hospital appointments. As the philosopher Byung-Chul Han argues, in the gig economy, “every individual is master and slave in one… class struggle has become an internal struggle with oneself.” Everything work offered during the social democratic era – economic security, a sense of belonging, social life, a political focus – has been stripped away: alienation is now almost complete. Digital Taylorism, splitting interesting jobs into tasks of mind-robbing monotony, threatens to degrade almost every form of labour. Workers are reduced to the crash-test dummies of the post-industrial age. The robots have arrived, and you are one of them.

So where do we find identity, meaning and purpose, a sense of autonomy, pride and utility? The answer, for many people, is volunteering. Over the past few weeks, I’ve spent a fair bit of time in the NHS, and I’ve realised that there are two national health systems in this country: the official one, performing daily miracles, and the voluntary network that supports it.

Everywhere I look, there are notices posted by people helping at the hospital, running support groups for other patients, raising money for research and equipment. Without this support, I suspect the official system would fall apart. And so would many of the patients. Some fascinating research papers suggest that positive interactions with other people promote physical healing, reduce physical pain, and minimise anxiety and stress for patients about to have an operation. Support groups save lives. So do those who raise money for treatment and research.

Last week I spoke to two remarkable volunteers. Jeanne Chattoe started fundraising for Against Breast Cancer after her sister was diagnosed with the disease. Until that point, she had lived a quiet life, bringing up her children and working in her sister’s luggage shop. She soon discovered powers she never knew she possessed. Before long, she started organising an annual fashion show that over 13 years raised almost £400,000. Then, lying awake one night, she had a great idea: why not decorate her home town pink once a year, recruiting the whole community to the cause? Witney in the Pink has now been running for 17 years, and all the shops participate: even the butchers dye their uniforms pink. The event raises at least £6,000 a year.

“It’s changed my whole life,” Jeanne told me. “I eat, live and breathe against breast cancer … I don’t know what I would have done without fundraising. Probably nothing. It’s given me a purpose.” She acquired so much expertise organising these events that in 2009 Against Breast Cancer appointed her chair of its trustees, a position she still holds today.

After his transplant, Kieran Sandwell donated his old heart to the British Heart Foundation. Then he began thinking about how he could support its work. He told me he had “been on the work treadmill where I’ve not enjoyed my job for years, wondering what I’m doing”. He set off to walk the entire coastline of the UK, to raise money and awareness. He now has 2,800 miles behind him and 2,000 ahead. “I’ve discovered that you can actually put your mind to anything … whatever I come across in my life, I can probably cope with it. Nothing fazes me now.”

Like Jeanne, he has unlocked unexpected powers. “I didn’t know I had in me the ability just to be able to talk to anyone.” His trek has also ignited a love of nature. “I seem to have created this fluffy bubble: what happens to me every day is wonderful… I want to try to show people that there’s a better life out there.” For Jeanne and Kieran, volunteering has given them what work once promised: meaning, purpose, place, community. This, surely, is where hope lies.

So here’s my outrageous proposal: replace careers advice with volunteering advice. I’ve argued before that much of the careers advice offered by schools and universities is worse than useless, shoving students headfirst into the machine, reinforcing the seductive power of life-destroying corporations. In fairness to the advisers, their job is becoming almost impossible anyway: the entire infrastructure of employment seems designed to eliminate fulfilling and fascinating work.

But while there is little chance of finding jobs that match students’ hopes and personalities and engage their capabilities, there is every chance of connecting them with good opportunities to volunteer. Perhaps it is time we saw volunteering as central to our identities and work as peripheral: something we have to do, but which no longer defines us. I would love to hear people reply, when asked what they do: “I volunteer at the food bank and run marathons. In my time off, I work for money.”

And there’s a side-effect. The world has been wrecked by people seeking status through their work. In many professions – such as fossil fuel energy companies, weapons manufacture, banking, advertising – your prestige rises with the harm you do. The greater your destruction of other people’s lives, the greater your contribution to shareholder value. But when you volunteer, the respect you gain rises with the good you do.

We should keep fighting for better jobs and better working conditions. But the battle against workplace technology is an unequal one. The real economic struggle now is for the redistribution of wealth generated by labour and machines, through universal basic income, the revival of the commons and other such policies. Until we achieve this, most people will have to take whatever work is on offer. But we cannot let it own us.

The Guardian

Universities in the Age of AI – Andrew Wachtel.

Over the next 50 years or so, as AI and machine learning become more powerful, human labor will be cannibalized by technologies that outperform people in nearly every job function. How should higher education prepare students for this eventuality?

BISHKEK – I was recently offered the presidency of a university in Kazakhstan that focuses primarily on business, economics, and law, and that teaches these subjects in a narrow, albeit intellectually rigorous, way. I am considering the job, but I have a few conditions.

What I have proposed is to transform the university into an institution where students continue to concentrate in these three disciplines, but must also complete a rigorous “core curriculum” in the humanities, social sciences, and natural sciences – including computer science and statistics. Students would also need to choose a minor in one of the humanities or social sciences.

There are many reasons for insisting on this transformation, but the most compelling one, from my perspective, is the need to prepare future graduates for a world in which artificial intelligence and AI-assisted technology play an increasingly dominant role. To succeed in the workplace of tomorrow, students will need new skills.

Over the next 50 years or so, as AI and machine learning become more powerful, human labor will be cannibalized by technologies that outperform people in nearly every job function. Higher education must prepare students for this eventuality. Assuming AI will transform the future of work in our students’ lifetime, educators must consider what skills graduates will need when humans can no longer compete with robots.

It is not hard to predict that rote tasks will disappear first. This transition is already occurring in some rich countries, but will take longer in places like Kazakhstan. Once this trend picks up pace, however, populations will adjust accordingly. For centuries, communities grew as economic opportunities expanded; for example, farmers had bigger families as demand for products increased, requiring more labor to deliver goods to consumers.

But the world’s current population is unsustainable. As AI moves deeper into the workplace, jobs will disappear, employment will decline, and populations will shrink accordingly. That is good in principle – the planet is already bursting at the seams – but it will be difficult to manage in the short term, as the pace of population decline will not compensate for job losses amid the robot revolution.

For this reason, the next generation of human labor – today’s university students – requires specialized training to thrive. At the same time, and perhaps more than ever before, they need the kind of education that allows them to think broadly and to make unusual and unexpected connections across many fields.

Clearly, tomorrow’s leaders will need an intimate familiarity with computers – from basic programming to neural networks – to understand how machines controlling productivity and analytic processes function. But graduates will also need experience in psychology, if only to grasp how a computer’s “brain” differs from their own. And workers of the future will require training in ethics, to help them navigate a world in which the value of human beings can no longer be taken for granted.

Educators preparing students for this future must start now. Business majors should study economic and political history to avoid becoming blind determinists. Economists must learn from engineering students, as it is engineers who will build the future workforce. And law students should focus on the intersection of big data and human rights, so that they gain the insight that will be needed to defend people from forces that may seek to turn individuals into disposable parts.

Even students studying creative and leisure disciplines must learn differently. For one thing, in an AI-dominated world, people will need help managing their extra time. We won’t stop playing tennis just because robots start winning Wimbledon; but new organizational and communication skills will be required to help navigate changes in how humans create and play. Managing these industries will take new skills tailored to a fully AI world.

The future of work may look nothing like the scenarios I envision, or it may be far more disruptive; no one really knows. But higher education has a responsibility to prepare students for every possible scenario – even those that today appear to be barely plausible. The best strategy for educators in any field, and at any time, is to teach skills that make humans human, rather than training students to outcompete new technologies.

No matter where I work in education, preparing young people for their futures will always be my job. And today, that future looks to be dominated by machines. To succeed, educators – and the universities we inhabit – must evolve.

*

Andrew Wachtel is President of the American University of Central Asia.

Project Syndicate

Robots will take our jobs. We’d better plan now, before it’s too late – Larry Elliott.

The opening of the Amazon Go store in Seattle brings us one step closer to the end of work as we know it.

A new sort of convenience store opened in the basement of the headquarters of Amazon in Seattle in January. Customers walk in, scan their phones, pick what they want off the shelves and walk out again. At Amazon Go there are no checkouts and no cashiers.

Instead, it is what the tech giant calls “just walk out” shopping, made possible by a new generation of machines that can sense which customer is which and what they are picking off the shelves. Within a minute or two of the shopper leaving the store, a receipt pops up on their phone for items they have bought.

This is the shape of things to come in food retailing. Technological change is happening fast and it has economic, social and ethical ramifications. There is a downside to Amazon Go, even though consumers benefit from lower prices and don’t waste time in queues. The store is only open to shoppers who can download an app on their smartphone, which rules out those who rely on welfare food stamps. Constant surveillance means there’s no shoplifting, but it has a whiff of Big Brother about it.

Change is always disruptive but the upheaval likely as a result of the next wave of automation will be especially marked. Driverless cars, for instance, are possible because intelligent machines can sense and have conversations with each other. They can do things – or will eventually be able to do things – that were once the exclusive preserve of humans. That means higher growth but also the risk that the owners of the machines get richer and richer while those displaced get angrier and angrier.

The experience of past industrial revolutions suggests that resisting technological change is futile. Nor, given that automation offers some tangible benefits – in mobility for the elderly and in healthcare, for instance – is it the cleverest of responses.

A robot tax – a levy that firms would pay if machines were taking the place of humans – would slow down the pace of automation by making the machines more expensive but this too has costs, especially for a country such as Britain, which has a problem with low investment, low productivity and a shrunken industrial base. The UK has 33 robot units per 10,000 workers, compared with 93 in the US and 213 in Japan, which suggests the need for more automation not less. On the plus side, the UK has more small and medium-sized companies in artificial intelligence than Germany or France. Penalising these firms with a robot tax does not seem like a smart idea.

The big issue is not whether the robots are coming, because they are. It is not even whether they will boost growth, because they will. On some estimates the UK economy will be 10% bigger by 2030 as the result of artificial intelligence alone. The issue is not one of production but of distribution, of whether there is a Scandinavian-style solution to the challenges of the machine age.

In some ways, the debate that was taking place between the tech industry, politicians and academics in Davos last week was similar to that which surrounded globalisation in the early 1990s. Back then, it was accepted that free movement of goods, people and money around the world would create losers as well as winners, but provided the losers were adequately compensated – either through reskilling, better education, or a stronger social safety net – all would be well.

But the reskilling never happened. Governments did not increase their budgets for education, and in some cases cut them. Welfare safety nets were made less generous. Communities affected by deindustrialisation never really recovered. Writing in the recent McKinsey Quarterly, W Brian Arthur put it this way: “Offshoring in the last few decades has eaten up physical jobs and whole industries, jobs that were not replaced. The current transfer of jobs from the physical to the virtual economy is a different sort of offshoring, not to a foreign country but to a virtual one. If we follow recent history we can’t assume these jobs will be replaced either.”

The Centre for Cities suggests that the areas hardest hit by the hollowing out of manufacturing are going to be hardest hit by the next wave of automation as well. That’s because the factories and the pits were replaced by call centres and warehouses, where the scope for humans to be replaced by machines is most obvious.

But there are going to be middle-class casualties too: machines can replace radiologists, lawyers and journalists just as they have already replaced bank cashiers and will soon be replacing lorry drivers. Clearly, it is important to avoid repeating the mistakes of the past. Any response to the challenge posed by smart machines must be to invest more in education, training and skills. One suggestion made in Davos was that governments should consider tax incentives for investment in human, as well as physical, capital.

Still this won’t be sufficient. As the Institute for Public Policy Research has noted, new models of ownership are needed to ensure that the dividends of automation are broadly shared. One of its suggestions is a citizens’ wealth fund that would own a broad portfolio of assets on behalf of the public and would pay out a universal capital dividend. This could be financed either from the proceeds of asset sales or by companies paying corporation tax in the form of shares that would become more valuable due to the higher profits generated by automation.

But the dislocation will be considerable, and comes at a time when social fabrics are already frayed. To ensure that, as in the past, technological change leads to a net increase in jobs, the benefits will have to be spread around and the concept of what constitutes work rethought. That’s why one of the hardest-working academics in Davos last week was Guy Standing of SOAS University of London, who was on panel after panel making the case for a universal basic income, an idea that has its critics on both left and right, but whose time may well have come.

The Guardian


WTF? What’s the Future and Why It’s Up to Us – Tim O’Reilly.

ABOUT THE BOOK 

Renowned as ‘the Oracle of Silicon Valley’, Tim O’Reilly has spent three decades exploring the world-transforming power of information technology. 

Now, the leading thinker of the internet age turns his eye to the future – and asks the questions that will frame the next stage of the digital revolution: 

Will increased automation destroy jobs or create new opportunities? 

What will the company of tomorrow look like? 

Is a world dominated by algorithms to be welcomed or feared? 

How can we ensure that technology serves people, rather than the other way around? 

How can we all become better at mapping future trends?

Tim O’Reilly’s insights create an authoritative, compelling and often surprising portrait of the world we will soon inhabit, highlighting both the many pitfalls and the enormous opportunities that lie ahead.

ABOUT THE AUTHOR 

TIM O’REILLY is one of the world’s most influential tech analysts. As the founder of the publishing company O’Reilly Media, he became known for spotting technologies with world-shaking potential – from predicting the rise of the internet in the 1990s to coining and popularising terms like ‘Web 2.0’ and ‘Open Source’ in the 2000s. WTF? is his first book aimed at the general reader.

***

INTRODUCTION: THE WTF? ECONOMY 

THIS MORNING, I spoke out loud to a $150 device in my kitchen, told it to check if my flight was on time, and asked it to call a Lyft to take me to the airport. A car showed up a few minutes later, and my smartphone buzzed to let me know it had arrived. And in a few years, that car might very well be driving itself. 

Someone seeing this for the first time would have every excuse to say, “WTF?”

At times, “WTF?” is an expression of astonishment. But many people reading the news about technologies like artificial intelligence and self-driving cars and drones feel a profound sense of unease and even dismay. They worry about whether their children will have jobs, or whether the robots will have taken them all.

They are also saying “WTF?” but in a very different tone of voice. It is an expletive.

Astonishment: phones that give advice about the best restaurant nearby or the fastest route to work today; artificial intelligences that write news stories or advise doctors; 3-D printers that make replacement parts—for humans; gene editing that can cure disease or bring extinct species back to life; new forms of corporate organization that marshal thousands of on-demand workers so that consumers can summon services at the push of a button in an app.

Dismay: the fear that robots and AIs will take away jobs, reward their owners richly, and leave formerly middle-class workers part of a new underclass; tens of millions of jobs here in the United States that don’t pay people enough to live on; little-understood financial products and profit-seeking algorithms that can take down the entire world economy and drive millions of people from their homes; a surveillance society that tracks our every move and stores it in corporate and government databases.

Everything is amazing, everything is horrible, and it’s all moving too fast. We are heading pell-mell toward a world shaped by technology in ways that we don’t understand and have many reasons to fear. 

WTF? Google AlphaGo, an artificial intelligence program, beat the world’s best human Go player, an event that was widely predicted to be at least twenty years in the future—until it happened in 2016. If AlphaGo can happen twenty years early, what else might hit us even sooner than we expect? 

For starters: An AI running on a $35 Raspberry Pi computer beat a top US Air Force fighter pilot trainer in combat simulation. The world’s largest hedge fund has announced that it wants an AI to make three-fourths of management decisions, including hiring and firing.

Oxford University researchers estimate that up to 47% of human tasks, including many components of white-collar jobs, may be done by machines within as little as twenty years. WTF?

Uber has put taxi drivers out of work by replacing them with ordinary people offering rides in their own cars, creating millions of part-time jobs worldwide. Yet Uber is intent on eventually replacing those on-demand drivers with completely automated vehicles. WTF? 

Without owning a single room, Airbnb has more rooms on offer than some of the largest hotel groups in the world. Airbnb has under 3,000 employees, while Hilton has 152,000. New forms of corporate organization are outcompeting businesses based on best practices that we’ve followed for the lifetimes of most business leaders. WTF? 

Social media algorithms may have affected the outcome of the 2016 US presidential election. WTF?

While new technologies are making some people very rich, incomes have stagnated for ordinary people, and for the first time, children in developed countries are on track to earn less than their parents.

What do AI, self-driving cars, on-demand services, and income inequality have in common? They are telling us, loud and clear, that we’re in for massive changes in work, business, and the economy.

But just because we can see that the future is going to be very different doesn’t mean that we know exactly how it’s going to unfold, or when. Perhaps “WTF?” really stands for “What’s the Future?” Where is technology taking us? Is it going to fill us with astonishment or dismay? And most important, what is our role in deciding that future? How do we make choices today that will result in a world we want to live in? 

I’ve spent my career as a technology evangelist, book publisher, conference producer, and investor wrestling with questions like these. My company, O’Reilly Media, works to identify important innovations, and by spreading knowledge about them, to amplify their impact and speed their adoption. And we’ve tried to sound a warning when a failure to understand how technology is changing the rules for business or society is leading us down the wrong path. 

In the process, we’ve watched numerous technology booms and busts, and seen companies go from seemingly unstoppable to irrelevant, while early-stage technologies that no one took seriously went on to change the world.

If all you read are the headlines, you might have the mistaken idea that how highly investors value a company is the key to understanding which technologies really matter. We hear constantly that Uber is “worth” $68 billion, more than General Motors or Ford; Airbnb is “worth” $30 billion, more than Hilton Hotels and almost as much as Marriott.

Those huge numbers can make the companies seem inevitable, with their success already achieved. But it is only when a business becomes profitably self-sustaining, rather than subsidized by investors, that we can be sure that it is here to stay. After all, after eight years Uber is still losing $2 billion every year in its race to get to worldwide scale. That’s an amount that dwarfs the losses of companies like Amazon (which lost $2.9 billion over its first five years before showing its first profits in 2001).

Is Uber losing money like Amazon, which went on to become a hugely successful company that transformed retailing, publishing, and enterprise computing, or like a dot-com company that was destined to fail? Is the enthusiasm of its investors a sign of a fundamental restructuring of the nature of work, or a sign of an investment mania like the one leading up to the dot-com bust in 2001? How do we tell the difference? 

Startups with a valuation of more than a billion dollars understandably get a lot of attention, even more so now that they have a name, unicorn, the term du jour in Silicon Valley. Fortune magazine started keeping a list of companies with that exalted status. Silicon Valley news site TechCrunch has a constantly updated “Unicorn Leaderboard.” But even when these companies succeed, they may not be the surest guide to the future. 

At O’Reilly Media, we learned to tune in to very different signals by watching the innovators who first brought us the Internet and the open source software that made it possible. They did what they did out of love and curiosity, not a desire to make a fortune. We saw that radically new industries don’t start when creative entrepreneurs meet venture capitalists. They start with people who are infatuated with seemingly impossible futures. Those who change the world are people who are chasing a very different kind of unicorn, far more important than the Silicon Valley billion-dollar valuation (though some of them will achieve that too). It is the breakthrough, once remarkable, that becomes so ubiquitous that eventually it is taken for granted. 

Tom Stoppard wrote eloquently about a unicorn of this sort in his play Rosencrantz & Guildenstern Are Dead: A man breaking his journey between one place and another at a third place of no name, character, population or significance, sees a unicorn cross his path and disappear …. “My God,” says a second man, “I must be dreaming, I thought I saw a unicorn.” At which point, a dimension is added that makes the experience as alarming as it will ever be. A third witness, you understand, adds no further dimension but only spreads it thinner, and a fourth thinner still, and the more witnesses there are the thinner it gets and the more reasonable it becomes until it is as thin as reality, the name we give to the common experience.

The world today is full of things that once made us say “WTF?” but are already well on their way to being the stuff of daily life. The Linux operating system was a unicorn. It seemed downright impossible that a decentralized community of programmers could build a world-class operating system and give it away for free. Now billions of people rely on it.

The World Wide Web was a unicorn, even though it didn’t make Tim Berners-Lee a billionaire. I remember showing the World Wide Web at a technology conference in 1993, clicking on a link, and saying, “That picture just came over the Internet all the way from the University of Hawaii.” People didn’t believe it. They thought we were making it up. Now everyone expects that you can click on a link to find out anything at any time. 

Google Maps was a unicorn. On the bus not long ago, I watched one old man show another how the little blue dot in Google Maps followed us along as the bus moved. The newcomer to the technology was amazed. Most of us now take it for granted that our phones know exactly where we are, and not only can give us turn-by-turn directions exactly to our destination—by car, by public transit, by bicycle, and on foot—but also can find restaurants or gas stations nearby or notify our friends where we are in real time. 

The original iPhone was a unicorn even before the introduction of the App Store a year later utterly transformed the smartphone market. Once you experienced the simplicity of swiping and touching the screen rather than a tiny keyboard, there was no going back. The original pre-smartphone cell phone itself was a unicorn. As were its predecessors, the telephone and telegraph, radio and television. 

We forget. We forget quickly. And we forget ever more quickly as the pace of innovation increases. AI-powered personal agents like Amazon’s Alexa, Apple’s Siri, the Google Assistant, and Microsoft Cortana are unicorns. Uber and Lyft too are unicorns, but not because of their valuation. Unicorns are the kinds of apps that make us say, “WTF?” in a good way. Can you still remember the first time you realized that you could get the answer to virtually any question with a quick Internet search, or that your phone could route you to any destination? How cool that was, before you started taking it for granted? And how quickly did you move from taking it for granted to complaining about it when it doesn’t work quite right?

We are layering on new kinds of magic that are slowly fading into the ordinary. A whole generation is growing up that thinks nothing of summoning cars or groceries with a smartphone app, or buying something from Amazon and having it show up in a couple of hours, or talking to AI-based personal assistants on their devices and expecting to get results.

It is this kind of unicorn that I’ve spent my career in technology pursuing.

So what makes a real unicorn of this amazing kind?

1. It seems unbelievable at first.
2. It changes the way the world works.
3. It results in an ecosystem of new services, jobs, business models, and industries.

We’ve talked about the “at first unbelievable” part. What about changing the world? In Who Do You Want Your Customers to Become? Michael Schrage writes: Successful innovators don’t ask customers and clients to do something different; they ask them to become someone different …. Successful innovators ask users to embrace—or at least tolerate—new values, new skills, new behaviors, new vocabulary, new ideas, new expectations, and new aspirations. They transform their customers. 

For example, Schrage points out that Apple (and now also Google, Microsoft, and Amazon) asks its “customers to become the sort of people who wouldn’t think twice about talking to their phone as a sentient servant.” Sure enough, there is a new generation of users who think nothing of saying things like: “Siri, make me a six p.m. reservation for two at Camino.” “Alexa, play ‘Ballad of a Thin Man.’” “Okay, Google, remind me to buy currants the next time I’m at Piedmont Grocery.”

Correctly recognizing human speech alone is hard, but listening and then performing complex actions in response—for millions of simultaneous users—requires incredible computing power provided by massive data centers. Those data centers support an ever-more-sophisticated digital infrastructure. For Google to remind me to buy currants the next time I’m at my local supermarket, it has to know where I am at all times, keep track of a particular location I’ve asked for, and bring up the reminder in that context. For Siri to make me a reservation at Camino, it needs to know that Camino is a restaurant in Oakland, and that it is open tonight, and it must allow conversations between machines, so that my phone can lay claim to a table from the restaurant’s reservation system via a service like OpenTable. 

And then it may call other services, either on my devices or in the cloud, to add the reservation to my calendar or to notify friends, so that yet another agent can remind all of us when it is time to leave for our dinner date. 
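To make the shape of that orchestration concrete, here is a minimal, self-contained sketch in Python of the kind of pipeline O’Reilly describes: one spoken request fanning out across several cooperating services. Every function and data structure below is a hypothetical stub, not the real API of Siri, OpenTable, or any assistant; only the architecture (understand the request, resolve the entity, negotiate machine-to-machine, then fan out to calendar and notifications) follows the text.

```python
# A hedged sketch of assistant-style service orchestration.
# All names are hypothetical stand-ins, not any vendor's real API.

from dataclasses import dataclass
from datetime import datetime


@dataclass
class Request:
    venue: str
    party_size: int
    when: datetime


def parse_intent(utterance: str) -> Request:
    """Stand-in for speech recognition and language understanding,
    which a real assistant runs on models in a data center."""
    # Hard-coded result for the example utterance from the text.
    return Request(venue="Camino", party_size=2,
                   when=datetime(2017, 5, 1, 18, 0))


def lookup_venue(name: str) -> dict:
    """Stand-in for a places service: is 'Camino' a restaurant,
    where is it, and is it open tonight?"""
    return {"name": name, "city": "Oakland", "open": True}


def book_table(venue: dict, req: Request) -> str:
    """Stand-in for the machine-to-machine step: the phone lays claim
    to a table through the restaurant's reservation system."""
    return f"{venue['name']}, table for {req.party_size} at {req.when:%H:%M}"


def handle(utterance: str) -> None:
    req = parse_intent(utterance)        # 1. understand the request
    venue = lookup_venue(req.venue)      # 2. resolve the entity
    if venue["open"]:
        confirmation = book_table(venue, req)          # 3. machine-to-machine
        print("Added to calendar:", confirmation)      # 4. fan out to
        print("Notified friends about:", confirmation)  #    other services


handle("Siri, make me a six p.m. reservation for two at Camino.")
```

The interesting engineering point the sketch makes visible is that each numbered step is a separate system conversing over a network, which is why O’Reilly stresses the massive data-center infrastructure behind what feels like a single spoken sentence.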
And then there are the alerts that I didn’t ask for, like Google’s warnings: “Leave now to get to the airport on time. 25 minute delay on the Bay Bridge.” or “There is traffic ahead. Faster route available.”

All of these technologies are additive, and addictive. As they interconnect and layer on each other, they become increasingly powerful, increasingly magical. Once you become accustomed to each new superpower, life without it is like having your magic wand turn into a stick again. These services have been created by human programmers, but they will increasingly be enabled by artificial intelligence.

That’s a scary word to many people. But it is the next step in the progression of the unicorn from the astonishing to the ordinary.

While the term artificial intelligence or AI suggests a truly autonomous intelligence, we are far, far from that eventuality. AI is still just a tool, still subject to human direction. The nature of that direction, and how we must exercise it, is a key subject of this book. AI and other unicorn technologies have the potential to make a better world, in the same way that the technologies of the first industrial revolution created wealth for society that was unimaginable two centuries ago. AI bears the same relationship to previous programming techniques that the internal combustion engine does to the steam engine. It is far more versatile and powerful, and over time we will find ever more uses for it.

Will we use it to make a better world? Or will we use it to amplify the worst features of today’s world? So far, the “WTF?” of dismay seems to have the upper hand. “Everything is amazing,” and yet we are deeply afraid. Sixty-three percent of Americans believe jobs are less secure now than they were twenty to thirty years ago. By a two-to-one ratio, people think good jobs are difficult to find where they live. And many of them blame technology. There is a constant drumbeat of news that tells us that the future is one in which increasingly intelligent machines will take over more and more human work.

The pain is already being felt. For the first time, life expectancy is actually declining in America, and what was once its rich industrial heartland has too often become a landscape of despair. For everyone’s sake, we must choose a different path. Loss of jobs and economic disruption are not inevitable.

There is a profound failure of imagination and will in much of today’s economy. For every Elon Musk—who wants to reinvent the world’s energy infrastructure, build revolutionary new forms of transport, and settle humans on Mars—there are far too many companies that are simply using technology to cut costs and boost their stock price, enriching those able to invest in financial markets at the expense of an ever-growing group that may never be able to do so. Policy makers seem helpless, assuming that the course of technology is inevitable, rather than something we must shape. 

And that gets me to the third characteristic of true unicorns: They create value. Not just financial value, but real-world value for society. Consider past marvels. Could we have moved goods as easily or as quickly without modern earthmoving equipment letting us bore tunnels through mountains or under cities? The superpower of humans + machines made it possible to build cities housing tens of millions of people, for a tiny fraction of our people to work producing the food that all the rest of us eat, and to create a host of other wonders that have made the modern world the most prosperous time in human history. 

Technology is going to take our jobs! Yes. It always has, and the pain and dislocation are real. But it is going to make new kinds of jobs possible. History tells us technology kills professions, but does not kill jobs. We will find things to work on that we couldn’t do before but now can accomplish with the help of today’s amazing technologies. 

Take, for example, laser eye surgery. I used to be legally blind without huge Coke-bottle glasses. Twelve years ago, my eyes were fixed by a surgeon who, with the aid of a robot, was able to do something that had previously been impossible. After more than forty years of wearing glasses so strong that I was legally blind without them, I could see clearly on my own. I kept saying to myself for months afterward, “I’m seeing with my own eyes!” But in order to remove my need for prosthetic vision, the surgeon ended up relying on prosthetics of her own, performing the surgery on my cornea with the aid of a computer-controlled laser.

During the actual surgery, apart from lifting the flap she had cut by hand in the surface of my cornea and smoothing it back into place after the laser was done, her job was to clamp open my eyes, hold my head, utter reassuring words, and tell me, sometimes with urgency, to keep looking at the red light. I asked what would happen if my eyes drifted and I didn’t stay focused on the light. “Oh, the laser would stop,” she said. “It only fires when your eyes are tracking the dot.” 

Surgery this sophisticated could never be done by an unaugmented human being. The human touch of my superb doctor was paired with the superhuman accuracy of complex machines, a twenty-first-century hybrid freeing me from assistive devices first invented eight centuries earlier in Italy. 

The revolution in sensors, computers, and control technologies is going to make many of the daily activities of the twentieth century seem quaint as, one by one, they are reinvented in the twenty-first. This is the true opportunity of technology: It extends human capability. 
In the debate about technology and the shape of the future, it’s easy to forget just how much technology already suffuses our lives, how much it has already changed us. As we get past that moment of amazement, and it fades into the new normal, we must put technology to work solving new problems. We must build something new, strange to our past selves, but better, if we commit to making it so. We must keep asking: What will new technology let us do that was previously impossible? Will it help us build the kind of society we want to live in? This is the secret to reinventing the economy. 

As Google chief economist Hal Varian said to me, “My grandfather wouldn’t recognize what I do as work.” What are the new jobs of the twenty-first century? Augmented reality—the overlay of computer-generated data and images on what we see—may give us a clue. It definitely meets the WTF? test. The first time a venture capitalist friend of mine saw one unreleased augmented reality platform in the lab, he said, “If LSD were a stock, I’d be shorting it.” That’s a unicorn. But what is most exciting to me about this technology is not the LSD factor, but how augmented reality can change the way we work. 

You can imagine how augmented reality could enable workers to be “upskilled.” I’m particularly fond of imagining how the model used by Partners in Health could be turbocharged by augmented reality and telepresence. The organization provides free healthcare to people in poverty using a model in which community health workers recruited from the population being served are trained and supported in providing primary care. Doctors can be brought in as needed, but the bulk of care is provided by ordinary people. Imagine a community health worker who is able to tap on Google Glass or some next-generation wearable, and say, “Doctor, you need to see this!” (Trust me. Glass will be back, when Google learns to focus on community health workers, not fashion models.) 

It’s easy to imagine how rethinking our entire healthcare system along these lines could reduce costs, improve both health outcomes and patient satisfaction, and create jobs. Imagine house calls coming back into fashion. Add in health monitoring by wearable sensors, health advice from an AI made as available as Siri, the Google Assistant, or Microsoft Cortana, plus an Uber-style on-demand service, and you can start to see the outlines of one small segment of the next economy being brought to us by technology. 

This is only one example of how we might reinvent familiar human activities, creating new marvels that, if we are lucky, will eventually fade into the texture of everyday life, just like wonders of a previous age such as airplanes and skyscrapers, elevators, automobiles, refrigerators, and washing machines.
***

Despite their possible wonders, many of the futures we face are fraught with unknown risks. 

I am a classicist by training, and the fall of Rome is always before me. The first volume of Gibbon’s Decline and Fall of the Roman Empire was published in 1776, the same year as the American Revolution. 
Despite Silicon Valley’s dreams of a future singularity, an unknowable fusion of minds and machines that will mark the end of history as we know it, what history teaches us is that economies and nations, not just companies, can fail. Great civilizations do collapse. Technology can go backward. After the fall of Rome, the ability to make monumental structures out of concrete was lost for nearly a thousand years. It could happen to us. 

We are increasingly facing what planners call “wicked problems”—problems that are “difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often difficult to recognize.” Even long-accepted technologies turn out to have unforeseen downsides. The automobile was a unicorn. It afforded ordinary people enormous freedom of movement, led to an infrastructure for transporting goods that spread prosperity, and enabled a consumer economy where goods could be produced far away from where they are consumed. 

Yet the roads we built to enable the automobile carved up and hollowed out cities, led to more sedentary lifestyles, and contributed mightily to the overpowering threat of climate change. Ditto cheap air travel, container shipping, the universal electric grid. All of these were enormous engines of prosperity that brought with them unintended consequences that only came to light over many decades of painful experience, by which time any solution seems impossible to attempt because the disruption required to reverse course would be so massive. We face a similar set of paradoxes today. 

The magical technologies of today—and choices we’ve already made, decades ago, about what we value as a society—are leading us down a path with complex contingencies, unseen dangers, and decisions that we don’t even know we are making. 

AI and robotics in particular are at the heart of a set of wicked problems that are setting off alarm bells among business and labor leaders, policy makers and academics. What happens to all those people who drive for a living when the cars start driving themselves? AIs are flying planes, advising doctors on the best treatments, writing sports and financial news, and telling us all, in real time, the fastest way to get to work. They are also telling human workers when to show up and when to go home, based on real-time measurement of demand. 

Computers used to work for humans; increasingly it’s now humans working for computers. The algorithm is the new shift boss. What is the future of business when technology-enabled networks and marketplaces let people choose when and how much they want to work? What is the future of education when on-demand learning outperforms traditional universities in keeping skills up to date? What is the future of media and public discourse when algorithms decide what we will watch and read, making their choice based on what will make the most profit for their owners? What is the future of the economy when more and more work can be done by intelligent machines instead of people, or only done by people in partnership with those machines? What happens to workers and their families? And what happens to the companies that depend on consumer purchasing power to buy their products? 

There are dire consequences to treating human labor simply as a cost to be eliminated. According to the McKinsey Global Institute, 540 to 580 million people—65 to 70% of households in twenty-five advanced economies—had incomes that had fallen or were flat between 2005 and 2014. Between 1993 and 2005, fewer than 10 million people—less than 2%—had the same experience. 

Over the past few decades, companies have made a deliberate choice to reward their management and “superstars” incredibly well, while treating ordinary workers as a cost to be minimized or cut. 
Top US CEOs now earn 373x the income of the average worker, up from 42x in 1980.
As a result of the choices we’ve made as a society about how to share the benefits of economic growth and technological productivity gains, the gulf between the top and the bottom has widened enormously, and the middle has largely disappeared. Recently published research by Stanford economist Raj Chetty shows that for children born in 1940, the chance that they’d earn more than their parents was 92%; for children born in 1990, that chance has fallen to 50%.  

Businesses have delayed the effects of declining wages on the consumer economy by encouraging people to borrow—in the United States, household debt is over $12 trillion (80% of gross domestic product, or GDP, in mid-2016) and student debt alone is $1.2 trillion (with more than seven million borrowers in default). 

We’ve also used government transfers to reduce the gap between human needs and what our economy actually delivers. But of course, higher government transfers must be paid for through higher taxes or through higher government debt, either of which political gridlock has made unpalatable. This gridlock is, of course, a recipe for disaster. Meanwhile, in hopes that “the market” will deliver jobs, central banks have pushed ever more money into the system, hoping that somehow this will unlock business investment. But instead, corporate profits have reached highs not seen since the 1920s, corporate investment has shrunk, and more than $30 trillion of cash is sitting on the sidelines. 

The magic of the market is not working. We are at a very dangerous moment in history. The concentration of wealth and power in the hands of a global elite is eroding the power and sovereignty of nation-states while globe-spanning technology platforms are enabling algorithmic control of firms, institutions, and societies, shaping what billions of people see and understand and how the economic pie is divided. At the same time, income inequality and the pace of technology change are leading to a populist backlash featuring opposition to science, distrust of our governing institutions, and fear of the future, making it ever more difficult to solve the problems we have created. 

That has all the hallmarks of a classic wicked problem. Wicked problems are closely related to an idea from evolutionary biology, that there is a “fitness landscape” for any organism. Much like a physical landscape, a fitness landscape has peaks and valleys. The challenge is that you can only get from one peak—a so-called local maximum—to another by going back down. In evolutionary biology, a local maximum may mean that you become one of the long-lived stable species, unchanged for millions of years, or it may mean that you become extinct because you’re unable to respond to changed conditions. 
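
The trap is easy to see in miniature. Below is a toy hill-climbing sketch in Python over an invented two-peaked landscape: a greedy climber that never steps downhill stops on whichever peak it starts nearest, even when a higher one exists.

```python
# Greedy hill climbing on a made-up one-dimensional fitness landscape.

def fitness(x: float) -> float:
    # Two peaks: a lower one near x = -2, a higher one near x = 3.
    return -0.5 * (x + 2) ** 2 + 4 if x < 0.5 else -((x - 3) ** 2) + 9

def hill_climb(x: float, step: float = 0.1) -> float:
    while True:
        # Try the better of the two neighbouring points.
        up = x + step if fitness(x + step) > fitness(x) else x - step
        if fitness(up) <= fitness(x):
            return x  # no uphill neighbour: we are on a local maximum
        x = up

print(round(hill_climb(-4.0), 1))  # -2.0: stuck on the lower peak
print(round(hill_climb(1.0), 1))   #  3.0: happens to find the higher peak
```

Escaping the lower peak means accepting a temporary loss of fitness, which is exactly the move that wicked problems demand of economies and institutions.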

And in our economy, conditions are changing rapidly. Over the past few decades, the digital revolution has transformed media, entertainment, advertising, and retail, upending centuries-old companies and business models. Now it is restructuring every business, every job, and every sector of society. No company, no job—and ultimately, no government and no economy—is immune to disruption. Computers will manage our money, supervise our children, and have our lives in their “hands” as they drive our automated cars. 

The biggest changes are still ahead, and every industry and every organization will have to transform itself in the next few years, in multiple ways, or fade away. 
We need to ask ourselves whether the fundamental social safety nets of the developed world will survive the transition, and more important, what we will replace them with. Andy McAfee, coauthor of The Second Machine Age, put his finger on the consequence of failing to do so while talking with me over breakfast about the risks of AI taking over from humans: “The people will rise up before the machines do.”

This book provides a view of one small piece of this complex puzzle, the role of technology innovation in the economy, and in particular the role of WTF? technologies such as AI and on-demand services. I lay out the difficult choices we face as technology opens new doors of possibility while closing doors that once seemed the sure path to prosperity. 

But more important, I try to provide tools for thinking about the future, drawn from decades on the frontiers of the technology industry, observing and predicting its changes. The book is US-centric and technology-centric in its narrative; it is not an overview of all of the forces shaping the economy of the future, many of which are centered outside the United States or are playing out differently in other parts of the world. 

In No Ordinary Disruption, McKinsey’s Richard Dobbs, James Manyika, and Jonathan Woetzel point out quite correctly that technology is only one of four major disruptive forces shaping the world to come. 

Demographics (in particular, changes in longevity and the birth rate that have radically shifted the mix of ages in the global population), globalization, and urbanization may play at least as large a role as technology. And even that list fails to take into account catastrophic war, plague, or environmental disruption. 
These omissions are not based on a conviction that Silicon Valley’s part of the total technology innovation economy, or the United States, is more important than the rest; it is simply that the book is based on my personal and business experience, which is rooted in this field and in this one country. 

The book is divided into four parts. In the first part, I’ll share some of the techniques that my company has used to make sense of and predict innovation waves such as the commercialization of the Internet, the rise of open source software, the key drivers behind the renaissance of the web after the dot-com bust and the shift to cloud computing and big data, the Maker movement, and much more. 

I hope to persuade you that understanding the future requires discarding the way you think about the present, giving up ideas that seem natural and even inevitable. 
In the second and third parts, I’ll apply those same techniques to provide a framework for thinking about how technologies such as on-demand services, networks and platforms, and artificial intelligence are changing the nature of business, education, government, financial markets, and the economy as a whole. I’ll talk about the rise of great world-spanning digital platforms ruled by algorithm, and the way that they are reshaping our society. I’ll examine what we can learn about these platforms and the algorithms that rule them from Uber and Lyft, Airbnb, Amazon, Apple, Google, and Facebook. And I’ll talk about the one master algorithm we so take for granted that it has become invisible to us. I’ll try to demystify algorithms and AI, and show how they are not just present in the latest technology platforms but already shape business and our economy far more broadly than most of us understand. 

And I’ll make the case that many of the algorithmic systems that we have put in place to guide our companies and our economy have been designed to disregard the humans and reward the machines. 
In the fourth part of the book, I’ll examine the choices we have to make as a society. Whether we experience the WTF? of astonishment or the WTF? of dismay is not foreordained. It is up to us. It’s easy to blame technology for the problems that occur in periods of great economic transition. But both the problems and the solutions are the result of human choices. During the industrial revolution, the fruits of automation were first used solely to enrich the owners of the machines. 

Workers were often treated as cogs in the machine, to be used up and thrown away. But Victorian England figured out how to do without child labor and with shorter working hours, and its society became more prosperous. 

We saw the same thing here in the United States during the twentieth century. We look back now on the good middle-class jobs of the postwar era as something of an anomaly. But they didn’t just happen by chance. It took generations of struggle on the part of workers and activists, and growing wisdom on the part of capitalists, policy makers, political leaders, and the voting public. In the end we made choices as a society to share the fruits of productivity more widely. We also made choices to invest in the future. That golden age of postwar productivity was the result of massive investments in roads and bridges, universal power, water, sanitation, and communications. 

After World War II, we committed enormous resources to rebuild the lands destroyed by war, but we also invested in basic research. We invested in new industries: aerospace, chemicals, computers, and telecommunications. 
We invested in education, so that children could be prepared for the world they were about to inherit. 

The future comes in fits and starts, and it is often when times are darkest that the brightest futures are being born. Out of the ashes of World War II we forged a prosperous world. By choice and hard work, not by destiny. The Great War of a generation earlier had only amplified the cycle of dismay. What was the difference? 
After World War I, we punished the losers. After World War II, we invested in them and raised them up again. After World War I, the United States beggared its returning veterans. After World War II, we sent them to college. 

Wartime technologies such as digital computing were put into the public domain so that they could be transformed into the stuff of the future. The rich taxed themselves to finance the public good. 
In the 1980s, though, the idea that “greed is good” took hold in the United States and we turned away from prosperity. We accepted the idea that what was good for financial markets was good for everyone and structured our economy to drive stock prices ever higher, convincing ourselves that “the market” of stocks, bonds, and derivatives was the same as Adam Smith’s market of real goods and services exchanged by ordinary people. 

We hollowed out the real economy, putting people out of work and capping their wages in service to corporate profits that went to a smaller and smaller slice of society. We made the wrong choice forty years ago. We don’t need to stick with it. 
The rise of a billion people out of poverty in developing economies around the world at the same time that the incomes of ordinary people in most developed economies have been going backward should tell us that we took a wrong turn somewhere. 

The WTF? technologies of the twenty-first century have the potential to turbocharge the productivity of all our industries. But making what we do now more productive is just the beginning. We must share the fruits of that productivity, and use them wisely. If we let machines put us out of work, it will be because of a failure of imagination and a lack of will to make a better future. 
***

PART I – USING THE RIGHT MAPS 


The map is not the territory. —Alfred Korzybski 


1 SEEING THE FUTURE IN THE PRESENT 

IN THE MEDIA, I’m often pegged as a futurist. I don’t think of myself that way. I think of myself as a mapmaker. I draw a map of the present that makes it easier to see the possibilities of the future. Maps aren’t just representations of physical locations and routes. They are any system that helps us see where we are and where we are trying to go. 

One of my favorite quotes is from Edwin Schlossberg: “The skill of writing is to create a context in which other people can think.”
This book is a map. We use maps—simplified abstractions of an underlying reality, which they represent—not just in trying to get from one place to another but in every aspect of our lives. When we walk through our darkened home without the need to turn on the light, that is because we have internalized a mental map of the space, the layout of the rooms, the location of every chair and table. 

Similarly, when an entrepreneur or venture capitalist goes to work each day, he or she has a mental map of the technology and business landscape. We sort the world into categories: friend or acquaintance, ally or competitor, important or unimportant, urgent or trivial, future or past. For each category, we have a mental map. But as we’re reminded by the sad stories of people who religiously follow their GPS off a no-longer-existent bridge, maps can be wrong. In business and in technology, we often fail to see clearly what is ahead because we are navigating using old maps, and sometimes even bad maps—maps that leave out critical details about our environment or perhaps even actively misrepresent it. Most often, in fast-moving fields like science and technology, maps are wrong simply because so much is unknown. 

Each entrepreneur, each inventor, is also an explorer, trying to make sense of what’s possible, what works and what doesn’t, and how to move forward. Think of the entrepreneurs working to develop the US transcontinental railroad in the mid-nineteenth century. The idea was first proposed in 1832, but it wasn’t even clear that the project was feasible until the 1850s, when the US House of Representatives provided the funding for an extensive series of surveys of the American West, a precursor to any actual construction. Three years of exploration from 1853 to 1855 resulted in the Pacific Railroad Surveys, a twelve-volume collection of data on 400,000 square miles of the American West. 

But all that data did not make the path forward entirely clear. There was fierce debate about the best route, debate that was not just about the geophysical merits of northern versus southern routes but also about the contested extension of slavery. Even when the intended route was decided on and construction began in 1863, there were unexpected problems—a grade steeper than previously reported that was too difficult for a locomotive, weather conditions that made certain routes impassable during the winter. You couldn’t just draw lines on the map and expect everything to work perfectly. 

The map had to be refined and redrawn with more and more layers of essential data added until it was clear enough to act on. Explorers and surveyors went down many false paths before deciding on the final route.

Creating the right map is the first challenge we face in making sense of today’s WTF? technologies. Before we can understand how to deal with AI, on-demand applications, and the disappearance of middle-class jobs, and how these things can come together into a future we want to live in, we have to make sure we aren’t blinded by old ideas. We have to see patterns that cross old boundaries. The map we follow into the future is like a picture puzzle with many of the pieces missing. You can see the rough outline of one pattern over here, and another there, but there are great gaps and you can’t quite make the connections. And then one day someone pours out another set of pieces on the table, and suddenly the pattern pops into focus. 

The difference between a map of an unknown territory and a picture puzzle is that no one knows the full picture in advance. It doesn’t exist until we see it—it’s a puzzle whose pattern we make up together as we go, invented as much as it is discovered. Finding our way into the future is a collaborative act, with each explorer filling in critical pieces that allow others to go forward. 


LISTENING FOR THE RHYMES 

Mark Twain is reputed to have said, “History doesn’t repeat itself, but it often rhymes.” 

Study history and notice its patterns. 
This is the first lesson I learned in how to think about the future. The story of how the term open source software came to be developed, refined, and adopted in early 1998—what it helped us to understand about the changing nature of software, how that new understanding changed the course of the industry, and what it predicted about the world to come—shows how the mental maps we use limit our thinking, and how revising the map can transform the choices we make. 

Before I delve into what is now ancient history, I need you to roll back your mind to 1998. Software was distributed in shrink-wrapped boxes, with new releases coming at best annually, often every two or three years. Only 42% of US households had a personal computer, versus the 80% who own a smartphone today. Only 20% of the US population had a mobile phone of any kind. The Internet was exciting investors—but it was still tiny, with only 147 million users worldwide, versus 3.4 billion today. More than half of all US Internet users had access through AOL. 

Amazon and eBay had been launched three years earlier, but Google was only just founded in September of that year. Microsoft had made Bill Gates, its founder and CEO, the richest man in the world. It was the defining company of the technology industry, with a near-monopoly position in personal computer software that it had leveraged to destroy competitor after competitor. 

The US Justice Department filed an antitrust suit against the company in May of that year, just as it had done nearly thirty years earlier against IBM. 

In contrast to the proprietary software that made Microsoft so successful, open source software is distributed under a license that allows anyone to freely study, modify, and build on it. Examples of open source software include the Linux and Android operating systems; web browsers like Chrome and Firefox; popular programming languages like Python, PHP, and JavaScript; modern big data tools like Hadoop and Spark; and cutting-edge artificial intelligence toolkits like Google’s TensorFlow, Facebook’s Torch, or Microsoft’s CNTK. 

In the early days of computers, most software was open source, though not by that name. Some basic operating software came with a computer, but much of the code that actually made a computer useful was custom software written to solve specific problems. The software written by scientists and researchers in particular was often shared. 

During the late 1970s and 1980s, though, companies had realized that controlling access to software gave them commercial advantage and had begun to close off access using restrictive licenses. In 1985, Richard Stallman, a programmer at the Massachusetts Institute of Technology, published The GNU Manifesto, laying out the principles of what he called “free software”—not free as in price, but free as in freedom: the freedom to study, to redistribute, and to modify software without permission.  

Stallman’s ambitious goal was to build a completely free version of AT&T’s Unix operating system, originally developed at Bell Labs, the research arm of AT&T. At the time Unix was first developed, in the early 1970s, AT&T was a legal monopoly with enormous profits from regulated telephone services. As a result, AT&T was not allowed to compete in the computer industry, then dominated by IBM, and in accord with its 1956 consent decree with the Justice Department had licensed Unix to computer science research groups on generous terms. 

Computer programmers at universities and companies all over the world had responded by contributing key elements to the operating system. But after the decisive consent decree of 1982, in which AT&T agreed to be broken up into seven smaller companies (“the Baby Bells”) in exchange for being allowed to compete in the computer market, AT&T tried to make Unix proprietary. They sued the University of California, Berkeley, which had built an alternate version of Unix (the Berkeley Software Distribution, or BSD), and effectively tried to shut down the collaborative barn raising that had helped to create the operating system in the first place. 

While Berkeley Unix was stalled by AT&T’s legal attacks, Stallman’s GNU Project (named for the recursive acronym “GNU’s Not Unix”) had duplicated all of the key elements of Unix except the kernel, the central code that acts as a kind of traffic cop for all the other software. 
That kernel was supplied by a Finnish computer science student named Linus Torvalds, whose 1991 master’s thesis consisted of a minimalist Unix-like operating system that would be portable to many different computer architectures. He called this operating system Linux. 

Over the next few years, there was a flurry of commercial activity as entrepreneurs seized on the possibilities of a completely free operating system combining Torvalds’s kernel with the Free Software Foundation’s re-creation of the rest of the Unix operating system. The target was no longer AT&T, but rather Microsoft. 
In the early days of the PC industry, IBM and a growing number of personal computer “clone” vendors like Dell and Gateway provided the hardware, Microsoft provided the operating system, and a host of independent software companies provided the “killer apps”—word processing, spreadsheets, databases, and graphics programs—that drove adoption of the new platform. 

Microsoft’s DOS (Disk Operating System) was a key part of the ecosystem, but it was far from in control. That changed with the introduction of Microsoft Windows. Its extensive Application Programming Interfaces (APIs) made application development much easier but locked developers into Microsoft’s platform. Competing operating systems for the PC like IBM’s OS/2 were unable to break the stranglehold. And soon Microsoft used its dominance of the operating system to privilege its own applications—Microsoft Word, Excel, PowerPoint, Access, and, later, Internet Explorer, their web browser (now Microsoft Edge)—by making bundling deals with large buyers. 

The independent software industry for the personal computer was slowly dying, as Microsoft took over one application category after another. 

This is the rhyming pattern that I noticed: The personal computer industry had begun with an explosion of innovation that broke IBM’s monopoly on the first generation of computing, but had ended in another “winner takes all” monopoly. Look for repeating patterns and ask yourself what the next iteration might be. Now everyone was asking whether a desktop version of Linux could change the game. Not only startups but also big companies like IBM, trying to claw their way back to the top of the heap, placed huge bets that they could. 

But there was far more to the Linux story than just competing with Microsoft. It was rewriting the rules of the software industry in ways that no one expected. It had become the platform on which many of the world’s great websites—at the time, most notably Amazon and Google—were being built. 

But it was also reshaping the very way that software was being written. In February 1997, at the Linux Kongress in Würzburg, Germany, hacker Eric Raymond delivered a paper, called “The Cathedral and the Bazaar,” that electrified the Linux community. It laid out a theory of software development drawn from reflections on Linux and on Eric’s own experiences with what later came to be called open source software development. 

Eric wrote: “Who would have thought even five years ago that a world-class operating system could coalesce as if by magic out of part-time hacking by several thousand developers scattered all over the planet, connected only by the tenuous strands of the Internet?”

He continued: “The Linux community seemed to resemble a great babbling bazaar of differing agendas and approaches (aptly symbolized by the Linux archive sites, who’d take submissions from anyone) out of which a coherent and stable system could seemingly emerge only by a succession of miracles.” 

Eric laid out a series of principles that have, over the past decades, become part of the software development gospel: that software should be released early and often, in an unfinished state rather than waiting to be perfected; that users should be treated as “co-developers”; and that “given enough eyeballs, all bugs are shallow.” 

Today, whether programmers develop open source software or proprietary software, they use tools and approaches that were pioneered by the open source community. But more important, anyone who uses today’s Internet software has experienced these principles at work. When you go to a site like Amazon, Facebook, or Google, you are a participant in the development process in a way that was unknown in the PC era. You are not a “co-developer” in the way that Eric Raymond imagined—you are not another hacker contributing feature suggestions and code. But you are a “beta tester”—someone who tries out continually evolving, unfinished software and gives feedback—at a scale never before imagined. Internet software developers constantly update their applications, testing new features on millions of users, measuring their impact, and learning as they go. 
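
Under the hood, that beta-testing-at-scale mechanic often rests on something as mundane as deterministic bucketing. A minimal sketch, with invented feature names and percentages, of how a new feature might be exposed to a small slice of users so its impact can be measured:

```python
# Deterministic A/B bucketing: the same user always lands in the
# same bucket, so exposure is stable across visits.
import hashlib

def bucket(user_id: str, experiment: str, buckets: int = 100) -> int:
    """Hash user + experiment name to a stable bucket in [0, buckets)."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

def variant(user_id: str) -> str:
    # Roll the hypothetical "new_checkout" feature out to 5% of users.
    return "new_checkout" if bucket(user_id, "new_checkout") < 5 else "control"

exposed = sum(variant(f"user{i}") == "new_checkout" for i in range(10_000))
print(f"{exposed / 100:.1f}% of users see the test feature")  # roughly 5%
```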

Eric saw that something was changing in the way software was being developed, but in 1997, when he first delivered “The Cathedral and the Bazaar,” it wasn’t yet clear that the principles he articulated would spread far beyond free software, beyond software development itself, shaping content sites like Wikipedia and eventually enabling a revolution in which consumers would become co-creators of services like on-demand transportation (Uber and Lyft) and lodging (Airbnb). 

I was invited to give a talk at the same conference in Würzburg. My talk, titled “Hardware, Software, and Infoware,” was very different. I was fascinated not just with Linux, but with Amazon. Amazon had been built on top of various kinds of free software, including Linux, but it seemed to me to be fundamentally different in character from the kinds of software we’d seen in previous eras of computing. Today it’s obvious to everyone that websites are applications and that the web has become a platform, but in 1997 most people thought of the web browser as the application. If they knew a little bit more about the architecture of the web, they might think of the web server and associated code and data as the application. 

The content was something managed by the browser, in the same way that Microsoft Word manages a document or that Excel lets you create a spreadsheet. By contrast, I was convinced that the content itself was an essential part of the application, and that the dynamic nature of that content was leading to an entirely new architectural design pattern for a next stage beyond software, which at the time I called “infoware.” 
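
In today’s terms, the “infoware” pattern is simply a page assembled from data at the moment it is requested, so the content is the application rather than a document the browser merely displays. A minimal sketch in Python (in 1997 the equivalent would have been a CGI script; the tiny catalogue here is an invented stand-in for the databases behind a site like Amazon):

```python
# A page generated from data at request time: the "infoware" pattern.
from http.server import BaseHTTPRequestHandler, HTTPServer

CATALOGUE = {"/books/perl": "Programming Perl", "/books/unix": "Learning Unix"}

class InfowareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        title = CATALOGUE.get(self.path)
        body = f"<h1>{title}</h1>" if title else "<h1>Not found</h1>"
        self.send_response(200 if title else 404)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), InfowareHandler).serve_forever()
```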

Where Eric was focused on the success of the Linux operating system, and saw it as an alternative to Microsoft Windows, I was particularly fascinated by the success of the Perl programming language in enabling this new paradigm on the web. Perl was originally created by Larry Wall in 1987 and distributed for free over early computer networks. I had published Larry’s book, Programming Perl, in 1991, and was preparing to launch the Perl Conference in the summer of 1997. 

I had been inspired to start the Perl Conference by the chance conjunction of comments by two friends. Early in 1997, Carla Bayha, the computer book buyer at the Borders bookstore chain, had told me that the second edition of Programming Perl, published in 1996, was one of the top 100 books in any category at Borders that year. It struck me as curious that despite this fact, there was virtually nothing written about Perl in any of the computer trade papers. Because there was no company behind Perl, it was virtually invisible to the pundits who followed the industry. 

And then Andrew Schulman, the author of a book called Unauthorized Windows 95, told me something I found equally curious. At that time, Microsoft was airing a series of television commercials about the way that their new technology called ActiveX would “activate the Internet.” The software demos in these ads were actually mostly done with Perl, according to Andrew. It was clear to me that Perl, not ActiveX, was actually at the heart of the way dynamic web content was being delivered.

……

from

WTF?: What’s the Future and Why It’s Up to Us

by Tim O’Reilly. 

get it at Amazon.com 

Automation is more complex than people think. Here’s why – Viktor Weber.

Viktor Weber, Founder & Director, Future Real Estate Institute.

Automation is a topic on which most people have an opinion. The level of knowledge on the subject varies greatly, as does the amount of fear that people feel towards the technological revolution that is taking place.

I get the impression that even informed writers — including myself — in this field often take, for numerous reasons, convenient short-cuts when it comes to writing and talking about automation.

It is therefore time to address the abundance of factors that are influencing and will continue to influence how humanity moves forward with automation. It is an intertwined and complex network of factors that allows for multiple outcomes. This article will shine a light on a selection of these parameters.

Medium.com

Will robots bring about the end of work? – Toby Walsh.

Hal Varian, chief economist at Google, has a simple way to predict the future. The future is simply what rich people have today. The rich have chauffeurs. In the future, we will have driverless cars that chauffeur us all around. The rich have private bankers. In the future, we will all have robo-bankers.

One thing we imagine the rich have today is lives of leisure. So will our future be one in which we too have lives of leisure, with the machines taking the sweat? Will we be able to spend our time on more important things than simply feeding and housing ourselves?

Let’s turn to another chief economist. Andy Haldane is chief economist at the Bank of England. In November 2015, he predicted that 15 million jobs in the UK, roughly half of all jobs, were under threat from automation. You’d hope he knew what he was talking about.

And he’s not the only one making dire predictions. Politicians. Bankers. Industrialists. They’re all saying a similar thing.

“We need urgently to face the challenge of automation, robotics that could make so much of contemporary work redundant,” said Jeremy Corbyn at the Labour Party Conference in September 2017.

“World Bank data has predicted that the proportion of jobs threatened by automation in India is 69 percent, 77 percent in China and as high as 85 percent in Ethiopia”, according to World Bank president Jim Yong Kim in 2016.

It really does sound like we might be facing the end of work as we know it.

Many of these fears can be traced back to a 2013 study from the University of Oxford. This made a much-quoted prediction that 47% of jobs in the US were under threat of automation in the next two decades. Other more recent and detailed studies have made similarly dramatic predictions.

Now, there’s a lot to criticize in the Oxford study. From a technical perspective, some of the report’s predictions are clearly wrong. The report gives a 94% probability that the job of bicycle repair person will be automated in the next two decades. And, as someone trying to build that future, I can reassure any bicycle repair person that there is zero chance that we will automate even small parts of your job anytime soon. The truth of the matter is that no one has any real idea of the number of jobs at risk.

Even if we have as many as 47% of jobs automated, this won’t translate into 47% unemployment. One reason is that we might just work a shorter week. That was the case in the Industrial Revolution. Before the Industrial Revolution, many worked 60 hours per week. After the Industrial Revolution, work reduced to around 40 hours per week. The same could happen with the unfolding AI Revolution.
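
The arithmetic behind that point is worth making explicit. A back-of-the-envelope sketch, with purely illustrative numbers, of how an automation share can be absorbed as shorter working weeks rather than lost jobs:

```python
# If automated work is shared out as shorter weeks, hours fall
# instead of jobs disappearing. Numbers are illustrative only.

def hours_after_automation(hours_per_week: float, automated_share: float) -> float:
    """Hours each worker needs if the remaining work is spread evenly."""
    return hours_per_week * (1 - automated_share)

# Industrial Revolution: roughly 60-hour weeks fell to roughly 40,
# about a third of the old working time absorbed by machines.
print(round(hours_after_automation(60, 1 / 3), 1))  # 40.0
# The Oxford study's 47% figure, applied the same way to a 40-hour week:
print(round(hours_after_automation(40, 0.47), 1))   # 21.2
```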

Another reason that 47% automation won’t translate into 47% unemployment is that all technologies create new jobs as well as destroy them. That’s been the case in the past, and we have no reason to suppose that it won’t be the case in the future. There is, however, no fundamental law of economics that requires the same number of jobs to be created as destroyed. In the past, more jobs were created than destroyed but it doesn’t have to be so in the future.

In the Industrial Revolution, machines took over many of the physical tasks we used to do. But we humans were still left with all the cognitive tasks. This time, as machines start to take on many of the cognitive tasks too, there’s the worrying question: what is left for us humans?

Some of my colleagues suggest there will be plenty of new jobs like robot repair person. I am entirely unconvinced by such claims. The thousands of people who used to paint and weld in most of our car factories got replaced by only a couple of robot repair people.

No, the new jobs will have to be ones where either humans excel or where we choose not to have machines. But here’s the contradiction. In fifty to a hundred years’ time, machines will be super-human. So it’s hard to imagine any job in which humans will remain better than the machines. This means the only jobs left will be those we prefer humans to do.

The AI Revolution then will be about rediscovering the things that make us human. Technically, machines will have become amazing artists. They will be able to write music to rival Bach, and paintings to match Picasso. But we’ll still prefer works produced by human artists.

These works will speak to the human experience. We will appreciate a human artist who speaks about love because we have this in common. No machine will truly experience love like we do.

As well as the artistic, there will be a re-appreciation of the artisan. Indeed, we see the beginnings of this already in hipster culture. We will appreciate more and more those things made by the human hand. Mass-produced goods made by machine will become cheap. But items made by hand will be rare and increasingly valuable.

Finally as social animals, we will also increasingly appreciate and value social interactions with other humans. So the most important human traits will be our social and emotional intelligence, as well as our artistic and artisan skills. The irony is that our technological future will not be about technology but all about our humanity.

***

Toby Walsh is Professor of Artificial Intelligence at the University of New South Wales, in Sydney, Australia.

His new book, “Android Dreams: The Past, Present and Future of Artificial Intelligence”, was published in the UK by Hurst Publishers in September 2017.

The Guardian

No, wealth isn’t created at the top. It is merely devoured there – Rutger Bregman. 

This piece is about one of the biggest taboos of our times. About a truth that is seldom acknowledged, and yet, on reflection, cannot be denied. The truth that we are living in an inverse welfare state.

These days, politicians from the left to the right assume that most wealth is created at the top. By the visionaries, by the job creators, and by the people who have “made it”. By the go-getters oozing talent and entrepreneurialism who are helping to advance the whole world.

Now, we may disagree about the extent to which success deserves to be rewarded – the philosophy of the left is that the strongest shoulders should bear the heaviest burden, while the right fears high taxes will blunt enterprise – but across the spectrum virtually all agree that wealth is created primarily at the top.

So entrenched is this assumption that it’s even embedded in our language. When economists talk about “productivity”, what they really mean is the size of your paycheck. And when we use terms like “welfare state”, “redistribution” and “solidarity”, we’re implicitly subscribing to the view that there are two strata: the makers and the takers, the producers and the couch potatoes, the hardworking citizens – and everybody else.

In reality, it is precisely the other way around. In reality, it is the waste collectors, the nurses, and the cleaners whose shoulders are supporting the apex of the pyramid. They are the true mechanism of social solidarity. Meanwhile, a growing share of those we hail as “successful” and “innovative” are earning their wealth at the expense of others. The people getting the biggest handouts are not down around the bottom, but at the very top. Yet their perilous dependence on others goes unseen. Almost no one talks about it. Even for politicians on the left, it’s a non-issue.

To understand why, we need to recognise that there are two ways of making money. The first is what most of us do: work. That means tapping into our knowledge and know-how (our “human capital” in economic terms) to create something new, whether that’s a takeout app, a wedding cake, a stylish updo, or a perfectly poured pint. To work is to create. Ergo, to work is to create new wealth.

But there is also a second way to make money. That’s the rentier way: by leveraging control over something that already exists, such as land, knowledge, or money, to increase your wealth. You produce nothing, yet profit nonetheless. By definition, the rentier makes his living at others’ expense, using his power to claim economic benefit.

For those who know their history, the term “rentier” conjures associations with heirs to estates, such as the 19th century’s large class of useless rentiers, well-described by the French economist Thomas Piketty. These days, that class is making a comeback. (Ironically, however, conservative politicians adamantly defend the rentier’s right to lounge around, deeming inheritance tax to be the height of unfairness.) But there are also other ways of rent-seeking. From Wall Street to Silicon Valley, from big pharma to the lobby machines in Washington and Westminster, zoom in and you’ll see rentiers everywhere.

There is no longer a sharp dividing line between working and rentiering. In fact, the modern-day rentier often works damn hard. Countless people in the financial sector, for example, apply great ingenuity and effort to amass “rent” on their wealth. Even the big innovations of our age – businesses like Facebook and Uber – are interested mainly in expanding the rentier economy. The problem with most rich people therefore is not that they are couch potatoes. Many a CEO toils 80 hours a week to multiply his allowance. It’s hardly surprising, then, that they feel wholly entitled to their wealth.

It may take quite a mental leap to see our economy as a system that shows solidarity with the rich rather than the poor. So I’ll start with the clearest illustration of modern freeloaders at the top: bankers. Studies conducted by the International Monetary Fund and the Bank for International Settlements – not exactly leftist thinktanks – have revealed that much of the financial sector has become downright parasitic: instead of creating wealth, it gobbles wealth up whole.

Don’t get me wrong. Banks can help to gauge risks and get money where it is needed, both of which are vital to a well-functioning economy. But consider this: economists tell us that the optimum level of total private-sector debt is 100% of GDP. By that yardstick, if the financial sector keeps growing past that point, the result is not more wealth but less. So here’s the bad news. In the United Kingdom, private-sector debt now stands at 157.5% of GDP. In the United States the figure is 188.8%.

In other words, a big part of the modern banking sector is essentially a giant tapeworm gorging on a sick body. It’s not creating anything new, merely sucking others dry. Bankers have found a hundred and one ways to accomplish this. The basic mechanism, however, is always the same: offer loans like it’s going out of style, which in turn inflates the price of things like houses and shares, then earn a tidy percentage off those overblown prices (in the form of interest, commissions, brokerage fees, or what have you), and if the shit hits the fan, let Uncle Sam mop it up.

The financial innovation concocted by all the math whizzes working in modern banking (instead of at universities or companies that contribute to real prosperity) basically boils down to maximising the total amount of debt. And debt, of course, is a means of earning rent. So for those who believe that pay ought to be proportionate to the value of work, the conclusion we have to draw is that many bankers should be earning a negative salary; a fine, if you will, for destroying more wealth than they create.

Bankers are the most obvious class of closet freeloaders, but they are certainly not alone. Many a lawyer and an accountant wields a similar revenue model. Take tax evasion. Untold hardworking, academically degreed professionals make a good living at the expense of the populations of other countries. Or take the tide of privatisations over the past three decades, which has been all but a carte blanche for rentiers. One of the richest people in the world, Carlos Slim, earned his billions by obtaining a monopoly of the Mexican telecom market and then hiking prices sky high. The same goes for the Russian oligarchs who rose after the Berlin Wall fell, who bought up valuable state-owned assets for a song to live off the rent.

But here’s the rub. Most rentiers are not as easily identified as the greedy banker or manager. Many are disguised. On the face of it, they look like industrious folks, because for part of the time they really are doing something worthwhile. Precisely that makes us overlook their massive rent-seeking.

Take the pharmaceutical industry. Companies like GlaxoSmithKline and Pfizer regularly unveil new drugs, yet most real medical breakthroughs are made quietly at government-subsidised labs. Private companies mostly manufacture medications that resemble what we’ve already got. They get these patented and, with a hefty dose of marketing, a legion of lawyers, and a strong lobby, can live off the profits for years. In other words, the vast revenues of the pharmaceutical industry are the result of a tiny pinch of innovation and fistfuls of rent.

Even paragons of modern progress like Apple, Amazon, Google, Facebook, Uber and Airbnb are woven from the fabric of rentierism. Firstly, because they owe their existence to government discoveries and inventions (every sliver of fundamental technology in the iPhone, from the internet to batteries and from touchscreens to voice recognition, was invented by researchers on the government payroll). And second, because they tie themselves into knots to avoid paying taxes, retaining countless bankers, lawyers, and lobbyists for this very purpose.

Even more important, many of these companies function as “natural monopolies”, operating in a positive feedback loop of increasing growth and value as more and more people contribute free content to their platforms. Companies like this are incredibly difficult to compete with, because as they grow bigger, they only get stronger.

Aptly characterising this “platform capitalism” in an article, Tom Goodwin writes: “Uber, the world’s largest taxi company, owns no vehicles. Facebook, the world’s most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world’s largest accommodation provider, owns no real estate.”

So what do these companies own? A platform. A platform that lots and lots of people want to use. Why? First and foremost, because they’re cool and they’re fun – and in that respect, they do offer something of value. However, the main reason why we’re all happy to hand over free content to Facebook is because all of our friends are on Facebook too, because their friends are on Facebook … because their friends are on Facebook.
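
That feedback loop is often summarised by Metcalfe’s rule of thumb: a network’s value grows roughly with its number of possible connections, n(n-1)/2, so doubling the user base roughly quadruples the value. A toy illustration in Python, with figures that are illustrative rather than measurements of any real platform:

```python
# Why the biggest platform pulls ever further ahead: connections,
# not users, are what a network sells.

def network_value(users: int) -> int:
    return users * (users - 1) // 2  # possible pairwise connections

for big, small in [(1_000, 500), (1_000_000, 500_000)]:
    ratio = network_value(big) / network_value(small)
    print(f"{big:,} users vs {small:,}: {ratio:.1f}x the connections")
```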

Most of Mark Zuckerberg’s income is just rent collected off the millions of picture and video posts that we give away daily for free. And sure, we have fun doing it. But we also have no alternative – after all, everybody is on Facebook these days. Zuckerberg has a website that advertisers are clamouring to get onto, and that doesn’t come cheap. Don’t be fooled by endearing pilots with free internet in Zambia. Stripped down to essentials, Facebook is an ordinary ad agency. In fact, in 2015 Google and Facebook pocketed an astounding 64% of all online ad revenue in the US.

But don’t Google and Facebook make anything useful at all? Sure they do. The irony, however, is that their best innovations only make the rentier economy even bigger. They employ scores of programmers to create new algorithms so that we’ll all click on more and more ads. Uber has usurped the whole taxi sector just as Airbnb has upended the hotel industry and Amazon has overrun the book trade. The bigger such platforms grow the more powerful they become, enabling the lords of these digital feudalities to demand more and more rent.

Think back a minute to the definition of a rentier: someone who uses their control over something that already exists in order to increase their own wealth. The feudal lord of medieval times did that by building a tollgate along a road and making everybody who passed by pay. Today’s tech giants are doing basically the same thing, but transposed to the digital highway. Using technology funded by taxpayers, they build tollgates between you and other people’s free content and all the while pay almost no tax on their earnings.

This is the so-called innovation that has Silicon Valley gurus in raptures: ever bigger platforms that claim ever bigger handouts. So why do we accept this? Why does most of the population work itself to the bone to support these rentiers?

I think there are two answers. Firstly, the modern rentier knows to keep a low profile. There was a time when everybody knew who was freeloading. The king, the church, and the aristocrats controlled almost all the land and made peasants pay dearly to farm it. But in the modern economy, making rentierism work is a great deal more complicated. How many people can explain a credit default swap, or a collateralized debt obligation?  Or the revenue model behind those cute Google Doodles? And don’t the folks on Wall Street and in Silicon Valley work themselves to the bone, too? Well then, they must be doing something useful, right?

Maybe not. The typical workday of Goldman Sachs’ CEO may be worlds away from that of King Louis XIV, but their revenue models both essentially revolve around obtaining the biggest possible handouts. “The world’s most powerful investment bank,” wrote the journalist Matt Taibbi about Goldman Sachs, “is a great vampire squid wrapped around the face of humanity, relentlessly jamming its blood funnel into anything that smells like money.”

But far from squids and vampires, the average rich freeloader manages to masquerade quite successfully as a decent hard worker. He goes to great lengths to present himself as a “job creator” and an “investor” who “earns” his income by virtue of his high “productivity”. Most economists, journalists, and politicians from left to right are quite happy to swallow this story. Time and again language is twisted around to cloak funneling and exploitation as creation and generation.

However, it would be wrong to think that all this is part of some ingenious conspiracy. Many modern rentiers have convinced even themselves that they are bona fide value creators. When current Goldman Sachs CEO Lloyd Blankfein was asked about the purpose of his job, his straight-faced answer was that he is “doing God’s work”. The Sun King would have approved.

The second thing that keeps rentiers safe is even more insidious. We’re all wannabe rentiers. They have made millions of people complicit in their revenue model. Consider this: What are our financial sector’s two biggest cash cows? Answer: the housing market and pensions. Both are markets in which many of us are deeply invested.

Recent decades have seen more and more people contract debts to buy a home, and naturally it’s in their interest if house prices continue to scale new heights (read: burst bubble upon bubble). The same goes for pensions. Over the past few decades we’ve all scrimped and saved to build up a mountainous pension piggy bank. Now pension funds are under immense pressure to ally with the biggest exploiters in order to ensure they pay out enough to please their investors.

The fact of the matter is that feudalism has been democratised. To a lesser or greater extent, we are all depending on handouts. En masse, we have been made complicit in this exploitation by the rentier elite, resulting in a political covenant between the rich rent-seekers and the homeowners and retirees.

Don’t get me wrong, most homeowners and retirees are not benefiting from this situation. On the contrary, the banks are bleeding them far beyond the extent to which they themselves profit from their houses and pensions. Still, it’s hard to point fingers at a kleptomaniac when you have sticky fingers too.

So why is this happening? The answer can be summed up in three little words: Because it can.

Rentierism is, in essence, a question of power. That the Sun King Louis XIV was able to exploit millions was purely because he had the biggest army in Europe. It’s no different for the modern rentier. He’s got the law, politicians and journalists squarely in his court. That’s why bankers get fined peanuts for preposterous fraud, while a mother on government assistance gets penalised within an inch of her life if she checks the wrong box.

The biggest tragedy of all, however, is that the rentier economy is gobbling up society’s best and brightest. Where once upon a time Ivy League graduates chose careers in science, public service or education, these days they are more likely to opt for banks, law firms, or trumped up ad agencies like Google and Facebook. When you think about it, it’s insane. We are forking over billions in taxes to help our brightest minds on and up the corporate ladder so they can learn how to score ever more outrageous handouts.

One thing is certain: countries where rentiers gain the upper hand gradually fall into decline. Just look at the Roman Empire. Or Venice in the 15th century. Look at the Dutch Republic in the 18th century. Like a parasite stunts a child’s growth, so the rentier drains a country of its vitality.

What innovation remains in a rentier economy is mostly just concerned with further bolstering that very same economy. This may explain why the big dreams of the 1970s, like flying cars, curing cancer, and colonising Mars, have yet to be realised, while bankers and ad-makers have at their fingertips technologies a thousand times more powerful.

Yet it doesn’t have to be this way. Tollgates can be torn down, financial products can be banned, tax havens dismantled, lobbies tamed, and patents rejected. Higher taxes on the ultra-rich can make rentierism less attractive, precisely because society’s biggest freeloaders are at the very top of the pyramid. And we can more fairly distribute our earnings on land, oil, and innovation through a system of, say, employee shares, or a universal basic income. 

But such a revolution will require a wholly different narrative about the origins of our wealth. It will require ditching the old-fashioned faith in “solidarity” with a miserable underclass that deserves to be borne aloft on the market-level salaried shoulders of society’s strongest. All we need to do is to give real hard-working people what they deserve.

And, yes, by that I mean the waste collectors, the nurses, the cleaners – theirs are the shoulders that carry us all.

The Guardian

Millions of UK workers at risk of being replaced by robots, PwC study says – Larry Elliott. 

More than 10 million UK workers are at high risk of being replaced by robots within 15 years as the automation of routine tasks gathers pace in a new machine age.

A report by the consultancy firm PwC found that 30% of jobs in Britain were potentially under threat from breakthroughs in artificial intelligence (AI). In some sectors half the jobs could go.

The report predicted that automation would boost productivity and create fresh job opportunities, but it said action was needed to prevent the widening of inequality that would result from robots increasingly being used for low-skill tasks.

PwC said 2.25 million jobs were at high risk in wholesale and retailing – the sector that employs most people in the UK – and 1.2 million were under threat in manufacturing, 1.1 million in administrative and support services and 950,000 in transport and storage.

“There’s no doubt that AI and robotics will rebalance what jobs look like in the future, and that some are more susceptible than others.

What’s important is making sure that the potential gains from automation are shared more widely across society and no one gets left behind. Responsible employers need to ensure they encourage flexibility and adaptability in their people so we are all ready for the change.

In the future, knowledge will be a commodity so we need to shift our thinking on how we skill and upskill future generations. Creative and critical thinking will be highly valued, as will emotional intelligence.”

The Guardian

In the age of robots, our schools are teaching children to be redundant – George Monbiot. 

In the future, if you want a job, you must be as unlike a machine as possible: creative, critical and socially skilled. So why are children being taught to behave like machines?

Children learn best when teaching aligns with their natural exuberance, energy and curiosity. So why are they dragooned into rows and made to sit still while they are stuffed with facts?

We succeed in adulthood through collaboration. So why is collaboration in tests and exams called cheating?

The best teachers use their character, creativity and inspiration to trigger children’s instinct to learn. So why are character, creativity and inspiration suppressed by a stifling regime of micromanagement?

The Guardian

Change blue collar for new collar jobs – Samuel Wills. 

Artificial Intelligence, inequality and globalisation were central themes of the World Economic Forum in Davos last week.

Fearing that AI will destroy jobs, IBM CEO Ginni Rometty called for a future where jobs are not white collar or blue collar, but “new collar”.

While change is coming, these “new collar” jobs will actually be quite old-fashioned. Teachers, nurses and waiters will all still be important as our economy continues to shift towards services.

Regardless of the jobs, technology will also continue to widen inequality if left unchecked.

We must learn the lessons of the industrial revolution and see the world’s labour movements return to their roots: sharing the proceeds of progress with all.

Thomas Mortimer once feared that saw mills would “exclude the labour of thousands of the human race”. That was in 1772 as the industrial revolution gained steam.

Historically, technology has replaced low-skilled jobs and complemented high-skilled ones.

The invention of tractors replaced workers with pitchforks, but increased the need for engineers.

Trade has had a similar effect. When we cut tariffs on textiles, some factories closed but Australian clothing designers still flourish.

When one job disappears, a new one is created in its place. During the industrial revolution, farm workers found jobs in factories. Centuries later, they found them in call centres.

We call this “structural transformation”, as economies transition from agriculture to manufacturing and then on to services.

Forty years ago, manufacturing made up 26 percent of GDP in New Zealand. By 2009 it made up just 13 percent. The growth area instead is services – doing things for other people.

What this means is these “new collar jobs” might actually be quite old-fashioned.

Although we will obviously need more programmers, computer scientists and engineers, we will also need plenty of teachers, nurses and policemen.

Services are not easily replaced.

As William Baumol pointed out in the 1960s, it still takes just as many people to perform a Beethoven string quartet as it did in the 1800s.

It’s easy to automate an espresso, but people still seem to prefer the personal touch.

AI will be hard pressed to replace the caring touch of a nurse on a sick patient’s cheek.

However, new technologies still pose a problem: inequality.

In Australia, the average individual real wage of the richest 10 per cent grew seven times faster than for the poorest 10 per cent in the 20 years from 1988.

Although technology increases the size of the economic pie, the slices aren’t shared equally.

When someone’s job is automated they could be unemployed for months while they search for work and retrain.

Older workers may never find another job. If they do, they could be competing with a host of people in the same situation.

The costs of progress are borne at the bottom of the income ladder, while the proceeds are reaped at the top.

New technology may change this. Although nurses and police officers may be safe, artificial intelligence has already made large gains in diagnosing illnesses, writing legal documents and designing machine components.

Doctors, lawyers and engineers are all now at risk.

The response

Regardless of which jobs are affected, we need to make sure the benefits of technology are shared equitably.

It won’t happen naturally. The owners of businesses and machines – capital – are in a better bargaining position than ever.

We must learn the lessons of the past. The industrial revolution gave birth to the labour movement, which argued for better working conditions, minimum wages and shorter hours.

All of these helped share the proceeds of progress. They also spurred growth.

Around the world modern labour movements have lost their way. Although making important gains on social issues, the left in the UK and US has flirted with protectionism and strayed from its core business.

That business is ensuring workers share in the proceeds of progress.

We must open our borders and our minds to the great benefits of globalisation and technology. We must also ensure the proceeds are shared.

Artificial intelligence and globalisation offer an exciting future, but if we are all to enjoy it we must look to the past.

By Samuel Wills, Assistant Professor/Lecturer in Economics, University of Sydney.

NZ Herald

You May Not Like Technology But It Likes You – Scott Reardon. 

In Greek mythology, Prometheus taught man how to farm. But when he gave man fire, the gods felt he had gone too far. And so as punishment, Zeus chained Prometheus to a rock where every day an eagle would come and eat his liver, which would regrow because he was immortal.

Prometheus’s story is about mankind’s dominion over its world and how much power is too much. But counterintuitively, it is Zeus, not Prometheus, with whom many artists and writers of the last thousand years have sided. The story is relevant today because humanity is at a turning point, and two opposing forces are locked in a war that is only beginning to take shape. On one side are our innovations and the power that comes with them; on the other is the fact that, when it comes to us ourselves, there seems to be no innovation.

For tens of thousands of years, technology has been directed outward—on the world at large. Now, for the first time in human history, technology has reached a point where it can be directed inward—back on its creators. Technology has found something new it would like to change: Us.

In 2010, researchers at the University of Colorado performed what they thought would be an unremarkable experiment on lab mice. They injured the mice’s limbs and injected them with stem cells to heal the damage. Then something strange happened. The muscles in those little limbs nearly doubled in size and strength. Not only that, the muscles stayed that way for the life of each mouse, defying even the aging process itself. Essentially the researchers had accidentally created a race of “super-mice.”

Another experiment in 2001 involved injecting human stem cells, of all things, into the brains of aging mice. Soon after, the mice began to perform better on the Morris water maze test. In other words, the stem cells had made them smarter.

When people think of stem cells, they usually think of a potential cure for diseases like Parkinson’s. But there is another, potentially far darker, use for stem cells, and that is on people who are perfectly healthy.

The Guardian

Is the 20-hour work week the way of the future? – Frank Chung. 

According to the UK-based New Economics Foundation, which first proposed the idea in 2010, a “normal” working week of 21 hours could help address a “range of urgent, interlinked problems”, including “overwork, unemployment, over-consumption, high carbon emissions, low wellbeing, entrenched inequalities, and the lack of time to live sustainably, to care for each other, and simply to enjoy life”.

Globally, the push to hit the brakes on out-of-control workloads is gaining steam, with a number of countries experimenting with, and reaping productivity benefits from, reducing hours.

“I thought if we could just cancel Monday and Friday, and use that time to sit in a cafe, interact with people, actually get out in the world, we might do better work.”

NZ Herald

Progress Abandoned – Jean Pisani-Ferry. 

Margaret Thatcher and Ronald Reagan are remembered for the laissez-faire revolution they launched in the early 1980s. They campaigned and won on the promise that free-market capitalism would unleash growth and boost prosperity. In 2016, Nigel Farage, the then-leader of the UK Independence Party (UKIP) who masterminded Brexit, and US President-elect Donald Trump campaigned and won on a very different basis: nostalgia. Tellingly, their promises were to “take back control” and “make America great again” – in other words, to turn back the clock.

As Columbia University’s Mark Lilla has observed, the United Kingdom and the US are not alone in experiencing a reactionary revival. In many advanced and emerging countries, the past suddenly seems to have much more appeal than the future. In France, Marine Le Pen, the nationalist right’s candidate in the upcoming presidential election, explicitly appeals to the era when the French government controlled the borders, protected industry, and managed the currency. Such solutions worked in the 1960s, the National Front leader claims, so implementing them now would bring back prosperity.

Obviously, such appeals have struck a chord with electorates throughout the West. The main factor underlying this shift in public attitudes is that many citizens have lost faith in progress. They no longer believe that the future will bring them material improvement, or that their children will have a better life than their own. They look backward because they are afraid to look ahead.

Social Europe

Japanese company replaces office workers with artificial intelligence – Justin McCurry. 

A future in which human workers are replaced by machines is about to become a reality at an insurance firm in Japan, where more than 30 employees are being laid off and replaced with an artificial intelligence system that can calculate payouts to policyholders.

Fukoku Mutual Life Insurance believes it will increase productivity by 30% and see a return on its investment in less than two years. 

The system is based on IBM’s Watson Explorer, which, according to the tech firm, possesses “cognitive technology that can think like a human”, enabling it to “analyse and interpret all of your data, including unstructured text, images, audio and video”.

The technology will be able to read tens of thousands of medical certificates and factor in the length of hospital stays, medical histories and any surgical procedures before calculating payouts. 
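The article doesn’t describe how the payout logic works internally, but the general shape of such a claims pipeline is easy to imagine. Below is a minimal, hypothetical sketch in Python – every name, rate and benefit figure is invented for illustration and is not IBM’s or Fukoku’s actual system.

```python
# Hypothetical sketch of the kind of claims-assessment pipeline the
# article describes. All names and figures are invented placeholders.

from dataclasses import dataclass, field

@dataclass
class Claim:
    certificate_text: str        # text extracted from a medical certificate
    hospital_days: int           # length of the hospital stay
    procedures: list = field(default_factory=list)

BASE_PAYOUT = 100_000            # assumed flat benefit (yen)
PER_DAY = 5_000                  # assumed per-day hospitalisation benefit
PROCEDURE_RATES = {"appendectomy": 50_000}   # assumed benefit schedule

def calculate_payout(claim: Claim) -> int:
    """Combine the base benefit, length of stay and procedures into a payout."""
    payout = BASE_PAYOUT
    payout += claim.hospital_days * PER_DAY
    payout += sum(PROCEDURE_RATES.get(p, 0) for p in claim.procedures)
    return payout

example = Claim("Admitted 12 days following appendectomy ...", 12, ["appendectomy"])
print(calculate_payout(example))   # 100000 + 60000 + 50000 = 210000
```

The hard part – and, per the article, the part Watson handles – is turning unstructured certificates into structured fields like these in the first place.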

The Guardian

47% of Jobs Will Disappear in the next 25 Years, According to Oxford University – Philip Perry. 

The Trump campaign ran on bringing jobs back to American shores, although mechanization has been the biggest reason for manufacturing jobs’ disappearance. Similar losses have led to populist movements in several other countries. But instead of a pro-job growth future, economists across the board predict further losses as AI, robotics, and other technologies continue to be ushered in. What is up for debate is how quickly this is likely to occur.

Now an expert at the Wharton School of Business at the University of Pennsylvania is ringing the alarm bells. According to Art Bilger, venture capitalist and board member at the business school, all the developed nations on earth will see job loss rates of up to 47% within the next 25 years – a statistic from a recent Oxford University study. “No government is prepared,” The Economist reports. These losses will include both blue- and white-collar jobs; so far, they have been restricted to the blue-collar variety, particularly in manufacturing.

To combat “structural unemployment” and the terrible blow it is bound to deal the American people, Bilger has formed a nonprofit called Working Nation, whose mission is to warn the public and to help make plans to safeguard them from this worrisome trend. Not only is the entire concept of employment about to change in dramatic fashion, the trend is irreversible. The venture capitalist has called on corporations, academia, government, and nonprofits to cooperate in modernizing our workforce.

To be clear, mechanization has always cost us jobs. The mechanical loom, for instance, put weavers out of business.

But it’s also created jobs. Mechanics had to keep the machines going, machinists had to make parts for them, and workers had to attend to them, and so on. A lot of times those in one profession could pivot to another. At the beginning of the 20th century for instance, automobiles were putting blacksmiths out of business. Who needed horse shoes anymore? But they soon became mechanics. And who was better suited?

Not so with this new trend. Unemployment today is significant in most developed nations and it’s only going to get worse. By 2034, just a few decades from now, midlevel jobs will be by and large obsolete. So far the benefits have gone only to the ultra-wealthy, the top 1%. This coming technological revolution is set to wipe out what looks to be the entire middle class. Not only will computers be able to perform tasks more cheaply than people, they’ll be more efficient too.

Accountants, doctors, lawyers, teachers, bureaucrats, and financial analysts beware: your jobs are not safe. According to The Economist, computers will be able to analyze and compare reams of data to make financial decisions or medical ones. There will be less of a chance of fraud or misdiagnosis, and the process will be more efficient. Not only are these folks in trouble, such a trend is likely to freeze salaries for those who remain employed, while income gaps only increase in size. You can imagine what this will do to politics and social stability.

Mechanization and computerization cannot cease. You can’t put the genie back in the bottle. And everyone must have it, eventually. The mindset is this: other countries would use such technology to gain a competitive advantage, and therefore we must adopt it. Eventually, new tech startups and other businesses might absorb those who have been displaced. But the pace is sure to move far too slowly to avoid a major catastrophe.

According to Bilger, the problem has been going on for a long time. Take into account the longevity we are enjoying nowadays and the US’s broken education system, and the problem is compounded.

One proposed solution is a universal basic income doled out by the government, a sort of baseline one would receive for survival. After that, re-education programs could help people find new pursuits. Others would want to start businesses or take part in creative enterprises. It could even be a time of the flowering of humanity when, instead of chasing the almighty dollar, people would be able to pursue their true passions.

BigThink

The Second Machine Age – Erik Brynjolfsson and Andrew McAfee. 

For many thousands of years, humanity was on a very gradual upward trajectory. Progress was achingly slow, almost invisible. Animals and farms, wars and empires, philosophies and religions all failed to exert much influence.

But just over two hundred years ago, something sudden and profound arrived and bent the curve of human history—of population and social development—almost ninety degrees.

That something was the Industrial Revolution – the sum of several nearly simultaneous developments in mechanical engineering, chemistry, metallurgy, and other disciplines.

We can be even more precise about which technology was most important. It was the steam engine or, to be more precise, one developed and improved by James Watt and his colleagues in the second half of the eighteenth century.

Even though [the steam] revolution took several decades to unfold . . . it was nonetheless the biggest and fastest transformation in the entire history of the world.

It allowed us to overcome the limitations of muscle power, human and animal, and generate massive amounts of useful energy at will. This led to factories and mass production, to railways and mass transportation. It led, in other words, to modern life.

The ability to generate massive amounts of mechanical power was so important that it made a mockery of all the drama of the world’s earlier history.

Now comes the second machine age. Computers and other digital advances are doing for mental power—the ability to use our brains to understand and shape our environments—what the steam engine and its descendants did for muscle power.

Mental power is at least as important for progress and development—for mastering our physical and intellectual environment to get things done—as physical power. So a vast and unprecedented boost to mental power should be a great boost to humanity, just as the earlier boost to physical power so clearly was.

We’re at an inflection point—a point where the curve starts to bend a lot—because of computers. We are entering a second machine age.

The transformations brought about by digital technology will be profoundly beneficial ones.

As Martin Weitzman puts it, “the long-term growth of an advanced economy is dominated by the behavior of technical progress.” Technical progress is improving exponentially.

Digitization is going to bring with it some thorny challenges. 

Rapid and accelerating digitization is likely to bring economic rather than environmental disruption, stemming from the fact that as computers get more powerful, companies have less need for some kinds of workers. Technological progress is going to leave behind some people, perhaps even a lot of people, as it races ahead.

There’s never been a better time to be a worker with special skills or the right education, because these people can use technology to create and capture value. However, there’s never been a worse time to be a worker with only ‘ordinary’ skills and abilities to offer, because computers, robots, and other digital technologies are acquiring these skills and abilities at an extraordinary rate.

———-

from The Second Machine Age by Erik Brynjolfsson and Andrew McAfee

World’s first solar panel roadway opens in French town. 

The Normandy town of Tourouvre has opened the world’s first solar roadway, bringing the hugely popular idea into reality.

The notion of paving roadways with solar panels to meet our energy needs is very appealing, but for the longest time it has remained largely a theoretical one.

The newly launched French roadway is just one kilometre long but that works out to be 2800 square metres of photovoltaic cells – enough, theoretically, to power the village’s street lights.
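
The claim is easy to sanity-check with rough numbers. Below is a back-of-envelope sketch in Python; the efficiency and insolation figures are assumptions rather than reported values, and flat, resin-coated road panels will in practice yield considerably less than ideally tilted rooftop ones.

```python
# Back-of-envelope check on the street-lights claim. The efficiency and
# insolation figures are assumptions, not numbers from the article.

area_m2 = 2800                  # from the article: 1 km of road surface
efficiency = 0.15               # assumed for flat, resin-coated panels
insolation_kwh_m2_yr = 1100     # assumed rough annual figure for Normandy

annual_output_kwh = area_m2 * efficiency * insolation_kwh_m2_yr
print(f"{annual_output_kwh:,.0f} kWh per year")   # ~462,000 kWh

# If a street light draws ~100 W for ~4,000 hours a year (assumed),
# that is 400 kWh per light, so output on this scale could cover
# on the order of a thousand lights:
print(round(annual_output_kwh / 400))             # ~1,155
```

Even if real-world output is only a fraction of this optimistic figure, a village’s street lighting is a plausible load.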

The resin-coated solar panels were hooked up to the local power grid just in time for Christmas as France’s Environment Minister Segolene Royal looked on last week.

“This new use of solar energy takes advantage of large swathes of road infrastructure already in use … to produce electricity without taking up new real estate.”

NZ Herald 

Solar Roadways – YouTube 

One day such technology could revolutionise roadways, energy infrastructure and even how cars work and interact with the road.

Let robots take the Xmas stress – Michelle Dickinson. 

Cooking Christmas dinner for the family is a huge responsibility – in fact research has shown it to be the most stressful meal of the year to prepare.

The fear of dry turkey, limp broccoli and soggy pavlova is enough to make even the most experienced cook think seriously about restaurant reservations for next year.

All that pressure may soon be a thing of the past, however, thanks to a new generation of smart kitchen appliances. These devices promise Christmas dinner – and all your other meals – prepared perfectly every time.

Next year the Moley Robotics robochef goes on sale to the public. Two robotic arms installed above a cooking area with a sink, oven and hob are designed to revolutionise cooking as we know it.

The robot’s anthropomorphic hands appear uncannily human-like when cooking. Using 3D motion cameras, the intricate movements of a human chef are captured electronically, and this data is then translated into movements by the robot. With its 20 motors, 24 joints and 129 sensors, it can hold pots, pick up and use kitchen utensils, stack dishes and squeeze ingredient bottles.

With its cooking abilities based directly on mimicry of human skills, the robochef is only as good as the human that was recorded.
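
The record-and-replay idea the article describes – capture a chef’s movements with 3D motion cameras, then drive the robot’s joints through the same trajectory – can be sketched in a few lines. Everything below is invented for illustration; Moley has not published an API.

```python
# Hypothetical record-and-replay sketch. Class and method names are
# invented; only the joint count (24) comes from the article.

import time

class RoboticArm:
    def __init__(self, num_joints=24):
        self.joint_angles = [0.0] * num_joints

    def move_to(self, target_angles):
        """Drive each joint to its recorded target angle."""
        self.joint_angles = list(target_angles)

def replay(arm, recording, dt=0.02):
    """Play back a recording: a list of joint-angle frames sampled at a
    fixed rate (an assumed 50 Hz here)."""
    for frame in recording:
        arm.move_to(frame)
        time.sleep(dt)

# A trivial two-frame "recording" standing in for captured chef data:
recording = [[0.0] * 24, [0.1] * 24]
replay(RoboticArm(), recording)
```

This is also why the result is only as good as the chef who was recorded: the robot reproduces trajectories, it does not improvise.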

NZ Herald 

Uber’s self-driving truck delivered 50,000 cans of Budweiser.

At least they’re not risking real beer 😉  

If you drank a cold beer in Colorado Springs in the US this weekend, it may have been delivered by a self-driving truck.

That’s where a big rig tricked out with a sophisticated system that lets a computer take control on the road delivered 50,000 cans of Budweiser last week – in what the beer company says was the first commercial delivery using the tech.

The truck that made the 120-mile journey is one of a handful of Volvo rigs equipped with tech developed by Otto, a start-up Uber acquired in August.

Unlike other self-driving systems on the market, such as Tesla’s autopilot, Otto’s tech lets drivers get out from behind the wheel altogether. NZ Herald 

Isis used drone to kill Kurdish fighters and wound French troops. 

No Martyrs Required. 

In a possible first-of-its-kind attack on Western forces, Isis used a drone loaded with explosives to strike a Kurdish and French position in northern Iraq earlier this month, according to a report in a French newspaper. NZ Herald 

Christchurch research key to printing human body parts. 

A Christchurch research team is a step closer to regenerating human body parts with the development of a new type of “bio-ink”.

Bio-ink is the key to making human joints, cartilage and bone – the dream of the University of Otago, Christchurch, Regenerative Medicine research group led by Associate Professor Tim Woodfield.
Instead of artificial knees and hips, patients could have a regenerated joint made with human cells. Stuff.co.nz 

Society could let everybody follow their talents. Imagine! 

In a world in which I don’t have to work, I might feel rather different, perhaps different enough to throw myself into a hobby or a passion project with the intensity usually reserved for professional matters.

We have forgotten how to play. We teach children a distinction between play and work. Work is something that you don’t want to do but you have to do. This training, which starts in school, eventually “drills the play” out of many children, who grow up to be adults who are aimless when presented with free time.
They’ve lost the ability to create their own activities. It’s a problem that never seems to plague young children: “There are no three-year-olds that are going to be lazy and depressed because they don’t have a structured activity.” The Atlantic

Would a Work-Free World Be So Bad?

Fears of civilization-wide idleness are based too much on the downsides of being unemployed in a society premised on the concept of employment.
People have speculated for centuries about a future without work, and today is no different, with academics, writers, and activists once again warning that technology is replacing human workers. Some imagine that the coming work-free world will be defined by inequality: A few wealthy people will own all the capital, and the masses will struggle in an impoverished wasteland.

A different, less paranoid, and not mutually exclusive prediction holds that the future will be a wasteland of a different sort, one characterized by purposelessness: Without jobs to give their lives meaning, people will simply become lazy and depressed. Indeed, today’s unemployed don’t seem to be having a great time. The Atlantic

Millions of drones will fill the skies. 

So many people are registering drones and applying for drone pilot licenses in the U.S. that federal aviation officials say they are contemplating the possibility of millions of unmanned aircraft crowding the nation’s skies in the not-too-distant future.

In the nine months since the Federal Aviation Administration created a drone registration system, more than 550,000 unmanned aircraft have been registered with the agency.

New registrations are coming in at a rate of 2,000 a day. By comparison, there are 260,165 manned aircraft registered in the U.S. NZ Herald

Uber gives riders a preview of the driverless future. 

Uber is the first company in the U.S. to make self-driving cars available to the general public.
Starting Wednesday morning, a fleet of self-driving Ford Fusions will pick up Uber riders who opted to participate in a test program. While the vehicles are loaded with features that allow them to navigate on their own, an Uber engineer will sit in the driver’s seat and seize control if things go awry. NZ Herald

Burritos in the Sky

“It’s real customers that are working and need lunch and want it delivered by drone.” Stuff