Category Archives: Artificial Intelligence

The future of AI lies in replicating our own neural networks – Ben Medlock.

It’s tempting to think of the mind as a layer that sits on top of more primitive cognitive structures. We experience ourselves as conscious beings, after all, in a way that feels different to the rhythm of our heartbeat or the rumblings of our stomach. If the operations of the brain can be separated out and stratified, then perhaps we can construct something akin to just the top layer, and achieve humanlike artificial intelligence (AI) while bypassing the messy flesh that characterizes organic life.

I understand the appeal of this view because I co-founded SwiftKey, a predictive-language software company that was bought by Microsoft. Our goal was to emulate the remarkable processes by which human beings can understand and manipulate language. We’ve made some decent progress: I was pretty proud of the elegant new communication system we built for the physicist Stephen Hawking between 2012 and 2014. But despite encouraging results, most of the time I’m reminded that we’re nowhere near achieving human-like AI. Why? Because the layered model of cognition is wrong. Most AI researchers are currently missing a central piece of the puzzle: embodiment.

Things took a wrong turn at the beginning of modern AI, back in the 1950s. Computer scientists decided to try to imitate conscious reasoning by building logical systems based on symbols. The method involves associating real-world entities with digital codes to create virtual models of the environment, which can then be projected back onto the world itself. For instance, using symbolic logic, you could instruct a machine to ‘learn’ that a cat is an animal by encoding a specific piece of knowledge using a mathematical formula such as ‘cat > is > animal’. Such formulae can be rolled up into more complex statements that allow the system to manipulate and test propositions such as whether your average cat is as big as a horse, or likely to chase a mouse.
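The kind of encoding described above can be sketched in a few lines. The facts, relation names, and inference rule below are invented for illustration, not drawn from any historical system:

```python
# A minimal sketch of 1950s-style symbolic AI: facts are stored as
# (subject, relation, object) triples, and a simple rule lets the
# system "reason" about propositions. All names here are illustrative.

FACTS = {
    ("cat", "is", "animal"),
    ("animal", "is", "organism"),
    ("cat", "chases", "mouse"),
}

def holds(subject, relation, obj):
    """Check a proposition, following 'is' links transitively."""
    if (subject, relation, obj) in FACTS:
        return True
    if relation == "is":
        # e.g. cat is animal, animal is organism => cat is organism
        return any(holds(mid, "is", obj)
                   for (s, r, mid) in FACTS
                   if s == subject and r == "is")
    return False

print(holds("cat", "is", "organism"))   # True, via two hops
print(holds("cat", "chases", "mouse"))  # True, a direct fact
print(holds("cat", "is", "horse"))      # False
```

The brittleness the article describes shows up immediately: the system only ‘knows’ what has been hand-encoded, and an ambiguous or missing fact silently yields False.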

This method found some early success in simple contrived environments: in ‘SHRDLU’, a virtual world created by the computer scientist Terry Winograd at MIT between 1968 and 1970, users could talk to the computer in order to move around simple block shapes such as cones and balls. But symbolic logic proved hopelessly inadequate when faced with real-world problems, where finely tuned symbols broke down in the face of ambiguous definitions and myriad shades of interpretation.

In later decades, as computing power grew, researchers switched to using statistics to extract patterns from massive quantities of data. These methods are often referred to as ‘machine learning’. Rather than trying to encode high-level knowledge and logical reasoning, machine learning employs a bottom-up approach in which algorithms discern relationships by repeating tasks, such as classifying the visual objects in images or transcribing recorded speech into text. Such a system might learn to identify images of cats, for example, by looking at millions of cat photos, or to make a connection between cats and mice based on the way they are referred to throughout large bodies of text.

Machine learning has produced many tremendous practical applications in recent years. We’ve built systems that surpass us at speech recognition, image processing and lip reading; that can beat us at chess, Jeopardy! and Go; and that are learning to create visual art, compose pop music and write their own software programs. To a degree, these self-teaching algorithms mimic what we know about the subconscious processes of organic brains.

Machine-learning algorithms start with simple ‘features’ (individual letters or pixels, for instance) and combine them into more complex ‘categories’, taking into account the inherent uncertainty and ambiguity in real-world data. This is somewhat analogous to the visual cortex, which receives electrical signals from the eye and interprets them as identifiable patterns and objects.
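The feature-to-category idea can be illustrated with a deliberately tiny example: a perceptron, one of the oldest machine-learning models, that treats individual letters as low-level features and learns a made-up ‘feline vs. other’ category. The word lists and labels are invented for the sketch:

```python
# A toy illustration (not a real ML system): individual letters act as
# low-level "features", and a perceptron combines them into a higher-
# level "category". Words and labels are made up for the example.

def features(word):
    # One binary feature per letter of the alphabet: present or absent.
    return [1.0 if chr(c) in word else 0.0 for c in range(97, 123)]

def train(samples, epochs=20, lr=0.1):
    weights = [0.0] * 26
    bias = 0.0
    for _ in range(epochs):
        for word, label in samples:  # label: 1 = "feline", 0 = "other"
            x = features(word)
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            err = label - pred
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

def predict(word, weights, bias):
    x = features(word)
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

data = [("cat", 1), ("kitten", 1), ("tomcat", 1),
        ("dog", 0), ("horse", 0), ("mouse", 0)]
weights, bias = train(data)
print(predict("cat", weights, bias))  # 1
print(predict("dog", weights, bias))  # 0
```

Note what the sketch lacks: the model learns whatever statistical shortcut separates the training words (here, simply the letter ‘t’), with no grounded concept of a cat at all.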

But algorithms are a long way from being able to think like us. The biggest distinction lies in our evolved biology, and how that biology processes information. Humans are made up of trillions of eukaryotic cells, which first appeared in the fossil record around 2.5 billion years ago. A human cell is a remarkable piece of networked machinery that has about the same number of components as a modern jumbo jet, all of which arose out of a longstanding, embedded encounter with the natural world. In Basin and Range (1981), the writer John McPhee observed that, if you stand with your arms outstretched to represent the whole history of the Earth, complex organisms began evolving only at the far wrist, while ‘in a single stroke with a medium-grade nail file you could eradicate human history’.

The traditional view of evolution suggests that our cellular complexity evolved from early eukaryotes via random genetic mutation and selection. But in 2005 the biologist James Shapiro at the University of Chicago outlined a radical new narrative. He argued that eukaryotic cells work ‘intelligently’ to adapt a host organism to its environment by manipulating their own DNA in response to environmental stimuli. Recent microbiological findings lend weight to this idea. For example, mammals’ immune systems have the tendency to duplicate sequences of DNA in order to generate effective antibodies to attack disease, and we now know that at least 43% of the human genome is made up of DNA that can be moved from one location to another, through a process of natural ‘genetic engineering’.

Now, it’s a bit of a leap to go from smart, self-organizing cells to the brainy sort of intelligence that concerns us here. But the point is that long before we were conscious, thinking beings, our cells were reading data from the environment and working together to mould us into robust, self-sustaining agents. What we take as intelligence, then, is not simply about using symbols to represent the world as it objectively is. Rather, we only have the world as it is revealed to us, which is rooted in our evolved, embodied needs as an organism. Nature ‘has built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it’, wrote the neuroscientist Antonio Damasio in Descartes’ Error (1994), his seminal book on cognition. In other words, we think with our whole body, not just with the brain.

I suspect that this basic imperative of bodily survival in an uncertain world is the basis of the flexibility and power of human intelligence. But few AI researchers have really embraced the implications of these insights. The motivating drive of most AI algorithms is to infer patterns from vast sets of training data, so it might require millions or even billions of individual cat photos to gain a high degree of accuracy in recognizing cats. By contrast, thanks to our needs as an organism, human beings carry with them extraordinarily rich models of the body in its broader environment. We draw on experiences and expectations to predict likely outcomes from a relatively small number of observed samples. So when a human thinks about a cat, she can probably picture the way it moves, hear the sound of purring, feel the impending scratch from an unsheathed claw. She has a rich store of sensory information at her disposal to understand the idea of a ‘cat’, and other related concepts that might help her interact with such a creature.

This means that when a human approaches a new problem, most of the hard work has already been done.

In ways that we’re only just beginning to understand, our body and brain, from the cellular level upwards, have already built a model of the world that we can apply almost instantly to a wide array of challenges. But for an AI algorithm, the process begins from scratch each time.

There is an active and important line of research, known as ‘inductive transfer’, focused on using prior machine-learned knowledge to inform new solutions. However, as things stand, it’s questionable whether this approach will be able to capture anything like the richness of our own bodily models.
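As a rough sketch of what inductive transfer means in practice, the toy learner below is ‘warm-started’ with weights from a related task instead of starting from scratch. The tasks, feature names, and training scheme are all invented for illustration:

```python
# A rough sketch of the idea behind inductive transfer: instead of
# learning a new task from scratch, start from representations learned
# on an earlier, related task. Everything here is illustrative.

def learn(samples, start=None, epochs=5, lr=0.1):
    """Train a tiny linear scorer, optionally warm-started."""
    w = dict(start) if start else {}
    for _ in range(epochs):
        for feats, label in samples:
            score = sum(w.get(f, 0.0) for f in feats)
            err = label - (1 if score > 0 else 0)
            for f in feats:
                w[f] = w.get(f, 0.0) + lr * err
    return w

# Task A: plenty of data about house cats.
task_a = [({"whiskers", "purrs", "small"}, 1),
          ({"barks", "fetches"}, 0)] * 10

# Task B: a single example about a related concept (big cats).
task_b = [({"whiskers", "purrs", "large"}, 1)]

w_a = learn(task_a)             # learned from scratch
w_b = learn(task_b, start=w_a)  # transferred: reuses Task A's weights

# The transferred model already "knows" that whiskers and purring matter.
print(w_b["whiskers"] > 0)  # True
```

Even this caricature shows the gap the paragraph describes: what transfers is a bag of reusable statistical weights, not anything like a body’s lived model of cats.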

On the same day that SwiftKey unveiled Hawking’s new communications system in 2014, he gave an interview to the BBC in which he warned that intelligent machines could end mankind. You can imagine which story ended up dominating the headlines.

I agree with Hawking that we should take the risks of rogue AI seriously. But I believe we’re still very far from needing to worry about anything approaching human intelligence, and we have little hope of achieving this goal unless we think carefully about how to give algorithms some kind of long-term, embodied relationship with their environment.


EMOTION AI, Artificial Emotional Intelligence and Affective Computing – Richard Yonck.

The Coming Era of Emotional Machines

Emotion AI is growing rapidly and will bring many changes to our society.

You have a report deadline in 20 minutes and your software keeps incorrectly reformatting your document. Or you’re driving along when another car cuts you off at the intersection. Or you’re upset at your boss and decide to finally tell him how you really feel about him in an email.

Wouldn’t it be great if technology could detect your feelings and step in to fix the problem, prevent you from doing something dangerous, or point out the benefits of holding onto your job?

Welcome to the world of affective computing, otherwise known as artificial emotional intelligence, or Emotion AI.

Rapidly being incorporated into everything from market research testing to automotive interfaces to chatbots and social robotics, this is a branch of AI that will continue to grow rapidly over the next few decades. The research group Markets and Markets expects the global affective computing market to grow from $12.20 billion in 2016 to $53.98 billion by 2021, a compound annual growth rate (CAGR) of 34.7%.
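As a quick sanity check on the quoted figures: a compound annual growth rate is (end / start) to the power 1/years, minus 1, and plugging in the 2016 and 2021 market sizes gives roughly the rate cited:

```python
# Verifying the quoted market forecast: CAGR = (end/start)**(1/years) - 1.
# The dollar figures are the ones given in the text.

start, end, years = 12.20, 53.98, 5  # $ billions, 2016 -> 2021

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # about 34.6%, in line with the quoted 34.7%
```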

For decades we have become increasingly dependent on our computers and other devices to perform tasks and make our lives easier. Along the way, these have not only improved in performance but have gained some degree of intelligence as well. Yet while artificial intelligence has become highly capable at some tasks, such as pattern recognition, there remain many ways our systems continue to come up short. Having a better sense of the user’s state of mind would go a long way toward knowing what the user wants, even before they know it themselves.

Needless to say, while new technology such as this has huge potential for improving our lives, there are also many ways it could be turned to negative uses. As explored in my book, Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence, this field probably brings as many risks as it does opportunities. Emotionally aware systems and robots will find many roles in healthcare, education, autism detection and therapy, politics, law enforcement, the military and more. Yet each will bring challenges as well. Issues of privacy, emotional manipulation and self-determination will definitely come into play.

As these systems become increasingly accurate and ubiquitous throughout our environment, the challenges and the stakes will rise. Anticipating these and acting to mitigate the negative repercussions will be our best course toward ensuring a safe and more ethical future.

Psychology Today


Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence

Richard Yonck.


Emotion. It’s as central to who you are as your body and your intellect. While most of us know emotion when we see or experience it, many questions remain about what it is, how it functions, and even why it exists in the first place. What’s known for certain is that without it, you would not be the person you are today.

Now we find ourselves entering an astonishing new era, an era in which we are beginning to imbue our technologies with the ability to read, interpret, replicate, and potentially even experience emotions themselves. This is being made possible by a relatively new branch of artificial intelligence known as affective computing. A powerful and remarkable technology, affective computing is destined to transform our lives and our world over the coming decades.

To some this may all sound like science fiction, while to others it is simply another example of the relentless march of progress. Either way, we are growing closer to our technology than ever before. Ultimately this will lead to our devices becoming our assistants, our friends and companions, and yes, possibly even our lovers. In the course of it all, we may even see the dream (or nightmare) of truly intelligent machines come true.

From the moment culture and toolmaking began, the history and evolution of humanity and technology have been deeply intertwined. Neither humans nor machines would be anywhere close to what we are today without the immediate and ongoing aid of the other. This is an inextricable trend that, with luck, will continue for our world’s remaining lifespan and beyond.

This technological evolution is being driven by social and economic forces that mimic some of the processes of natural selection, though certainly not all of them. In an effort to attain competitive advantage, humans use technologies (including machines, institutions, and culture). In turn, these pass through a series of filters that determine a given technology’s fitness within its overall environment. That environment, which blends society’s physical, social, economic, and political realities, decides the success of each new development, even as it is modified and supported by every further advance.

Though natural and technological evolution share some similarities, one way they differ is in the exponential nature of technological change. While biology evolves at a relatively steady, linear pace that is dictated by factors such as metabolism, replication rates, and the frequency of nucleotide mutation, technological evolution functions within multiple positive feedback loops that actually accelerate its development. Though this acceleration is not completely constant and typically levels off for any single domain or paradigm, over time and across the entire technological landscape, the trend results in a net positive increase in knowledge and capabilities. Because of this, technology and all it makes possible advances at an ever-increasing exponential rate, far outpacing the changes seen in the biological world over the same period.

One of the consequences of all of this progress is that it generates a need to create increasingly sophisticated user interfaces that allow us to control and interact with our many new devices and technologies. This is certainly borne out in my own experience developing interfaces for computer applications over many years. As technology theorist Brenda Laurel observed, “The greater the difference between the two entities, the greater the need for a well-designed interface.” As a result, one ongoing trend is that we continue to develop interfaces that are increasingly “natural” to use, integrating them ever more closely with our lives and our bodies, our hearts, and our minds.

Heart of the Machine is about some of the newest of these natural interfaces. Affective computing integrates computer science, artificial intelligence, robotics, cognitive science, psychology, biometrics, and much more in order to allow us to communicate and interact with computers, robots, and other technologies via our feelings. These systems are being designed to read, interpret, replicate, and potentially even influence human emotions. Already some of these applications have moved out of the lab and into commercial use. All of this marks a new era, one in which we’re seeing the digitization of affect, a term psychologists and cognitive scientists use to refer to the display of emotion.

While this is a very significant step in our increasingly high-tech world, it isn’t an entirely unanticipated one. As you’ll see, this is a development that makes perfect sense in terms of our ongoing, evolving relationship with technology. At the same time, it’s bringing about a shift in that relationship that will have tremendous repercussions for both man and machine. The path it takes us down is far from certain. The world it could lead to may be a better place, or it might be a far worse one. Will these developments yield systems that anticipate and fulfill our every need before we’re even aware of them? Or will they give rise to machines that can be used to stealthily manipulate us as individuals, perhaps even en masse? Either way, it’s in our best interests to explore the possible futures this technology could bring about while we still have time to influence how these will ultimately manifest.

In the course of this book, multiple perspectives will be taken at different points. This is entirely intentional. When exploring the future, recognizing that it can’t truly be known or predicted is critical. One of the best ways of addressing this is to explore numerous possible future scenarios and, within reason, prepare for each. This means not only considering what happens if the technology develops as planned or not, but also whether people will embrace it or resist it. It means anticipating the short-, mid-, and long-term repercussions that may arise from it, including what would otherwise be unforeseen consequences. This futurist’s view can help us to prepare for a range of eventualities, taking a proactive approach in directing how our future develops.

Heart of the Machine is divided into three sections.

The first, “The Road to Affective Computing,” introduces our emotional world, from humanity’s earliest days up to the initial development of emotionally aware affective computers and social robots. The second section, “The Rise of the Emotional Machines,” looks at the many ways these technologies are being applied, how we’ll benefit from them, and what we should be worried about as they meet their future potential.

Finally, “The Future of Artificial Emotional Intelligence” explores the big questions about how all of this is likely to develop and the effects it will have on us as individuals and as a society. It wraps up with a number of thoughts about consciousness and superintelligence and considers how these developments may alter the balance of the human-machine relationship.

Until now, our three-million-year journey with technology has been a relatively one-sided and perpetually mute one. But how might this change once we begin interacting with machines on what for us remains such a basic level of experience? At the same time, are we priming technology for some sort of giant leap forward with these advances? If artificial intelligence is ever to attain or exceed human levels, and perhaps even achieve consciousness in the process, will feelings and all they make possible be the spark that lights the fuse? Only time will tell, but in the meantime we’d be wise to explore the possibility.

Though this is a book about emotions and feelings, it is very much founded on science, research, and an appreciation of the evolving nature of intelligence in the universe. As we’ll explore, emotions may be not only a key aspect of our own humanity, but a crucial component for many, if not all, higher intelligences, no matter what form these may eventually take.


Futures, or “strategic foresight” as it’s sometimes known, is a field unlike any other. On any given day you’re likely to be asked, “What is a futurist?” or “What does a futurist do?” Many people have an image of a fortuneteller gazing into a crystal ball, but nothing could be further from the truth. Because ultimately, all of us are futurists.

Foresight is one of the dominant characteristics of the human species. With self-awareness and introspection came the ability to anticipate patterns and cycles in our environment, enhancing our ability to survive.

As a result, we’ve evolved a prefrontal cortex that enables us to think about the days ahead far better than any other species.

It might have begun with something like the recognition of shifting patterns in the grasslands of the Serengeti that let us know a predator lay in wait. This continued as we began to distinguish the phases of the moon, the ebb and flow of the tides, the cycles of the seasons. Then it wasn’t long before we were anticipating eclipses, forecasting hurricanes, and predicting stock market crashes. We are Homo sapiens, the futurist species.

Of course, this was only the beginning. As incredible as this ability of ours is, it could only do so much in its original unstructured state. So, when the world began asking itself some very difficult and important existential questions about surviving the nuclear era, it was time to begin formalizing how we thought about the future.

Project RAND

For many, Project RAND, which began immediately after World War II, marks the beginning of the formal foresight process. Building on our existing capabilities, Project RAND sought to understand the needs and benefits of connecting military planning with R&D decisions. This allowed the military to better understand not only what its future capabilities would be, but also those of the enemy. This was critical because, being the dawn of the atomic age, there were enormous uncertainties about our future, including whether or not we would actually survive to have one.

Project RAND eventually transformed into the RAND Corporation, one of the first global policy think tanks. As the space race ramped up, interest in foresight grew, particularly in government and the military. In time, corporations began showing interest too, as was famously demonstrated by Royal Dutch Shell’s application of scenarios in response to the 1973 oil crisis. Tools and methods have continued to be developed until today, and many of the processes of foresight are used throughout our world, from corporations like Intel and Microsoft, who have in-house futurists, to smaller businesses and organizations that hire consulting futurists. Branding, product design, research and development, government planning, education administration: if it has a future, there are people who explore it. Using techniques for framing projects, scanning for and gathering information, building forecasts and scenarios, creating visions and planning and implementing them, these practitioners help identify opportunities and challenges so that we can work toward our preferred future.

This is an important aspect of foresight work: recognizing that the future is not set in stone and that we all have some ability to influence how it develops.

Notice I say influence, not control. The many elements that make up the future are of a scale and complexity far too great for any of us to control. But if we recognize something about our future that we want to manifest, and we recognize it early enough, we can influence other factors that will increase its likelihood of being realized.

A great personal example would be saving for retirement. A young person who recognizes they will one day retire can start building their savings and investments early on. In doing this, they’re more likely to be financially secure in their golden years, much more so than if they’d waited until they were in their fifties or sixties before they started saving.

Many of foresight’s methods and processes have been used in the course of writing this book. Horizon scanning, surveying of experts, and trend projections are just a few of these. Scenarios are probably the most evident of these tools because they’re included throughout the book. The processes futurists use generate a lot of data, which often doesn’t convey what’s important to us as people. But telling stories does, because we’ve been storytellers from the very beginning. Stories help us relate to new knowledge and to each other. This is what a scenario does: it takes all of that data and transforms it into a more personal form that is easier for us to digest.

Forecasts are not generally included because in many respects they’re not that valuable. Some people think studying the future is about making predictions, which really isn’t the case. Knowing whether an event will happen in 2023 or 2026 is of limited value compared with the act of anticipating the event at all and then deciding what we’re going to do about it. Speculating about who’s going to win a horse race or the World Cup is for gamblers, not for futurists.

In many respects, a futurist explores the future the way a historian explores history, inferring a whole picture or pattern from fragments of clues. While it may be tempting to ask how there can be clues to something that hasn’t even happened yet, recall that every future is founded upon the past and present, and that these are laden with signals and indicators of what’s to come.

So read on and learn about this future age of artificial emotional intelligence, because all too soon, it will be part of our present as well.





Menlo Park, California. March 3, 2032, 7:06 a.m.

It’s a damp spring morning as Abigail is gently roused from slumber by Mandy, her personal digital assistant. Sensors in the bed inform Mandy exactly where Abigail is in her sleep cycle, allowing it to coordinate with her work schedule and wake her at the optimum time. Given the morning’s gray skies and Abigail’s less-than-cheery mood when she went to bed the night before, Mandy opts to waken her with a recorded dawn chorus of sparrows and goldfinches.

Abigail stretches and sits up on the edge of the bed, feeling for her slippers with her feet. “Mmm, morning already?” she mutters.

“You slept seven hours and nineteen minutes with minimal interruption,” Mandy informs her with a pleasant, algorithmically defined lilt via the room’s concealed speaker system. “How are you feeling this morning?”

“Good,” Abigail replies, blinking. “Great, actually.”

It’s a pleasantry. Mandy didn’t really need to ask or to hear its owner’s response. The digital assistant had already analyzed Abigail’s posture, energy levels, expression, and vocal tone using its many remote sensors, assessing that her mood is much improved from the prior evening.

It’s a routine morning for the young woman and her technology. The two have been together for a long time. Many years before, when she was still a teen, Abigail named her assistant Mandy. Of course, back then the software was also several versions less sophisticated than it is today, so in a sense they’ve grown up together. During that time, Mandy has become increasingly familiar with Abigail’s work habits, behavioral patterns, moods, preferences, and various other idiosyncrasies. In many ways, it knows Abigail better than any person ever could.

Mandy proceeds to tell Abigail about the weather and traffic conditions, her morning work schedule, and a few of the more juicy items rising to the top of her social media stream as she gets ready for her day.

“Mandy,” Abigail asks as she brushes her hair, “do you have everything organized for today’s board meeting?”

The personal assistant has already anticipated the question and consulted Abigail’s calendar and biometric historical data before making all the needed preparations for her meeting with her board of directors. Abigail is the CEO of AAT, Applied Affective Technologies, a company at the forefront of human-machine relations. “Everyone’s received their copies of the meeting agenda. Your notes and 3D presentation are finalized. Jeremy has the breakfast catering covered. And I picked out your clothes for the day: the Nina Ricci set.”

“Didn’t I wear that recently?”

Mandy responds without hesitation. “My records show you last wore it over two months ago for a similarly important meeting. It made you feel confident and empowered, and none of today’s attendees has seen it on you before.”

“Perfect!” Abigail beams. “Mandy, what would I do without you?”

What indeed?

Though this scenario may sound like something from a science fiction novel, in fact it’s a relatively reasonable extrapolation of where technology could be fifteen years from now. Already, voice recognition and synthesis, the real-time measurement of personal biometrics, and artificially intelligent scheduling systems are becoming an increasing part of our daily lives. Given continuing improvements in computing power, as well as advances in other relevant technologies, in a mere decade these tools will be far more advanced than they are today.

However, the truly transformational changes described here will come from a branch of computer science that is still very much in its nascent stages, still early enough that many people have yet to even hear about it.

It’s called affective computing, and it deals with the development of systems and devices that interact with our feelings.

More specifically, affective computing involves the recognition, interpretation, replication, and potentially the manipulation of human emotions by computers and social robots.

This rapidly developing field has the potential to radically change the way we interact with our computers and other devices. Increasingly, systems and controls will be able to alter their operations and behavior according to our emotional responses and other nonverbal cues. By doing this, our technology will become increasingly intuitive to use, addressing not only our explicit commands but our unspoken needs as well. In the pages that follow, we will explore just what this new era could mean for our technologies and for ourselves.
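One way to picture the recognition-and-adaptation loop described above is a crude scoring sketch. The signal names, weights, and threshold below are invented placeholders, nothing like a production affect model:

```python
# A deliberately simple sketch: blend a few normalized (0..1) nonverbal
# cues into a frustration estimate, then let the interface adapt.
# Signals, weights, and the threshold are all illustrative.

def frustration_score(signals):
    """Weighted blend of normalized nonverbal cues."""
    weights = {"typing_error_rate": 0.4,
               "voice_pitch_rise": 0.3,
               "brow_furrow": 0.3}
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

def choose_response(signals, threshold=0.5):
    if frustration_score(signals) >= threshold:
        return "offer step-by-step help"
    return "stay out of the way"

calm = {"typing_error_rate": 0.1, "voice_pitch_rise": 0.0, "brow_furrow": 0.2}
tense = {"typing_error_rate": 0.8, "voice_pitch_rise": 0.7, "brow_furrow": 0.9}

print(choose_response(calm))   # stay out of the way
print(choose_response(tense))  # offer step-by-step help
```

Real affective systems replace the hand-set weights with learned models, but the shape of the loop is the same: sense nonverbal cues, estimate affect, adapt behavior.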

We are all emotional machines. Centuries of research into anatomy, biology, neurology, and numerous other fields have consistently revealed that nearly all of what we are follows a predictable set of physical processes. These mechanistically driven rules make it possible for us to move, to eat, to grow, to procreate. Within an extremely small range of genetic variation, we are all essentially copies of those who came before us, destined to produce generation after generation of nearly identical cookie-cutter reproductions of ourselves well into the future.

Of course, we know this is far from the full reality of the human experience. Though these deterministic forces define us up to a point, we exist in far greater depth and dimension than can be explained by any mere set of stimuli and responses. This is foremost because we are emotional beings. That the dreams, hopes, fears, and desires of each and every one of us are so unique while remaining so universal is largely due to our emotional experience of the world. If this were not so, identical twins who grow up together would have all but identical personalities. Instead, they begin with certain shared genetically influenced traits and behaviors and over time diverge from there. While all humanity shares nearly identical biology, chemical processes, and modes of sensory input, it is our feelings, our emotional interpretations of and responses to the world we experience, that make all of us on this planet, all 107 billion people who have ever lived, truly unique from one another.

There are easily hundreds, if not thousands, of theories about emotions, what they are, why they exist, and how they came about, and there is no way for a book such as this to begin to introduce or address them all. Nor does this book claim to know which, if any, of these is the One True Theory, in part because, in all likelihood, there is none. It’s been said repeatedly by neuroscientists, psychologists, and philosophers that there are nearly as many theories of emotion as there are theorists.

Emotion is an incredibly complex aspect of the human condition and mind, second only perhaps to the mystery of consciousness itself. What is important is to recognize its depth and complexity without attempting to oversimplify either its mechanisms or purpose.

Emotions are one of the most fundamental components of the human experience. Yet, as central as they are to our lives, we continue to find it a challenge to define or even to account for them. In many respects, we seem to have our greatest insights about feelings and emotions in their absence or when they go awry. Despite the many theories that exist, all we know with certainty is that they are essential in making us who we are, and that without them we would be but pale imitations of ourselves.

So what might this mean as we enter an era in which our machines, our computers, robots, and other devices, become increasingly capable of interacting with our emotions? How will it change our relationship with our technologies and with each other? How will it alter technology itself? Perhaps most importantly:

If emotion has evolved in humans and certain other animals because it affords us some benefit, might it convey a similar benefit in the future development of artificial intelligence?

For reasons that will be explored in the coming chapters, affective computing is a very natural progression in our ongoing efforts to build technologies that operate increasingly on human terms, rather than the other way around. As a result, this branch of artificial intelligence will come to be incorporated to one degree or another nearly everywhere in our lives. At the same time, just like almost every other form of artificial intelligence that has been developed and commercialized, affective computing will eventually fade into the scenery, an overlooked, underappreciated feature that we will quickly take all too much for granted because it will be ubiquitous.

Consider the possibilities. Rooms that alter lighting and music based on your mood. Toys that engage young minds with natural emotional responses. Computer programs that notice your frustration over a task and alter their manner of assistance. Email that makes you pause before sending that overly inflammatory message. The scenarios are virtually endless.

But it’s a rare technology that doesn’t have unintended consequences or that is used exclusively as its inventors anticipated. Affective computing will be no different. It doesn’t take a huge leap of foresight to anticipate that this technology will also inevitably be applied and abused in ways that clearly aren’t a benefit to the majority of society. As this book will explore, like so many other technologies, affective computing will come to be seen as a double-edged sword, one that is capable of working for us while also having the capacity to do us considerable harm.

Amidst all of this radical progress, there is yet another story to be told. In many respects, affective computing represents a milestone in the long evolution of technology and our relationship to it. It’s a story millions of years in the making and one that may be approaching a critical juncture, one that could well determine not only the future of technology, but of the human race.

But first, let’s examine a question that is no doubt on many people’s minds:

“Why would anyone want to do this? Why design devices that understand our feelings?”

As we’ll see in the next chapter, it’s a very natural, perhaps even inevitable step on a journey that began over three million years ago.



Gona, Afar, Ethiopia, 3.39 million years ago

In a verdant gorge, a tiny hirsute figure squats over a small pile of stones. Cupping one of these, a modest piece of chert, in her curled hand, she repeatedly hits the side of it with a second rock, a rounded piece of granite. Every few strikes, a flake flies from the chert, leaving behind it a concave depression. As the young woman works the stone, the previously amorphous mineral slowly takes shape, acquiring a sharp edge as the result of the laborious process.

The work is half ritual, half legacy, a skill handed down from parent to child for untold generations. The end product, a small cutting tool, is capable of being firmly grasped and used to scrape meat from bones, ensuring that critical, life-sustaining morsels of food do not go to waste.

1961: Paleoanthropologist Louis Leakey and his family look for early hominid remains at Olduvai Gorge, Tanzania, with their three dogs in attendance.

Here in the Great Rift Valley of East Africa, our Paleolithic ancestor is engaged in one of humanity’s very earliest technologies. While her exact species remains unknown to us, she is certainly a bipedal hominid that preceded Homo habilis, the species long renowned in our textbooks as “handy man, the tool maker.” Perhaps she is Kenyanthropus platyops or the slightly larger Australopithecus afarensis. She is small by our standards: about three and a half feet tall and relatively slender. Her brain case is also meager compared with our own, averaging around 400 cubic centimeters, less than a third of our 1,350 cubic centimeters. But then that’s hardly a fair comparison. When judged against earlier branches of our family tree, this hominid, this early human, is a mental giant. She puts that prowess to good use, fashioning tools that set her species apart from all that have come before.

While these stone tools might seem simple from today’s perspective, at the time they were a tremendous leap forward, improving our ancestors’ ability to obtain nutrition and to protect themselves from competitors and predators. These tools allowed them to slay beasts far more powerful than themselves and to scrape meat from bones. In turn, this altered their diet, providing much more regular access to the proteins and fats that would in time support further brain development.

Making these tools required a knowledge and skill that combined our ancestors’ considerably greater brain power with the manual dexterity granted by their opposable thumbs.

But perhaps most important of all was developing the ability to communicate the knowledge of stone tool making, knapping, as it’s now known, which allowed this technology to be passed down from generation to generation. This is all the more amazing because these hominids didn’t rely on verbal language so much as on emotion, expressiveness, and other forms of nonverbal communication.

Many cognitive and evolutionary factors needed to come together to make the development and transmission of this knowledge possible. The techniques of knapping were not simple or easy to learn, yet they were essential to our survival and eventual growth as a species. As a result, those traits that promoted its continuation and development would have been selected for, whether genetic or behavioral.

This represents something quite incredible in our history, because this is the moment when we truly became a technological species.

This is when humanity and technology first set forth on their long journey together. As we will see, emotion was there from the very beginning, making all of it possible. The coevolution that followed allowed each of us to grow in ways we never could have without the aid of the other.

It’s easy to dismiss tools and machines as “dumb” matter, but of course this is from the perspective of human intelligence. After all, we did have a billion-year head start, beginning from simple single-cell life. But over time, technology has become increasingly intelligent and capable until today, when it can actually best us on a number of fronts. Additionally, it’s done this in a relative eyeblink of time, because as we’ll discuss later, technology progresses exponentially relative to our own linear evolution.

Which brings us back to an important question: Was knapping really technology? Absolutely. There should be no doubt that the ability to forge these stone tools was the cutting-edge technology of its day. (A bad pun, but certainly an apt one.) Knapping was incredibly useful, so useful it was carried on for over three million years. After all, these hominids’ lives had literally come to depend on it. During this time, change and improvement of the techniques used to form the tools were ponderously slow, at least in part because experimentation would have been deemed very costly, if not outright wasteful. Local supplies of chert, a fine-grained sedimentary rock, were limited. Analysis of human settlements and the local fossil record shows that the supply of chert was exhausted several times in different regions of Africa and in several cases presumably had to be carried in from areas where it was more plentiful.

Based on fossil records, it took more than a million years, perhaps seventy thousand generations, to go from simple single edges to beautifully flaked tools with as many as a hundred facets. But while advancement of this technology was slow, one truly crucial factor was the ability to share and transmit the process. Knapping didn’t die out with the passing of a singular exemplary mind or Paleolithic genius of its era. Because this technology was so successful, because it gave its users a competitive edge, this knowledge was meticulously passed down through the generations, allowing it to slowly morph into ever more complex forms and applications.

The image of our hominid ancestors shaping stone tools has been with us for decades. Beginning in the 1930s, Louis and Mary Leakey excavated thousands of stone tools and flakes at Olduvai Gorge in Tanzania, leading to these being dubbed Oldowan tools, a term now generally used to reference the oldest style of flaked stone. These tools were later estimated to be around 1.7 million years old and were likely made by Paranthropus boisei or perhaps Homo habilis.

However, more recent findings have pushed the date of our oldest tool-using ancestors back considerably further. In the early 1990s, another Paleolithic settlement north of Olduvai along East Africa’s Great Rift Valley turned out to have even older stone tools and fragments. In 1992 and 1993, Rutgers University paleoanthropologists digging in the Afar region of Ethiopia excavated 2,600 sharp-edged flakes and flake fragments. Using radiometric dating and magnetostratigraphy, researchers dated the fragments to having been made more than 2.6 million years ago, making them remnants of the oldest known tools ever produced.

Of course, direct evidence isn’t always available when you’re on the trail of something millions of years old. This was the case when, in 2010, paleoanthropologists found animal bones in the same region bearing marks consistent with stone-inflicted scrapes and cuts. The two fossilized bones, a femur and a rib from two different species of ungulates, indicated a methodical use of tools to efficiently remove their meat. Scans dated the bones at approximately 3.39 million years old, pushing back evidence of the oldest tool user by another 800,000 years. If this is accurate, then the location and age suggest the tools would have been used, and therefore made, by Australopithecus afarensis or possibly the flatter-faced Kenyanthropus platyops. However, because the evidence was indirect, many experts disputed its validity, generating considerable controversy over the claim that such sophisticated tools had been produced so much earlier than previously thought.

Then, in 2015, researchers reported that stone flakes, cores, and anvils had been found in Kenya, some one thousand kilometers from Olduvai, which were conclusively dated to 3.3 million years BCE. (BCE is a standard scientific abbreviation for Before the Common Era.) In coming years, other discoveries may well push the origins of human tool making even further back, but for now we can say fairly certainly that knapping has been one of our longest-lived technologies.

So here we have evidence that one of our earliest technologies was accurately transmitted generation after generation for more than three million years. This would be impressive enough in its own right, but there’s another factor to consider: How did our ancestors do this with such consistency when language didn’t yet exist?

No one knows exactly when language began. Even the era when we started to use true syntactic language is difficult to pinpoint, not least because spoken words don’t leave physical traces the way fossils and stone tools do. From Darwin’s own beliefs that the ability to use language evolved, to Chomsky’s anti-evolutionary Strong Minimalist Thesis, to Pinker’s neo-Darwinist stance, there is considerable disagreement as to the origins of language. However, for the purposes of this book, we’ll assume that at least some of our capacity for language was driven and shaped by natural selection.

Despite our desire to anthropomorphize our world, other primates and animals do not have true combinatorial language. While many use hoots, cries, and calls, these are only declarative or emotive in nature and at best indicate a current status or situation. Most of these sounds cannot be combined or rearranged to produce different meanings, and even when they can, as is the case with some songbirds and cetaceans, the meaning of the constituent units is not retained. Additionally, animal calls have no means of indicating negation, irony, or a past or future condition. In short, animal language isn’t truly equivalent to our own.

Our nearest cousins, genetically speaking, are generally considered to be the common chimpanzee (Pan troglodytes) and the bonobo (Pan paniscus). For a long time, evolutionary biologists have said that our last common ancestor (or LCA), the ancestor species we most recently shared with these chimps, existed about six million years ago. This is estimated based on the rate at which specific segments of DNA mutate. In human beings, this overall mutation rate is currently estimated at about thirty mutations per offspring. Recently, however, the rate of this molecular clock for chimpanzees has been reassessed as faster than was once thought. If this is accurate, then it’s been reestimated that chimps and humans last shared a common ancestor, perhaps the now extinct hominine species Sahelanthropus, approximately thirteen million years ago.

Of course, the difference of a single gene does not a new species make. It’s estimated that a sufficient number of mutations needed to give rise to a distinctly new primate species, such as Ardipithecus, wouldn’t have accumulated until ten to seven million years ago. Nevertheless, it’s a significant amount of time.

Can we pinpoint when in this vast span of time the origins of human language appeared? It’s generally accepted that the Australopithecines’ capacity for vocal communication wasn’t all that different from that of chimpanzees and other primates. In fact, many evolutionary biologists would say that our vocal tract wasn’t structurally suited to the sounds of modern speech until our hyoid bone evolved with its specific shape and in its specific location. This, along with our precisely shaped larynx, is believed to have allowed us to begin forming complex phoneme-based sounds (unlike our chimpanzee relatives) sometime between 200,000 and 250,000 years ago. In recent years there has been some suggestion that Neanderthals may have also had the capacity for speech. Either way, it was long after Australopithecus afarensis, Paranthropus boisei, and Homo habilis had all disappeared from Earth.



Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence

by Richard Yonck


Universities in the Age of AI – Andrew Wachtel.

Over the next 50 years or so, as AI and machine learning become more powerful, human labor will be cannibalized by technologies that outperform people in nearly every job function. How should higher education prepare students for this eventuality?

BISHKEK – I was recently offered the presidency of a university in Kazakhstan that focuses primarily on business, economics, and law, and that teaches these subjects in a narrow, albeit intellectually rigorous, way. I am considering the job, but I have a few conditions.

What I have proposed is to transform the university into an institution where students continue to concentrate in these three disciplines, but must also complete a rigorous “core curriculum” in the humanities, social sciences, and natural sciences – including computer science and statistics. Students would also need to choose a minor in one of the humanities or social sciences.

There are many reasons for insisting on this transformation, but the most compelling one, from my perspective, is the need to prepare future graduates for a world in which artificial intelligence and AI-assisted technology play an increasingly dominant role. To succeed in the workplace of tomorrow, students will need new skills.

Assuming AI will transform the future of work in our students’ lifetime, educators must consider what skills graduates will need when humans can no longer compete with robots.

It is not hard to predict that rote tasks will disappear first. This transition is already occurring in some rich countries, but will take longer in places like Kazakhstan. Once this trend picks up pace, however, populations will adjust accordingly. For centuries, communities grew as economic opportunities expanded; for example, farmers had bigger families as demand for products increased, requiring more labor to deliver goods to consumers.

But the world’s current population is unsustainable. As AI moves deeper into the workplace, jobs will disappear, employment will decline, and populations will shrink accordingly. That is good in principle – the planet is already bursting at the seams – but it will be difficult to manage in the short term, as the pace of population decline will not compensate for job losses amid the robot revolution.

For this reason, the next generation of human labor – today’s university students – requires specialized training to thrive. At the same time, and perhaps more than ever before, they need the kind of education that allows them to think broadly and to make unusual and unexpected connections across many fields.

Clearly, tomorrow’s leaders will need an intimate familiarity with computers – from basic programming to neural networks – to understand how machines controlling productivity and analytic processes function. But graduates will also need experience in psychology, if only to grasp how a computer’s “brain” differs from their own. And workers of the future will require training in ethics, to help them navigate a world in which the value of human beings can no longer be taken for granted.

Educators preparing students for this future must start now. Business majors should study economic and political history to avoid becoming blind determinists. Economists must learn from engineering students, as it will be engineers building the future workforce. And law students should focus on the intersection of big data and human rights, so that they gain the insight that will be needed to defend people from forces that may seek to turn individuals into disposable parts.

Even students studying creative and leisure disciplines must learn differently. For one thing, in an AI-dominated world, people will need help managing their extra time. We won’t stop playing tennis just because robots start winning Wimbledon; but new organizational and communication skills will be required to help navigate changes in how humans create and play. Managing these industries will take new skills tailored to a fully AI world.

The future of work may look nothing like the scenarios I envision, or it may be far more disruptive; no one really knows. But higher education has a responsibility to prepare students for every possible scenario – even those that today appear to be barely plausible. The best strategy for educators in any field, and at any time, is to teach skills that make humans human, rather than training students to outcompete new technologies.

No matter where I work in education, preparing young people for their futures will always be my job. And today, that future looks to be dominated by machines. To succeed, educators – and the universities we inhabit – must evolve.


Andrew Wachtel is President of the American University of Central Asia.

Project Syndicate

House of Lords Select Committee on Artificial Intelligence – Submission by Prof Toby Walsh.

Written Submission to
House of Lords Select Committee on Artificial Intelligence
Prof. Toby Walsh FAA, FAAAI, FEurAI.
1. Pace of technological change.
Recent advances in AI are being driven by four rapid changes: the doubling of processing power every two years (aka Moore’s Law), the doubling of data storage also every two years (aka Kryder’s Law), significant improvements in AI algorithms especially in the area of Machine Learning, and a doubling of funding into the field also roughly every two years. This has enabled significant progress to be made on a number of aspects of AI, especially in areas like image processing, speech recognition and machine translation. Nevertheless many barriers remain to building machines that match the breadth of human cognitive capabilities. A recent survey I conducted of hundreds of members of the public as well as experts in the field reveals that experts are significantly more cautious about the challenges remaining.
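As a rough illustration (mine, not part of the submission), the compounding effect of these two-year doubling times can be sketched in a few lines: a quantity that doubles every d years grows by a factor of 2^(t/d) after t years, so a decade of Moore’s- or Kryder’s-Law growth means roughly a 32-fold increase.

```python
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Multiplicative growth after `years` for a quantity that doubles
    every `doubling_period` years (e.g. processing power under Moore's Law)."""
    return 2 ** (years / doubling_period)

# Ten years of two-year doublings: roughly a 32-fold increase.
print(growth_factor(10))  # 32.0
```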
2. Impact on society.
Education is likely the best tool to prepare the public for the changes that AI will bring to almost every aspect of our lives. An informed society is one that will best be able to make good choices so we all share the benefits. Life-long education will be the key to keeping ahead of the machines as many jobs start to be displaced by automation. Regarding the skills of the future, STEM is not the answer. The population does need to be computationally literate so the new technologies are not magic. But the most valued skills will be those that make us most human: skills like emotional and social intelligence, adaptability, and creativity.
3. Public perception.
The public’s perception is driven more by Hollywood than reality. This has focused attention on very distant threats (like the fear that the machines are about to take over), distracting from very real and immediate problems (like the fact that we’re already giving responsibility to stupid algorithms, with potentially drastic consequences for society).
4. Industry.
The large technology companies look set to benefit most from the AI revolution. These tend to be winner-take-all markets, with immense network effects. We only need and want one search engine, one social network, one messaging app, one car-sharing service, etc. These companies can use their immense wealth and access to data to buy out or squash any startup looking to innovate. Like any industry that has become rather too powerful, big tech will need to be regulated more strongly by government so that it remains competitive and acts in the public good. The technology industry can no longer be left to regulate itself. It creates markets which are immensely distorted. It is not possible to compete against companies like Uber because they don’t care if they lose money. Uber also often doesn’t care if it breaks the law. As to fears that regulation will stifle innovation, we only need look at the telecommunications industry in the US to see that regulation can result in much greater innovation as it permits competition. Competition is rapidly disappearing from the technology industry as power becomes concentrated in the hands of a few natural monopolies who pay little tax and act in their own, supra-national interests. For example, wouldn’t it likely be a better, more open and competitive marketplace if we all owned our own social media and not Facebook?
5. Ethics.
There will be immense ethical consequences to handing over many of the decisions in our lives to machines, especially when these machines start to have the autonomy to act in our world (on the battlefield, on the roads, etc.). This promises to be a golden age for philosophy, as we will need to make very precise the ethical choices we make as a society, precisely enough that we can write computer code to execute these decisions. We do not know today how, for example, to build autonomous weapons that can behave ethically and follow international humanitarian law. The UK therefore should be supporting the 19 nations that have called for a pre-emptive ban on lethal autonomous weapons at the CCW in the UN. More generally, we will need to follow the lead being taken at the EU on updating legislation to ensure we do not sacrifice rights like the right to avoid discrimination on the grounds of race, age, or sex to machines that cannot explain their decision making. Finally, just as we have strict controls in place to ensure money cannot be used to influence elections, we need strict controls in place to limit the already visible and corrosive effect of algorithms on political debate. Elections should be won by the best ideas and not the best algorithms.
6. Conclusions:
The UK is one of the birthplaces of AI. Alan Turing helped invent the computer and dreamt of how, by now, we would be talking of machines that think. The UK therefore has the opportunity and responsibility to take a lead in ensuring that AI improves all our lives. There are a number of actions needed today. The UK Government needs to reverse its position in the ongoing discussions around fully autonomous weapons, and support the introduction of regulation to control the use and proliferation of such weapons. Like any technology, AI and robotics are morally neutral: they can be used for good or for bad. However, the market and existing rules cannot alone decide how AI and robotics are used. Government has a vital responsibility to ensure the public good. This will require greater regulation of the natural monopolies developing in the technology sector to ensure competition, to ensure privacy and to ensure that all of society benefits from the technological changes underway.


Toby Walsh is Scientia Professor of AI at the University of New South Wales. He is a graduate of the University of Cambridge, and received his Masters and PhD from the Dept. of AI at the University of Edinburgh. He has been elected a Fellow of the Australian Academy of Science, the Association for the Advancement of Artificial Intelligence and the European Association for Artificial Intelligence. He is currently Guest Professor at TU Berlin. His latest book, “Android Dreams: The Past, Present, and Future of Artificial Intelligence”, is published in the UK on 7th September 2017.

5 September 2017

Will robots bring about the end of work? – Toby Walsh.

Hal Varian, chief economist at Google, has a simple way to predict the future. The future is simply what rich people have today. The rich have chauffeurs. In the future, we will have driverless cars that chauffeur us all around. The rich have private bankers. In the future, we will all have robo-bankers.

One thing we imagine the rich have today is lives of leisure. So will our future be one in which we too have lives of leisure, with the machines taking the sweat? Will we be able to spend our time on more important things than simply feeding and housing ourselves?

Let’s turn to another chief economist. Andy Haldane is chief economist at the Bank of England. In November 2015, he predicted that 15 million jobs in the UK, roughly half of all jobs, were under threat from automation. You’d hope he knew what he was talking about.

And he’s not the only one making dire predictions. Politicians. Bankers. Industrialists. They’re all saying a similar thing.

“We need urgently to face the challenge of automation, robotics that could make so much of contemporary work redundant,” said Jeremy Corbyn at the Labour Party Conference in September 2017.

“World Bank data has predicted that the proportion of jobs threatened by automation in India is 69 percent, 77 percent in China and as high as 85 percent in Ethiopia”, according to World Bank president Jim Yong Kim in 2016.

It really does sound like we might be facing the end of work as we know it.

Many of these fears can be traced back to a 2013 study from the University of Oxford. This made a much-quoted prediction that 47% of jobs in the US were under threat of automation in the next two decades. Other more recent and detailed studies have made similar dramatic predictions.

Now, there’s a lot to criticize in the Oxford study. From a technical perspective, some of the report’s predictions are clearly wrong. The report gives a 94% probability that the job of bicycle repair person will be automated in the next two decades. And, as someone trying to build that future, I can reassure any bicycle repair person that there is zero chance that we will automate even small parts of your job anytime soon. The truth of the matter is no one has any real idea of the number of jobs at risk.

Even if we have as many as 47% of jobs automated, this won’t translate into 47% unemployment. One reason is that we might just work a shorter week. That was the case in the Industrial Revolution. Before the Industrial Revolution, many worked 60 hours per week. After the Industrial Revolution, work reduced to around 40 hours per week. The same could happen with the unfolding AI Revolution.

Another reason that 47% automation won’t translate into 47% unemployment is that all technologies create new jobs as well as destroy them. That’s been the case in the past, and we have no reason to suppose that it won’t be the case in the future. There is, however, no fundamental law of economics that requires the same number of jobs to be created as destroyed. In the past, more jobs were created than destroyed but it doesn’t have to be so in the future.

In the Industrial Revolution, machines took over many of the physical tasks we used to do. But we humans were still left with all the cognitive tasks. This time, as machines start to take on many of the cognitive tasks too, there’s the worrying question: what is left for us humans?

Some of my colleagues suggest there will be plenty of new jobs like robot repair person. I am entirely unconvinced by such claims. The thousands of people who used to paint and weld in most of our car factories got replaced by only a couple of robot repair people.

No, the new jobs will have to be jobs where either humans excel or where we choose not to have machines. But here’s the contradiction. In fifty to a hundred years’ time, machines will be super-human. So it’s hard to imagine any job where humans will remain better than the machines. This means the only jobs left will be those where we prefer humans to do them.

The AI Revolution then will be about rediscovering the things that make us human. Technically, machines will have become amazing artists. They will be able to write music to rival Bach, and paintings to match Picasso. But we’ll still prefer works produced by human artists.

These works will speak to the human experience. We will appreciate a human artist who speaks about love because we have this in common. No machine will truly experience love like we do.

As well as the artistic, there will be a re-appreciation of the artisan. Indeed, we see the beginnings of this already in hipster culture. We will appreciate more and more those things made by the human hand. Mass-produced goods made by machine will become cheap. But items made by hand will be rare and increasingly valuable.

Finally as social animals, we will also increasingly appreciate and value social interactions with other humans. So the most important human traits will be our social and emotional intelligence, as well as our artistic and artisan skills. The irony is that our technological future will not be about technology but all about our humanity.


Toby Walsh is Professor of Artificial Intelligence at the University of New South Wales, in Sydney, Australia.

His new book, “Android Dreams: the past, present and future of Artificial Intelligence” was published in the UK by Hurst Publishers in September 2017.

The Guardian

Japanese company replaces office workers with artificial intelligence – Justin McCurry. 

A future in which human workers are replaced by machines is about to become a reality at an insurance firm in Japan, where more than 30 employees are being laid off and replaced with an artificial intelligence system that can calculate payouts to policyholders.

Fukoku Mutual Life Insurance believes it will increase productivity by 30% and see a return on its investment in less than two years. 

The system is based on IBM’s Watson Explorer, which, according to the tech firm, possesses “cognitive technology that can think like a human”, enabling it to “analyse and interpret all of your data, including unstructured text, images, audio and video”.

The technology will be able to read tens of thousands of medical certificates and factor in the length of hospital stays, medical histories and any surgical procedures before calculating payouts. 

The Guardian