EMOTION AI, Artificial Emotional Intelligence and Affective Computing – Richard Yonck.

The Coming Era of Emotional Machines

Emotion AI is growing rapidly and will bring many changes to our society.

You have a report deadline in 20 minutes and your software keeps incorrectly reformatting your document. Or you’re driving along when another car cuts you off at the intersection. Or you’re upset at your boss and decide to finally tell him how you really feel about him in an email.

Wouldn’t it be great if technology could detect your feelings and step in to fix the problem, prevent you from doing something dangerous, or point out the benefits of holding onto your job?

Welcome to the world of affective computing, otherwise known as artificial emotional intelligence, or Emotion AI.

Rapidly being incorporated into everything from market research testing to automotive interfaces to chatbots and social robotics, this is a branch of AI that will continue to grow rapidly over the next few decades. Research group Markets and Markets expects the global affective computing market to grow from $12.20 billion in 2016 to $53.98 billion by 2021, a compound annual growth rate (CAGR) of 34.7%.
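As a quick sanity check, the figures in that forecast can be plugged into the standard CAGR formula. The dollar amounts come from the report quoted above; the calculation itself is mine:

```python
# Verify the implied compound annual growth rate (CAGR) of the
# cited affective computing market forecast.
start = 12.20   # market size in 2016, $B
end = 53.98     # projected market size in 2021, $B
years = 5       # 2016 -> 2021

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~34.6%, consistent with the reported 34.7%
```

The small gap between 34.6% and the reported 34.7% is rounding in the published dollar figures.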

For decades we have become increasingly dependent on our computers and other devices to perform tasks and make our lives easier. Along the way, these have not only improved in performance but have gained some degree of intelligence as well. Yet even as artificial intelligence has become highly capable at certain tasks, such as pattern recognition, there remain many ways our systems continue to come up short. Having a better sense of the user’s state of mind would go a long way toward knowing what the user wants, even before they know it themselves.

Needless to say, while new technology such as this has huge potential for improving our lives, there are also many ways it could be turned to negative uses. As explored in my book, Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence, this field probably brings as many risks as it does opportunities. Emotionally aware systems and robots will find many roles in healthcare, education, autism detection and therapy, politics, law enforcement, the military and more. Yet each will bring challenges as well. Issues of privacy, emotional manipulation and self-determination will definitely come into play.

As these systems become increasingly accurate and ubiquitous throughout our environment, the challenges and the stakes will rise. Anticipating these and acting to mitigate the negative repercussions will be our best course for ensuring a safe and more ethical future.

Psychology Today


Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence

Richard Yonck.


Emotion. It’s as central to who you are as your body and your intellect. While most of us know emotion when we see or experience it, many questions remain about what it is, how it functions, and even why it exists in the first place. What’s known for certain is that without it, you would not be the person you are today.

Now we find ourselves entering an astonishing new era, an era in which we are beginning to imbue our technologies with the ability to read, interpret, replicate, and potentially even experience emotions themselves. This is being made possible by a relatively new branch of artificial intelligence known as affective computing. A powerful and remarkable technology, affective computing is destined to transform our lives and our world over the coming decades.

To some this may all sound like science fiction, while to others it is simply another example of the relentless march of progress. Either way, we are growing closer to our technology than ever before. Ultimately this will lead to our devices becoming our assistants, our friends and companions, and yes, possibly even our lovers. In the course of it all, we may even see the dream (or nightmare) of truly intelligent machines come true.

From the moment culture and toolmaking began, the history and evolution of humanity and technology have been deeply intertwined. Neither humans nor machines would be anywhere close to what we are today without the immediate and ongoing aid of the other. This is an inextricable trend that, with luck, will continue for our world’s remaining lifespan and beyond.

This technological evolution is being driven by social and economic forces that mimic some of the processes of natural selection, though certainly not all of them. In an effort to attain competitive advantage, humans use technologies (including machines, institutions, and culture). In turn, these pass through a series of filters that determine a given technology’s fitness within its overall environment. That environment, which blends society’s physical, social, economic, and political realities, decides the success of each new development, even as it is modified and supported by every further advance.

Though natural and technological evolution share some similarities, one way they differ is in the exponential nature of technological change. While biology evolves at a relatively steady, linear pace that is dictated by factors such as metabolism, replication rates, and the frequency of nucleotide mutation, technological evolution functions within multiple positive feedback loops that actually accelerate its development. Though this acceleration is not completely constant and typically levels off for any single domain or paradigm, over time and across the entire technological landscape, the trend results in a net positive increase in knowledge and capabilities. Because of this, technology and all it makes possible advances at an ever-increasing exponential rate, far outpacing the changes seen in the biological world over the same period.

One of the consequences of all of this progress is that it generates a need to create increasingly sophisticated user interfaces that allow us to control and interact with our many new devices and technologies. This is certainly borne out in my own experience developing interfaces for computer applications over many years. As technology theorist Brenda Laurel observed, “The greater the difference between the two entities, the greater the need for a well-designed interface.” As a result, one ongoing trend is that we continue to develop interfaces that are increasingly “natural” to use, integrating them ever more closely with our lives and our bodies, our hearts, and our minds.

Heart of the Machine is about some of the newest of these natural interfaces. Affective computing integrates computer science, artificial intelligence, robotics, cognitive science, psychology, biometrics, and much more in order to allow us to communicate and interact with computers, robots, and other technologies via our feelings. These systems are being designed to read, interpret, replicate, and potentially even influence human emotions. Already some of these applications have moved out of the lab and into commercial use. All of this marks a new era, one in which we’re seeing the digitization of affect, a term psychologists and cognitive scientists use to refer to the display of emotion.

While this is a very significant step in our increasingly high-tech world, it isn’t an entirely unanticipated one. As you’ll see, this is a development that makes perfect sense in terms of our ongoing, evolving relationship with technology. At the same time, it’s bringing about a shift in that relationship that will have tremendous repercussions for both man and machine. The path it takes us down is far from certain. The world it could lead to may be a better place, or it might be a far worse one. Will these developments yield systems that anticipate and fulfill our every need before we’re even aware of them? Or will they give rise to machines that can be used to stealthily manipulate us as individuals, perhaps even en masse? Either way, it’s in our best interests to explore the possible futures this technology could bring about while we still have time to influence how these will ultimately manifest.

In the course of this book, multiple perspectives will be taken at different points. This is entirely intentional. When exploring the future, recognizing that it can’t truly be known or predicted is critical. One of the best ways of addressing this is to explore numerous possible future scenarios and, within reason, prepare for each. This means not only considering what happens if the technology develops as planned or not, but also whether people will embrace it or resist it. It means anticipating the short-, mid-, and long-term repercussions that may arise from it, including what would otherwise be unforeseen consequences. This futurist’s view can help us to prepare for a range of eventualities, taking a proactive approach in directing how our future develops.

Heart of the Machine is divided into three sections.

The first, “The Road to Affective Computing,” introduces our emotional world, from humanity’s earliest days up to the initial development of emotionally aware affective computers and social robots. The second section, “The Rise of the Emotional Machines,” looks at the many ways these technologies are being applied, how we’ll benefit from them, and what we should be worried about as they meet their future potential.

Finally, “The Future of Artificial Emotional Intelligence” explores the big questions about how all of this is likely to develop and the effects it will have on us as individuals and as a society. It wraps up with a number of thoughts about consciousness and superintelligence and considers how these developments may alter the balance of the human-machine relationship.

Until now, our three-million-year journey with technology has been a relatively one-sided and perpetually mute one. But how might this change once we begin interacting with machines on what for us remains such a basic level of experience? At the same time, are we priming technology for some sort of giant leap forward with these advances? If artificial intelligence is ever to attain or exceed human levels, and perhaps even achieve consciousness in the process, will feelings and all they make possible be the spark that lights the fuse? Only time will tell, but in the meantime we’d be wise to explore the possibility.

Though this is a book about emotions and feelings, it is very much founded on science, research, and an appreciation of the evolving nature of intelligence in the universe. As we’ll explore, emotions may be not only a key aspect of our own humanity, but a crucial component for many, if not all, higher intelligences, no matter what form these may eventually take.


Futures, or “strategic foresight” as it’s sometimes known, is a field unlike any other. On any given day you’re likely to be asked, “What is a futurist?” or “What does a futurist do?” Many people have an image of a fortuneteller gazing into a crystal ball, but nothing could be further from the truth. Because ultimately, all of us are futurists.

Foresight is one of the dominant characteristics of the human species. With self-awareness and introspection came the ability to anticipate patterns and cycles in our environment, enhancing our ability to survive.

As a result, we’ve evolved a prefrontal cortex that enables us to think about the days ahead far better than any other species.

It might have begun with something like the recognition of shifting patterns in the grasslands of the Serengeti that let us know a predator lay in wait. This continued as we began to distinguish the phases of the moon, the ebb and flow of the tides, the cycles of the seasons. Then it wasn’t long before we were anticipating eclipses, forecasting hurricanes, and predicting stock market crashes. We are Homo sapiens, the futurist species.

Of course, this was only the beginning. As incredible as this ability of ours is, it could only do so much in its original unstructured state. So, when the world began asking itself some very difficult and important existential questions about surviving the nuclear era, it was time to begin formalizing how we thought about the future.

Project RAND

For many, Project RAND, which began immediately after World War II, marks the beginning of the formal foresight process. Building on our existing capabilities, Project RAND sought to understand the needs and benefits of connecting military planning with R&D decisions. This allowed the military to better understand not only what its future capabilities would be, but also those of the enemy. This was critical because, being the dawn of the atomic age, there were enormous uncertainties about our future, including whether or not we would actually survive to have one.

Project RAND eventually transformed into the RAND Corporation, one of the first global policy think tanks. As the space race ramped up, interest in foresight grew, particularly in government and the military. In time, corporations began showing interest too, as was famously demonstrated by Royal Dutch Shell’s application of scenario planning in response to the 1973 oil crisis. Tools and methods have continued to be developed, and today the processes of foresight are used throughout our world, from corporations like Intel and Microsoft, which have in-house futurists, to smaller businesses and organizations that hire consulting futurists. Branding, product design, research and development, government planning, education administration: if it has a future, there are people who explore it. Using techniques for framing projects, scanning for and gathering information, building forecasts and scenarios, and creating, planning, and implementing visions, these practitioners help identify opportunities and challenges so that we can work toward our preferred future.

This is an important aspect of foresight work: recognizing the future is not set in stone and that we all have some ability to influence how it develops.

Notice I say influence, not control. The many elements that make up the future are of a scale and complexity far too great for any of us to control. But if we recognize something about our future that we want to manifest, and we recognize it early enough, we can influence other factors that will increase its likelihood of being realized.

A great personal example would be saving for retirement. A young person who recognizes they will one day retire can start building their savings and investments early on. In doing this, they’re more likely to be financially secure in their golden years, much more so than if they’d waited until they were in their fifties or sixties before they started saving.
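The retirement example above rests on compound growth, which can be made concrete with a small illustration. The contribution amount and rate of return here are hypothetical assumptions chosen only to show the shape of the effect, not financial advice:

```python
# Illustrative only: compare starting retirement savings early vs. late,
# assuming a hypothetical $5,000/year contribution and a 6% annual return.
def future_value(annual_contribution, rate, years):
    """Future value of an ordinary annuity: yearly contributions, compounded yearly."""
    return annual_contribution * (((1 + rate) ** years - 1) / rate)

early = future_value(5000, 0.06, 40)  # saving from age 25 to 65
late = future_value(5000, 0.06, 10)   # saving from age 55 to 65
print(f"Start at 25: ${early:,.0f}")
print(f"Start at 55: ${late:,.0f}")
```

Under these assumptions the early saver contributes only four times as much money but ends up with more than ten times the final balance, which is the foresight point: recognizing an outcome early multiplies your influence over it.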

Many of foresight’s methods and processes have been used in the course of writing this book. Horizon scanning, surveying of experts, and trend projections are just a few of these. Scenarios are probably the most evident of these tools because they’re included throughout the book. The processes futurists use generate a lot of data, which often doesn’t convey what’s important to us as people. But telling stories does, because we’ve been storytellers from the very beginning. Stories help us relate to new knowledge and to each other. This is what a scenario does: it takes all of that data and transforms it into a more personal form that is easier for us to digest.

Forecasts are not generally included because in many respects they’re not that valuable. Some people think studying the future is about making predictions, which really isn’t the case. Knowing whether an event will happen in 2023 or 2026 is of limited value compared with anticipating the event at all and then deciding what we’re going to do about it. Speculating about who’s going to win a horse race or the World Cup is for gamblers, not for futurists.

In many respects, a futurist explores the future the way a historian explores history, inferring a whole picture or pattern from fragments of clues. While it may be tempting to ask how there can be clues to something that hasn’t even happened yet, recall that every future is founded upon the past and present, and that these are laden with signals and indicators of what’s to come.

So read on and learn about this future age of artificial emotional intelligence, because all too soon, it will be part of our present as well.





Menlo Park, California. March 3, 2032, 7:06 a.m.

It’s a damp spring morning as Abigail is gently roused from slumber by Mandy, her personal digital assistant. Sensors in the bed inform Mandy exactly where Abigail is in her sleep cycle, allowing it to coordinate with her work schedule and wake her at the optimum time. Given the morning’s gray skies and Abigail’s less-than-cheery mood when she went to bed the night before, Mandy opts to waken her with a recorded dawn chorus of sparrows and goldfinches.

Abigail stretches and sits up on the edge of the bed, feeling for her slippers with her feet. “Mmm, morning already?” she mutters.

“You slept seven hours and nineteen minutes with minimal interruption,” Mandy informs her with a pleasant, algorithmically defined lilt via the room’s concealed speaker system. “How are you feeling this morning?”

“Good,” Abigail replies, blinking. “Great, actually.”

It’s a pleasantry. Mandy didn’t really need to ask or to hear its owner’s response. The digital assistant had already analyzed Abigail’s posture, energy levels, expression, and vocal tone using its many remote sensors, assessing that her mood is much improved from the prior evening.

It’s a routine morning for the young woman and her technology. The two have been together for a long time. Many years before, when she was still a teen, Abigail named her assistant Mandy. Of course, back then the software was also several versions less sophisticated than it is today, so in a sense they’ve grown up together. During that time, Mandy has become increasingly familiar with Abigail’s work habits, behavioral patterns, moods, preferences, and various other idiosyncrasies. In many ways, it knows Abigail better than any person ever could.

Mandy proceeds to tell Abigail about the weather and traffic conditions, her morning work schedule, and a few of the more juicy items rising to the top of her social media stream as she gets ready for her day.

“Mandy,” Abigail asks as she brushes her hair, “do you have everything organized for today’s board meeting?”

The personal assistant has already anticipated the question and consulted Abigail’s calendar and biometric historical data before making all the needed preparations for her meeting with her board of directors. As the CEO of AAT, Applied Affective Technologies, Abigail and her company are at the forefront of human-machine relations. “Everyone’s received their copies of the meeting agenda. Your notes and 3D presentation are finalized. Jeremy has the breakfast catering covered. And I picked out your clothes for the day: the Nina Ricci set.”

“Didn’t I wear that recently?”

Mandy responds without hesitation. “My records show you last wore it over two months ago for a similarly important meeting. It made you feel confident and empowered, and none of today’s attendees has seen it on you before.”

“Perfect!” Abigail beams. “Mandy, what would I do without you?”

What indeed?

Though this scenario may sound like something from a science fiction novel, in fact it’s a relatively reasonable extrapolation of where technology could be fifteen years from now. Already, voice recognition and synthesis, the real-time measurement of personal biometrics, and artificially intelligent scheduling systems are becoming an increasing part of our daily lives. Given continuing improvements in computing power, as well as advances in other relevant technologies, in a mere decade these tools will be far more advanced than they are today.

However, the truly transformational changes described here will come from a branch of computer science that is still very much in its nascent stages, still early enough that many people have yet to even hear about it.

It’s called affective computing, and it deals with the development of systems and devices that interact with our feelings.

More specifically, affective computing involves the recognition, interpretation, replication, and potentially the manipulation of human emotions by computers and social robots.

This rapidly developing field has the potential to radically change the way we interact with our computers and other devices. Increasingly, systems and controls will be able to alter their operations and behavior according to our emotional responses and other nonverbal cues. By doing this, our technology will become increasingly intuitive to use, addressing not only our explicit commands but our unspoken needs as well. In the pages that follow, we will explore just what this new era could mean for our technologies and for ourselves.

We are all emotional machines. Centuries of research into anatomy, biology, neurology, and numerous other fields have consistently revealed that nearly all of what we are follows a predictable set of physical processes. These mechanistically driven rules make it possible for us to move, to eat, to grow, to procreate. Within an extremely small range of genetic variation, we are all essentially copies of those who came before us, destined to produce generation after generation of nearly identical cookie-cutter reproductions of ourselves well into the future.

Of course, we know this is far from the full reality of the human experience. Though these deterministic forces define us up to a point, we exist in far greater depth and dimension than can be explained by any mere set of stimuli and responses. This is foremost because we are emotional beings. That the dreams, hopes, fears, and desires of each and every one of us are so unique while remaining so universal is largely due to our emotional experience of the world. If this were not so, identical twins who grow up together would have all but identical personalities. Instead, they begin with certain shared genetically influenced traits and behaviors and over time diverge from there. While all humanity shares nearly identical biology, chemical processes, and modes of sensory input, it is our feelings, our emotional interpretations of and responses to the world we experience, that make all of us on this planet, all 107 billion people who have ever lived, truly unique from one another.

There are easily hundreds, if not thousands, of theories about emotions, what they are, why they exist, and how they came about, and there is no way for a book such as this to begin to introduce or address them all. Nor does this book claim to know which, if any, of these is the One True Theory, in part because, in all likelihood, there is none. It’s been said repeatedly by neuroscientists, psychologists, and philosophers that there are nearly as many theories of emotion as there are theorists.

Emotion is an incredibly complex aspect of the human condition and mind, second only perhaps to the mystery of consciousness itself. What is important is to recognize its depth and complexity without attempting to oversimplify either its mechanisms or purpose.

Emotions are one of the most fundamental components of the human experience. Yet, as central as they are to our lives, we continue to find it a challenge to define or even to account for them. In many respects, we seem to have our greatest insights about feelings and emotions in their absence or when they go awry. Despite the many theories that exist, all we know with certainty is that they are essential in making us who we are, and that without them we would be but pale imitations of ourselves.

So what might this mean as we enter an era in which our machines, our computers, robots, and other devices, become increasingly capable of interacting with our emotions? How will it change our relationship with our technologies and with each other? How will it alter technology itself? Perhaps most importantly:

If emotion has evolved in humans and certain other animals because it affords us some benefit, might it convey a similar benefit in the future development of artificial intelligence?

For reasons that will be explored in the coming chapters, affective computing is a very natural progression in our ongoing efforts to build technologies that operate increasingly on human terms, rather than the other way around. As a result, this branch of artificial intelligence will come to be incorporated to one degree or another nearly everywhere in our lives. At the same time, just like almost every other form of artificial intelligence that has been developed and commercialized, affective computing will eventually fade into the scenery, an overlooked, underappreciated feature that we will quickly take all too much for granted because it will be ubiquitous.

Consider the possibilities. Rooms that alter lighting and music based on your mood. Toys that engage young minds with natural emotional responses. Computer programs that notice your frustration over a task and alter their manner of assistance. Email that makes you pause before sending that overly inflammatory message. The scenarios are virtually endless.

But it’s a rare technology that doesn’t have unintended consequences or that is used exclusively as its inventors anticipated. Affective computing will be no different. It doesn’t take a huge leap of foresight to anticipate that this technology will also inevitably be applied and abused in ways that clearly aren’t a benefit to the majority of society. As this book will explore, like so many other technologies, affective computing will come to be seen as a double-edged sword, one that is capable of working for us while also having the capacity to do us considerable harm.

Amidst all of this radical progress, there is yet another story to be told. In many respects, affective computing represents a milestone in the long evolution of technology and our relationship to it. It’s a story millions of years in the making and one that may be approaching a critical juncture, one that could well determine not only the future of technology, but of the human race.

But first, let’s examine a question that is no doubt on many people’s minds:

“Why would anyone want to do this? Why design devices that understand our feelings?”

As we’ll see in the next chapter, it’s a very natural, perhaps even inevitable step on a journey that began over three million years ago.



Gona, Afar, Ethiopia, 3.39 million years ago

In a verdant gorge, a tiny hirsute figure squats over a small pile of stones. Cupping one of these, a modest piece of chert, in her curled hand, she repeatedly hits the side of it with a second rock, a rounded piece of granite. Every few strikes, a flake flies from the chert, leaving behind it a concave depression. As the young woman works the stone, the previously amorphous mineral slowly takes shape, acquiring a sharp edge as the result of the laborious process.

The work is half ritual, half legacy, a skill handed down from parent to child for untold generations. The end product, a small cutting tool, is capable of being firmly grasped and used to scrape meat from bones, ensuring that critical, life-sustaining morsels of food do not go to waste.

1961: Paleoanthropologist Louis Leakey and his family look for early hominid remains at Olduvai Gorge, Tanzania, with their three dogs in attendance.

Here in the Great Rift Valley of East Africa, our Paleolithic ancestor is engaged in one of humanity’s very earliest technologies. While her exact species remains unknown to us, she is certainly a bipedal hominid that preceded Homo habilis, the species long renowned in our textbooks as “handy man, the tool maker.” Perhaps she is Kenyanthropus platyops or the slightly larger Australopithecus afarensis. She is small by our standards: about three and a half feet tall and relatively slender. Her brain case is also meager compared with our own, averaging around 400 cubic centimeters, less than a third of our 1,350 cubic centimeters. But then that’s hardly a fair comparison. When judged against earlier branches of our family tree, this hominid, this early human, is a mental giant. She puts that prowess to good use, fashioning tools that set her species apart from all that have come before.

While these stone tools might seem simple from today’s perspective, at the time they were a tremendous leap forward, improving our ancestors’ ability to obtain nutrition and to protect themselves from competitors and predators. These tools allowed them to slay beasts far more powerful than themselves and to scrape meat from bones. In turn, this altered their diet, providing much more regular access to the proteins and fats that would in time support further brain development.

Making these tools required a knowledge and skill that combined our ancestors’ considerably greater brain power with the manual dexterity granted by their opposable thumbs.

But perhaps most important of all was developing the ability to communicate the knowledge of stone tool making, knapping, as it’s now known, which allowed this technology to be passed down from generation to generation. This is all the more amazing because these hominids didn’t rely on verbal language so much as on emotion, expressiveness, and other forms of nonverbal communication.

Many cognitive and evolutionary factors needed to come together to make the development and transmission of this knowledge possible. The techniques of knapping were not simple or easy to learn, yet they were essential to our survival and eventual growth as a species. As a result, those traits that promoted its continuation and development would have been selected for, whether genetic or behavioral.

This represents something quite incredible in our history, because this is the moment when we truly became a technological species.

This is when humanity and technology first set forth on their long journey together. As we will see, emotion was there from the very beginning, making all of it possible. The coevolution that followed allowed each of us to grow in ways we never could have without the aid of the other.

It’s easy to dismiss tools and machines as “dumb” matter, but of course this is from the perspective of human intelligence. After all, we did have a billion-year head start, beginning from simple single-cell life. But over time, technology has become increasingly intelligent and capable until today, when it can actually best us on a number of fronts. Additionally, it’s done this in a relative eyeblink of time, because as we’ll discuss later, technology progresses exponentially relative to our own linear evolution.

Which brings us back to an important question: Was knapping really technology? Absolutely. There should be no doubt that the ability to forge these stone tools was the cutting-edge technology of its day. (A bad pun, but certainly an apt one.) Knapping was incredibly useful, so useful it was carried on for over three million years. After all, these hominids’ lives had literally come to depend on it. During this time, change and improvement in the techniques used to form the tools were ponderously slow, at least in part because experimentation would have been deemed very costly, if not outright wasteful. Local supplies of chert, a fine-grained sedimentary rock, were limited. Analysis of human settlements and the local fossil record shows that the supply of chert was exhausted several times in different regions of Africa and in several cases presumably had to be carried in from areas where it was more plentiful.

Based on fossil records, it took more than a million years, perhaps seventy thousand generations, to go from simple single edges to beautifully flaked tools with as many as a hundred facets. But while advancement of this technology was slow, one truly crucial factor was the ability to share and transmit the process. Knapping didn’t die out with the passing of a singular exemplary mind or Paleolithic genius of its era. Because this technology was so successful, because it gave its users a competitive edge, this knowledge was meticulously passed down through the generations, allowing it to slowly morph into ever more complex forms and applications.

The image of our hominid ancestors shaping stone tools has been with us for decades. Beginning in the 1930s, Louis and Mary Leakey excavated thousands of stone tools and flakes at Olduvai Gorge in Tanzania, leading to these being dubbed Oldowan tools, a term now generally used to reference the oldest style of flaked stone. These tools were later estimated to be around 1.7 million years old and were likely made by Paranthropus boisei or perhaps Homo habilis.

However, more recent findings have pushed the date of our oldest tool-using ancestors back considerably further. In the early 1990s, another Paleolithic settlement north of Olduvai along East Africa’s Great Rift Valley turned out to have even older stone tools and fragments. In 1992 and 1993, Rutgers University paleoanthropologists digging in the Afar region of Ethiopia excavated 2,600 sharp-edged flakes and flake fragments. Using radiometric dating and magnetostratigraphy, researchers dated the fragments to having been made more than 2.6 million years ago, making them remnants of the oldest known tools ever produced.
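The radiometric dating mentioned here rests on a simple piece of arithmetic: a radioactive parent isotope decays at a fixed half-life, so the fraction of it remaining in a sample reveals the sample’s age. As a rough illustration only (the specific isotope and figures below are illustrative assumptions, not values reported by the researchers), here is the decay law worked out in Python for a potassium-40 based method, whose half-life is about 1.25 billion years:

```python
import math

def radiometric_age(parent_fraction_remaining: float, half_life_years: float) -> float:
    """Age of a sample from the fraction of parent isotope left.

    Solves N(t) = N0 * (1/2)**(t / t_half) for t, giving
    t = t_half * log2(N0 / N) = t_half * log2(1 / fraction_remaining).
    """
    return half_life_years * math.log2(1.0 / parent_fraction_remaining)

# Illustrative numbers: a sample retaining ~99.856% of a parent isotope
# with a 1.25-billion-year half-life works out to roughly 2.6 million years,
# the age range discussed in the text.
K40_HALF_LIFE_YEARS = 1.25e9
age = radiometric_age(0.99856, K40_HALF_LIFE_YEARS)
print(f"Estimated age: {age / 1e6:.1f} million years")
```

Note that at ages this young relative to the half-life, only a tiny fraction of the parent isotope has decayed, which is why such methods demand very precise isotope measurements.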

Of course, direct evidence isn’t always available when you’re on the trail of something millions of years old. This was the case when, in 2010, paleoanthropologists found animal bones in the same region bearing marks consistent with stone-inflicted scrapes and cuts. The two fossilized bones, a femur and a rib from two different species of ungulates, indicated a methodical use of tools to efficiently remove their meat. Scans dated the bones at approximately 3.39 million years old, pushing back evidence of the oldest tool user by another 800,000 years. If this is accurate, then the location and age suggest the tools would have been used, and therefore made, by Australopithecus afarensis or possibly the flatter-faced Kenyanthropus platyops. However, because the evidence was indirect, many experts disputed its validity, generating considerable controversy over the claim that such sophisticated tools had been produced so much earlier than previously thought.

Then, in 2015, researchers reported that stone flakes, cores, and anvils had been found in Kenya, some one thousand kilometers from Olduvai, which were conclusively dated to 3.3 million years BCE. (BCE is a standard scientific abbreviation for Before the Common Era.) In coming years, other discoveries may well push the origins of human tool making even further back, but for now we can say fairly certainly that knapping has been one of our longest-lived technologies.

So here we have evidence that one of our earliest technologies was accurately transmitted generation after generation for more than three million years. This would be impressive enough in its own right, but there’s another factor to consider: How did our ancestors do this with such consistency when language didn’t yet exist?

No one knows exactly when language began. Even the era when we started to use true syntactic language is difficult to pinpoint, not least because spoken words don’t leave physical traces the way fossils and stone tools do. From Darwin’s own beliefs that the ability to use language evolved, to Chomsky’s anti-evolutionary Strong Minimalist Thesis, to Pinker’s neo-Darwinist stance, there is considerable disagreement as to the origins of language. However, for the purposes of this book, we’ll assume that at least some of our capacity for language was driven and shaped by natural selection.

Despite our desire to anthropomorphize our world, other primates and animals do not have true combinatorial language. While many use hoots, cries, and calls, these are only declarative or emotive in nature and at best indicate a current status or situation. Most of these sounds cannot be combined or rearranged to produce different meanings, and even when they can, as is the case with some songbirds and cetaceans, the meaning of the constituent units is not retained. Additionally, animal calls have no means of indicating negation, irony, or a past or future condition. In short, animal language isn’t truly equivalent to our own.

Our nearest cousins, genetically speaking, are generally considered to be the common chimpanzee (Pan troglodytes) and the bonobo (Pan paniscus). For a long time, evolutionary biologists have said that our last common ancestor (or LCA), the ancestor species we most recently shared with these chimps, existed about six million years ago. This is estimated based on the rate at which specific segments of DNA mutate. In human beings, this overall mutation rate is currently estimated at about thirty mutations per offspring. Recently, however, the rate of this molecular clock for chimpanzees has been reassessed. If the revised rate is accurate, then chimps and humans last shared a common ancestor, perhaps the now-extinct hominin Sahelanthropus, approximately thirteen million years ago.
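The molecular-clock reasoning above can be sketched with a back-of-the-envelope formula: if two lineages each accumulate mutations at a steady per-site rate, the time since their split is the observed divergence divided by twice that rate (since differences accumulate along both branches). The figures below are illustrative assumptions, not values from the text, but they show how sensitive the LCA date is to the assumed clock rate:

```python
def divergence_time_years(per_site_divergence: float, rate_per_site_per_year: float) -> float:
    """Simple molecular clock: t = D / (2 * mu).

    D is the fraction of compared DNA sites that differ between two species;
    mu is the assumed substitution rate per site per year. The factor of 2
    reflects that mutations accumulate independently along both lineages.
    """
    return per_site_divergence / (2.0 * rate_per_site_per_year)

# Illustrative: ~1.2% human-chimp sequence divergence with an assumed rate
# of 1e-9 substitutions per site per year yields roughly 6 million years.
t_fast_clock = divergence_time_years(0.012, 1.0e-9)

# Halving the assumed clock rate doubles the estimate to roughly 12 million
# years, showing how a recalibrated clock can shift the LCA date so far.
t_slow_clock = divergence_time_years(0.012, 0.5e-9)

print(f"Fast clock: ~{t_fast_clock / 1e6:.0f} Myr; slow clock: ~{t_slow_clock / 1e6:.0f} Myr")
```

The real calculations are far more involved (generation times, lineage-specific rates, ancestral population sizes), but the core sensitivity is the same: the divergence date scales inversely with the assumed mutation rate.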

Of course, the difference of a single gene does not a new species make. It’s estimated that a sufficient number of mutations to give rise to a distinctly new primate species, such as Ardipithecus, wouldn’t have accumulated until ten to seven million years ago. Nevertheless, it’s a significant amount of time.

Can we pinpoint when in this vast span of time the origins of human language appeared? It’s generally accepted that the Australopithecines’ capacity for vocal communication wasn’t all that different from that of chimpanzees and other primates. In fact, many evolutionary biologists would say that our vocal tract wasn’t structurally suited to the sounds of modern speech until our hyoid bone evolved with its specific shape and in its specific location. This, along with our precisely shaped larynx, is believed to have allowed us to begin forming complex phoneme-based sounds (unlike our chimpanzee relatives) sometime between 200,000 and 250,000 years ago. In recent years there has been some suggestion that Neanderthals may have also had the capacity for speech. Either way, it was long after Australopithecus afarensis, Paranthropus boisei, and Homo habilis had all disappeared from Earth.



Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence

by Richard Yonck
