The future of AI lies in replicating our own neural networks – Ben Medlock.

Long before we were conscious, thinking beings, our cells were reading data from the environment and working together to mould us into robust, self-sustaining agents.

We think with our whole body, not just with the brain.

In ways that we’re only just beginning to understand, our body and brain, from the cellular level upwards, have already built a model of the world that we can apply almost instantly to a wide array of challenges. But for an AI algorithm, the process begins from scratch each time.

It’s tempting to think of the mind as a layer that sits on top of more primitive cognitive structures. We experience ourselves as conscious beings, after all, in a way that feels different to the rhythm of our heartbeat or the rumblings of our stomach. If the operations of the brain can be separated out and stratified, then perhaps we can construct something akin to just the top layer, and achieve humanlike artificial intelligence (AI) while bypassing the messy flesh that characterizes organic life.

I understand the appeal of this view because I co-founded SwiftKey, a predictive-language software company that was bought by Microsoft. Our goal is to emulate the remarkable processes by which human beings can understand and manipulate language. We’ve made some decent progress: I was pretty proud of the elegant new communication system we built for the physicist Stephen Hawking between 2012 and 2014. But despite encouraging results, most of the time I’m reminded that we’re nowhere near achieving human-like AI. Why? Because the layered model of cognition is wrong. Most AI researchers are currently missing a central piece of the puzzle: embodiment.

Things took a wrong turn at the beginning of modern AI, back in the 1950s. Computer scientists decided to try to imitate conscious reasoning by building logical systems based on symbols. The method involves associating real-world entities with digital codes to create virtual models of the environment, which can then be projected back onto the world itself. For instance, using symbolic logic, you could instruct a machine to ‘learn’ that a cat is an animal by encoding a specific piece of knowledge using a mathematical formula such as ‘cat > is > animal’. Such formulae can be rolled up into more complex statements that allow the system to manipulate and test propositions such as whether your average cat is as big as a horse, or likely to chase a mouse.
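To make this concrete, here is a minimal illustrative sketch, written in present-day Python rather than anything a 1950s system actually ran, of how such hand-coded symbolic knowledge might be stored and queried; the facts and relation names are invented for the example.

    # Illustrative only: hand-coded facts stand in for real-world entities,
    # and a tiny inference routine chains them together.
    facts = {
        ("cat", "is_a", "animal"),
        ("animal", "is_a", "living_thing"),
        ("cat", "chases", "mouse"),
    }

    def is_a(entity, category, kb):
        """Test an 'is a' proposition by following is_a links transitively."""
        if (entity, "is_a", category) in kb:
            return True
        return any(is_a(parent, category, kb)
                   for (e, rel, parent) in kb
                   if e == entity and rel == "is_a")

    print(is_a("cat", "living_thing", facts))   # True: cat -> animal -> living_thing
    print(("cat", "chases", "mouse") in facts)  # True: a directly encoded proposition

The brittleness described below shows up as soon as such symbols meet the real world: nothing in the table says what should count as a ‘cat’ in an ambiguous scene.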

This method found some early success in simple contrived environments: in ‘SHRDLU’, a virtual world created by the computer scientist Terry Winograd at MIT between 1968 and 1970, users could talk to the computer in order to move around simple block shapes such as cones and balls. But symbolic logic proved hopelessly inadequate when faced with real-world problems, where fine-tuned symbols broke down in the face of ambiguous definitions and myriad shades of interpretation.

In later decades, as computing power grew, researchers switched to using statistics to extract patterns from massive quantities of data. These methods are often referred to as ‘machine learning’. Rather than trying to encode high-level knowledge and logical reasoning, machine learning employs a bottom-up approach in which algorithms discern relationships by repeating tasks, such as classifying the visual objects in images or transcribing recorded speech into text. Such a system might learn to identify images of cats, for example, by looking at millions of cat photos, or to make a connection between cats and mice based on the way they are referred to throughout large bodies of text.
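As a rough sketch of that bottom-up approach, and not of any particular system mentioned here, the toy model below adjusts a set of weights against synthetic labelled examples until its guesses line up with the labels; logistic regression trained by gradient descent is assumed purely as the simplest stand-in for ‘machine learning’.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "images": 20-number feature vectors labelled 1 (cat) or 0 (not cat).
    X = rng.normal(size=(200, 20))
    hidden_rule = rng.normal(size=20)
    y = (X @ hidden_rule > 0).astype(float)

    w = np.zeros(20)                             # the model starts knowing nothing
    for _ in range(500):                         # learning = repeated exposure to examples
        guesses = 1 / (1 + np.exp(-(X @ w)))     # current predictions, between 0 and 1
        w -= 0.1 * X.T @ (guesses - y) / len(y)  # nudge the weights to reduce the error

    accuracy = (((1 / (1 + np.exp(-(X @ w)))) > 0.5) == y).mean()
    print(f"training accuracy: {accuracy:.2%}")

No rule about cats is ever written down; the relationship is discerned, as the paragraph above puts it, by repeating the task over many examples.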

Machine learning has produced many tremendous practical applications in recent years. We’ve built systems that surpass us at speech recognition, image processing and lip reading; that can beat us at chess, Jeopardy! and Go; and that are learning to create visual art, compose pop music and write their own software programs. To a degree, these self-teaching algorithms mimic what we know about the subconscious processes of organic brains.

Machine-learning algorithms start with simple ‘features’ (individual letters or pixels, for instance) and combine them into more complex ‘categories’, taking into account the inherent uncertainty and ambiguity in real-world data. This is somewhat analogous to the visual cortex, which receives electrical signals from the eye and interprets them as identifiable patterns and objects.

But algorithms are a long way from being able to think like us. The biggest distinction lies in our evolved biology, and how that biology processes information. Humans are made up of trillions of eukaryotic cells, which first appeared in the fossil record around 2.5 billion years ago. A human cell is a remarkable piece of networked machinery that has about the same number of components as a modern jumbo jet, all of which arose out of a longstanding, embedded encounter with the natural world. In Basin and Range (1981), the writer John McPhee observed that, if you stand with your arms outstretched to represent the whole history of the Earth, complex organisms began evolving only at the far wrist, while ‘in a single stroke with a medium-grade nail file you could eradicate human history’.

The traditional view of evolution suggests that our cellular complexity evolved from early eukaryotes via random genetic mutation and selection. But in 2005 the biologist James Shapiro at the University of Chicago outlined a radical new narrative. He argued that eukaryotic cells work ‘intelligently’ to adapt a host organism to its environment by manipulating their own DNA in response to environmental stimuli. Recent microbiological findings lend weight to this idea. For example, mammals’ immune systems have the tendency to duplicate sequences of DNA in order to generate effective antibodies to attack disease, and we now know that at least 43% of the human genome is made up of DNA that can be moved from one location to another, through a process of natural ‘genetic engineering’.

Now, it’s a bit of a leap to go from smart, self-organizing cells to the brainy sort of intelligence that concerns us here. But the point is that long before we were conscious, thinking beings, our cells were reading data from the environment and working together to mould us into robust, self-sustaining agents. What we take as intelligence, then, is not simply about using symbols to represent the world as it objectively is. Rather, we only have the world as it is revealed to us, which is rooted in our evolved, embodied needs as an organism. Nature ‘has built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it’, wrote the neuroscientist Antonio Damasio in Descartes’ Error (1994), his seminal book on cognition. In other words, we think with our whole body, not just with the brain.

I suspect that this basic imperative of bodily survival in an uncertain world is the basis of the flexibility and power of human intelligence. But few AI researchers have really embraced the implications of these insights. The motivating drive of most AI algorithms is to infer patterns from vast sets of training data, so it might require millions or even billions of individual cat photos to gain a high degree of accuracy in recognizing cats. By contrast, thanks to our needs as an organism, human beings carry with them extraordinarily rich models of the body in its broader environment. We draw on experiences and expectations to predict likely outcomes from a relatively small number of observed samples. So when a human thinks about a cat, she can probably picture the way it moves, hear the sound of purring, feel the impending scratch from an unsheathed claw. She has a rich store of sensory information at her disposal to understand the idea of a ‘cat’, and other related concepts that might help her interact with such a creature.

This means that when a human approaches a new problem, most of the hard work has already been done.

In ways that we’re only just beginning to understand, our body and brain, from the cellular level upwards, have already built a model of the world that we can apply almost instantly to a wide array of challenges. But for an AI algorithm, the process begins from scratch each time.

There is an active and important line of research, known as ‘inductive transfer’, focused on using prior machine-learned knowledge to inform new solutions. However, as things stand, it’s questionable whether this approach will be able to capture anything like the richness of our own bodily models.
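In the modern deep-learning setting, the most common form of inductive transfer is to reuse a network trained on one large dataset as the starting point for a new task. The sketch below is a hedged illustration using the current PyTorch/torchvision API with a hypothetical two-class target task; the essay itself does not prescribe this recipe.

    import torch.nn as nn
    import torch.optim as optim
    from torchvision import models

    # Start from features already learned on ImageNet rather than from scratch.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    for param in model.parameters():   # freeze the transferred knowledge
        param.requires_grad = False

    num_new_classes = 2                # hypothetical new task, e.g. cat vs. not-cat
    model.fc = nn.Linear(model.fc.in_features, num_new_classes)

    optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    # A training loop over the (much smaller) new dataset would go here; only the
    # final layer is updated, so far fewer examples are needed than training from scratch.

Even so, what is transferred is a bank of statistical features, which is a long way from the rich, embodied model of the world described above.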

On the same day that SwiftKey unveiled Hawking’s new communications system in 2014, he gave an interview to the BBC in which he warned that intelligent machines could end mankind. You can imagine which story ended up dominating the headlines.

I agree with Hawking that we should take the risks of rogue AI seriously. But I believe we’re still very far from needing to worry about anything approaching human intelligence, and we have little hope of achieving this goal unless we think carefully about how to give algorithms some kind of long-term, embodied relationship with their environment.

American Islamophobia: Understanding the Roots and Rise of Fear – Khaled A. Beydoun.

I remember the four words that repeatedly scrolled across my mind after the first plane crashed into the World Trade Center: “Please don’t be Muslims, please don’t be Muslims.”

These four words reverberated through the mind of every Muslim American that day and every day after, forming a unifying prayer for Muslim Americans after every attack.

This system of inculcating fear and calculated bigotry was not entirely spawned in the wake of the 9/11 terror attacks but is a modern extension of a deeply embedded and centuries-old form of American hate.

Now more than ever, Islamophobia is not limited to the irrational views or hateful slurs of individuals, but is an ideology that drives the president’s political worldview and motivates the laws, policies, and programs he seeks to push forward.

Crossroads and Intersections

“Nobody’s going to save you. No one’s going to cut you down, cut the thorns thick around you. . . . There is no one who will feed the yearning. Face it. You will have to do, do it yourself.” Gloria Anzaldúa, Borderlands/La Frontera

“If you know who you are, nobody can tell you what you are or what you are not.” My momma, Fikrieh Beydoun

I took my seat in the back of the Uber car, plugged in my phone and reclined my head to recharge on the way to the hotel. The road ahead is going to be a long one, I thought as I sank into the backseat, settling in for a temporary respite from the oncoming storm. “As-salamu ‘alaikum,” the young driver greeted me in Spanish-inflected Arabic, abruptly ending my break.

“Wa ‘alaikum al-salam,” I responded, thoroughly surprised that these familiar words came out of the mouth of my tattooed Latino Uber driver, Juan. Was he Muslim? I pondered, wondering whether his neat beard signified more than a recent fad or fashionable grooming.

“It’s an honor to meet you, Professor,” he said, and continued, “I’m very familiar with your writing and work, and I’m happy you’re here speaking at Cal State LA. I wish I could’ve been there to hear your talk.” Another sign that Juan might in fact be Muslim, given that my work centers on Muslim American identity and, increasingly, Islamophobia.

“Thank you so much,” I responded, taken aback by the fact that he knew who I was, and still contemplating whether he was a recent Muslim convert or born into a Muslim family. As a longtime resident of Los Angeles and a scholar familiar with Muslim American demographics, I was well aware that Latinx Muslims were the fastest-growing segment of the Muslim American population. I had attended Friday prayers with sermons delivered en español in California and in Florida, where I lived and taught law for two years, and prayed alongside brothers from Puerto Rico, the Dominican Republic, and Mexico as often as I did next to Muslims from Egypt, Syria, or Pakistan. However, I was still unsure about Juan’s religious identity, and to which destination he might steer this conversation.

I learned, en route from the East Los Angeles campus to my downtown hotel, that Juan was neither born to a Muslim family nor a convert. He was, rather, a man on the cusp of embracing Islam at a moment of unprecedented Islamophobia and rabid xenophobia, of imminent Muslim bans and Mexican walls.

“I have been studying Islam closely for some time now, and try to go to the mosque on some Fridays,” he shared. “I am considering making my shahada,” Juan continued, referencing the oath of induction whereby a new Muslim proclaims that “there is only one God, and Mohammed is his final messenger.” “Everybody assumes that I am a Muslim already,” he said, with a cautious laugh that revealed discomfort with his liminal status. Juan turned down the radio, and the voice of Compton native Kendrick Lamar rapping, “We gon’ be alright,” to engage in a more fluid conversation. And, it appeared, to seek a response from me about his spiritual direction.

“That’s wonderful,” I responded to Juan, who was likely no more than twenty-three or twenty-four years old, trying to balance my concern for the challenges his new religious affiliation would present with the answer that I thought he wanted to hear, and perhaps expected, from a Muslim American scholar and activist whose name and work he recognized.

As he drove, we discussed the political challenges posed by the Trump administration, and specifically, the policies that would directly or disproportionately target Muslim and Latinx communities. Indeed, Trump capitalized heavily on demonizing these vulnerable groups, as evidenced most clearly by the two proposals, the Muslim ban and the Mexico Wall, that became the rallying cries of his campaign. We also discussed how our kindred struggles with poverty complicated our pursuit of education, and how Trump’s economic vision exacerbated conditions for indigent Americans, including the 45 percent of Muslim Americans living below, at, or dangerously close to the federal poverty line. The city’s infamous, slow-moving traffic enabled a fast-paced conversation between my new friend and me and gave rise to an LA story seldom featured in newspapers or on television.

Juan’s responses focused on his everyday struggles living in LA and the stories of family and friends from his Pico Union neighborhood. He pointed out that the onslaughts on Muslims and Latinx communities were hardly separate and independent, or parallel and segregated. Rather, they were, and are, overlapping, intersecting, and, for him, very intimate.

“As an undocumented Latino from El Salvador living in Pico Union”, a heavily concentrated Latinx community on the margins of downtown Los Angeles, “I am most fearful about the pop-up checkpoints and the immigration raids,” he told me. These fears were more than imminent under the administration of President Obama, dubbed the “Deporter in Chief” by critics who opposed the accelerated mass deportations carried out during the final stages of his second term. But without question, Juan’s fears have become more visceral, more palpable during the Trump administration.

“I think about this every time I drive to school, work, or visit a family member,” Juan recounted, reminding me of the debilitating fear that comes over me after any terror attack. Yet his fear was far more immediate and frequent than mine, and loomed over him at every moment, including this one while he and I weaved through Los Angeles traffic, talking animatedly about politics, faith, and fear. He could be stopped at any time, whether alone or while whizzing customers through the city he knew better than the life lines on his palms.

I thought about the very imminent dangers these xenophobic policies and programs posed for Juan and people in similar situations in Los Angeles and throughout the country. I knew this city well and understood that the armed and irrational fear directed at nonwhite, non-Christian people was intense in LA, descending (among other places) on the city’s galaxy of dense and large Latinx neighborhoods. This armed xenophobia was aimed particularly at those communities gripped by poverty, where Spanish was spoken primarily, and was concentrated on people and families lacking legal documentation, indeed, the very intersection where Juan began and ended each day, and lived most of his hours in between.

Years before I rode with Juan, Los Angeles was my home away from my hometown of Detroit, the city where I began my career as a law professor, earned my law degree, and only two weeks into my first year of law school at UCLA, the setting from which I witnessed the 9/11 terror attacks. I remember the events of that day more clearly than I do any other day, largely because every terror attack that unfolds in the United States or abroad compels me to revisit the motions and emotions of that day.

For Muslim Americans, 9/11 is not just a day that will live in infamy or an unprecedented tragedy buried in the past; it is a stalking reminder that the safeguards of citizenship are tenuous and the prospect of suspicion and the presumption of guilt are immediate.

My phone kept ringing that morning, interrupting my attempt to sleep in after a long night of studying. As I turned to set the phone to vibrate, I noticed that my mother had called me six times in a span of fifteen minutes. My eyes widened. Was something wrong at home? Three hours behind in California, I called her back to make sure everything at home in Detroit was alright, still in the dark about the tragedy that would mark a crossroads for the country, my community, and indeed, my life.

“Turn on the TV,” she instructed, in her flat but authoritative Arabic that signaled that something serious was unfolding: “Go to your TV right now.” I had an eerie sense of what she was alluding to before I clicked the television on and turned to the news, but I could not have imagined the scale of the terror that unfolded that early Tuesday morning. My eyes were glued to the screen as I awoke fully to what it would mean for me, my family, and Muslim Americans at large if the perpetrators of the attacks looked like us or believed like us.

I recall the surreal images and events of that day as if they happened yesterday. And just as intimately, I remember the four words that repeatedly scrolled across my mind after the first plane crashed into the World Trade Center. “Please don’t be Muslims, please don’t be Muslims,” I quietly whispered to myself over and again, standing inside my small apartment, surrounded by bags and boxes not yet unpacked, a family portrait of my mother, sister, and brother hanging on an otherwise barren white wall. I was alone in the apartment, far from home, but knew in that very moment that the same fear that left me frozen and afraid gripped every Muslim in the country.

The four words I whispered to myself on 9/11 reverberated through the mind of every Muslim American that day and every day after, forming a unifying prayer for Muslim Americans after every attack.

Our fear, and the collective breath or brace for the hateful backlash that ensued, symbolize the existential tightrope that defines Muslim American identity today. It has become a definitive part of what it means to be Muslim American when an act of terror unfolds and the finger-pointing begins.

Indeed, this united state of fear converges with a competing fear stoked by the state to galvanize hatemongers and mobilize damaging policies targeting Islam and Muslims. That state-stoked fear has a name: Islamophobia.

This system of inculcating fear and calculated bigotry was not entirely spawned in the wake of the 9/11 terror attacks, I have gradually learned, but is a modern extension of a deeply embedded and centuries-old form of American hate.

Following 9/11 it was adorned with a new name, institutionalized within new government structures and strident new policies, and legitimized under the auspices of a “war on terror” that assigned the immediate presumption of terrorism to Islam and the immediate presumption of guilt to Muslim citizens and immigrants.

Thousands of miles away from home and loved ones, my world unraveled. Islamophobia and what would become a lifelong commitment to combating it were thrust to the fore. Although raised in Detroit, home to the most concentrated, celebrated, and scrutinized Muslim American population in the country, my activism, advocacy, and intellectual mission to investigate the roots of American Islamophobia and its proliferation after the 9/11 terror attacks were first marshaled on the other side of the country. For me, 9/11 was both a beginning and an end, putting to rest my romantic designs on an international human rights law career for the more immediate challenges unfolding at home.

I left for Los Angeles a wide-eyed twenty-two-year-old in the late summer of 2001. I was the first in my family to attend university and graduate school, the first to pack his bags for another city, not knowing what direction his career or life would take. After three years and three wars, those in Afghanistan and Iraq, and the amorphous, fluidly expanding war on terror on the home front, I was fully resolved to take on the rising tide of Islamophobia ravaging the country and ripping through concentrated Muslim American communities like the one I called home. I learned about the law at a time when laws were being crafted to punish, persecute, and prosecute Muslim citizens and immigrants under the thinnest excuses, at an intersection when my law professors, including Kimberlé Crenshaw, Cheryl Harris, and Devon Carbado, were equipping me with the spirit and skill to fight Islamophobia in the middle grounds it rose from, and even more importantly, at the margins.

On February 22, 2017, more than a decade and a half after 9/11, I found myself back in Los Angeles. I was now a law professor and a scholar researching national security, Muslim identity, and constitutional law. I was to give a series of lectures on Islamophobia at several colleges and community centers in the LA area. My expertise was in high demand as a result of the 2016 presidential election and the intense Islamophobia that followed. I delivered the lectures roughly one month after newly elected President Donald Trump signed the executive order widely known as the “Muslim ban.”

Seven days into his presidency, Trump delivered on the promise he first made on the campaign trail on December 7, 2015, enacting a travel ban that restricted the entry of nationals from seven Muslim-majority nations: Libya, Iraq, Iran, Somalia, Sudan, Syria, and Yemen. To me, the Muslim ban was not merely a distant policy signed into law in a distant city; it was personal in a myriad of ways. First, I am a Muslim American, and second, I had close friends from several of the restricted nations and had visited several of those nations. Moreover, since the war on terror had been rolled out in 2001, all of the countries on the list had been either sites of full-scale American military aggression or strategic bombings.

“The bombs always precede the bans,” my mother said out loud as she watched the news one day, observing a truism that ties American foreign policy to immigration policy, particularly in relation to Muslim majority countries.

The Muslim ban was the first policy targeting Muslims enacted by the man I formally dubbed the “Islamophobia President.” It certainly would not be the last law, policy, or program implemented by the man who capitalized on Islamophobia as a “full-fledged campaign strategy” to become the forty-fifth president of the United States.

President Trump promised a more hardline domestic surveillance program, which he called Countering Islamic Violence; a registry to keep track of Muslim immigrants within the United States; legislation that would bludgeon the civic and advocacy programs of Muslim American organizations; and other measures that would threaten Muslim immigrants, citizens, and institutions. He was poised to integrate Islamophobia fully into the government he would preside over and to convert his bellicose rhetoric into state-sanctioned policy.

If Trump demonstrated anything during his first week in office, it was an ability to follow through on the hateful promises most pundits had dismissed as “mere campaign rhetoric” months earlier. He kept his promises. Islamophobia was not merely an appeal for votes, but a resonant message that would drive policy and inform immigration and national security policing. His electioneering was not mere bluster, but in fact a covenant built on Islamophobia, an Islamophobia that motivated large swaths of Americans to vote for him. In exchange, he delivered on his explicit and “dog whistle” campaign messaging by generating real Islamophobic policies, programs, and action.

Trump, like many candidates before him and others who will follow, traded a grand narrative of nativism and hate for votes, which registered to great success at the ballot box.

Memories of the trials and wounds Muslim Americans endured in the wake of 9/11, which I witnessed firsthand and examined closely as a scholar, and those unfolding in this era of trumped-up, unhinged Islamophobia raced through my head as I walked to the Uber waiting for me outside the California State University, Los Angeles campus. Scores of mosques vandalized, immigrants scapegoated and surveilled, citizens falsely profiled and prosecuted, the private confines of Muslim American households violated in furtherance of baseless witch hunts, immigration restrictions and registries imposed, and innocent mothers and children killed.

Yesterday, and with this intensified third phase of the war on terror, again today. I set my bag down in the car, thinking about the turbulent road ahead. I thought about how the challenges ahead compared and contrasted with those that ravaged Muslim Americans following 9/11. More than fifteen years had passed, and the face of the country, the composition of the Muslim American population, and I myself had all undergone radical, transformative change. I had recently bid farewell to and buried my father, Ali, who in 1981 brought his three children and wife to the United States in search of all the things Donald Trump stood against, values his campaign slogan, “Make America Great Again,” sought to erode. Life after loss is never the same, and my season of mourning was punctuated by the fear and hysteria that followed Donald Trump all the way to the White House.

The world and the country were spinning faster and more furiously than ever before, it seemed. Locked in between the two, my life raced forward at a rate I had never experienced. The Black Lives Matter movement unveiled institutional racism that was as robust and violent as ever, as evidenced by the killing of Trayvon Martin, Rekia Boyd, Mike Brown, Tamir Rice, Philando Castile, Sandra Bland, and a rapidly growing list of unarmed black children, men, and women gunned down by police, all of them memorialized and uplifted as martyrs by youth and adult, black and non-black activists marching up and down city blocks or taking protests to the virtual sphere on Twitter, Facebook, and other social media platforms.

Black Lives Matter inspired mass actions across the country and an ongoing march of social media protests that spawned new generations of activists and trenchant thought leaders. I saw this unfold, in dynamic fashion, on city blocks, in neighborhoods, on college campuses, and on social media feeds. It left an indelible impression on my activism, writing, and worldview.

In the face of a political world seemingly spinning out of control, I decided to write this book. I hope to provide general readers, students, and activists an intimate and accessible introduction to Islamophobia, what it is, how it evolved, how we can combat it in Trump’s America, and most importantly, how to fight it beyond the current administration.

As a Muslim American law professor and civil rights activist, I hope to help readers view Islamophobia through a unique lens. I draw on a range of sources, from court cases, media headlines, and scholarship to my own experiences in walking the walk every day. Along the way, I make links and assertions that might be new to many readers: pointing out how Islamophobia has a long, notorious history in the United States, for example, and showing how the Black Lives Matter movement intersects with, and inspires, activism against Islamophobia. My aim is to offer a succinct, informed handbook for anyone interested in Islamophobia and its prolific growth at this definitive juncture in our country’s history.

I wrote this book at a time when American Islamophobia was intensifying at a horrific clip, giving immediate importance to my research and expertise and simultaneously endangering the people I love most. In addition to examining the roots and rise of American Islamophobia, this book also looks to humanize the individuals and communities impacted by it, so they can be seen beyond the frame of statistics. Many stories are interwoven, some are well known and others are not, to facilitate an understanding of Islamophobia that treats Muslim Americans not as distant subjects of study or analysis, but as everyday citizens. Citizens who, like members of other faith groups, are not only integral and contributing members of society, but are also part of a group that will define the future of the United States moving forward.

The United States is indeed at a crossroads. The rise of mass social protest movements fueled by calls for dignity, justice, and an end to structural racism has been met by an opposing front galvanized by demographic shifts toward a majority-minority population and eight years of scapegoating and systematic obstruction of the first black president. Echoing through it all is the dread of an “end of white America,” a fear that politicians on the right readily stoked and fervently fed to the masses.

Much of this opposing front is fully wed to racism and xenophobia, and it backed a businessman who peddled a promise to “Make America Great Again”, a promise that was not just a campaign slogan, but was also a racial plea evoking a time when whiteness was the formal touchstone of American citizenship and white supremacy was endorsed and enabled by law. Trump dangled before the electorate studies that project that people of color will outnumber whites by 2044, and that over half (50.2 percent) of the babies born in the United States today are minorities, and he inflamed the ever-present fear that foreigners are stealing our jobs.

As a cure for these supposed ills, Trump’s campaign offered to a primed and ready audience a cocktail of nativism, scapegoating, and racism; his campaign met with resounding success and helped polarize the nation along the very lines that colored his stump speeches. Much of Trump’s fearmongering centered again on Islam and the suspicion, fear, and backlash directed at its more than eight million adherents living in Los Angeles, Detroit, and big and small American towns beyond and in between.

Islamophobia was intensifying throughout the country, relentlessly fueled on the presidential campaign trail, and after the inauguration of President Trump on January 20, 2017, it was unleashed from the highest office in the land.

Now more than ever, Islamophobia was not limited to the irrational views or hateful slurs of individuals, but was an ideology that drove the president’s political worldview and motivated the laws, policies, and programs he would seek to push forward.

This had also been the case during the Bush and Obama administrations, but the Trump moment marked a new phase of transparency in which explicit rhetorical Islamophobia aligned, in language and spirit, with the programs the new president was poised to implement.

I found myself wedged between the hate and its intended victims. Muslim Americans like myself were presumptive terrorists, not citizens; unassimilable aliens, not Americans; and the speeches I delivered on campuses and in community centers, to Muslims and non-Muslims, cautioned that the dangers Islamophobia posed yesterday were poised to become even more perilous today. The road ahead was daunting, I warned audiences after each lecture, hoping to furnish them with the awareness to be vigilant, and the pale consolation that today’s Islamophobia is not entirely new.

I was feeling alarmed for Juan, my Uber driver, even as I felt I should celebrate his being drawn toward Islam. I could not help but fear the distinct and convergent threats he would face if he embraced Islam. As an undocumented Latino Muslim in Los Angeles, Juan would be caught in the crosshairs of “terrorism” and “illegality.” Los Angeles was not only ground zero for a range of xenophobic policies targeting undocumented (and documented) Latinx communities, but also a pilot city where, in 2014, the Department of Homeland Security launched its counter-radicalization program, Countering Violent Extremism, in partnership with the Los Angeles Police Department.

This new counterterror program, which effectively supplanted the federal surveillance model ushered in by the USA PATRIOT Act, deputized LAPD members to function as national security officers tasked with identifying, detaining, prosecuting, and even deporting “homegrown radicals.” Suspicion was disproportionately assigned to recent Muslim converts, particularly young men like Juan, keen on expressing their newfound Muslim identity by wearing a beard, attending Friday prayers, and demonstrating fluency in Arabic, the language tied to Islam and, in the logic of Islamophobia, to terrorism.

I feared for Juan’s wellbeing, whether Muslim or not. I knew that the dangers he dodged every day would be far greater in number and more ominous in nature if he embraced Islam. The president, from inside the White House, was marshaling Islamophobia and mobilizing xenophobia to inflict irreparable injury on Muslims, Latinx communities, and the growing population of Latinx Muslims that Juan would be part of if he walked into a mosque and declared that “there is only one God, and Mohammed is his final messenger.” He would be vulnerable to the covert counter-radicalization policing that was descending on Los Angeles mosques and Muslim student associations and simultaneously exposed to the ubiquitous threat of immigration checkpoints and deportation raids. He would also be a prime target for Victims of Immigration Crime Engagement, or VOICE, the new catch-an-“illegal-alien” hotline installed by President Trump.

This seemed far too much for any one person to endure all at once, and the boundary Juan contemplated crossing by becoming a Muslim, during the height of American Islamophobia, might very well be one that he should drive far away from.

All of this rushed through my head as Juan drove me to my hotel, sharing with me his concerns and fears about the country’s current condition. I remained silent, gripped by the desire, if not the responsibility, to advise Juan to reconsider embracing Islam at this time. I tried to muster up the courage to tell him to postpone his conversion for a later time, when Islamophobic attitudes and policies were abating, when, and if, that time should come. I feared that if he did convert, the ever expanding and extending arms of the state would find him at once, brand him a radical, and toss him from the country, sending him far from the only home he has ever known, and the second home that summoned me back during a fateful moment in his life and mine.

*

Before my conversation with Juan, I’d been gripped by memories of the post 9/11 period. But for those moments in the car, I felt overwhelmed by the dangers that would encircle Juan if he took his shahada. Islam in America has never been simply a religion one chooses. From the gaze of the state and society, Islam was and still is an indelible marker of otherness, and in war-on-terror America, it is a political identity that instantly triggers the suspicion of acts of terror and subversion. The urge to advise Juan against converting reached its climax when the car came to an abrupt stop near Grand Avenue and 11th Street, in the heart of downtown Los Angeles, not far from Pico Union.

Juan stepped out to greet me on the right side of the car. “It was an honor to meet and speak to you, Brother Khaled,” he said, extending his hand to bid me farewell.

“Likewise Juan, I wish you the best,” I told him, extending my hand to meet his. I then turned away from the stranger who, after a thirty-minute drive through grueling city traffic, had pushed me to grapple with my most pressing fears and had given me an intimate introduction to new fears that I could not turn away from.

I stopped, turned back toward Juan, and mustered up the strength to implore him, “But I ask you to think about whether now is the right time to become a Muslim,” attempting to cloak a desperate plea with the tone and language of evenhanded guidance. This was more difficult than any lecture or presentation I had given during the past several months, and the many more I would give later. “Your status already puts you in a difficult position, and falling victim to Islamophobia would put you in a more dangerous place,” I pled.

Voicing the words released a great weight off my shoulders. At the same time, they felt unnatural because they clashed with the spiritual aim of encouraging interest in Islam. The paradox mirrored the political confusion that gripped the nation. But the challenges and perils I lectured about in university classrooms, community centers, and mosques had to be extended to the street, and to the most vulnerable. My words were met with a look of utter surprise by Juan, who stood there and said nothing.

“Either way, you are my brother,” I closed, before we walked off in opposite directions. He thanked me, circled back to the driver’s seat, and turned right on 12th Street, in the direction of Pico Union, perhaps feeling disappointed in or spurned by the individual whose activism he admired.

I often wondered what decision Juan made, and whether he made his shahada. I also feared the worst, wondering whether he was still in the country. Was he profiled on the grounds of his Latino identity and detained because he was undocumented? Did he embrace Islam and fall victim to the counter-radicalization policing unfolding in Los Angeles? Or had he become a victim of the intersecting xenophobic backlash and Islamophobic violence authorized by Trump’s rhetoric and policies, inflicted by a bigot on or off campus?

My fears were stoked daily by bleak headlines and backward actions taken by the Trump administration, but I tried to remain optimistic. I hoped that Juan was still enrolled in classes, zigzagging his car through the maze of Los Angeles traffic to help his mother make rent, to pay his college tuition, and to drive toward his goal of becoming the first member of his family to earn a college degree. And most importantly, I prayed that he was safe and sound while working toward realizing this and other aspirations, academic, professional, and spiritual, in a country where informants and officers, bans and walls threaten to crush these very dreams and the people precariously holding onto them.

*

from

American Islamophobia: Understanding the Roots and Rise of Fear

by Khaled A. Beydoun


If You Like Being Alone, You Have These 5 Amazing Traits.

“I have to be alone very often. I’d be quite happy if I spent from Saturday night until Monday morning alone in my apartment. That’s how I refuel.” Audrey Hepburn

Let’s clear something up: being alone is not the same as being lonely. In fact, many people prefer being alone because that’s their way to recharge and refuel their energy.

Being a loner and enjoying solitude can be a great thing. And people who enjoy being alone are among the most interesting and fun people to be with. They have many, many amazing qualities that make them extraordinary human beings.

Here are 5 of them:

1. They Are Open-Minded

Many would perceive someone who is reserved and quiet as being judgmental and unsocial. However, this is not true. People who are comfortable being alone are actually more open-minded than one would think, because they can discuss any topic thanks to the massive knowledge they have gained during their alone time by reading books, watching documentaries, or just focusing on themselves and their thoughts.

2. They Are Exquisite Listeners

All introverts are amazing listeners. This is because when people spend time alone they process things in their heads instead of saying them out loud. So, in turn, their listening ratio is higher than their talking ratio.

They would listen to anyone as long as the conversation doesn’t involve small talk. They hate small talk more than anything.

3. They Are Emotionally Stable

No, they are not neurotic as many people would believe. The word neurotic typically encompasses feelings of anger, fear, worry, anxiety, loneliness, and depressive mood. However, people who enjoy solitude are not by default experiencing those feelings. In fact, they are more in touch with themselves and their emotions.

4. They Are Quickly Over-Stimulated

Studies have shown that people who enjoy spending time alone have a different brain structure than those who are overly social. Namely, people who are socially active have more dopamine reward action in their brain.

Introverts, on the other hand, respond more to acetylcholine, a brain chemical that is similar to dopamine and is connected with the reward system as well. The main difference is that this chemical gets activated when people are by themselves and turn inward.

This is why extroverts enjoy loud music and noise, seeing it as part of the fun, while introverts prefer quiet dinners and the comfort of their home.

5. They DO Like People

They have small circles of friends, but this doesn’t mean that they don’t like people. They just despise small talk. That’s it.

Curious Mind Magazine

My family has a Nazi past. I see that ideology returning across Europe – Geraldine Schwarz.

In Germany and elsewhere, younger generations are becoming indifferent to the history of fascism. This is how the far right thrives. “Empathy is a weakness.”

It is no coincidence that these are countries where patterns of extremism we’d thought long gone have returned.

In Aistersheim, a village in north-west Austria, a pale yellow castle towers over a frozen lake, as if out of a fairy tale. It looks like it might be awaiting royal guests. But the sign at the entrance reads: “Congress of the defenders of Europe.” I had signed up under a false name, because only the “well-wishing” press was allowed to attend this March gathering of far-right activists, mostly from Germany and Austria.

Under the ribbed vaults of a large hall, I join an audience of 300. The first speaker is the deputy mayor of Graz, Mario Eustacchio, from Austria’s far-right Freedom party. He lashes out against what he calls modern obsessions with “human rights”, which he says have produced a “catastrophic situation in Europe”.

Next is André Poggenburg, the regional head of the German far-right Alternative für Deutschland party in Saxony-Anhalt. He calls for Dexit, Germany’s departure from the EU. He wants a “fortress Europe” that will ally with Putin’s Russia, a regime clearly admired in these circles. A blonde woman wearing a satin dress stands up to sing German and Russian patriotic songs. Another AfD member follows. He uses the word Mitteldeutschland (central Germany) in reference to former East Germany, as if more German territories lie beyond the Oder-Neisse line, which has marked the border with Poland since the second world war.

After that, an Austrian publisher complains about “censorship” of the word Neger (negro).

Later, there are speeches by self-styled “alternative media” representatives, who explain that infiltrating social networks helps “influence public opinion”, for example by posting insults on Angela Merkel’s Facebook page. And to top it all off, a youthful, elected politician from Italy’s South Tyrol calls, hand on chest, for his region to be annexed by Austria.

Stepping out for some fresh air, I stroll around some stalls showcasing various publications, including those of Les Identitaires, a racist French group calling for a “white Europe”. Other books carry titles such as Race, Evolution and Behaviour, or The Young Hitler, A Corrected Biography. I pick up a copy of The Brainwashing of Germans and its Lasting Consequences. It is the opposite of the message I wrote in a book (Les Amnésiques) about Germany’s postwar transformation and its efforts to deal with its Nazi past, through the story of my own family.

I am the granddaughter of a German member of the Nazi party and of a French gendarme who served under the Vichy regime, which collaborated with the Nazis. My German grandfather was not an ideological National Socialist; he joined out of opportunism and for convenience. He took advantage of Nazi “Aryanisation” policies to buy a Jewish family business at a low price. My grandmother was not a card-carrying Nazi, but was fascinated by the Führer. Between them, they were typical of the Mitläufer (followers): those masses of people who, through blinkered vision and small acts of cowardice, helped create the conditions for the Third Reich to perpetrate its crimes.

After 1945, Germany’s trickiest task was not setting up new institutions or prosecuting high-profile criminals; it was transforming the mindset of an entire population whose moral standing had been reversed by Nazism in ways that made crime appear not only legal but heroic. My grandparents never acknowledged their responsibilities as Mitläufer. But their son, my father, became part of a generation that confronted its parents and forced Germans to ask themselves: What did I do? What could I have done? How do I act now?

One of the greatest achievements of the memorial work Germany has undertaken since the 1960s has been to infuse many of its citizens with a historical conscience and a sense of duty towards democracy, as well as a critical attitude towards populism and extremism both left and right. In France, the taboo long attached to how people behaved under Vichy made such teachings more difficult. In Italy, Austria and eastern Europe, efforts to reckon with their past as allies of the Nazis were even weaker.

It is no coincidence that these are countries where patterns of extremism we’d thought long gone have returned.

But now, Germany in turn is affected. Last September, 12.6% of voters cast a ballot for the AfD, allowing a far-right party to secure a strong position in parliament for the first time since the second world war. The arrival of more than a million refugees seems to have broken down the safeguards. In former East Germany, where no true reckoning with the past was possible under communism because state propaganda held West Germans solely responsible for Nazism, the AfD’s popularity was twice as high as in western parts of the country.

What worries me most is that younger generations in Germany and elsewhere feel less and less concerned with the history of fascism, and hence risk becoming indifferent to the new threats. That’s precisely what the AfD strives for when it says it wants a “180-degree turn” from the tradition of atoning for Nazism, and suggests the Holocaust memorial in Berlin should be closed down, and Wehrmacht soldiers rehabilitated. It’s also what the Austrian FPÖ has in mind when its MPs refuse to applaud a speech commemorating the 1938 Kristallnacht massacre.

Today’s far-right parties want to downplay Nazi crimes as a first step towards reawakening ideas from that era: the notion that a hierarchy can be drawn among humans according to their race or their religion, the acceptance of violence and hatred, mendacious propaganda and devotion to a strong leader.

“Empathy is a weakness” was the motto of the SS.

We have to give young people a knowledge of the past, and a pride in belonging to a continent where two totalitarian systems were ultimately defeated. Democracy in Europe was built through blood, sweat and tears; the dignity of citizens was eventually restored. Now is the time to remember.

*

Geraldine Schwarz is a Berlin-based German-French journalist and author of Les Amnésiques

Childhood Disrupted: How Your Biography Becomes Your Biology, and How You Can Heal – Donna Jackson Nakazawa * The Origins of Addiction: Evidence from the Adverse Childhood Experiences Study – Vincent J. Felitti, MD.

Chronic adversities change the architecture of a child’s brain, altering the expression of genes that control stress hormone output, triggering an overactive inflammatory stress response for life, and predisposing the child to adult disease.

“I felt myself a stranger at life’s party.”

New findings in neuroscience, psychology, and medicine have recently unveiled the exact ways in which childhood adversity biologically alters us for life. The past can tick away inside us for decades like a silent time bomb, until it sets off a cellular message that lets us know the body does not forget the past. Something that happened to you when you were five or fifteen can land you in the hospital thirty years later, whether that something was headline news, or happened quietly, without anyone else knowing it, in the living room of your childhood home.

No matter how old you are, or how old your children may be, there are scientifically supported and relatively simple steps that you can take to reboot the brain, create new pathways that promote healing, and come back to who it is you were meant to be.

Our findings are disturbing to some because they imply that the basic causes of addiction lie within us and the way we treat each other, not in drug dealers or dangerous chemicals. They suggest that billions of dollars have been spent everywhere except where the answer is to be found. Our findings indicate that the major factor underlying addiction is adverse childhood experiences that have not healed with time and that are overwhelmingly concealed from awareness by shame, secrecy, and social taboo.

“I wept, I saw how much people had suffered and I wept.” Robert Anda

“Our findings exceeded anything we had conceived. The correlation between having a difficult childhood and facing illness as an adult offered a whole new lens through which we could view human health and disease. Here was the missing piece as to what was causing so much of our unspoken suffering as human beings. Time does not heal all wounds. One does not ‘just get over’ something, not even fifty years later. Instead time conceals. And human beings convert traumatic emotional experiences in childhood into organic disease later in life.” Vincent Felitti

Adverse childhood experiences are the main determinant of the health and social well being of a nation.

This book explores how the experiences of childhood shape us into the adults we become. Cutting-edge research tells us that what doesn’t kill you doesn’t necessarily make you stronger. Far more often, the opposite is true: the early chronic unpredictable stressors, losses, and adversities we face as children shape our biology in ways that predetermine our adult health. This early biological blueprint depicts our proclivity to develop life altering adult illnesses such as heart disease, cancer, autoimmune disease, fibromyalgia, and depression. It also lays the groundwork for how we relate to others, how successful our love relationships will be, and how well we will nurture and raise our own children.

My own investigation into the relationship between childhood adversity and adult physical health began after I’d spent more than a dozen years struggling to manage several life limiting autoimmune illnesses while raising young children and working as a journalist. In my forties, I was paralyzed twice with an autoimmune disease known as Guillain-Barré syndrome, similar to multiple sclerosis, but with a more sudden onset. I had muscle weakness; pervasive numbness; a pacemaker for vasovagal syncope, a fainting and seizing disorder; white and red blood cell counts so low my doctor suspected a problem was brewing in my bone marrow; and thyroid disease.

Still I knew: I was fortunate to be alive, and I was determined to live the fullest life possible. If the muscles in my hands didn’t cooperate, I clasped an oversized pencil in my fist to write. If I couldn’t get up the stairs because my legs resisted, I sat down halfway up and rested. I gutted through days battling flulike fatigue, pushing away fears about what might happen to my body next; faking it through work phone calls while lying prone on the floor; reserving what energy I had for moments with my children, husband, and family life; pretending that our “normal” was really okay by me. It had to be, there was no alternative in sight.

Increasingly, I devoted my skills as a science journalist to helping women with chronic illness, writing about the intersection between neuroscience, our immune systems, and the innermost workings of our human hearts. I investigated the many triggers of disease, reporting on chemicals in our environment and foods, genetics, and how inflammatory stress undermines our health. I reported on how going green, eating clean, and practices like mind-body meditation can help us to recuperate and recover. At health conferences I lectured to patients, doctors, and scientists. My mission became to do all I could to help readers who were caught in a chronic cycle of suffering, inflammation, or pain to live healthier, better lives.

In the midst of that quest, three years ago, in 2012, I came across a growing body of science based on a groundbreaking public health research study, the Adverse Childhood Experiences Study, or ACE Study. The ACE Study shows a clear scientific link between many types of childhood adversity and the adult onset of physical disease and mental health disorders. These traumas include being verbally put down and humiliated; being emotionally or physically neglected; being physically or sexually abused; living with a depressed parent, a parent with a mental illness, or a parent who is addicted to alcohol or other substances; witnessing one’s mother being abused; and losing a parent to separation or divorce. The ACE Study measured ten types of adversity, but new research tells us that other types of childhood trauma, such as losing a parent to death, witnessing a sibling being abused, violence in one’s community, growing up in poverty, witnessing a father being abused by a mother, or being bullied by a classmate or teacher, also have a long-term impact.

These types of chronic adversities change the architecture of a child’s brain, altering the expression of genes that control stress hormone output, triggering an overactive inflammatory stress response for life, and predisposing the child to adult disease. ACE research shows that 64 percent of adults faced one ACE in their childhood, and 40 percent faced two or more.
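For readers unfamiliar with how an ACE Score is tallied: it is simply a count of how many of the study’s ten adversity categories a person reports experiencing during their first eighteen years. The sketch below is illustrative only; the category labels paraphrase the study’s ten types of adversity and are not quoted from this book.

    # Each category counts once, no matter how often it occurred; scores run from 0 to 10.
    ACE_CATEGORIES = [
        "emotional abuse", "physical abuse", "sexual abuse",
        "emotional neglect", "physical neglect",
        "mother treated violently", "household substance abuse",
        "household mental illness", "parental separation or divorce",
        "incarcerated household member",
    ]

    def ace_score(reported):
        """reported: the set of category names a respondent answers 'yes' to."""
        return sum(1 for category in ACE_CATEGORIES if category in reported)

    # Example: reporting two categories gives an ACE Score of 2.
    print(ace_score({"parental separation or divorce", "emotional neglect"}))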

My own doctor at Johns Hopkins medical institutions confessed to me that she suspected that, given the chronic stress I’d faced in my childhood, my body and brain had been marinating in toxic inflammatory chemicals my whole life, predisposing me to the diseases I now faced.

My own story was a simple one of loss. When I was a girl, my father died suddenly. My family struggled and became estranged from our previously tight-knit extended family. I had been exceptionally close to my father and I had looked to him for my sense of being safe, okay, and valued in the world. In every photo of our family, I’m smiling, clasped in his arms. When he died, childhood suddenly ended, overnight. If I am honest with myself, looking back, I cannot recall a single “happy memory” from there on out in my childhood. It was no one’s fault. It just was. And I didn’t dwell on any of that. In my mind, people who dwelled on their past, and especially on their childhood, were emotionally suspect.

I soldiered on. Life catapulted forward. I created a good life, worked hard as a science journalist to help meaningful causes, married a really good husband, and brought up children I adored, children I worked hard to stay alive for. But other than enjoying the lovely highlights of a hard-won family life, or being with close friends, I was pushing away pain.

I felt myself a stranger at life’s party. My body never let me forget that inside, pretend as I might, I had been masking a great deal of loss for a very long time. I felt myself to be “not like other people.”

Seen through the lens of the new field of research into Adverse Childhood Experiences, it suddenly seemed almost predictable that, by the time I was in my early forties, my health would deteriorate and I would be brought, in my case, quite literally, to my knees.

Like many people, I was surprised, even dubious, when I first learned about ACEs and heard that so much of what we experience as adults is inextricably linked to our childhood experiences. I did not consider myself to be someone who had had Adverse Childhood Experiences. But when I took the ACE questionnaire and discovered my own ACE Score, my story began to make much more sense to me. This science was entirely new, but it also supported old ideas that we have long known to be true: “the child is father of the man.” This research also told me that none of us is alone in our suffering.

One hundred thirty-three million Americans suffer from chronic illness, and 116 million suffer from chronic pain. This revelation of the link between childhood adversity and adult illness can inform all of our efforts to heal. With this knowledge, physicians, health practitioners, psychologists, and psychiatrists can better understand their patients and find new insights to help them. And this knowledge will help us ensure that the children in our lives, whether we are parents, mentors, teachers, or coaches, don’t suffer from the long-term consequences of these sorts of adversity.

To learn everything I could, I spent two years interviewing the leading scientists who research and study the effects of Adverse Childhood Experiences and toxic childhood stress. I combed through the seventy research papers that comprise the ACE Study and hundreds of other studies from our nation’s best research institutions that support and complement these findings. And I followed thirteen individuals who suffered early adversity and later faced adult health struggles, and who were able to forge their own life-changing paths to physical and emotional healing.

In these pages, I explore the damage that Adverse Childhood Experiences can do to the brain and body; how these invisible changes contribute to the development of disease including autoimmune diseases, long into adulthood; why some individuals are more likely to be affected by early adversity than others; why girls and women are more affected than men; and how early adversity affects our ability to love and parent.

Just as important, I explore how we can reverse the effects of early toxic stress on our biology, and come back to being who we really are. I hope to help readers to avoid spending so much of their lives locked in pain.

Some points to bear in mind as you read these pages:

– Adverse Childhood Experiences should not be confused with the inevitable small challenges of childhood that create resilience. There are many normal moments in a happy childhood, when things don’t go a child’s way, when parents lose it and apologize, when children fail and learn to try again. Adverse Childhood Experiences are very different sorts of experiences; they are scary, chronic, unpredictable stressors, and often a child does not have the adult support needed to help navigate safely through them.

– Adverse Childhood Experiences are linked to a far greater likelihood of illness in adulthood, but they are not the only factor. All disease is multifactorial. Genetics, exposure to toxins, and infection all play a role. But for those who have experienced ACEs and toxic stress, other disease-promoting factors become more damaging.

To use a simple metaphor, imagine the immune system as something like a barrel. If you encounter too many environmental toxins from chemicals, a poor processed-food diet, viruses, infections, and chronic or acute stressors in adulthood, your barrel will slowly fill. At some point, there may be one final exposure, the last drop that causes the barrel to spill over and disease to develop.

Having faced the chronic unpredictable stressors of Adverse Childhood Experiences is a lot like starting life with your barrel half full. ACEs are not the only factor in determining who will develop disease later in life. But they may make it more likely that one will.

– The research into Adverse Childhood Experiences has some factors in common with the research on post-traumatic stress disorder, or PTSD. But childhood adversity can lead to a far wider range of physical and emotional health consequences than the overt symptoms of post-traumatic stress. They are not the same.

– The Adverse Childhood Experiences of extreme poverty and neighborhood violence are not addressed specifically in the original research. Yet clearly, growing up in unsafe neighborhoods where there is poverty and gang violence or in a war-torn area anywhere around the world creates toxic childhood stress, and that relationship is now being more deeply studied. It is an important field of inquiry and one I do not attempt to address here; that is a different book, but one that is no less important.

– Adverse Childhood Experiences are not an excuse for egregious behavior. They should not be considered a “blame the childhood” moral pass. The research allows us to finally tackle real and lasting physical and emotional change from an entirely new vantage point, but it is not about making excuses.

This research is not an invitation to blame parents. Adverse Childhood Experiences are often an intergenerational legacy, and patterns of neglect, maltreatment, and adversity almost always originate many generations prior to one’s own.

The new science on Adverse Childhood Experiences and toxic stress has given us a new lens through which to understand the human story; why we suffer; how we parent, raise, and mentor our children; how we might better prevent, treat, and manage illness in our medical care system; and how we can recover and heal on a deeper level than we thought possible.

And that last bit is the best news of all. The brain, which is so changeable in childhood, remains malleable throughout life. Today researchers around the world have discovered a range of powerful ways to reverse the damage that Adverse Childhood Experiences do to both brain and body. No matter how old you are, or how old your children may be, there are scientifically supported and relatively simple steps that you can take to reboot the brain, create new pathways that promote healing, and come back to who it is you were meant to be.

To find out about how many categories of ACEs you might have faced when you were a child or teenager, and your own ACE Score, turn the page and take the Adverse Childhood Experiences Survey for yourself.

TAKE THE ADVERSE CHILDHOOD EXPERIENCES (ACE) SURVEY

You may have picked up this book because you had a painful or traumatic childhood. You may suspect that your past has something to do with your current health problems, your depression, or your anxiety. Or perhaps you are reading this book because you are worried about the health of a spouse, partner, friend, parent, or even your own child, who has survived a trauma or suffered adverse experiences. In order to assess the likelihood that an Adverse Childhood Experience is affecting your health or the health of your loved one, please take a moment to fill out the following survey before you read this book.

ADVERSE CHILDHOOD EXPERIENCES SURVEY

Prior to your eighteenth birthday:

1. Did a parent or another adult in the household

often or very often . . . swear at you, insult you, put you down, or humiliate you? Or act in a way that made you afraid that you might be physically hurt?

Yes / No. If yes, enter 1.

2. Did a parent or another adult in the household

often or very often . . . push, grab, slap, or throw something at you? Or ever hit you so hard that you had marks or were injured?

Yes / No. If yes, enter 1.

3. Did an adult or person at least five years older than you

ever touch or fondle you or have you touch their body in a sexual way? Or attempt to touch you or touch you inappropriately or sexually abuse you?

Yes / No. If yes, enter 1.

4. Did you often or very often feel that

no one in your family loved you or thought you were important or special? Or feel that your family members didn’t look out for one another, feel close to one another, or support one another?

Yes / No. If yes, enter 1.

5. Did you often or very often

feel that you didn’t have enough to eat, had to wear dirty clothes, and had no one to protect you? Or that your parents were too drunk or high to take care of you or take you to the doctor if you needed it?

Yes / No. If yes, enter 1.

6. Was a biological parent ever lost to you

through divorce, abandonment, or another reason?

Yes / No. If yes, enter 1.

7. Was your mother or stepmother often or very often

pushed, grabbed, slapped, or had something thrown at her? Or was she sometimes, often, or very often kicked, bitten, hit with a fist, or hit with something hard? Or ever repeatedly hit over the course of at least a few minutes or threatened with a gun or knife?

Yes / No. If yes, enter 1.

8. Did you live with anyone who was

a problem drinker or alcoholic, or who used street drugs?

Yes / No. If yes, enter 1.

9. Was a household member

depressed or mentally ill, or did a household member attempt suicide?

Yes / No. If yes, enter 1.

10. Did a household member go to prison?

Yes / No. If yes, enter 1.

Add up your “Yes” answers: (this is your ACE Score)
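For readers who like to see the tally written out, here is a minimal sketch in Python (my own illustration, not part of the book) of how the ten yes/no answers above combine into an ACE Score:

```python
# Minimal sketch (not from the book): tallying an ACE Score from the ten
# survey questions above, with answers recorded as True ("Yes") or False ("No").

def ace_score(answers):
    """Each 'Yes' answer adds one point; the score can range from 0 to 10."""
    if len(answers) != 10:
        raise ValueError("The survey above has exactly ten questions.")
    return sum(1 for yes in answers if yes)

# Example: 'Yes' to questions 4 and 6 only -> ACE Score of 2.
example = [False, False, False, True, False, True, False, False, False, False]
print(ace_score(example))  # prints 2
```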

Now take a moment and ask yourself how your experiences might be affecting your physical, emotional, and mental well-being. Is it possible that someone you love has been affected by Adverse Childhood Experiences they experienced? Are any children or young people you care for in adverse situations now?

Keep your Adverse Childhood Experiences Score in mind as you read the stories and science that follow, and keep your own experiences in mind, as well as those of the people you love. You may find this science to be the missing link in understanding why you or your loved one is having health problems. And this missing link will also lead to the information you will need in order to heal.

PART 1

How It Is We Become Who We Are

CHAPTER ONE

Every Adult Was Once a Child

If you saw Laura walking down the New York City street where she lives today, you’d see a well-dressed forty-six-year-old woman with auburn hair and green eyes who exudes a sense of “I matter here.” She looks entirely in charge of her life, as long as you don’t see the small ghosts trailing after her.

When Laura was growing up, her mom was bipolar. Laura’s mom had her good moments: she helped Laura with school projects, braided her hair, and taught her the name of every bird at the bird feeder. But when Laura’s mom suffered from depressive bouts, she’d lock herself in her room for hours. At other times she was manic and hypercritical, which took its toll on everyone around her. Laura’s dad, a vascular surgeon, was kind to Laura, but rarely around. He was, she says, “home late, out the door early, and then just plain out the door.”

Laura recalls a family trip to the Grand Canyon when she was ten. In a photo taken that day, Laura and her parents sit on a bench, sporting tourist whites. The sky is blue and cloudless, and behind them the dark, ribboned shadows of the canyon stretch deep and wide. It is a perfect summer day.

“That afternoon my mom was teaching me to identify the ponderosa pines,” Laura recalls. “Anyone looking at us would have assumed we were a normal, loving family.” Then, something seemed to shift, as it sometimes would. Laura’s parents began arguing about where to set up the tripod for their family photo. By the time the three of them sat down, her parents weren’t speaking. As they put on fake smiles for the camera, Laura’s mom suddenly pinched her daughter’s midriff around the back rim of her shorts, and told her to stop “staring off into space.” Then, a second pinch: “no wonder you’re turning into a butterball, you ate so much cheesecake last night you’re hanging over your shorts!”

If you look hard at Laura’s face in the photograph, you can see that she’s not squinting at the Arizona sun, but holding back tears.

When Laura was fifteen, her dad moved three states away with a new wife-to-be. He sent cards and money, but called less and less often. Her mother’s untreated bipolar disorder worsened. Laura’s days were punctuated with put-downs that caught her off guard as she walked across the living room. “My mom would spit out something like, ‘You look like a semiwide from behind. If you’re ever wondering why no boy asks you out, that’s why!’” One of Laura’s mother’s recurring lines was, “You were such a pretty baby, I don’t know what happened.” Sometimes, Laura recalls, “My mom would go on a vitriolic diatribe about my dad until spittle foamed on her chin. I’d stand there, trying not to hear her as she went on and on, my whole body shaking inside.”

Laura never invited friends over, for fear they’d find out her secret: her mom “wasn’t like other moms.”

Some thirty years later, Laura says, “In many ways, no matter where I go or what I do, I’m still in my mother’s house.” Today, “If a car swerves into my lane, a grocery store clerk is rude, my husband and I argue, or my boss calls me in to talk over a problem, I feel something flip over inside. It’s like there’s a match standing inside too near a flame, and with the smallest breeze, it ignites.” Something, she says, “just doesn’t feel right. Things feel bigger than they should be. Some days, I feel as if I’m living my life in an emotional boom box where the volume is turned up too high.”

To see Laura, you would never know that she is “always shaking a little, only invisibly, deep down in my cells.”

Laura’s sense that something is wrong inside is mirrored by her physical health. In her mid-thirties, she began suffering from migraines that landed her in bed for days at a time. At forty, Laura developed an autoimmune thyroid disease. At forty-four, during a routine exam, Laura’s doctor didn’t like the sound of her heart. An EKG revealed an arrhythmia. An echocardiogram showed that Laura had a condition known as dilated cardiomyopathy. The left ventricle of her heart was weak; the muscle had trouble pumping blood out to the rest of her body. Next thing Laura knew, she was a heart disease patient, undergoing surgery. Today, Laura has a cardioverter defibrillator implanted in the left side of her chest to prevent heart failure. The two-inch scar from the implant is deceptively small.

John’s parents met in Asia when his father was deployed there as an army officer. After a whirlwind romance, his parents married and moved to the United States. For as long as John can remember, he says, “my parents’ marriage was deeply troubled, as was my relationship with my dad. I consider myself to have been raised by my mom and her mom. I longed to feel a deeper connection with my dad, but it just wasn’t there. He couldn’t extend himself in that way.”

John occasionally runs his hands through his short blond hair, as he carefully chooses his words. “My dad would get so worked up and pissed off about trivial things. He’d throw out opinions that we all knew were factually incorrect, and just keep arguing.” If John’s dad said the capital of New York was New York City, it didn’t matter if John showed him it was Albany. “He’d ask me to help in the garage and I’d be doing everything right, and then a half hour into it I’d put the screwdriver down in the wrong spot and he’d start yelling and not let up. There was never any praise. Even when he was the one who’d made a mistake, it somehow became my fault. He could not be wrong about anything.”

As John got older, it seemed wrong to him that “my dad was constantly pointing out all the mistakes that my brother and I made, without acknowledging any of his own.” His dad chronically criticized his mother, who was, John says, “kinder and more confident.”

When John was twelve, he interjected himself into the fights between his parents. One Christmas Eve, when he was fifteen, John awoke to the sound of “a scream and a commotion. I realized it was my mother screaming. I jumped out of bed and ran into my parents’ room, shouting, ‘What the hell is going on here?’ My mother sputtered, ‘He’s choking me!’ My father had his hands around my mother’s neck. I yelled at him: ‘You stay right here! Don’t you dare move! Mom is coming with me!’ I took my mother downstairs. She was sobbing. I was trying to understand what was happening, trying to be the adult between them.”

Later that Christmas morning, John’s father came down the steps to the living room where John and his mom were sleeping. “No one explained,” he says. “My little brother came downstairs and we had Christmas morning as if nothing had happened.”

Not long after, John’s grandmother, “who’d been an enormous source of love for my mom and me,” died suddenly. John says, “It was a terrible shock and loss for both of us. My father couldn’t support my mom or me in our grieving. He told my mom, ‘You just need to get over it!’ He was the quintessential narcissist. If it wasn’t about him, it wasn’t important, it wasn’t happening.”

Today, John is a boyish forty. He has warm hazel eyes and a wide, affable grin that would be hard not to warm up to. But beneath his easy, open demeanor, John struggles with an array of chronic illnesses.

By the time John was thirty-three, his blood pressure was shockingly high for a young man. He began to experience bouts of stabbing stomach pain and diarrhea and often had blood in his stool. These episodes grew more frequent. He had a headache every day of his life. By thirty-four, he’d developed chronic fatigue and was so wiped out that sometimes he struggled to make it through an entire day at work.

For years, John had loved to go hiking to relieve stress, but by the time he was thirty-five, he couldn’t muster the physical stamina. “One day it hit me: I’m still a young man, and I’ll never go hiking again.”

John’s relationships, like his physical body, were never quite healthy. John remembers falling deeply in love in his early thirties. After dating his girlfriend for a year, she invited him to meet her family. During his stay with them, John says, “I became acutely aware of how different I was from kids who grew up without the kind of shame and blame I endured.” One night, his girlfriend, her sisters, and their boyfriends all decided to go out dancing. “Everyone was sitting around the dinner table planning this great night out and I remember looking around at her family and the only thing going through my mind were these words: ‘I do not belong here.’ Everyone seemed so normal and happy. I was horrified suddenly at the idea of trying to play along and pretend that I knew how to be part of a happy family.”

So John faked “being really tired. My girlfriend was sweet and stayed with me and we didn’t go. She kept asking what was wrong and at some point I just started crying and I couldn’t stop. She wanted to help, but instead of telling her how insecure I was, or asking for her reassurance, I told her I was crying because I wasn’t in love with her.”

John’s girlfriend was, he says, “completely devastated.” She drove John to a hotel that night. “She and her family were shocked. No one could understand what had happened.” Even though John had been deeply in love, his fear won out. “I couldn’t let her find out how crippled I was by the shame and grief I carried inside.”

Bleeding from his inflamed intestines, exhausted by chronic fatigue, debilitated and distracted by pounding headaches, often struggling with work, and unable to feel comfortable in a relationship, John was stuck in a universe of pain and solitude, and he couldn’t get out.

Georgia’s childhood seems far better than the norm: she had two living parents who stayed married through thick and thin, and they lived in a stunning home with walls displaying Ivy League diplomas; Georgia’s father was a well-respected, Yale-educated investment banker. Her mom stayed at home with Georgia and two younger sisters. The five of them appear, in photos, to be the perfect family.

All seemed fine, growing up, practically perfect.

“But I felt, very early on, that something wasn’t quite right in our home, and that no one was talking about it,” Georgia says. “Our house was saturated by a kind of unease all the time. You could never put your finger on what it was, but it was there.”

Georgia’s mom was “emotionally distant and controlling,” Georgia recalls. “If you said or did something she didn’t like, she had a way of going stone cold right in front of you; she’d become what I used to think of as a moving statue that looked like my mother, only she wouldn’t look at you or speak to you.” The hardest part was that Georgia never knew what she’d done wrong. “I just knew that I was shut out of her world until whenever she decided I was worth speaking to again.”

For instance, her mother would “give my sisters and me a tiny little tablespoon of ice cream and then say, ‘You three will just have to share that.’ We knew better than to complain. If we did, she’d tell us how ungrateful we were, and suddenly she wouldn’t speak to us.”

Georgia’s father was a borderline alcoholic and “would occasionally just blow up over nothing,” she says. “One time he was changing a light bulb and he just started cursing and screaming because it broke. He had these unpredictable eruptions of rage. They were rare but unforgettable.” Georgia was so frightened at times that “I’d run like a dog with my tail between my legs to hide until it was safe to come out again.”

Georgia was “so sensitive to the shifting vibe in our house that I could tell when my father was about to erupt before even he knew. The air would get so tight and I’d know, it’s going to happen again.” The worst part was that “We had to pretend my father’s outbursts weren’t happening. He’d scream about something minor, and then he’d go take a nap. Or you’d hear him strumming his guitar in his den.”

Between her mother’s silent treatments and her dad’s tirades, Georgia spent much of her childhood trying to anticipate and move out of the way of her parents’ anger. She had the sense, even when she was nine or ten, “that their anger was directed at each other. They didn’t fight, but there was a constant low hum of animosity between them. At times it seemed they vehemently hated each other.” Once, fearing that her inebriated father would crash his car after an argument with her mother, Georgia stole his car keys and refused to give them back.

Today, at age forty-nine, Georgia is reflective about her childhood. “I internalized all the emotions that were storming around me in my house, and in some ways it’s as if I’ve carried all that external angst inside me all my life.” Over the decades, carrying that pain has exacted a high toll. At first, Georgia says, “My physical pain began as a low whisper in my body.” But by the time she entered Columbia graduate school to pursue a PhD in classics, “I’d started having severe back problems. I was in so much physical pain, I could not sit in a chair. I had to study lying down.” At twenty-six, Georgia was diagnosed with degenerative disc disease. “My body just started screaming with its pain.”

Over the next few years, in addition to degenerative disc disease, Georgia was diagnosed with severe depression, adrenal fatigue, and finally, fibromyalgia. “I’ve spent my adult life in doctors’ clinics and trying various medications to relieve my pain,” she says. “But there is no relief in sight.”

Laura’s, John’s, and Georgia’s life stories illustrate the physical price we pay, as adults, for childhood adversity. New findings in neuroscience, psychology, and medicine have recently unveiled the exact ways in which childhood adversity biologically alters us for life.

This groundbreaking research tells us that the emotional trauma we face when we are young has farther reaching consequences than we might have imagined.

Adverse Childhood Experiences change the architecture of our brains and the health of our immune systems, they trigger and sustain inflammation in both body and brain, and they influence our overall physical health and longevity long into adulthood.

These physical changes, in turn, prewrite the story of how we will react to the world around us, and how well we will work, parent, befriend, and love other people throughout the course of our adult lives.

This is true whether our childhood wounds are deeply traumatic, such as witnessing violence in our family, as John did; or more chronic, living-room-variety humiliations, such as those Laura endured; or more private but pervasive familial dysfunctions, such as Georgia’s.

All of these Adverse Childhood Experiences can lead to deep biophysical changes in a child that profoundly alter the developing brain and immunology in ways that also change the health of the adult he or she will become.

Scientists have come to this startling understanding of the link between Adverse Childhood Experiences and later physical illness in adulthood thanks, in large part, to the work of two individuals: a dedicated physician in San Diego, and a determined medical epidemiologist from the Centers for Disease Control (CDC). Together, during the 1980s and 1990s, the same years when Laura, John, and Georgia were growing up, these two researchers slowly uncovered the stunning scientific link between Adverse Childhood Experiences and later physical and neurological inflammation and life changing adult health outcomes.

The Philosophical Physicians

In 1985 physician and researcher Vincent J. Felitti, MD, chief of a revolutionary preventive care initiative at the Kaiser Permanente Medical Program in San Diego, noticed a startling pattern: adult patients who were obese also alluded to traumatic incidents in their childhood. Felitti came to this realization almost by accident.

In the mid 1980s, a significant number of patients in Kaiser Permanente’s obesity program were, with the help and support of Felitti and his nurses, successfully losing hundreds of pounds a year nonsurgically, a remarkable feat. The program seemed a resounding success, up until a large number of patients who were losing substantial amounts of weight began to drop out.

The attrition rate didn’t make sense, and Felitti was determined to find out what was going on. He conducted face-to-face interviews with 286 patients. In the course of Felitti’s one-on-one conversations, a striking number of patients confided that they had faced trauma in their childhood; many had been sexually abused. To these patients, eating was a solution: it soothed the anxiety, fear, and depression that they had secreted away inside for decades. Their weight served, too, as a shield against unwanted physical attention, and they didn’t want to let it go.

Felitti’s conversations with this large group of patients allowed him to perceive a pattern, and a new way of looking at human health and well-being, that other physicians just were not seeing. It became clear to him that, for his patients, obesity, “though an obvious physical sign,” was not the core problem to be treated, “any more than smoke is the core problem to be treated in house fires.”

In 1990, Felitti presented his findings at a national obesity conference. He told the group of physicians gathered that he believed “certain of our intractable public health problems” had root causes hidden “by shame, by secrecy, and by social taboos against exploring certain areas of life experience.”….

*

from

Childhood Disrupted: How Your Biography Becomes Your Biology, and How You Can Heal

by Donna Jackson Nakazawa

get it at Amazon.com


The Origins of Addiction: Evidence from the Adverse Childhood Experiences Study

Vincent J. Felitti, MD

Department of Preventive Medicine Kaiser Permanente Medical Care Program

“In my beginning is my end.” T. S. Eliot, “Four Quartets”

ABSTRACT:

A population based analysis of over 17,000 middle class American adults undergoing comprehensive, biopsychosocial medical evaluation indicates that three common categories of addiction are strongly related in a proportionate manner to several specific categories of adverse experiences during childhood. This, coupled with related information, suggests that the basic cause of addiction is predominantly experience dependent during childhood and not substance dependent. This challenge to the usual concept of the cause of addictions has significant implications for medical practice and for treatment programs.

Purpose: My intent is to challenge the usual concept of addiction with new evidence from a population based clinical study of over 17,000 adult, middle class Americans.

The usual concept of addiction essentially states that the compulsive use of ‘addictive’ substances is in some way caused by properties intrinsic to their molecular structure. This view confuses mechanism with cause. Because any accepted explanation of addiction has social, medical, therapeutic, and legal implications, the way one understands addiction is important. Confusing mechanism with basic cause quickly leads one down a path that is misleading. Here, new data is presented to stimulate rethinking the basis of addiction.

Background: The information I present comes from the Adverse Childhood Experiences (ACE) Study. The ACE Study deals with the basic causes underlying the 10 most common causes of death in America; addiction is only one of several outcomes studied.

In the mid 1980s, physicians in Kaiser Permanente’s Department of Preventive Medicine in San Diego discovered that patients successfully losing weight in the Weight Program were the most likely to drop out. This unexpected observation led to our discovery that overeating and obesity were often being used unconsciously as protective solutions to unrecognized problems dating back to childhood. Counterintuitively, obesity provided hidden benefits: it often was sexually, physically, or emotionally protective.

Our discovery that public health problems like obesity could also be personal solutions, and our finding an unexpectedly high prevalence of adverse childhood experiences in our middle class adult population, led to collaboration with the Centers for Disease Control (CDC) to document their prevalence and to study the implications of these unexpected clinical observations. I am deeply indebted to my colleague, Robert F. Anda MD, who skillfully designed the Adverse Childhood Experiences (ACE) Study in an epidemiologically sound manner, and whose group at CDC analyzed several hundred thousand pages of patient data to produce the data we have published.

Many of our obese patients had previously been heavy drinkers, heavy smokers, or users of illicit drugs. Of what relevance are these observations? Do they imply some unspecified innate tendency to addiction? Is addiction genetic, as some have proposed for alcoholism? Is addiction a biomedical disease, a personality disorder, or something different? Are diseases and personality disorders separable, or are they ultimately related? What does one make of the dramatic recent findings in neurobiology that seem to promise a neurochemical explanation for addiction? Why does only a small percent of persons exposed to addictive substances become compulsive users?

Although the problem of narcotic addiction has led to extensive legislative attempts at eradication, its prevalence has not abated over the past century. However, the distribution pattern of narcotic use within the population has radically changed, attracting significant political attention and governmental action. The inability to control addiction by these major, well-intended governmental efforts has drawn thoughtful and challenging commentary from a number of different viewpoints.

In our detailed study of over 17,000 middle class American adults of diverse ethnicity, we found that the compulsive use of nicotine, alcohol, and injected street drugs increases proportionally in a strong, graded dose-response manner that closely parallels the intensity of adverse life experiences during childhood. This of course supports old psychoanalytic views and is at odds with current concepts, including those of biological psychiatry, drug treatment programs, and drug eradication programs.

Our findings are disturbing to some because they imply that the basic causes of addiction lie within us and the way we treat each other, not in drug dealers or dangerous chemicals. They suggest that billions of dollars have been spent everywhere except where the answer is to be found.

Study design: Kaiser Permanente (KP) is the largest prepaid, non-profit healthcare delivery system in the United States; there are 500,000 KP members in San Diego, approximately 30% of the greater metropolitan population. We invited 26,000 consecutive adults voluntarily seeking comprehensive medical evaluation in the Department of Preventive Medicine to help us understand how events in childhood might later affect health status in adult life. Seventy percent agreed, understanding that the information obtained was anonymous and would not become part of their medical records.

Our cohort population was 80% white including Hispanic, 10% black, and 10% Asian. Their average age was 57 years; 74% had been to college and 44% had graduated from college; 49.5% were men.

In any four-year period, 81% of all adult Kaiser Health Plan members seek such medical evaluation; there is no reason to believe that selection bias is a significant factor in the Study. The Study was carried out in two waves, to allow mid-point correction if necessary. Further details of Study design are described in our initial publication.

The ACE Study compares adverse childhood experiences against adult health status, on average a half century later. The experiences studied were eight categories of adverse childhood experience commonly observed in the Weight Program. The prevalence of each category is stated in parentheses. The categories are:

1. recurrent and severe physical abuse (11%)

2. recurrent and severe emotional abuse (11%)

3. contact sexual abuse (22%)

growing up in a household with:

4. an alcoholic or drug user (25%)

5. a member being imprisoned (3%)

6. a mentally ill, chronically depressed, or institutionalized member (19%)

7. the mother being treated violently (12%)

8. both biological parents not being present (22%)

The scoring system is simple: exposure during childhood or adolescence to any category of ACE was scored as one point. Multiple exposures within a category were not scored: one alcoholic within a household counted the same as an alcoholic and a drug user; if anything, this tends to understate our findings. The ACE Score therefore can range from 0 to 8. Less than half of this middle class population had an ACE Score of 0; one in fourteen had an ACE Score of 4 or more.
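As a quick illustration of this scoring rule, the sketch below (my own, with invented category names; not the Study’s actual code) counts one point per category regardless of how many exposures fall within it:

```python
# Minimal sketch (category identifiers are my own shorthand, not the Study's):
# scoring the eight ACE categories described above. Multiple exposures within
# a single category still count as one point, so the ACE Score runs from 0 to 8.

CATEGORIES = [
    "physical_abuse", "emotional_abuse", "contact_sexual_abuse",
    "household_alcoholic_or_drug_user", "household_member_imprisoned",
    "household_mental_illness", "mother_treated_violently",
    "biological_parents_not_present",
]

def ace_score(reported_exposures):
    """reported_exposures: set of category names reported by one respondent."""
    return sum(1 for category in CATEGORIES if category in reported_exposures)

# A household with both an alcoholic and a drug user scores one point for that
# category, exactly as a household with only one of the two would.
print(ace_score({"household_alcoholic_or_drug_user", "contact_sexual_abuse"}))  # 2
```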

In retrospect, an initial design flaw was not scoring subtle issues like low-level neglect and lack of interest in a child who is otherwise the recipient of adequate physical care. This omission will not affect the interpretation of our First Wave findings, and may explain the presence of some unexpected outcomes in persons having an ACE Score of zero. Emotional neglect was studied in the Second Wave.

The ACE Study contains a prospective arm: the starting cohort is being followed forward in time to match adverse childhood experiences against current doctor office visits, emergency department visits, pharmacy costs, hospitalizations, and death. Publication of these analyses will begin soon.

Findings: Our overall findings, presented extensively in the American literature, demonstrate that:

– Adverse childhood experiences are surprisingly common, although typically concealed and unrecognized.

– ACEs still have a profound effect 50 years later, although now transformed from psychosocial experience into organic disease, social malfunction, and mental illness.

– Adverse childhood experiences are the main determinant of the health and social well being of the nation.

Our overall findings challenge conventional views, some of which are clearly defensive. They also provide opportunities for new approaches to some of our most difficult public health problems. Findings from the ACE Study provide insights into changes that are needed in pediatrics and adult medicine, which expectedly will have a significant impact on the cost and effectiveness of medical care.

Our intent here is to present our findings only as they relate to the problem of addiction, using nicotine, alcohol, and injected illicit drugs as examples of substances that are commonly viewed as ‘addicting‘. If we know why things happen and how, then we may have a new basis for prevention.

Smoking

Smoking tobacco has come under heavy opposition in the United States, particularly in southern California where the ACE Study was carried out. Whereas at one time most men and many women smoked, only a minority does so now; it is illegal to smoke in office buildings, public transportation, restaurants, bars, and in most areas of hotels.

When we studied current smokers, we found that smoking had a strong, graded relationship to adverse childhood experiences; Figure 1 illustrates this clearly. The p value for this and all other data displays is .001 or better.

This stepwise 250% increase in the likelihood of an ACE Score 6 child being a current smoker, compared to an ACE Score 0 child, is generally not known. This simple observation has profound implications that illustrate the psychoactive benefits of nicotine; this information has largely been lost in the public health onslaught against smoking but is important in understanding the intractable nature of smoking in many people.

When we match the prevalence of adult chronic bronchitis and emphysema against ACEs, we again see a strong dose-response relationship. We thereby proceed from the relationship of adverse childhood experiences to a health-risk behavior to their relationship with an organic disease. In other words, Figure 2 illustrates the conversion of emotional stressors into an organic disease, through the intermediary mechanism of an emotionally beneficial (although medically unsafe) behavior.
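To make the idea of a graded dose-response tabulation concrete, here is a small illustrative sketch (hypothetical field names and toy data; not the Study’s analysis code) that computes the prevalence of an outcome at each ACE Score:

```python
# Minimal sketch (field names are hypothetical): the kind of dose-response
# tabulation described above, i.e., the prevalence of a behavior or disease
# at each ACE Score, computed from a list of respondent records.
from collections import defaultdict

def prevalence_by_ace_score(respondents, outcome="smoker"):
    """respondents: iterable of dicts with an integer 'ace_score' and a boolean outcome field."""
    totals, cases = defaultdict(int), defaultdict(int)
    for person in respondents:
        totals[person["ace_score"]] += 1
        cases[person["ace_score"]] += bool(person[outcome])
    return {score: cases[score] / totals[score] for score in sorted(totals)}

# Toy data only; a rising sequence of prevalences across scores is what the
# paper calls a "strong, graded dose-response relationship".
data = [
    {"ace_score": 0, "smoker": False}, {"ace_score": 0, "smoker": True},
    {"ace_score": 4, "smoker": True},  {"ace_score": 4, "smoker": True},
]
print(prevalence_by_ace_score(data))  # {0: 0.5, 4: 1.0}
```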

Alcoholism

One’s own alcoholism is not easily or comfortably acknowledged; therefore, when we asked our Study cohort if they had ever considered themselves to be alcoholic, we felt that Yes answers probably understated the truth, making the effect even stronger than is shown. The relationship of self-acknowledged alcoholism to adverse childhood experiences is depicted in Figure 3. Here we see that a more than 500% increase in adult alcoholism is related, in a strong, graded manner, to adverse childhood experiences.

Injection of illegal drugs

In the United States the most commonly injected street drugs are heroin and methamphetamine. Methamphetamine has the interesting property of being closely related to amphetamine, the first antidepressant, introduced by Ciba Pharmaceuticals in 1932.

When we studied the relation of injecting illicit drugs to adverse childhood experiences, we again found a similar dose-response pattern; the likelihood of injection of street drugs increases strongly and in a graded fashion as the ACE Score increases (Figure 4). At the extremes of ACE Score, the figures for injected drug use are even more powerful. For instance, a male child with an ACE Score of 6, when compared to a male child with an ACE Score of 0, has a 46-fold (4,600%) increase in the likelihood of becoming an injection drug user sometime later in life.

Discussion

Although awareness of the hazards of smoking is now near universal and has caused a significant reduction in smoking, in recent years the prevalence of smoking has remained largely unchanged. In fact, the association between ACE Score and smoking is stronger in age cohorts born after the Surgeon General’s Report on Smoking.

Do current smokers now represent a core of individuals who have a more profound need for the psychoactive benefits of nicotine than those who have given up smoking? Our clinical experience and data from the ACE Study suggest this as a likely possibility. Certainly, there is good evidence of the psychoactive benefits of nicotine for moderating anger, anxiety, and hunger.

Alcohol is well accepted as a psychoactive agent. This obvious explanation of alcoholism is now sometimes rejected in favor of a proposed genetic causality. Certainly, alcoholism may be familial, as is the language one speaks. Our findings support an experiential and psychodynamic explanation for alcoholism, although this may well be moderated by genetic and metabolic differences between races and individuals.

Analysis of our Study data for injected drug use shows a powerful relation to ACEs. Population Attributable Risk (PAR) analysis shows that 78% of drug injection by women can be attributed to adverse childhood experiences. For men and women combined, the PAR is 67%. Moreover, this PAR has been constant in four age cohorts whose birth dates span a century; this indicates that the relation of adverse childhood experiences to illicit drug use has remained constant in spite of major changes in drug availability and social customs, and despite the introduction of drug eradication programs.
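The paper does not show the arithmetic behind these PAR figures. For reference, the standard population attributable fraction, written below as a textbook definition rather than a formula quoted from the Study, relates the prevalence of exposure and the relative risk of the outcome in the exposed:

```latex
% Standard population attributable risk fraction (textbook definition,
% not reproduced from the ACE Study itself):
\[
  \mathrm{PAR} \;=\; \frac{p_e\,(RR - 1)}{1 + p_e\,(RR - 1)}
  \;=\; \frac{P(D) - P(D \mid \text{unexposed})}{P(D)}
\]
% p_e : prevalence of the exposure (here, one or more ACEs) in the population
% RR  : relative risk of the outcome (e.g., injected drug use) given exposure
% P(D): overall probability of the outcome in the population
```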

American soldiers in Vietnam provided an important although overlooked observation. Many enlisted men in Vietnam regularly used heroin. However, only 5% of those considered addicted were still using it 10 months after their return to the United States. Treatment did not account for this high recovery rate.

Why does not everyone become addicted when they repeatedly inject a substance reputedly as addicting as heroin? If a substance like heroin is not inherently addicting to everyone, but only to a small minority of human users, what determines this selectivity? Is it the substance that is intrinsically addicting, or do life experiences actually determine its compulsive use? Surely its chemical structure remains constant.

Our findings indicate that the major factor underlying addiction is adverse childhood experiences that have not healed with time and that are overwhelmingly concealed from awareness by shame, secrecy, and social taboo.

The compulsive user appears to be one who, not having other resolutions available, unconsciously seeks relief by using materials with known psychoactive benefit, accepting the known long term risk of injecting illicit, impure chemicals. The ACE Study provides population based clinical evidence that unrecognized adverse childhood experiences are a major, if not the major, determinant of who turns to psychoactive materials and becomes ‘addicted’.

Given that the conventional concept of addiction is seriously flawed, and that we have presented strong evidence for an alternative explanation, we propose giving up our old mechanistic explanation of addiction in favor of one that explains it in terms of its psychodynamics: unconscious although understandable decisions being made to seek chemical relief from the ongoing effects of old trauma, often at the cost of accepting future health risk.

Expressions like ‘self destructive behavior’ are misleading and should be dropped because, while describing the acceptance of long term risk, they overlook the importance of the obvious short term benefits that drive the use of these substances.

This revised concept of addiction suggests new approaches to primary prevention and treatment. The current public health approach of repeated cautionary warnings has demonstrated its limitations, perhaps because the cautions do not respect the individual when they exhort change without understanding.

Adverse childhood experiences are widespread and typically unrecognized. These experiences produce neurodevelopmental and emotional damage, and impair social and school performance. By adolescence, children have sufficient skill and independence to seek relief through a small number of mechanisms, many of which have been in use since biblical times: drinking alcohol, sexual promiscuity, smoking tobacco, using psychoactive materials, and overeating. These coping devices are manifestly effective for their users, presumably through their ability to modulate the activity of various neurotransmitters. Nicotine, for instance, is a powerful substitute for the neurotransmitter acetylcholine. Not surprisingly, the level of some neurotransmitters varies genetically between individuals.

It is these coping devices, with their short term emotional benefits, that often pose long term risks leading to chronic disease; many lead to premature death. This sequence is depicted in the ACE Pyramid (Figure 5). The sequence is slow, often unstoppable, and is generally obscured by time, secrecy, and social taboo. Time does not heal in most of these instances. Because cause and effect usually lie within a family, it is understandably more comforting to demonize a chemical than to look within. We find that addiction overwhelmingly implies prior adverse life experiences.

The sequence in the ACE Pyramid supports psychoanalytic observations that addiction is primarily a consequence of adverse childhood experiences. Moreover, it does so by a population based study, thereby escaping the potential selection bias of individual case reports.

Addiction is not a brain disease, nor is it caused by chemical imbalance or genetics. Addiction is best viewed as an understandable, unconscious, compulsive use of psychoactive materials in response to abnormal prior life experiences, most of which are concealed by shame, secrecy, and social taboo.

Our findings show that childhood experiences profoundly and causally shape adult life. ‘Chemical imbalances’, whether genetically modulated or not, are the necessary intermediary mechanisms by which these causal life experiences are translated into manifest effect. It is important to distinguish between cause and mechanism. Uncertainty and confusion between the two will lead to needless polemics and misdirected efforts for preventing or treating addiction, whether on a social or an individual scale.

Our findings also make it clear that studying any one category of adverse experience, be it domestic violence, childhood sexual abuse, or another form of family dysfunction, is a conceptual error. None occurs in a vacuum; they are part of a complex systems failure: one does not grow up with an alcoholic in a household where everything else is fine.

Treatment

If we are to improve the current unhappy situation, we must, in medical settings, routinely screen at the earliest possible point for adverse childhood experiences. It is feasible and acceptable to carry out mass screening for ACEs in the context of comprehensive medical evaluation. This identifies cases early and allows treatment of basic causes rather than vainly treating the symptom of the moment. We have screened over 450,000 adult members of the Kaiser Health Plan for these eight categories of adverse childhood experiences. Our initial screening is by an expanded Review of Systems questionnaire; patients certainly do not spontaneously volunteer this information. ‘Yes’ answers are then pursued with conventional history taking: “I see that you were molested as a child. Tell me how that has affected you later in your life.”

Such screening has demonstrable value. Before we screened for adverse childhood experiences, our standardized comprehensive medical evaluation led to a 12% reduction in medical visits during the subsequent year. Later, in a pilot study, an on-site psychoanalyst conducted a one-time interview of depressed patients; this produced a 50% reduction in the utilization of medical care by this subset during the subsequent year. However, the reduction occurred only in those depressed patients who were high utilizers of medical care because of somatization disorders.

Recently, we evaluated our current approach by a neural net analysis of the records of 135,000 patients who were screened for adverse childhood experiences as part of our redesigned comprehensive medical evaluation. This entire cohort showed an overall reduction of 35% in doctor office visits during the year subsequent to evaluation.

Our experience asking these questions indicates that the magnitude of the ACE problem is so great that primary prevention is ultimately the only realistic solution. Primary prevention requires the development of a beneficial and acceptable intrusion into the closed realm of personal and family experience. Techniques for accomplishing such change en masse are yet to be developed because each of us, fearing the new and unknown as a potential crisis in self esteem, often adjusts to the status quo. However, one possible approach to primary prevention lies in the mass media: the story lines of movies and television serials present a major therapeutic opportunity, unexploited thus far, for contrasting desirable and undesirable parenting skills in various life situations.

Because addiction is experience dependent and not substance dependent, and because compulsive use of only one substance is actually uncommon, one also might restructure treatment programs to deal with underlying causes rather than to focus on substance withdrawal. We have begun using this approach with benefit in our Obesity Program, and plan to do so with some of the more conventionally accepted addictions.

Conclusion

The current concept of addiction is ill founded. Our study of the relationship of adverse childhood experiences to adult health status in over 17,000 persons shows addiction to be a readily understandable, although largely unconscious, attempt to gain relief from well concealed prior life traumas by using psychoactive materials. Because it is difficult to get enough of something that doesn’t quite work, the attempt is ultimately unsuccessful, apart from its risks. What we have shown will not surprise most psychoanalysts, although the magnitude of our observations is new, and our conclusions are sometimes vigorously challenged by other disciplines.

The evidence supporting our conclusions about the basic cause of addiction is powerful and its implications are daunting. The prevalence of adverse childhood experiences and their long term effects are clearly a major determinant of the health and social well being of the nation. This is true whether looked at from the standpoint of social costs, the economics of health care, the quality of human existence, the focus of medical treatment, or the effects of public policy.

Adverse childhood experiences are difficult issues, made more so because they strike close to home for many of us. Taking them on will create an ordeal of change, but will also provide for many the opportunity to have a better life.


Adverse Childhood Experiences Study

Wikipedia

The Adverse Childhood Experiences Study (ACE Study) is a research study conducted by the American health maintenance organization Kaiser Permanente and the Centers for Disease Control and Prevention. Participants were recruited to the study between 1995 and 1997 and have been in long-term follow up for health outcomes. The study has demonstrated an association of adverse childhood experiences (ACEs) (aka childhood trauma) with health and social problems across the lifespan. The study is frequently cited as a notable landmark in epidemiological research, and has produced many scientific articles and conference and workshop presentations that examine ACEs.

Background

In the 1980s, the dropout rate of participants at Kaiser Permanente’s obesity clinic in San Diego, California, was about 50%, despite all of the dropouts successfully losing weight under the program. Vincent Felitti, head of Kaiser Permanente’s Department of Preventive Medicine in San Diego, conducted interviews with people who had left the program, and discovered that a majority of the 286 people he interviewed had experienced childhood sexual abuse. The interview findings suggested to Felitti that weight gain might be a coping mechanism for depression, anxiety, and fear.

Felitti and Robert Anda from the Centers for Disease Control and Prevention (CDC) went on to survey childhood trauma experiences of over 17,000 Kaiser Permanente patient volunteers. The 17,337 participants were volunteers from approximately 26,000 consecutive Kaiser Permanente members. About half were female; 74.8% were white; the average age was 57; 75.2% had attended college; all had jobs and good health care, because they were members of the Kaiser health maintenance organization. Participants were asked about 10 types of childhood trauma that had been identified in earlier research literature:

– Physical abuse

– Sexual abuse

– Emotional abuse

– Physical or emotional neglect

– Exposure to domestic violence

– Household substance abuse

– Household mental illness

– Family member (attempted) suicide

– Parental separation or divorce

– Incarcerated household member

In one way or another, all ten questions speak to family dysfunction.

Findings

The ACE Pyramid represents the conceptual framework for the ACE Study, which has uncovered how adverse childhood experiences are strongly related to various risk factors for disease throughout the lifespan, according to the Centers for Disease Control and Prevention.

According to the United States’ Substance Abuse and Mental Health Services Administration, the ACE study found that:

Adverse childhood experiences are common. For example, 28% of study participants reported physical abuse and 21% reported sexual abuse. Many also reported experiencing a divorce or parental separation, or having a parent with a mental and/or substance use disorder.

Adverse childhood experiences often occur together. Almost 40% of the original sample reported two or more ACEs and 12.5% experienced four or more. Because ACEs occur in clusters, many subsequent studies have examined the cumulative effects of ACEs rather than the individual effects of each.

Adverse childhood experiences have a dose response relationship with many health problems. As researchers followed participants over time, they discovered that a person’s cumulative ACEs score has a strong, graded relationship to numerous health, social, and behavioral problems throughout their lifespan, including substance use disorders. Furthermore, many problems related to ACEs tend to be comorbid, or co-occurring.

About two-thirds of individuals reported at least one adverse childhood experience; 87% of individuals who reported one ACE reported at least one additional ACE. The number of ACEs was strongly associated with adulthood high-risk health behaviors such as smoking, alcohol and drug abuse, promiscuity, and severe obesity, and correlated with ill-health including depression, heart disease, cancer, chronic lung disease and shortened lifespan.

Compared to an ACE score of zero, having four adverse childhood experiences was associated with a seven-fold (700%) increase in alcoholism, a doubling of the risk of being diagnosed with cancer, and a four-fold increase in emphysema; an ACE score above six was associated with a 30-fold (3,000%) increase in attempted suicides.

The ACE study’s results suggest that maltreatment and household dysfunction in childhood contribute to health problems decades later. These include chronic diseases, such as heart disease, cancer, stroke, and diabetes, that are the most common causes of death and disability in the United States. The study’s findings, while relating to a specific population within the United States, might reasonably be assumed to reflect similar trends in other parts of the world, according to the World Health Organization. The study was initially published in the American Journal of Preventive Medicine.

Subsequent surveys

The ACE Study has produced more than 50 articles that look at the prevalence and consequences of ACEs. It has been influential in several areas. Subsequent studies have confirmed the high frequency of adverse childhood experiences, or found even higher incidences in urban or youth populations.

The original study questions have been used to develop a 10-item screening questionnaire. Numerous subsequent surveys have confirmed that adverse childhood experiences are frequent.

The CDC runs the Behavioral Risk Factor Surveillance System (BRFSS), an annual survey conducted by individual state health departments in all 50 states. An expanded survey instrument in several states found each state to be similar. Some states have collected additional local data. Adverse childhood experiences were even more frequent in studies in urban Philadelphia, and in a survey of young mothers (mostly younger than 19). Internationally, an Adverse Childhood Experiences International Questionnaire (ACE-IQ) is undergoing validation testing. Surveys of adverse childhood experiences have been conducted in Romania, the Czech Republic, the Republic of Macedonia, Norway, the Philippines, the United Kingdom, Canada, China and Jordan.

Child Trends used data from the 2011/12 National Survey of Children’s Health (NSCH) to analyze ACEs prevalence in children nationally, and by state. The NSCH’s list of “adverse family experiences” includes a measure of economic hardship and shows that this is the most common ACE reported nationally.

Neurobiology of Stress

Cognitive and neuroscience researchers have examined possible mechanisms that might explain the negative consequences of adverse childhood experiences on adult health. Adverse childhood experiences can alter the structural development of neural networks and the biochemistry of the neuroendocrine system, and may have long-term effects on the body, including speeding up the processes of disease and aging and compromising immune function.

Allostatic load refers to the adaptive processes that maintain homeostasis during times of toxic stress through the production of mediators such as adrenalin, cortisol and other chemical messengers. According to researcher Bruce S. McEwen, who coined the term:

“These mediators of the stress response promote adaptation in the aftermath of acute stress, but they also contribute to allostatic overload, the wear and tear on the body and brain that result from being ‘stressed out.’ This conceptual framework has created a need to know how to improve the efficiency of the adaptive response to stressors while minimizing overactivity of the same systems, since such overactivity results in many of the common diseases of modern life. This framework has also helped to demystify the biology of stress by emphasizing the protective as well as the damaging effects of the body’s attempts to cope with the challenges known as stressors.”

Additionally, epigenetic transmission may occur due to stress during pregnancy or during interactions between mother and newborns. Maternal stress, depression, and exposure to partner violence have all been shown to have epigenetic effects on infants.

Implementing practices

As knowledge about the prevalence and consequences of adverse childhood experiences increases, trauma informed and resilience building practices based on the research are being implemented in communities, education, public health departments, social services, faith-based organizations and criminal justice. A few states are considering legislation.

Communities

As knowledge about the prevalence and consequences of ACEs increases, more communities seek to integrate trauma informed and resilience building practices into their agencies and systems. Tarpon Springs, Florida, became the first trauma informed community in 2011. Trauma informed initiatives in Tarpon Springs include trauma awareness training for the local housing authority, changes in programs for ex-offenders, and new approaches to educating students with learning difficulties.

Education

Children who are exposed to adverse childhood experiences may become overloaded with stress hormones, leaving them in a constant state of arousal and alertness to environmental and relational threats. They may therefore have difficulty focusing on school work and consolidating new memories, making it harder for them to learn at school.

Approximately one in three or four children has experienced significant ACEs. A study by the Area Health Education Center of Washington State University found that students with at least three ACEs are three times as likely to experience academic failure, six times as likely to have behavioral problems, and five times as likely to have attendance problems. These students may have trouble trusting teachers and other adults, and may have difficulty creating and maintaining relationships.

The trauma informed school movement aims to train teachers and staff to help children self-regulate, to support families whose difficulties lie behind what is a child’s normal response to trauma, and to respond to that behavior with understanding rather than simply jumping to punishment. It also seeks to provide behavioral consequences that will not retraumatize a child. Punishment is often ineffective, and better results can often be achieved with positive reinforcement. Out of school suspensions can be particularly bad for students with difficult home lives; forcing students to remain at home may increase their distrust of adults.

Trauma sensitive, or compassionate, schooling has become increasingly popular in Washington, Massachusetts, and California. Lincoln High School in Walla Walla, Washington, adapted a trauma informed approach to discipline and reduced its suspensions by 85%. Rather than standard punishment, students are taught to recognize their reaction to stress and learn to control it.

Spokane, Washington, schools conducted a research study that demonstrated that academic risk was correlated with students’ experiences of traumatic events known to their teachers. The same school district has begun a study to test the impact of trauma informed intervention programs, in an attempt to reduce the impact of toxic stress.

In Brockton, Massachusetts, a community wide meeting led to a trauma informed approach being adopted by the Brockton School District. So far, all of the district’s elementary schools have implemented trauma informed improvement plans, and there are plans to do the same in the middle school and high school. About one-fifth of the district teachers have participated in a course on teaching traumatized students. Police alert schools when they have arrested someone or visited a student’s address.

Massachusetts state legislation has sought to require all schools to develop plans to create “safe and supportive schools”.

At El Dorado, an elementary school in San Francisco, California, trauma-informed practices were associated with a suspension reduction of 89%.

Social services

Social service providers, including welfare systems, housing authorities, homeless shelters, and domestic violence centers are adopting trauma informed approaches that help to prevent ACEs or minimize their impact. Utilizing tools that screen for trauma can help a social service worker direct their clients to interventions that meet their specific needs. Trauma informed practices can also help social service providers look at how trauma impacts the whole family.

Trauma informed approaches can improve child welfare services by 1) openly discussing trauma and 2) addressing parental trauma.

The New Hampshire Division for Children Youth and Families (DCYF) is taking a trauma informed approach to their foster care services by educating staff about childhood trauma, screening children entering foster care for trauma, using trauma informed language to mitigate further traumatization, mentoring birth parents and involving them in collaborative parenting, and training foster parents to be trauma informed.

In Albany, New York, the HEARTS Initiative has led to local organizations developing trauma informed practice. Senior Hope Inc., an organization serving adults over the age of 50, began implementing the 10 question ACE survey and talking with their clients about childhood trauma. The LaSalle School, which serves orphaned and abandoned boys, began looking at delinquent boys from a trauma informed perspective and began administering the ACE questionnaire to their clients.

Housing authorities are also becoming trauma informed. Supportive housing can sometimes recreate the control and power dynamics associated with clients’ early trauma. This can be reduced through trauma informed practices, such as training staff to be respectful of clients’ space by scheduling appointments and not letting themselves into clients’ private spaces, and understanding that an aggressive response may be a trauma related coping strategy.

The housing authority in Tarpon Springs provided trauma awareness training to staff so they could better understand and react to their clients’ stress and anger resulting from poor employment, health, and housing.

A survey of 200 homeless individuals in California and New York demonstrated that more than 50% had experienced at least four ACEs. In Petaluma, California, the Committee on the Shelterless (COTS) uses a trauma informed approach called Restorative Integral Support (RIS) to reduce intergenerational homelessness. RIS increases awareness of and knowledge about ACEs, and calls on staff to be compassionate and focus on the whole person. COTS now consider themselves ACE informed and focus on resiliency and recovery.

Health care services

Screening for or talking about ACEs with parents and children can help to foster healthy physical and psychological development and can help doctors understand the circumstances that children and their parents are facing. By screening for ACEs in children, pediatric doctors and nurses can better understand behavioral problems.

Some doctors have questioned whether some behaviors resulting in attention deficit hyperactivity disorder (ADHD) diagnoses are in fact reactions to trauma. Children who have experienced four or more ACEs are three times as likely to take ADHD medication when compared with children with fewer than four ACEs.

Screening parents for their ACEs allows doctors to provide the appropriate support to parents who have experienced trauma, helping them to build resilience, foster attachment with their children, and prevent a family cycle of ACEs. Trauma informed pediatric care also allows doctors to develop a more trusting relationship with parents, opening the lines of communication.

At Montefiore Medical Center, ACE screenings will soon be implemented in 22 pediatric clinics. In a pilot program, any child with at least one parent who has an ACE score of four or higher is offered enrollment and receives a variety of services. For families enrolled in the program, parents report fewer ER visits and children have healthier emotional and social development, compared with those not enrolled.

Public health

As of 2015, most American doctors do not use ACE surveys to assess patients. Objections include the lack of randomized controlled trials showing that such surveys can be used to actually improve health outcomes, the absence of standard protocols for how to use the information gathered, and the concern that revisiting negative childhood experiences could be emotionally traumatic. Other obstacles to adoption are that the technique is not taught in medical schools, is not billable, and involves conversations that make some doctors personally uncomfortable.

Some public health centers see ACEs as an important way, especially for mothers and children, to target health interventions for individuals during sensitive periods of development early in their life, or even in utero.

For example, the Jefferson County Public Health clinic in Port Townsend, Washington, now screens pregnant women, their partners, parents of children with special needs, and parents involved with CPS for ACEs. With regard to patient counseling, the clinic treats ACEs like other health risks such as smoking or alcohol consumption.

Resiliency

Resilience is not a trait that people either have or do not have. It involves behaviors, thoughts and actions that can be learned and developed in anyone.

According to the American Psychological Association (2017), resilience is the ability to adapt in the face of adversity, tragedy, threats or significant stress, such as family and relationship problems, serious health problems, or workplace and financial stressors. Resilience means bouncing back from difficult experiences in life. There is nothing extraordinary about resilience; people often demonstrate it in times of adversity. However, being resilient does not mean that a person will not experience difficulty or distress, as emotional pain is common when people suffer a major adversity or trauma. In fact, the path to resilience often involves considerable emotional pain.

Resilience is regarded as a protective factor. It can benefit children who have been exposed to trauma and have a higher ACE score, and children who learn to develop it can use resilience to build themselves back up after trauma. A child who has not developed resilience will have a harder time coping with the challenges of adult life. People and children who are resilient embrace the thinking that adverse experiences do not define who they are. They can also look back at traumatic events in their lives and try to reframe them in a constructive way. They are able to find strength in their struggle and ultimately overcome the challenges and adversity they faced in childhood.

In childhood, resiliency can come from having a caring adult in a child’s life. Resiliency can also come from meaningful moments, such as an academic achievement or praise from teachers or mentors. In adulthood, resilience is closely tied to self-care: if you are taking care of yourself and taking the necessary time to reflect and build on your experiences, you will have a greater capacity for taking care of others.

Adults can also use this skill to counteract some of the trauma they have experienced. Self-care can mean a variety of things. One example of self-care is knowing when you are beginning to feel burned out and then taking a step back to rest and recuperate. Another component of self-care is practicing mindfulness or engaging in some form of meditation. If you are able to take the time to reflect on your experiences, you will be able to build greater resilience moving forward.

All of these strategies together can help to build resilience and counteract some of the childhood trauma that was experienced. With them, children can begin to heal after adverse childhood experiences. This aspect of resiliency is important because it enables people to find hope despite a traumatic past.

When first looking at the ACE study and the correlations that come with having four or more traumas, it is easy to feel defeated. It is even possible for this information to push people toward unhealthy coping behaviors. Introducing resilience, and the data supporting its positive outcomes with regard to trauma, offers a light at the end of the tunnel. It gives people the opportunity to be proactive instead of reactive when addressing the traumas in their past.

Criminal justice

Since research suggests that incarcerated individuals are much more likely to have been exposed to violence and suffer from posttraumatic stress disorder (PTSD), a trauma informed approach may better help to address some of these criminogenic risk factors and can create a less traumatizing criminal justice experience. Programs, like Seeking Safety, are often used to help individuals in the criminal justice system learn how to better cope with trauma, PTSD, and substance abuse.

Juvenile courts are better able to deter children from crime and delinquency when they understand the trauma many of these children have experienced.

The criminal justice system itself can also retraumatize individuals. This can be prevented by creating safer facilities where correctional and police officers are properly trained to keep incidents from escalating. Partnerships between police and mental health providers can also reduce the possible traumatizing effects of police intervention and help provide families with the proper mental health and social services.

The Women’s Community Correctional Center of Hawaii began a Trauma Informed Care Initiative that aims to train all employees to be aware and sensitive to trauma, to screen all women in their facility for trauma, to assess those who have experienced trauma, and begin providing trauma informed mental health care to those women identified.

Faith based organizations

Some faith based organizations offer spiritual services in response to traumas identified by ACE surveys. For example, the founder of ACE Overcomers combined the epidemiology of ACEs, the neurobiology of toxic stress and principles of the Christian Bible into a workbook and 12-week course used by clergy in several states.

Another example of this integration of faith based principles and ACEs science is the work of Intermountain Residential’s chaplain, who has created a curriculum called “Bruised Reeds and Smoldering Wicks”, a six-week study meant to introduce the science behind ACEs and early childhood trauma within the context of Christian theology and ministry practice. Published in 2017, it has been used by ministry professionals in 30 states, the District of Columbia, and two Canadian provinces.

Faith based organizations also participate in the online group ACES Connection Network.

The Faith and Health Connection Ministry also applies principles of Christian theology to address childhood traumas.

Legislation

Vermont has passed a bill, Act 43 (H.508), an act relating to building resilience for individuals experiencing adverse childhood experiences. The act acknowledges the lifespan effects of ACEs on health outcomes, seeks wide use of ACE screening by health providers, and aims to educate medical and health school students about ACEs.

“Vermont first state to propose bill to screen for ACEs in health care”, ACEs Connection, 18 March 2014

Earlier, Washington State passed legislation to set up a public-private partnership to further the community development of trauma informed and resilience building practices already under way in that state, but the effort was not adequately funded.

On August 18, 2014, California lawmakers unanimously passed ACR No. 155, which encourages policies reducing children’s exposure to adverse experiences.

Recent Massachusetts legislation supports a trauma informed school movement as part of The Reduction of Gun Violence bill (No. 4376). This bill aims to create “safe and supportive schools” through services and initiatives focused on physical, social, and emotional safety.

THE RESTLESS WAVE. Good Times, Just Causes, Great Fights and Other Appreciations – John McCain.

Tribute to a decent man, an honest man of honour. Even though he backed the Iraq disaster, and is a Republican.


Many an old geezer like me reaches his last years wishing he had lived more in the moment, had savored his days as they happened. Not me, friends. Not me. I have loved my life. All of it.

ACCUMULATED MEMORIES

TEARS WELLED IN MY EYES as I watched the old men march. It was a poignant sight, but not an unfamiliar one, and I was surprised at my reaction. I have attended Memorial Day and Veterans Day parades in dozens of American cities, watched aging combat veterans, heads high, shoulders back, summon memories of their service and pay homage to friends they had lost. I had always kept my composure.

It was the fiftieth anniversary of Japan’s surprise attack on Pearl Harbor and I had been invited to the official commemoration. The President of the United States, George H. W. Bush, was there and would give an emotional, memorable address at the USS Arizona memorial. I assumed that I, a first term senator, had been included with more important dignitaries because that famous ship was named for the state I represent. Or perhaps I had been invited because I’m a Navy veteran, the son and grandson of admirals, and this was a Navy show.

My best friend from the Naval Academy, Chuck Larson, acted as host and master of ceremonies for the proceedings at the Arizona. Chuck had a far more distinguished naval career than I had, continuing a divergence that had begun in our first year at the Academy, where he had graduated at the top of our class and I very near the bottom. We had gone through flight training together, and remained the closest of friends. Chuck had been an aviator, then a submariner and a military aide to President Richard Nixon. He had been a rear admiral at forty three, one of the youngest officers in Navy history to make that rank. He was the only person to serve as superintendent of the U.S. Naval Academy twice. On the fiftieth anniversary of Pearl Harbor, he had four stars and was commander in chief of all U.S. forces in the Pacific, CINCPAC, the largest operational command in the U.S. military, my father’s old command, headquartered in Hawaii.

The Arizona ceremony was the main event of the weekend. The President would also pay a visit to the battleship USS Missouri, as would I. She had come from operations in the Persian Gulf to join in the Remembrance Day tribute. It was her last mission before she would be decommissioned. The war that had begun for America in Pearl Harbor had ended on her deck. My grandfather had been there, standing in the first line of senior officers observing the surrender ceremony.

My father, a submarine skipper, was waiting in Tokyo Harbor to meet him for, as it turned out, the last time. They lunched together that afternoon in the wardroom of a submarine tender. When they parted that day my grandfather began his journey home to Coronado, California. He died of a heart attack the day after he arrived, during a welcome home party my grandmother had arranged for him. He was only sixty-one years old, but looked decades older, aged beyond his years from “riotous living,” as he called it, and the strain of the war. My father, who admired his father above all other men, was inconsolable. Many years later he recalled in detail their final reunion and the last words his father spoke to him, “Son, there is no greater thing than to die for the country and principles that you believe in.”

The day before the ceremony on the Arizona I had joined a small group of more senior senators and combat veterans, among them Senate Republican leader Bob Dole and the senior senator from Hawaii, Dan Inouye. Bob had served in the Army’s 10th Mountain Division. A few weeks before the end of the war in Europe, in Italy’s Apennine Mountains, he was grievously wounded by a German machine gun while trying to rescue his fallen radio operator. His wounds cost him the use of his right arm, and much of the feeling in his left. Around the same time, Dan had led an assault on a German bunker in Tuscany. He was shot in the stomach and a grenade severed his right arm. He kept fighting, and would receive the Medal of Honor for his valor. Bob and Dan had been friends longer than either had been a senator. They had met while recuperating from their wounds in Percy Jones Army Hospital in Battle Creek, Michigan, along with another future senator, Phil Hart, who had been wounded on D-Day.

That day, we watched two thousand Pearl Harbor survivors march to honor their fallen. Most appeared to be in their seventies. Neither the informality of their attire nor the falling rain nor the cheers of the crowd along the parade route detracted from their dignified comportment. A few were unable to walk and rode in Army trucks. All of a sudden I felt overwhelmed. Maybe it was the effect of their straight faces and erect bearing evoking such a hard-won dignity; maybe it was the men riding in trucks managing to match the poise of the marchers; maybe it was the way they turned their heads toward us as they passed and the way Bob and Dan returned their attention. A little embarrassed by my reaction, I confessed to Dan, “I don’t know what comes over me these days. I guess I’m getting sentimental with age.” Without turning his gaze from the marchers, he answered me quietly, “Accumulated memories.”

That was it. Accumulated memories. I had reached an age when I had begun to feel the weight of them. Memories evoked by a connection to someone or to an occasion, by a familiar story or turn of phrase or song. Memories of intense experiences, of family and friends from younger days, of causes fought, some worth it, others not so much, some won, some lost, of adventures bigger than those imagined as a child, memories of a life that even then had seemed to me so lucky and unlikely, and of the abbreviated lives of friends who had been braver but not as fortunate, memories brought to mind by veterans of a war I had not fought in, but I knew something of what it had cost them, and what it had given them.

I had been a boy of five, playing in the front yard of my family’s home in New London, Connecticut, when a black sedan pulled up and a Navy officer rolled down the window and shouted to my father, “Jack, the Japs bombed Pearl Harbor.” The news and the sight of my father leaving in that sedan is one of my most powerful memories, the only memory of my father during the war I’ve managed to retain all these years. I know he didn’t go to sea immediately and I know we were briefly reunited with him when he was reassigned from a submarine command in the Atlantic to another in the Pacific theater. But I don’t recall seeing my father again after he got into that car until the war was over, and he had lost his father and many of his friends. He returned changed in the way most combat veterans are, more self-possessed and serious. I understood the journey the Pearl Harbor veterans had made.

That empathy stirred by my own memories had made me weep.

I feel the weight of memories even more now, of course. I’ve accumulated so many more of them. I was in my mid-fifties in 1991. I’m eighty-one now, twenty years older than my grandfather had been when he died, and more than ten years older than my father when we buried him, as it happened, on the day I left the Navy, a year before I was elected to my first term in Congress.

A quarter century’s worth of new memories, of new causes, won and lost, more fights, new friendships and a few new enemies, of more mistakes made and new lessons learned, of new experiences that enriched my life so far beyond my wildest dreams that I feel even luckier than I did in 1991.

Of course, the longer we live, the more we lose, too, and many people who figure prominently in my memories have left the scene. Friends from prison have passed away. Bob Craner, my closest confidant in prison, the man who got me back on my feet after the Vietnamese forced me to make a false confession and propaganda statement, died many years ago. Bill Lawrence, my exemplary senior ranking officer, died in 2005. Ned Shuman, whose good cheer was a tonic in the worst of times, is gone now, too. And Bud Day, the toughest man I ever knew, veteran of three wars, who wouldn’t let me die in those hard first months of my captivity, left us four years ago.

Close Senate friends have passed as well, including brave Dan Inouye. My pal Fred Thompson, whose company was a delight, died two years ago. Lion of the Senate Ted Kennedy, with whom I worked and fought and joked in some of the more memorable moments of my time in the Senate, succumbed in 2009 to the cancer that I now have. Ted and I shared the conviction that a fight not joined is a fight not enjoyed. We had some fierce ones in our time, fierce, worthwhile, and fun. I loved every minute of them.

Other friends have left, too. I’m tempted to say, before their time, but that isn’t the truth. What God and good luck provide we must accept with gratitude. Our time is our time. It’s up to us to make the most of it, make it amount to more than the sum of our days. God knows, my dear friend Chuck Larson, whom I had looked up to since we were boys, made the most of his. Leukemia killed him in 2014. He was laid to rest in the Naval Academy’s cemetery on Hospital Point, a beautiful spot overlooking the Severn River, near where our paths first crossed.

I’ve been given more years than many, and had enough narrow escapes along the way to make me appreciate them, not just in memory, but while I lived them. Many an old geezer like me reaches his last years wishing he had lived more in the moment, had savored his days as they happened. Not me, friends. Not me. I have loved my life. All of it. I’ve wasted more than a few days on pursuits that might not have proved as important as they seemed to me at the time. Some things didn’t work out the way I hoped they would. I had difficult moments and a few disappointments. But, by God, I enjoyed it. Every damn day of it. I have lived with a will. I served a purpose greater than my own pleasure or advantage, but I meant to enjoy the experience, and I did. I meant to be amazed and excited and encouraged and useful, and I was.

All that is attributable to one thing more than any other. I have been restless all my life, even now, as time grows precious. America and the voters of Arizona have let me exercise my restlessness in their service. I had the great good fortune to spend sixty years in the employ of our country, defending our country’s security, advancing our country’s ideals, supporting our country’s indispensable contributions to the progress of humanity. It has not been perfect service, to be sure, and there were times when the country might have benefited from a little less of my help. But I’ve tried to deserve the privilege, and I have been repaid a thousand times over with adventure and discoveries, with good company, and with the satisfaction of serving something more important than myself, of being a bit player in the story of America, and the history we made. And I am so very grateful.

I share that sentiment with another naval aviator, the good man and patriot we elected our forty-first President, George Herbert Walker Bush. He paid tribute twenty-six years ago to those fellow patriots whose service to America was not repaid with a long life of achievement and adventure.

We had assembled at the Arizona memorial around seven o’clock the morning of December 7, 1991. President and Mrs. Bush and their party arrived shortly after. Chuck opened the proceedings and introduced a Navy chaplain to give an invocation. At 7:55, fifty years to the minute since the attack on Pearl Harbor had commenced, the cruiser USS Chosin crossed in front of the memorial and sounded its horn as its officers and crew standing along its rails saluted. The minute of silence we observed ended when four F-15 fighters roared overhead, and one pulled up and away in the missing man formation. A bugler sounded attention at eight o’clock, the colors were raised, and the national anthem sung. President and Mrs. Bush dropped flower wreaths into the well of the memorial.

Secretary of Defense Dick Cheney introduced retired USN Captain Donald Ross, who had been a warrant officer on the USS Nevada, one of eight battleships stationed at Pearl Harbor when the Japanese attacked. He was the senior engineer on the ship and managed to get her under way in the firestorm, the only one of the battleships to do so. The Nevada was struck by six bombs and a torpedo. Ross lost consciousness twice from the smoke and was twice resuscitated. He was blinded by an explosion, but he kept the ship steaming long enough to run her aground where she wouldn’t block the entrance to the harbor. He received the Medal of Honor for his valor. He was eighty-one years old in 1991, slight and stooped in his Navy whites, and walked with a cane. He would die the next spring. But he was exuberant that morning and emotional as he introduced his fellow World War II veteran, almost shouting, “Ladies and Gentlemen, I give you the President of the United States.”

The President read from a printed text. He would give another, longer speech later that day about America’s leadership of the postwar world, and the international order we had superintended for nearly fifty years. But his speech at the memorial was devoted to the Americans who had fought and perished there at the dawn of the American century. “The heroes of the harbor,” he called them.

As he closed the speech, his voice grew thick with emotion. I think he must have felt not only the sacrifices made at Pearl Harbor, but the weight of his own memories, the memories of friends he had lost in the war, when he was the youngest aviator in the Navy.

“Look at the water here, clear and quiet,” he directed, “bidding us to sum up and remember. One day, in what now seems another lifetime, it wrapped its arms around the finest sons any nation could ever have, and it carried them to a better world.” He paused and fussed with the pages of his speech, struggling to compose himself before delivering the last line of the speech. “May God bless them, and may God bless America, the most wondrous land on earth.”

The most wondrous land on earth, indeed. What a privilege it is to serve this big, boisterous, brawling, intemperate, striving, daring, beautiful, bountiful, brave, magnificent country. With all our flaws, all our mistakes, with all the frailties of human nature as much on display as our virtues, with all the rancor and anger of our politics, we are blessed. We are living in the land of the free, the land where anything is possible, the land of the immigrant’s dream, the land with the storied past forgotten in the rush to the imagined future, the land that repairs and reinvents itself, the land where a person can escape the consequences of a self-centered youth and know the satisfaction of sacrificing for an ideal, where you can go from aimless rebellion to a noble cause, and from the bottom of your class to your party’s nomination for President.

We are blessed, and in turn, we have been a blessing to humanity. The world order we helped build from the ashes of world war, and that we defend to this day, has liberated more people from tyranny and poverty than ever before in history. This wondrous land shared its treasures and ideals and shed its blood to help make another, better world. And as we did we made our own civilization more just, freer, more accomplished and prosperous than the America that existed when I watched my father go off to war.

We have made mistakes. We haven’t always used our power wisely. We have abused it sometimes and we’ve been arrogant. But, as often as not, we recognized those wrongs, debated them openly, and tried to do better. And the good we have done for humanity surpasses the damage caused by our errors. We have sought to make the world more stable and secure, not just our own society. We have advanced norms and rules of international relations that have benefited all. We have stood up to tyrants for mistreating their people even when they didn’t threaten us, not always, but often. We don’t steal other people’s wealth. We don’t take their land. We don’t build walls to freedom and opportunity. We tear them down.

To fear the world we have organized and led for three-quarters of a century, to abandon the ideals we have advanced around the globe, to refuse the obligations of international leadership for the sake of some half-baked, spurious nationalism cooked up by people who would rather find scapegoats than solve problems is unpatriotic. American nationalism isn’t the same as in other countries. It isn’t nativist or imperial or xenophobic, or it shouldn’t be. Those attachments belong with other tired dogmas that Americans consigned to the ash heap of history.

We live in a land made from ideals, not blood and soil. We are custodians of those ideals at home, and their champion abroad. We have done great good in the world because we believed our ideals are the natural aspiration of all mankind, and that the principles, rules, and alliances of the international order we superintended would improve the security and prosperity of all who joined with us. That leadership has had its costs, but we have become incomparably powerful and wealthy as well. We have a moral obligation to continue in our just cause, and we would bring more than shame on ourselves if we let other powers assume our leadership role, powers that reject our values and resent our influence. We will not thrive in a world where our leadership and ideals are absent. We wouldn’t deserve to.

I have served that cause all my adult life. I haven’t always served it well. I haven’t even always appreciated that I was serving it. But among the few compensations of old age is the acuity of hindsight. I was part of something bigger than myself that drew me along in its wake even when I was diverted by personal interests. I was, knowingly or not, along for the ride as America made the future better than the past. Yes, I have enjoyed it, all of it, and I would love for it to continue. A fight not joined is a fight not enjoyed, and I wouldn’t mind another scrap or two for a good cause before I’m a memory. Who knows, maybe I’ll get another round. And maybe I won’t. So be it. I’ve lived in this wondrous land for most of eight decades, and I’ve had enough good fights and good company in her service to satisfy even my restless nature, a few of which I relate in the pages that follow.

Who am I to complain? I’m the luckiest man on earth.

John McCain, Cornville, Arizona

CHAPTER ONE

No Surrender

ON AN ORDINARY NOVEMBER MORNING in Phoenix, sunny and warm, Cindy and I walked the two blocks from our building to the nearest Starbucks. We stood in line with other early risers, and made our purchases. We walked back to our condo, coffees in hand, and got ready to drive to our place in Northern Arizona, where we go to rest and relax in good times and bad. Friends would join us there for a few days, and our conversations would inevitably return now and again to the intense experience we had just shared. But whenever it looked like we were about to dwell at length on that subject, I would steer the conversation in another direction, toward the future. And that morning in Phoenix, we were left entirely to ourselves, just another couple in need of their morning coffee, which made for a welcome change.

The night before, I had conceded the election to the man who had defeated me and would be our forty-fourth President, Barack Obama. After I had left the stage, Mark Hughes, the agent in charge of my Secret Service detail, started to brief me on the schedule and security procedures for the trip north. The Secret Service customarily continues to protect defeated presidential candidates for a little while after the election. I suppose they worry some fool might think the losing candidate deserved a more severe sanction than disappointment. I thought it unlikely, and while I regretted losing the election, I did not expect to regret recovering autonomy over decisions about where I would go and when and with whom. Wherever the hell I wanted, I thought to myself, and the notion brightened a day that might otherwise have been spent contemplating “if only.”

If only we had done this. If only we hadn’t done that. I intended to leave those questions to reporters and academics. They were unproductive. I still had a job, a job I enjoyed and looked forward to resuming. And, as I said, I looked forward to resuming the routine habits of a man without a security detail: opening doors, driving my car, walking to a coffee shop. Being at liberty. Having spent more than five years of my life in prison, I tend to appreciate even the more mundane exercises of my freedom more than others might.

Mark Hughes had done a fine job supervising my protection, as had Billy Callahan, the agent in charge of my other Secret Service detail, which alternated weeks with Mark’s crew. All the agents protecting Cindy and me, and my running mate, Sarah Palin, and her family, had been consummate professionals and had at my repeated requests exercised as much restraint as circumstances and good sense allowed. I was appreciative and grateful. But that didn’t stop me from taking a little pleasure in interrupting Mark’s briefing.

“Mark, my friend, you guys have been great, and I appreciate all your concern and hard work. I’ve enjoyed getting to know you. But tomorrow, I want all of you to go home to your families like I’m going home to mine. I’d appreciate a ride home tonight. Then we’ll say goodbye, and we probably won’t see each other again.”

Mark was accustomed to my chafing at restrictions imposed on my independence, and did not argue. He smiled, and said, “Yes, sir.” I liked him all the more for it. We said goodbye that night. And the next morning, Cindy and I walked to Starbucks without any more protection than a little sunscreen. An hour or so after that, I was happily driving north on Interstate 17, a free man at last.

It had been an exhilarating and exhausting two years. And though almost every defeated candidate insists the experience was wonderful and satisfying, I imagine I was only slightly less pleased that it was over than was President-elect Obama. Don’t get me wrong, I fought as hard as I could to win, and I really don’t enjoy losing. We had triumphant moments, and deeply touching experiences in the campaign. We had disappointing experiences as well, and days that were blurred by adrenaline fueled activity and stress. It was like drinking from a firehose all day, every day, especially in the months between the party conventions and Election Day. But it had been for the most part a wonderful experience.

While some might find it odd, the part I had enjoyed the most were the days when I was again an underdog for the Republican nomination. I’m not sure why, but my enjoyment of a fight of any kind is inversely proportional to the odds of winning it. And in July of 2007 the odds that I would win the Republican nomination for President were starting to look pretty long.

I had formally announced my candidacy in April, but the campaign had been under way for months before then. I had started out as the presumed front-runner for the nomination, and my friend Hillary Clinton, whom I had gotten to know and like while serving with her on the Armed Services Committee, was the front-runner for the Democratic nomination. Her status would last a bit longer than mine. We had built a front-runner’s campaign with a large and experienced staff and a big budget. Much too big, it turned out. We were spending a lot more than we were raising. I’m not the most prodigious fund-raiser, to be sure. I don’t mind asking people for money, but I don’t really enjoy it, either, and I certainly wasn’t as good at it as was my principal rival for the nomination, Governor Mitt Romney. I suppose it didn’t help matters with many donors that I was the leading Republican proponent of limiting campaign donations or that I was inextricably tied to the deeply unpopular surge in Iraq. My support for comprehensive immigration reform was proving to be a liability as well, although majorities of Americans then and now support its provisions. I had sponsored an immigration bill that year with Ted Kennedy. The bill was as unpopular with some conservatives as Ted was. Some of the other candidates, particularly Mitt, were already making an issue of it, and it was starting to generate grassroots opposition to my candidacy.

Whatever the reasons for my failure to outraise the competition, our spending should have been more in line with our financing. We shouldn’t have assembled an operation with as big a payroll and expenses as we had until my front-runner status was earned by winning primaries. In the spring and early summer of 2007 it was based on not much more than the fact that I had been the runner-up for the nomination in 2000, and was at the moment better known nationally than Governor Romney.

I was, to put it mildly, unhappy with my situation and considering what to do about it when I left for an overseas trip in early July. The whole thing just didn’t feel right to me. I felt as if I was running someone else’s campaign or pretending my campaign was something it wasn’t or shouldn’t have been. I had enjoyed my experiences as the underdog in the 2000 Republican nomination race partly because hardly anyone expected me to win and I felt as if I had nothing to lose. Then we caught fire in the fall of 1999, won the New Hampshire primary in a landslide, and had a rocket ride for a couple months, losing South Carolina, winning Michigan, before crashing in the Super Tuesday primaries. I left the race having outperformed expectations, possessing a much bigger national reputation, increased influence in the Senate, and an abundance of truly wonderful memories. Not bad for a defeat.

Before I made the decision to run again, I had nagging doubts that I mentioned frequently to aides that we weren’t likely to bottle lightning twice. Compounding my concern over spending and the direction of the campaign in 2007 were my concerns about the surge in Iraq, which preoccupied me more than the campaign did. There had not been many advocates in Congress, even among Republicans, for President George W. Bush’s decision to surge troops to Iraq to run a counterinsurgency under the command of General David Petraeus.

The war had been almost lost in 2006. A Sunni insurgency had grown much stronger as it claimed more territory, and more Iraqis and foreign fighters were joining its ranks. Shia militias were working with Iran to terrorize Sunnis and, when the spirit moved them, to kill Americans. They operated practically unfettered in some neighborhoods. We were obviously losing ground and were at risk of losing the war. That reality wasn’t altered by repeated assurances from senior commanders in Baghdad and from Defense Secretary Donald Rumsfeld that the American effort in Iraq was meeting all its targets (principally, the number of Iraqi troops trained, which proved as useless a measure of success as body counts had been in Vietnam). And a majority of the American people, which grew larger by the day, wanted us to get out.

I had been advocating for a counterinsurgency campaign in Iraq since August 2003. I had lost all confidence in Secretary Rumsfeld’s willingness to change what clearly wasn’t working, and I said so. To my and many others’ relief, President Bush asked for his resignation in November 2006. Knowing the President was actively considering the idea, I had urged for months that we surge thousands more troops to Iraq. I knew it was a decision that some officials in his administration opposed, that Democrats and more than a few Republicans would strongly criticize, and that most of the American people would not agree with. They had already punished Republicans for Iraq in the 2006 midterm election. They would likely want to rebuke us again in 2008, and that probability would loom larger as casualties spiked in the first months of the surge.

President Bush knew all this as well or better than I did. Good man that he is, I knew he was deeply pained by the loss of Americans he had sent to Iraq. He knew that if he decided to order the surge the situation would get worse and more Americans would die before it got better. He knew there was no guarantee it would succeed.

We had gone into Iraq based on faulty intelligence about weapons of mass destruction, and destroyed the odious Saddam Hussein regime. Bad tactics, a flawed strategy, and bad leadership in the highest ranks of uniformed and civilian defense leadership had allowed violent forces unleashed by Saddam’s destruction to turn Iraq into hell on earth, and threaten the stability of the Middle East. The situation was dire, and the price that we had already paid in blood and treasure was clear. But we had a lot at stake and we had a responsibility to attempt one last, extremely difficult effort to turn it around, to test whether a genuine counterinsurgency could avert defeat. The President chose to do the right thing, and the hardest. I imagine it was a lonely, painful experience for him, and I admired his resolve. I admired also his choice to lead the effort, General David Petraeus.

I believed that we should have responded to the insurgency at its inception, and I was increasingly convinced with every month that followed that only a full-fledged counterinsurgency, with all the force it required, had any chance for winning the war. But I didn’t know in late 2006 whether or not the situation was too far gone to salvage. Advisors whose counsel I trusted believed it still could be won. General Petraeus believed it could be. But none of us felt as confident about the outcome as we would have liked, and we knew most Americans believed we were wrong.

Five additional Army brigades were deployed to Iraq, and Marine and Army units already in country had their tours extended, providing just enough force to support a counterinsurgency. The numbers of Americans killed or wounded in the first months of 2007 increased substantially, as additional forces arrived and fought to take back territory from Sunni insurgents and Shia militias. For the first time in the war on a large scale, they held the ground they took and provided security for the affected populations. The spike in casualties was expected, but it was hard not to worry you were needlessly sending young kids to their death in a war that had been a mistake. You couldn’t help but wonder if maybe the best thing now was to cut our losses. But I believed our defeat would be catastrophic for the Middle East and our security interests there as terrorists and Tehran gained power and prestige at our expense. And I was worried about the humanitarian implications of our withdrawal, fearing that the raging sectarian war might descend into genocide. Of course, if the surge failed, there would be nothing we could realistically do to prevent that defeat or prevent history and our own consciences from damning us for having made this last, costly effort.

So, as I considered what to do about my campaign, I did so recognizing that I would be spending more time and energy focusing on the issue that was likely to cost me votes. Nowhere was that likelier to be the case than in my favorite state after Arizona, New Hampshire, scene of my 2000 landslide win. In the 2006 election, Democrats had swept almost every state and federal contest in New Hampshire, a Republican wipeout blamed on voters’ deep dissatisfaction with the war. There was no credible scenario in which I could win the nomination without winning the New Hampshire primary, as I had in 2000. And even Granite State voters who had supported me seven years before and who still liked me were not pleased with my support for the war. It was increasingly apparent that many of them would express their displeasure by voting for a candidate other than me.

Anxious about the surge, upset with the state of my campaign, increasingly aware of the extent of the challenge before me, I was in a bad frame of mind that summer. My uncertainty about what to do only aggravated my condition. There have been very few times in my life when I have felt I might be in a predicament that I could not eventually escape. But I had serious doubts that I could win an election and maintain my position on Iraq. In fact, I was beginning to ask myself if I should even be trying. And that was my attitude as I departed with my friend Senator Lindsey Graham for a long-scheduled trip to Iraq, leaving decisions about how to repair my campaign or even whether to continue it for my return.

On the flight over I confided to Lindsey my unhappiness with the campaign, and we discussed what I ought to do about it. I told him I was leaning toward getting out of the race. I wasn’t sure I could win. I wasn’t sure I wanted it badly enough to do what I had to do to win. We were broke. Unlike our merry little band of insurgents in 2000, factions had formed in the campaign, and they were sniping at each other in the press. Old friendships were becoming rivalries. It was an increasingly joyless experience, and I had begun to worry that it would ultimately prove pointless. Lindsey thought it was salvageable, that we could downsize, and fight more like a challenger than a front-runner. If nothing else, that would feel more natural to me. But I was skeptical. I would need to raise a lot more money to run any kind of serious campaign, and that would get harder, not easier, as donors saw us cutting payroll, shedding talented staff, and closing state offices six months before the Iowa caucuses. We were about to become in the eyes of the press and donors the first casualty of the 2008 Republican nomination race.

The worst violence had started to subside by the time of our July visit to Baghdad, which strengthened our faith that the surge could succeed. Casualties had peaked in May. The number of killed and wounded declined every month thereafter. General Petraeus and Ambassador Ryan Crocker and their staffs briefed us on the military and political gains that had been made since our last visit. We could see for ourselves that things were improving. There were visible signs of progress almost everywhere in Baghdad. Dangerous neighborhoods had been quieted, commercial activity was resuming. There wasn’t enough progress to convince you that victory was assured. Far from it. But it was enough to think that maybe, to quote Churchill, we were at the end of the beginning. I was more hopeful that the decision I had long advocated would not end up sacrificing the lives ransomed to it in a failed effort to rescue an already lost cause.

The experience that made the biggest impression on me was a ceremonial one. General Petraeus had asked us to participate in an Independence Day event at Saddam’s al-Faw Palace at Camp Victory that included the reenlistment of over 600 soldiers and the naturalization of 161 soldiers, mostly Hispanic immigrants, who had risked life and limb for the United States while they waited to become citizens. Some of these soldiers, the reenlisted and the newly naturalized, were on their second and third combat tours. Some of them had just had their current tour extended. Most were kids, of course, and some of them had spent two or three years of their short lives living with fear and fatigue, cruelty and confusion, and all the other dehumanizing effects of war. They had seen friends killed and wounded. Some had been wounded themselves. They had seen firsthand the failed strategy that had allowed the insurgency to gain strength, and had risked their lives to reinforce what they knew was a mistake. They had retaken the same real estate over and over again. They had conducted raids night after night looking for insurgents and caches of arms. They had been shot at by snipers and blasted by IEDs, and buried friends who hadn’t survived the encounters, while month after month the situation got worse. And here they were, re-upping again, choosing to stay in harm’s way. Most of them, it appeared, were excited to be finally doing something that made sense, taking and holding ground, protecting and earning the trust of the locals. Lindsey and I spoke at the ceremony. We were awed by them. It was hard to keep our composure while witnessing that kind of courage and selfless devotion to duty. And it was all the harder after General Petraeus recognized the sacrifice made by two soldiers who had planned to become naturalized citizens at the ceremony, and were now represented by two pairs of boots on two chairs, having been killed in action two days before. “They died serving a country that was not yet theirs,” Petraeus observed.

I wasn’t the only person there with a lump in his throat and eyes brimming with tears. I wish every American who out of ignorance or worse curses immigrants as criminals or a drain on the country’s resources or a threat to our “culture” could have been there. I would like them to know that immigrants, many of them having entered the country illegally, are making sacrifices for Americans that many Americans would not make for them.

The ceremony was one of the most inspirational displays of genuine loyalty to country and comrades I’d ever witnessed, and I’ll never forget it. On our return flight, Lindsey and I again discussed my political predicament and what to do about it. But I had decided before we boarded the flight that whatever I was risking by remaining a candidate, which wasn’t much more than embarrassment, it was nothing compared to what those kids were risking and the cause they were fighting for. I decided to stay in the race.

We had to downsize substantially. Many staffers left of their own accord and others involuntarily. We closed our operations in a number of states.

We borrowed money to keep the thing going. We developed a “living off the land” strategy that relied on debates and other free media opportunities to get out our message. We couldn’t afford to pay to advertise. And we had to adjust our expectations accordingly. I wasn’t able to run campaign operations with paid staff in as many primaries and caucuses as we had planned. We were going to have to downplay our involvement in the Iowa caucuses, as we had eight years before, and bet it all on New Hampshire again. We would be active in the states that immediately followed New Hampshire: Michigan, which was Governor Romney’s native state, and South Carolina. We knew we would have to win at least one of those to have a decent shot at winning the Florida primary. Whoever won Florida would have the most momentum going into Super Tuesday, when twenty-one states would hold primaries or caucuses. But for all practical purposes it was New Hampshire or bust for us again. There wasn’t a way to win without it.

I made one other commitment. I wouldn’t just stand by my position on the surge, I would make it the centerpiece of our campaign, arguing for its necessity and predicting its success if sustained, a message that many New Hampshire voters did not welcome. I couldn’t win the nomination without winning New Hampshire. I probably couldn’t win New Hampshire if I continued to support the surge. But I was going to make defending the surge my principal message in New Hampshire. An underdog again.

My very first campaign stop after returning from Iraq was in Concord, New Hampshire, where I was scheduled to deliver a speech on Iraq. Before we left, I planned to speak in the Senate about the progress Lindsey and I had witnessed and the necessity of sustaining the surge beyond its difficult first months. Before the speech, in difficult conversations with senior staff, I ordered the downsizing that necessitated staff departures, provoked bitter feelings between former colleagues and angry recriminations in the press, and spawned hours of political prognostication that our campaign was for all practical purposes “a corpse” as my days as a front-runner came to an abrupt and messy end.

I didn’t have an elaborate response to the situation. Rick Davis, my campaign manager, was working on a plan to run a smaller campaign, and find the money for it. I decided the best thing I could do was to put my head down and plod through the next few weeks. I’d like to say I ignored the skepticism and mockery directed my way. But I heard it and read it and felt it. I didn’t like it but I didn’t let it intimidate me. I intended to go to New Hampshire and make my case to people I had a pretty good rapport with even if they were no longer supporting me. If they didn’t buy it, so be it. I wouldn’t be President. I don’t want this to sound flip because it’s not as if I didn’t want to win. I did. I’m a very competitive person. But I just decided that if I was likely to lose and was going to run anyway, I shouldn’t be afraid of losing. I had something to say. I thought it was important that I say it. And I would see the damn thing through.

On a Friday morning in July, I boarded a flight to New Hampshire at Reagan National Airport with my youngest son, Jimmy, a Marine, who was about to deploy on his first combat tour, and my administrative assistant and co-writer, Mark Salter. No other staff accompanied me. Flights to Manchester, New Hampshire, in primary season are usually crowded with Washington reporters. Press accounts quickly proliferated that I had been spotted in much reduced circumstances carrying my own bag to the gate. I had carried my own bag before then. I almost always carried it, as a matter of fact (although it was another thing I was accustomed to doing for myself that the Secret Service would eventually relieve me of). I didn’t care that reporters remarked on it. The image gave them a handy metaphor for our humbled campaign. I kind of liked it.

When we arrived at the venue in Concord, which if I remember correctly was hosted by the local Chamber of Commerce, the room was congested with reporters, including some of the most well known and respected in the country. I knew most of them, and I liked many of them. A half dozen TV cameras were there to record the moment. Although we had announced I would be making remarks about the situation in Iraq, reporters, seeing what they thought was the chaos and confusion that beset a campaign in its death throes, suspected or hoped that I would withdraw from the race then and there. They were like crows on a wire, watching the unfortunate roadkill breathe its last before they descended to scavenge the remains.

I made my speech. It wasn’t a memorable one, I’m afraid. But it did not include an announcement that I was ending my campaign. Professionals that they are, none of the reporters present betrayed their disappointment that they had been denied their deathbed scene. Most of them believed I was a ghost candidate, who would sooner or later realize that he was not part of this world any longer. For my part, I would stick to my scheduled appearances for the time being while we sorted through tough decisions we would have to make about strategy, staffing, and financing. The next morning, I held a town hall meeting at the American Legion post in Claremont. Most of the questions were about Iraq. Many of them were skeptical, and a few hostile.

On a summer night a month later, I was halfway through a town hall meeting in Wolfeboro, and had answered the usual questions about the war, federal spending, immigration, climate change, veterans care, questions I got at every event. Nothing out of the ordinary had yet occurred when a middle aged woman stood and gestured to the staffer holding the microphone. When he handed it to her she started speaking in a quiet voice. When you’ve done as many town halls as I have, you can tell in an instant the people who are used to questioning candidates and those who are uncomfortable with public attention. Lynne Savage, a special education assistant in the local school system, and a mother, was the latter. I sensed as I called on her that she had something to say that would affect me. I thought it might be a criticism. She was standing just a few feet from me. Shy but purposeful, she prefaced her question by recalling that during the Vietnam War she had “proudly worn a silver bracelet on her arm in support of a soldier who was fighting.” Then she got to her point. “Today, unfortunately I wear a black bracelet in memory of my son who lost his life in Baghdad.”

My first thought in the instant she uttered her statement was that she would hold me responsible for her loss, and she would be right to do so. By my vote in support of the war and my support for the surge, I assumed a share of that responsibility, and a Gold Star mother was well within her rights to resent me for it. But she didn’t speak of resentment or accountability. She didn’t ask any questions about the war. She had only come to ask me if I would wear his bracelet, “so you could remember your mission and their mission in support of them.” The room was completely still. My emotions began to swell and I worried I would lose my composure. I managed to get out “I would be honored and grateful” before giving her a hug. “Don’t let his sacrifice be in vain,” she instructed me. I took the bracelet from her and read the name inscribed on it, Matthew Stanley. I asked how old Matthew had been. “Twenty-two,” she replied. “Twenty-two,” I repeated. My voice cracked a little as I thanked her for his service. All I could find the wit and will to say after that was, “Yes, ma’am, I will wear this. Thank you.”

Specialist Matthew Stanley was two months into his second tour in Iraq in December 2006 when an IED destroyed the Humvee he was in, killing him and four other soldiers. He was ten days shy of his twenty-third birthday and was still a newlywed, having married Amy the previous New Year’s Eve. I wore Matthew Stanley’s bracelet every day of the campaign, and I’ve worn it every day since. I’ll wear it for the rest of my life.

“Why not make a virtue of necessity?” Steve Schmidt, who was acting as a volunteer strategist for us, had proposed a few days before the Wolfeboro town hall. His pitch went something like this: You’re broke. You’re down in the polls. You’re not drawing crowds. The press has moved on. Why not get some of your POW buddies and other friends to travel with you while you hold small events all over New Hampshire, and make the case for the surge. Go to VFW and American Legion halls, to people’s backyards if you have to, and tell them you’re not quitting on the men and women we sent to fight for us in Iraq, even if it costs you the election. Voters like seeing politicians stick to their guns, especially if it looks like it’s going to cost them the election. Call it the “No Surrender Tour.”

It made sense to me. We began that September and traveled in vans and cars at first. Buses were expensive. Some of the earliest events were held in people’s homes, which weren’t exactly bursting with crowds of cheering people. I traveled with old pals from prison, Bud Day, Orson Swindle, and others, as well as my dearest friends in the Senate, Lindsey Graham and Joe Lieberman. I got to say what I wanted to say, what I believed was important to say and true, ending every speech with what, depending on your point of view, was either a boast or a prediction: “I’d rather lose an election than see my country lose a war.”

Being an underdog with low expectations can be liberating and fun. The humor gets a little dark, but that’s often the most fortifying kind. I have a quote I jokingly attribute to Chairman Mao that I like to use in tough situations: “It’s always darkest before it’s completely black.” I remember Lindsey and I were excited when we arrived at a VFW hall one Friday night and found the place packed with people. “We must be catching on,” we congratulated ourselves, only to learn that it was fried fish night, an event so popular with the locals they were willing to put up with the annoyance of politicians interrupting their supper. We eventually got a bus, wrapped it in our new motto, “No Surrender,” and rolled along the highways of the Granite State, stumping for the surge and my struggling candidacy wherever we could find people to listen.

It worked. We slowly started to revive. The crowds grew modestly, my poll numbers improved slightly, and the press started paying a little more attention. I doubt reporters thought I was a serious contender for the nomination again, but they believed I might fight until the New Hampshire primary. I think most of them appreciated that I was a proven campaigner in New Hampshire. I also think most of them expected Governor Romney to win the expensive, labor intensive Iowa caucuses, and probably have enough momentum coming out of Iowa to beat me in New Hampshire, where he had a vacation home and was well known and liked.

A defeat in New Hampshire would surely force my exit from the race. We had to hit a triple bank shot to stay viable. I had to place respectably in Iowa without being seen to have made a major investment of time and money there. One of the other candidates had to win or come awfully close to winning Iowa so the press would declare Governor Romney had underperformed expectations. Then I had to win New Hampshire on the strength of a good grassroots organization, nostalgia for my 2000 campaign by independents who can vote in New Hampshire party primaries, respect for my open style of campaigning, taking all questions and abuse, and my willingness to tell people what they didn’t want to hear and still ask for their vote.

I like and respect Mitt Romney. I think he would have made a very good President. I liked him before we ran against each other and I liked him after we were finished running against each other. In between, I and my more demonstrative staffers worked up a little situational antipathy for the governor and his campaign. That’s natural, of course. Presidential campaigns are exhausting, stressful experiences, run on coffee, adrenaline, and fear, and when you need a little extra boost, resentment of your opponent can be a handy motivator. Mitt is an intelligent, accomplished, decent, convivial man, who is really good at raising money and looks like a movie star. Deep into the endless series of primary debates, I and the other candidates were looking a little worse for wear. Mitt always arrived looking as if he had just returned from a two week vacation at the beach, tanned, smiling, and utterly self-possessed. If you’re not constantly reminding yourself to behave like an adult, you might start getting a little pissed off at your opponent’s many fine attributes. That kind of childishness usually ends when the contest is over as it did with our campaigns. But when the game is on between very competitive people, something akin to trash talking to the press can happen, as was the case with us. Nothing below the belt, really, from either side, just jabs here and there, enough to make you want to, well, beat the other guy.

We had worked hard. We had a strategy we could afford. And we got lucky. Iowa worked out about as well as it could have under the circumstances. A late surging Governor Mike Huckabee, who had extensive support in Iowa’s evangelical community, the most influential and well represented bloc of Republican caucus voters, caught Mitt and a lot of the press if not by surprise (it was evident in the last rounds of polls) then unprepared for the magnitude of his victory. Huckabee ended up winning the thing by a nine point margin, which meant Mitt wouldn’t only be deprived of momentum coming out of Iowa, he would drop in the polls in reaction to the unexpected size of his defeat there. I had managed to come in a respectable fourth, only a couple hundred votes behind the third place finisher, my friend Fred Thompson. It’s all an expectations game. The press thought I hadn’t put in the time in Iowa and didn’t have a real organization there, but I had just enough of both to do well enough to avoid hurting myself in New Hampshire.

I wasn’t overconfident after learning the Iowa results, but I did think I was now the candidate to beat in the New Hampshire primary five days later. I had a small lead in most of the latest polls. Huckabee didn’t have much support there, but his win in Iowa had likely cost Mitt some of his support. So, as I heard the news from Iowa that night after finishing an event in New Hampshire, the guy who had come in fourth in a six-man field was, after the actual winner, the happiest candidate in the race.

I didn’t expect to win a blowout as I had in 2000. My lead in the latest polls was in the two to three point range, way too tight to get cocky. But I was confident enough to ignore my usual superstition about not discussing my primary-night speech before I knew whether we would celebrate a victory or concede a defeat. The victory I and just about everyone expected would be the biggest that night would likely belong to the candidate riding the most momentum out of Iowa and the biggest wave of enthusiasm. That was Senator Barack Obama, the eloquent newcomer to American politics, who had just defeated the front-runner, Hillary Clinton, in Iowa and given a victory speech that captured the imaginations of Americans who were tired of politics, including many first-time voters. He appeared unstoppable after Iowa. Everyone assumed he would win New Hampshire, too, and drive Hillary out of the race. I discussed with Davis, Salter, and Schmidt the right message for my speech that night, and we agreed I should begin by saluting Senator Obama’s historic achievement, and recognize what it meant to his supporters and to the entire country. I would also express my hope that should I be the Republican nominee, our contest would be conducted in a way that would impress Americans in both parties as respectful.

That sentiment wasn’t only a sincere wish for more civility in politics. The country wanted change. They wanted the biggest change they could get. Barack Obama was offering them change, and he had advantages I did not. He was not a member of the party in power, I was. He was young and cool and new to national politics. I was seventy-one years old and had been a known commodity for some time, with a long record of votes and statements to criticize. He opposed the unpopular war in Iraq. I supported it. He would be the first African American to earn a major party’s presidential nomination. He represented change in his very person. I had to convince people I, too, was a change candidate. But the most effective means I had to convey that message was campaigning in ways that might appear novel and authentic to cynical voters. I intended to use my victory speech to start that effort.

When it became clear that night that I had managed a come-from-behind victory, beating Mitt by about five points, it was looking like Hillary might be doing the same. When the networks declared me the New Hampshire winner, the Democratic race was still too close to call, and we revised my speech accordingly. I began by noting I was too old to be called any kind of kid, “but we sure showed them what a comeback looks like.” I thanked the people of New Hampshire for hearing me out even when they disagreed with me. We were down in the polls and written off when we came here, I reminded them, “and we had just one strategy: to tell you what I believe.”

Unable to congratulate the winner of the Democrats’ primary, I paid my respects to the supporters of all the candidates, Republicans and Democrats, who “worked for a cause they believe is good for the country we all love.”

We had a long way to go. The Michigan primary was a little more than a week away. Mitt would be hard to beat there. South Carolina would be a close contest between Huckabee, Fred Thompson, and me. I needed to win one of them to continue. Winning both would be preferable, but South Carolina, the place where my rocket ride out of New Hampshire in 2000 crashed, loomed larger. Eight years before, I had stood on the steps of the Bedford, New Hampshire, town hall the night before the primary and looked out on a sea of faces. There were people crowding the streets and intersection, extending several blocks. It was thrilling, and I knew I was on the cusp of my biggest political triumph. It remains to this day my favorite campaign memory.

My 2008 primary win was not as heady as our victory in 2000. But I was deeply touched by it, and have had ever since a special affection for the proud voters in the first-in-the-nation primary. “These people have been so good to us,” I told Cindy that night. “I owe them so much.”

The next day, somewhere in Iraq’s Anbar Province, my son Jimmy helped dig an MRAP, a heavily armored personnel carrier, out of the mud in a wadi that had flooded in a downpour. He was knee-deep in the muck working a shovel, and sweating in the oppressive heat, when his sergeant walked over to him.

“McCain.”

“Yes, Sergeant.”

“Your dad won New Hampshire.”

“Did he?”

“Yeah, keep digging.”

“Yes, Sergeant.”

I laughed when Jimmy recounted the exchange for me when we were reunited some months later, and I laugh every time I retell it to friends. But as I have remembered it in the years that followed, and remembered, too, my worry then that my ambitions had exposed my youngest son to even greater danger, I’m moved to tears.

CHAPTER TWO

Country First

I RECEIVED A DECENT BUMP in the national polls following my New Hampshire win, and our fund raising picked up, although we still had to pay off the bank loan we borrowed in the summer to keep the campaign running. National polling leads can create a false impression that someone is a front-runner. We don’t have national primaries. The next contest was in Michigan on January 15, and Mitt and I were running neck and neck there. Michigan wasn’t do-or-die for me, but it was for Mitt. Huckabee and I had split the first two contests. Mitt had to get into the picture now or risk being written off by reporters and donors. South Carolina was four days after Michigan, and Mitt wasn’t competing there. I saw the chance to finish him off and secure a nearly invincible position by winning Michigan and beating Huckabee and Fred Thompson in South Carolina. I had upset George Bush in the 2000 Michigan primary, and believed I had a good feel for campaign…..


from

THE RESTLESS WAVE: Good Times, Just Causes, Great Fights and Other Appreciations

by John McCain


Could New Zealand’s economy survive a China crisis? – Liam Dann.

“The scenarios chosen are almost certain not to accurately reflect any future shock or combination of shocks that occurs.”

In July 2018, China’s economy falters, sending shockwaves through the global banking sector. Commodity prices plunge and the world faces its first global financial crisis since the meltdown of 2008.

What happens next is pretty ugly for most New Zealanders: job losses, soaring mortgage rates, falling house prices and a sharp recession.

As a forecast it would be unnecessarily gloomy, although not implausible.

But the grim situation painted by NZ Treasury is not meant to be a prediction; it is a model designed to offer a stress test of our economy under extreme conditions.

A Chinese financial crisis is one of three “large but plausible shocks” modelled by Treasury in its 2018 Investment Statement, along with a major Wellington earthquake and an outbreak of foot and mouth disease.

So what happens next in the event of a Chinese economic meltdown?

The first thing that New Zealand would see is a dramatic fall in demand for our exports. The terms of trade drops 20 per cent. The value of the Kiwi dollar plunges 13 per cent.

For most New Zealanders that means the cost of imported goods, like iPhones, and of overseas travel spikes.

But that’s not the really ugly bit.

Disruption to global debt markets would push local funding costs up by 3 per cent, Treasury says.

In other words, interest rates would soar: bad news for homeowners who aren’t on fixed rates.

Treasury’s model sees this flowing through to sharp falls in property prices and on the sharemarket.

In fact, they estimate the cost of the revaluation of assets and liabilities at $30 billion.

Nearly $20b of that would be due to a 40 per cent crash on the stock exchange both here and around the world, devastating news for KiwiSavers.

For homeowners the immediate price fall would be about 10 per cent, as we saw in the last GFC, survivable for most unless you are under pressure to sell.
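
To make these headline figures concrete, here is a minimal back-of-the-envelope sketch in Python. The shock sizes are the scenario numbers quoted above; the mortgage size, KiwiSaver balance and house value are hypothetical, and treating the 3 per cent rise in funding costs as a three percentage point increase passed straight through to a floating mortgage rate is a simplifying assumption for illustration only.

```python
# Illustrative arithmetic only. The funding-cost shock, the 40 per cent
# sharemarket fall and the roughly 10 per cent house price fall are the
# scenario figures quoted above; the mortgage size, KiwiSaver balance and
# house value are hypothetical assumptions chosen for illustration.

mortgage = 500_000            # assumed floating-rate mortgage (NZD)
funding_shock = 0.03          # assumed pass-through: rates rise 3 percentage points
extra_interest = mortgage * funding_shock
print(f"Extra annual interest on a ${mortgage:,} floating mortgage: ${extra_interest:,.0f}")

kiwisaver_balance = 50_000    # assumed all-equity KiwiSaver balance (NZD)
equity_crash = 0.40           # scenario: 40 per cent fall in share prices
print(f"Paper loss on that KiwiSaver balance: ${kiwisaver_balance * equity_crash:,.0f}")

house_price = 600_000         # assumed house value (NZD)
house_fall = 0.10             # scenario: immediate ~10 per cent fall in house prices
print(f"Fall in value of a ${house_price:,} house: ${house_price * house_fall:,.0f}")
```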

But similar falls in commercial property and farm prices would put additional stress on the economy.

Meanwhile, the uncertain outlook would drive a decline in consumer and business confidence. Both retail spending and business investment would fall. Then firms would start cutting jobs.

Some 60,000 jobs would be lost in 2019, with unemployment spiking to 7.4 per cent, the highest level since 1999. It is currently 4.4 per cent.

The Reserve Bank (RBNZ) would attempt to ride to the rescue of course.

You could expect to see the RBNZ cut rates by half a per cent in its September review to 1.25 per cent, Treasury estimates.

The RBNZ would likely keep cutting over the next six months until the official cash rate was at, or near, zero.

From here the news gets a little better. And as we saw during the 2008 GFC, the economy has the strength and flexibility to bounce back.

The rate cuts couldn’t prevent a recession in the March quarter of 2019.

But while demand for goods exports remains low, the depreciation in the dollar means the annual value stays on target.

“Record low interest rates and an improvement in the economic outlook leads to a pickup in business confidence, driving a strong increase in business investment,” Treasury says.

Life would still be tough for workers.

“Employment growth and consumer spending remain soft throughout.”

In the final wash-up the financial downturn would cost the Crown $157b across five years.

Net debt would rise to 33 per cent of GDP after five years, 15 per cent higher than 2017 forecasts.

But ultimately the economy would pass the test.

Treasury notes that these stress tests are designed to assess whether severe but plausible shocks could have impacts that are beyond the financial capacity to absorb, thus putting the provision of public services at risk.

“The scenarios chosen are almost certain not to accurately reflect any future shock or combination of shocks that occurs.”

Childhood Adversity Can Change Your Brain. How People Recover From Post Childhood Adversity Syndrome – Donna Jackson Nakazawa * Future Directions in Childhood Adversity and Youth Psychopathology – Katie A. McLaughlin.

Childhood Adversity: exposure during childhood or adolescence to environmental circumstances that are likely to require significant psychological, social, or neurobiological adaptation by an average child and that represent a deviation from the expectable environment.

Early emotional trauma changes who we are, but we can do something about it.

The brain and body are never static; they are always in the process of becoming and changing.

Findings from epidemiological studies indicate clearly that exposure to childhood adversity powerfully shapes risk for psychopathology.

This research tells us that what doesn’t kill you doesn’t necessarily make you stronger; far more often, the opposite is true.

Donna Jackson Nakazawa

If you’ve ever wondered why you’ve been struggling a little too hard for a little too long with chronic emotional and physical health conditions that just won’t abate, feeling as if you’ve been swimming against some invisible current that never ceases, a new field of scientific research may offer hope, answers, and healing insights.

In 1995, physicians Vincent Felitti and Robert Anda launched a large scale epidemiological study that probed the child and adolescent histories of 17,000 subjects, comparing their childhood experiences to their later adult health records. The results were shocking: Nearly two thirds of individuals had encountered one or more Adverse Childhood Experiences (ACEs), a term Felitti and Anda coined to encompass the chronic, unpredictable, and stress inducing events that some children face. These included growing up with a depressed or alcoholic parent; losing a parent to divorce or other causes; or enduring chronic humiliation, emotional neglect, or sexual or physical abuse. These forms of emotional trauma went beyond the typical, everyday challenges of growing up.

The number of Adverse Childhood Experiences an individual had had predicted the amount of medical care she’d require as an adult with surprising accuracy (a short illustrative calculation follows the list below):

– Individuals who had faced 4 or more categories of ACEs were twice as likely to be diagnosed with cancer as individuals who hadn’t experienced childhood adversity.

– For each ACE Score a woman had, her risk of being hospitalized with an autoimmune disease rose by 20 percent.

– Someone with an ACE Score of 4 was 460 percent more likely to suffer from depression than someone with an ACE Score of 0.

– An ACE Score greater than or equal to 6 shortened an individual’s lifespan by almost 20 years.
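
For readers who want to see what these percentages mean in relative-risk terms, here is a minimal sketch. Converting “460 percent more likely” into a risk ratio is straightforward arithmetic; compounding the 20 percent per-ACE increase multiplicatively across scores is an illustrative assumption, not a result reported by the study.

```python
# Illustrative arithmetic for the ACE Study figures listed above. Treating the
# 20 percent per-ACE rise in autoimmune hospitalization risk as compounding
# multiplicatively is an assumption made purely for illustration.

# "460 percent more likely" means baseline (100%) plus 460%, i.e. 5.6x the baseline risk.
depression_risk_ratio = 1 + 4.60
print(f"Depression risk at ACE Score 4 vs 0: {depression_risk_ratio:.1f}x baseline")

per_ace_increase = 0.20  # 20 percent higher hospitalization risk per ACE
for ace_score in range(7):
    relative_risk = (1 + per_ace_increase) ** ace_score
    print(f"ACE Score {ace_score}: ~{relative_risk:.2f}x baseline hospitalization risk")
```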

The ACE Study tells us that experiencing chronic, unpredictable toxic stress in childhood predisposes us to a constellation of chronic conditions in adulthood. But why? Today, in labs across the country, neuroscientists are peering into the once inscrutable brain-body connection, and breaking down, on a biochemical level, exactly how the stress we face when we’re young catches up with us when we’re adults, altering our bodies, our cells, and even our DNA. What they’ve found may surprise you.

Some of these scientific findings can be a little overwhelming to contemplate. They compel us to take a new look at how emotional and physical pain are intertwined.

1. Epigenetic Shifts

When we’re thrust over and over again into stress inducing situations during childhood or adolescence, our physiological stress response shifts into overdrive, and we lose the ability to respond appropriately and effectively to future stressors 10, 20, even 30 years later. This happens due to a process known as gene methylation, in which small chemical markers, or methyl groups, adhere to the genes involved in regulating the stress response, and prevent these genes from doing their jobs.

As the function of these genes is altered, the stress response becomes re-set on “high” for life, promoting inflammation and disease.

This can make us more likely to overreact to the everyday stressors we meet in our adult life, an unexpected bill, a disagreement with a spouse, or a car that swerves in front of us on the highway, creating more inflammation. This, in turn, predisposes us to a host of chronic conditions, including autoimmune disease, heart disease, cancer, and depression.

Indeed, Yale researchers recently found that children who’d faced chronic, toxic stress showed changes “across the entire genome,” in genes that not only oversee the stress response, but also in genes implicated in a wide array of adult diseases. This new research on early emotional trauma, epigenetic changes, and adult physical disease breaks down longstanding delineations between what the medical community has long seen as “physical” disease versus what is “mental” or “emotional.”

2. Size and Shape of the Brain

Scientists have found that when the developing brain is chronically stressed, it releases a hormone that actually shrinks the size of the hippocampus, an area of the brain responsible for processing emotion and memory and managing stress. Recent magnetic resonance imaging (MRI) studies suggest that the higher an individual’s ACE Score, the less gray matter he or she has in other key areas of the brain, including the prefrontal cortex, an area related to decision making and self regulatory skills, and the amygdala, or fear-processing center. Kids whose brains have been changed by their Adverse Childhood Experiences are more likely to become adults who find themselves over-reacting to even minor stressors.

3. Neural Pruning

Children have an overabundance of neurons and synaptic connections; their brains are hard at work, trying to make sense of the world around them. Until recently, scientists believed that the pruning of excess neurons and connections was achieved solely in a “use-it-or-lose-it” manner, but a surprising new player in brain development has appeared on the scene: microglia, non-neuronal brain cells that make up one-tenth of all the cells in the brain and are actually part of the immune system, also participate in the pruning process. These cells prune synapses like a gardener prunes a hedge. They also engulf and digest entire cells and cellular debris, thereby playing an essential housekeeping role.

But when a child faces unpredictable, chronic stress of Adverse Childhood Experiences, microglial cells “can get really worked up and crank out neurochemicals that lead to neuroinflammation,” says Margaret McCarthy, PhD, whose research team at the University of Maryland Medical Center studies the developing brain. “This below-the-radar state of chronic neuroinflammation can lead to changes that reset the tone of the brain for life.”

That means that kids who come into adolescence with a history of adversity and lack the presence of a consistent, loving adult to help them through it may become more likely to develop mood disorders or have poor executive functioning and decision-making skills.

4. Telomeres

Early trauma can make children seem “older,” emotionally speaking, than their peers. Now, scientists at Duke University; the University of California, San Francisco; and Brown University have discovered that Adverse Childhood Experiences may prematurely age children on a cellular level as well. Adults who’d faced early trauma show greater erosion in what are known as telomeres, the protective caps that sit on the ends of DNA strands, like the caps on shoelaces, to keep the genome healthy and intact. As our telomeres erode, we’re more likely to develop disease, and our cells age faster.

5. Default Mode Network

Inside each of our brains, a network of neurocircuitry, known as the “default mode network,” quietly hums along, like a car idling in a driveway. It unites areas of the brain associated with memory and thought integration, and it’s always on standby, ready to help us to figure out what we need to do next. “The dense connectivity in these areas of the brain help us to determine what’s relevant or not relevant, so that we can be ready for whatever our environment is going to ask of us,” explains Ruth Lanius, neuroscientist, professor of psychiatry, and director of the Post-Traumatic Stress Disorder (PTSD) Research Unit at the University of Western Ontario.

But when children face early adversity and are routinely thrust into a state of fight-or-flight, the default mode network starts to go offline; it’s no longer helping them to figure out what’s relevant, or what they need to do next.

According to Lanius, kids who’ve faced early trauma have less connectivity in the default mode network, even decades after the trauma occurred. Their brains don’t seem to enter that healthy idling position, and so they may have trouble reacting appropriately to the world around them.

6. Brain-Body Pathway

Until recently, it’s been scientifically accepted that the brain is “immune-privileged,” or cut off from the body’s immune system. But that turns out not to be the case, according to a groundbreaking study conducted by researchers at the University of Virginia School of Medicine. Researchers found that an elusive pathway travels between the brain and the immune system via lymphatic vessels. The lymphatic system, which is part of the circulatory system, carries lymph, a liquid that helps to eliminate toxins, and moves immune cells from one part of the body to another. Now we know that the immune system pathway includes the brain.

The results of this study have profound implications for ACE research. For a child who’s experienced adversity, the relationship between mental and physical suffering is strong: the inflammatory chemicals that flood a child’s brain when she’s chronically stressed aren’t confined to the brain alone; they’re shuttled from head to toe.

7. Brain Connectivity

Ryan Herringa, neuropsychiatrist and assistant professor of child and adolescent psychiatry at the University of Wisconsin, found that children and teens who’d experienced chronic childhood adversity showed weaker neural connections between the prefrontal cortex and the hippocampus. Girls also displayed weaker connections between the prefrontal cortex and the amygdala. The prefrontal cortex-amygdala relationship plays an essential role in determining how emotionally reactive we’re likely to be to the things that happen to us in our day-to-day life, and how likely we are to perceive these events as stressful or dangerous.

According to Herringa:

“If you are a girl who has had Adverse Childhood Experiences and these brain connections are weaker, you might expect that in just about any stressful situation you encounter as life goes on, you may experience a greater level of fear and anxiety.”

Girls with these weakened neural connections, Herringa found, stood at a higher risk for developing anxiety and depression by the time they reached late adolescence. This may, in part, explain why females are nearly twice as likely as males to suffer from later mood disorders.

This science can be overwhelming, especially to those of us who are parents. So, what can you do if you or a child you love has been affected by early adversity?

The good news is that, just as our scientific understanding of how adversity affects the developing brain is growing, so is our scientific insight into how we can offer the children we love resilient parenting, and how we can all take small steps to heal body and brain. Just as physical wounds and bruises heal, just as we can regain our muscle tone, we can recover function in under-connected areas of the brain. The brain and body are never static; they are always in the process of becoming and changing.

Donna Jackson Nakazawa

8 Ways People Recover From Post Childhood Adversity Syndrome

New research leads to new approaches with wide benefits.

In this infographic, I show the link between Adverse Childhood Experiences, later physical adult disease, and what we can do to heal.

Cutting edge research tells us that experiencing childhood emotional trauma can play a large role in whether we develop physical disease in adulthood. In Part 1 of this series we looked at the growing scientific link between childhood adversity and adult physical disease. This research tells us that what doesn’t kill you doesn’t necessarily make you stronger; far more often, the opposite is true.

Adverse Childhood Experiences (ACEs), which include emotional or physical neglect, harm developing brains, predisposing them to autoimmune disease, heart disease, cancer, depression, and a number of other chronic conditions decades after the trauma took place.

Recognizing that chronic childhood stress can play a role, along with genetics and other factors, in developing adult illnesses and relationship challenges, can be enormously freeing. If you have been wondering why you’ve been struggling a little too hard for a little too long with your emotional and physical wellbeing, feeling as if you’ve been swimming against some invisible current that never ceases, this “aha” can come as a welcome relief. Finally, you can begin to see the current and understand how it’s been working steadily against you all of your life.

Once we understand how the past can spill into the present, and how a tough childhood can become a tumultuous, challenging adulthood, we have a new possibility of healing. As one interviewee in my new book, Childhood Disrupted: How Your Biography Becomes Your Biology, and How You Can Heal, said, when she learned about Adverse Childhood Experiences for the first time, “Now I understand why I’ve felt all my life as if I’ve been trying to dance without hearing any music.” Suddenly, she felt the possibility that by taking steps to heal from the emotional wounds of the past she might find a new layer of healing in the present.

There is truth to the old saying that knowledge is power. Once you understand that your body and brain have been harmed by the biological impact of early emotional trauma, you can at last take the necessary, science based steps to remove the fingerprints that early adversity left on your neurobiology. You can begin a journey to healing, to reduce your proclivity to inflammation, depression, addiction, physical pain, and disease.

Science tells us that biology does not have to be destiny. ACEs can last a lifetime but they don’t have to. We can reboot our brains. Even if we have been set on high reactive mode for decades or a lifetime, we can still dial it down. We can respond to life’s inevitable stressors more appropriately and shift away from an overactive inflammatory response. We can become neurobiologically resilient. We can turn bad epigenetics into good epigenetics and rescue ourselves.

Today, researchers recognize a range of promising approaches to help create new neurons (known as neurogenesis), make new synaptic connections between those neurons (known as synaptogenesis), promote new patterns of thoughts and reactions, bring underconnected areas of the brain back online, and reset our stress response so that we decrease the inflammation that makes us ill.

We have the capacity, within ourselves, to create better health. We might call this brave undertaking “the neurobiology of awakening.”

There can be no better time than now to begin your own awakening, to proactively help yourself and those you love, embrace resilience, and move forward toward growth, even transformation.

Here are 8 steps to try:

1. Take the ACE Questionnaire

The single most important step you can take toward healing and transformation is to fill out the ACE Questionnaire for yourself and share your results with your healthcare practitioner. For many people, taking the 10-question survey “helps to normalize the conversation about Adverse Childhood Experiences and their impact on our lives,” says Vincent Felitti, co-founder of the ACE Study. “When we make it okay to talk about what happened, it removes the power that secrecy so often has.”

You’re not asking your healthcare practitioner to act as your therapist, or to change your prescriptions; you’re simply acknowledging that there might be a link between your past and your present. Ideally, given the recent discoveries in the field of ACE research, your doctor will also acknowledge that this link is plausible, and add some of the following modalities to your healing protocol.
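
As background for the questionnaire mentioned above, the ACE Score is simply a count of how many of the ten categories of adverse experience a person reports. The short sketch below shows that tally; the category labels are paraphrased from the kinds of experiences described in this article rather than quoted from the questionnaire itself.

```python
# A minimal sketch of how a 10-item ACE score is tallied: one point for each
# category of adverse experience answered "yes", giving a score from 0 to 10.
# The category labels below paraphrase the experiences described in this
# article; they are not the exact wording of the official questionnaire.

ACE_CATEGORIES = [
    "emotional abuse",
    "physical abuse",
    "sexual abuse",
    "emotional neglect",
    "physical neglect",
    "parental separation or divorce",
    "witnessing domestic violence",
    "household substance abuse",
    "household mental illness (e.g. a depressed parent)",
    "incarcerated household member",
]

def ace_score(answers: dict) -> int:
    """Count the categories answered True; anything missing counts as False."""
    return sum(1 for category in ACE_CATEGORIES if answers.get(category, False))

# Example: growing up with a depressed parent and losing a parent to divorce.
example_answers = {
    "household mental illness (e.g. a depressed parent)": True,
    "parental separation or divorce": True,
}
print(ace_score(example_answers))  # -> 2
```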

2. Begin Writing to Heal.

Think about writing down your story of childhood adversity, using a technique psychologists call “writing to heal.” James Pennebaker, professor of psychology at the University of Texas, Austin, developed this assignment, which demonstrates the effects of writing as a healing modality. He suggests: “Over the next four days, write down your deepest emotions and thoughts about the emotional upheaval that has been influencing your life the most. In your writing, really let go and explore the event and how it has affected you. You might tie this experience to your childhood, your relationship with your parents, people you have loved or love now…Write continuously for twenty minutes a day.”

When Pennebaker had students complete this assignment, their grades went up. When adults wrote to heal, they made fewer doctors’ visits and demonstrated changes in their immune function. The exercise of writing about your secrets, even if you destroy what you’ve written afterward, has been shown to have positive health effects.

3. Practice Mindfulness Meditation.

A growing body of research indicates that individuals who’ve practiced mindfulness meditation and Mindfulness Based Stress Reduction (MBSR) show an increase in gray matter in the same parts of the brain that are damaged by Adverse Childhood Experiences, as well as shifts in genes that regulate their physiological stress response.

According to Trish Magyari, LCPC, a mindfulness-based psychotherapist and researcher who specializes in trauma and illness, adults with a history of abuse who took part in a “trauma-sensitive” MBSR program had less anxiety and depression, and demonstrated fewer PTSD symptoms, even two years after taking the course.

Many meditation centers offer MBSR classes and retreats, but you can practice anytime in your own home. Choose a time and place to focus on your breath as it enters and leaves your nostrils; the rise and fall of your chest; the sensations in your hands or through the whole body; or sounds within or around you. If you get distracted, just come back to your anchor.

There are many medications you can take that dampen the sympathetic nervous system (which ramps up your stress response when you come into contact with a stressor), but there aren’t any medications that boost the parasympathetic nervous system (which helps to calm your body down after the stressor has passed).

Your breath is the best natural calming treatment, and it has no side effects.

4. Yoga

When children face ACEs, they often store decades of physical tension from a fight, flight, or freeze state of mind in their bodies. PET scans show that yoga decreases blood flow to the amygdala, the brain’s alarm center, and increases blood flow to the frontal lobe and prefrontal cortex, which help us to react to stressors with a greater sense of equanimity.

Yoga has also been found to increase levels of GABA, or gamma-aminobutyric acid, a chemical that improves brain function, promotes calm, and helps to protect us against depression and anxiety.

5. Therapy

Sometimes, the long lasting effects of childhood trauma are just too great to tackle on our own. In these cases, says Jack Kornfield, psychologist and meditation teacher, “meditation is not always enough.” We need to bring unresolved issues into a therapeutic relationship, and get backup in unpacking the past.

When we partner with a skilled therapist to address the adversity we may have faced decades ago, those negative memories become paired with the positive experience of being seen by someone who accepts us as we are, and a new window to healing opens.

Part of the power of therapy lies in the presence of a safe, accepting person. A therapist’s unconditional acceptance helps us to modify the circuits in our brain that tell us that we can’t trust anyone, and grow new, healthier neural connections.

It can also help us to heal the underlying, cellular damage of traumatic stress, down to our DNA. In one study, patients who underwent therapy showed changes in the integrity of their genome, even a year after their regular sessions ended.

6. EEG Neurofeedback

Electroencephalographic (EEG) Neurofeedback is a clinical approach to healing childhood trauma in which patients learn to influence their thoughts and feelings by watching their brain’s electrical activity in real-time, on a laptop screen. Someone hooked up to the computer via electrodes on his scalp might see an image of a field; when his brain is under-activated in a key area, the field, which changes in response to neural activity, may appear to be muddy and gray, the flowers wilted; but when that area of the brain reactivates, it triggers the flowers to burst into color and birds to sing. With practice, the patient learns to initiate certain thought patterns that lead to neural activity associated with pleasant images and sounds.

You might think of a licensed EEG Neurofeedback therapist as a musical conductor, who’s trying to get different parts of the orchestra to play a little more softly in some cases, and a little louder in others, in order to achieve harmony. After just one EEG Neurofeedback session, patients showed greater neural connectivity and improved emotional resilience, making it a compelling option for those who’ve suffered the long lasting effects of chronic, unpredictable stress in childhood.
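
The feedback loop at the heart of this approach can be pictured as a simple control loop. The toy sketch below is not clinical software and uses made-up numbers; it only illustrates the idea of rewarding activity above a target level by making the on-screen scene more vivid.

```python
import random

# A toy simulation of the feedback loop described above: measure activity in a
# targeted brain area, compare it with a goal level, and brighten or dim the
# on-screen scene accordingly. All values here are simulated stand-ins; a real
# system reads from EEG hardware and drives an actual animated display.

TARGET_ACTIVATION = 0.6   # assumed goal level for the under-activated area
brightness = 0.5          # 0.0 = muddy and gray, 1.0 = full color, birds singing

def read_simulated_activation() -> float:
    """Stand-in for a real-time EEG reading of the targeted activity."""
    return random.uniform(0.3, 0.9)

for step in range(10):
    activation = read_simulated_activation()
    if activation >= TARGET_ACTIVATION:
        brightness = min(1.0, brightness + 0.1)   # reward: the scene comes to life
    else:
        brightness = max(0.0, brightness - 0.1)   # the scene fades back toward gray
    print(f"step {step}: activation={activation:.2f} -> scene brightness={brightness:.2f}")
```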

7. EMDR Therapy

Eye Movement Desensitization and Reprocessing (EMDR) is a potent form of psychotherapy that helps individuals to remember difficult experiences safely and relate those memories in ways that no longer cause pain in the present.

Here’s how it works:

EMDR-certified therapists help patients to trigger painful emotions. As these emotions lead the patients to recall specific difficult experiences, they are asked to shift their gaze back and forth rapidly, often by following a pattern of lights or a wand that moves from right to left, right to left, in a movement that simulates the healing action of REM sleep.

The repetitive directing of attention in EMDR induces a neurobiological state that helps the brain to re-integrate neural connections that have been dysregulated by chronic, unpredictable stress and past experiences. This re-integration can, in turn, lead to a reduction in the episodic, traumatic memories we store in the hippocampus, and downshift the amygdala’s activity. Other studies have shown that EMDR increases the volume of the hippocampus.

EMDR therapy has been endorsed by the World Health Organization as one of only two forms of psychotherapy for children and adults in natural disasters and war settings.

8. Rally Community Healing

Often, ACEs stem from bad relationships: neglectful relatives, schoolyard bullies, abusive partners. But the right kinds of relationships can help to make us whole again. When we find people who support us, when we feel “tended and befriended,” our bodies and brains have a better shot at healing. Research has found that having strong social ties improves outcomes for women with breast cancer, multiple sclerosis, and other diseases. In part, that’s because positive interactions with others boost our production of oxytocin, a feel-good hormone that dials down the inflammatory stress response.

If you’re at a loss for ways to connect, try a mindfulness meditation community or an MBSR class, or pass along the ACE Questionnaire or even my newest book, Childhood Disrupted: How Your Biography Becomes Your Biology, and How You Can Heal, to family and friends to spark important, meaningful conversations.

You’re Not Alone

Whichever modalities you and your physician choose to implement, it’s important to keep in mind that you’re not alone. When you begin to understand that your feelings of loss, shame, guilt, anxiety, or grief are shared by so many others, you can lend support and swap ideas for healing.

When you embrace the process of healing despite your Adverse Childhood Experiences, you don’t just become who you might have been if you hadn’t encountered childhood suffering in the first place. You gain something better, the hard earned gift of life wisdom, which you bring forward into every arena of your life. The recognition that you have lived through hard times drives you to develop deeper empathy, seek more intimacy, value life’s sweeter moments, and treasure your connectedness to others and to the world at large. This is the hard won benefit of having known suffering.

Best of all, you can find ways to start right where you are, no matter where you find yourself.

Future Directions in Childhood Adversity and Youth Psychopathology

Katie A. McLaughlin, Department of Psychology, University of Washington

Abstract

Despite long standing interest in the influence of adverse early experiences on mental health, systematic scientific inquiry into childhood adversity and developmental outcomes has emerged only recently. Existing research has amply demonstrated that exposure to childhood adversity is associated with elevated risk for multiple forms of youth psychopathology.

In contrast, knowledge of developmental mechanisms linking childhood adversity to the onset of psychopathology, and of whether those mechanisms are general or specific to particular kinds of adversity, remains cursory.

Greater understanding of these pathways and identification of protective factors that buffer children from developmental disruptions following exposure to adversity is essential to guide the development of interventions to prevent the onset of psychopathology following adverse childhood experiences.

This article provides recommendations for future research in this area. In particular, use of a consistent definition of childhood adversity, integration of studies of typical development with those focused on childhood adversity, and identification of distinct dimensions of environmental experience that differentially influence development are required to uncover mechanisms that explain how childhood adversity is associated with numerous psychopathology outcomes (i.e., multifinality) and identify moderators that shape divergent trajectories following adverse childhood experiences.

A transdiagnostic model that highlights disruptions in emotional processing and poor executive functioning as key mechanisms linking childhood adversity with multiple forms of psychopathology is presented as a starting point in this endeavour. Distinguishing between general and specific mechanisms linking childhood adversity with psychopathology is needed to generate empirically informed interventions to prevent the long term consequences of adverse early environments on children’s development.

The lasting influence of early experience on mental health across the lifespan has been emphasized in theories of the etiology of psychopathology since the earliest formulations of mental illness. In particular, the roots of mental disorder have often been argued to be a consequence of adverse environmental experiences occurring in childhood. Despite this long standing interest, systematic scientific inquiry into the effects of childhood adversity on health and development has emerged only recently.

Prior work on childhood adversity focused largely on individual types of adverse experiences, such as death of a parent, divorce, sexual abuse, or poverty, and research on these topics evolved as relatively independent lines of inquiry. The transition to considering these types of adversities as indicators of the same underlying construct was prompted, in part, by the findings of a seminal study examining childhood adversity as a determinant of adult physical and mental health and by advances in theoretical conceptualizations of stress. Specifically, findings from the Adverse Childhood Experiences (ACE) Study documented high levels of co-occurrence of multiple forms of childhood adversity and strong associations of exposure to adverse childhood experiences with a wide range of adult health outcomes (Dong et al., 2004; Edwards, Holden, Felitti, & Anda, 2003; Felitti et al., 1998).

Around the same time, the concept of allostatic load was introduced as a comprehensive neurobiological model of the effects of stress (McEwen, 1998, 2000). Allostatic load provided a framework for explaining the neurobiological mechanisms linking a variety of adverse social experiences to health. Together, these discoveries sparked renewed interest in the childhood determinants of physical and mental health. Since that time there has been a veritable explosion of research into the impact of childhood adversity on developmental outcomes, including psychopathology.

CHILDHOOD ADVERSITY AND PSYCHOPATHOLOGY

Over the past two decades, hundreds of studies have examined the associations between exposure to childhood adversity and risk for psychopathology (Evans, Li, & Whipple, 2013). Here, I briefly review this evidence, focusing specifically on findings from epidemiological studies designed to allow inferences to be drawn at the population level. These studies have documented five general patterns with regard to childhood adversity and the distribution of mental disorders in the population.

First, despite differences across studies in the prevalence of specific types of adversity, all population based studies indicate that exposure to childhood adversity is common. The prevalence of exposure to childhood adversity is estimated at about 50% in the U.S. population across numerous epidemiological surveys (Green et al., 2010; Kessler, Davis, & Kendler, 1997; McLaughlin, Conron, Koenen, & Gilman, 2010; McLaughlin, Green et al., 2012). Remarkably similar prevalence estimates have been documented in other high income countries, as well as in low and middle income countries worldwide (Kessler et al., 2010).

Second, individuals who have experienced childhood adversity are at elevated risk for developing a lifetime mental disorder compared to individuals without such exposure, and the odds of developing a lifetime mental disorder increase as exposure to adversity increases (Edwards et al., 2003; Green et al., 2010; Kessler et al., 1997; Kessler et al., 2010; McLaughlin, Conron, et al., 2010; McLaughlin, Green, et al., 2012).

Third, exposure to childhood adversity confers vulnerability to psychopathology that persists across the life course. Childhood adversity exposure is associated not only with risk of mental disorder onset in childhood and adolescence (McLaughlin, Green, et al., 2012) but also with elevated odds of developing a first onset mental disorder in adulthood, which persists after adjustment for mental disorders beginning at earlier stages of development (Green et al., 2010; Kessler et al., 1997; Kessler et al., 2010).

Fourth, the associations of childhood adversity with different types of commonly occurring mental disorders are largely nonspecific. Individuals who have experienced childhood adversity experience greater odds of developing mood, anxiety, substance use, and disruptive behavior disorders, with little meaningful variation in the strength of associations across disorder classes (Green et al., 2010; Kessler et al., 1997; Kessler et al., 2010; McLaughlin, Green, et al., 2012).

Recent epidemiological findings suggest that the associations of child maltreatment, a commonly measured form of adversity, with lifetime mental disorders operate entirely through a latent liability to experience internalizing and externalizing psychopathology with no direct effects on specific mental disorders that are not explained by this latent vulnerability (Caspi et al., 2014; Keyes et al., 2012).

Finally, exposure to childhood adversity explains a substantial proportion of mental disorder onsets in the population, both in the United States and cross nationally (Afifi et al., 2008; Green et al., 2010; Kessler et al., 2010; McLaughlin, Green, et al., 2012). This reflects both the high prevalence of exposure to childhood adversity and the strong association of childhood adversity with the onset of psychopathology.

Together, findings from epidemiological studies indicate clearly that exposure to childhood adversity powerfully shapes risk for psychopathology in the population.

As such, it is time for the field to move beyond these types of basic descriptive studies to research designs aimed at identifying the underlying developmental mechanisms linking childhood adversity to psychopathology. Although ample research has been conducted examining mechanisms linking individual types of adversity to psychopathology (e.g., sexual abuse; Trickett, Noll, & Putnam, 2011), far less is known about which of these mechanisms are common across different types of adversity versus specific to particular types of experiences. Greater understanding of these pathways, as well as the identification of protective factors that buffer children from disruptions in emotional, cognitive, social, and neurobiological development following exposure to adversity, is essential to guide the development of interventions to prevent the onset of psychopathology in children exposed to adversity, a critical next step for the field.

However, persistent issues regarding the definition and measurement of childhood adversity must be addressed before meaningful progress on mechanisms, protective factors, and prevention of psychopathology following childhood adversity will be possible.

FUTURE DIRECTIONS IN CHILDHOOD ADVERSITY AND YOUTH PSYCHOPATHOLOGY

This article has two primary goals. The first is to provide recommendations for future research on childhood adversity and youth psychopathology. These recommendations relate to the definition and measurement of childhood adversity, the integration of studies of typical development with those on childhood adversity, and the importance of distinguishing between general and specific mechanisms linking childhood adversity to psychopathology.

The second goal is to provide a transdiagnostic model of mechanisms linking childhood adversity and youth psychopathology that incorporates each of these recommendations.

Defining Childhood Adversity

Childhood adversity is a construct in search of a definition. Despite the burgeoning interest and research attention devoted to childhood adversity, there is a surprising lack of consistency with regard to the definition and measurement of the construct. Key issues remain unaddressed in the literature regarding the definition of childhood adversity and the boundary conditions of the construct. To what does the construct of childhood adversity refer? What types of experiences qualify as childhood adversity and what types do not?

Where do we draw the line between normative experiences of stress and those that qualify as an adverse childhood experience? How does the construct of childhood adversity differ from other constructs that have been linked to psychopathology risk, including stress, toxic stress, and trauma? It will be critical to gain clarity on these definitional issues before more complex questions regarding mechanisms and protective factors can be systematically examined.

Even in the seminal ACE Study that spurred much of the recent research into childhood adversity, a concrete definition of adverse childhood experience is not provided. The original article from the study argues for the importance of understanding the lasting health effects of child abuse and “household dysfunction,” the latter of which is never defined specifically (Felitti et al., 1998). The CDC website for the ACE Study indicates that the ACE score, a count of the total number of adversities experienced, is designed to assess “the total amount of stress experienced during childhood.”
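As that description suggests, the ACE score is simply a cumulative count of the adversity categories a respondent endorses. The minimal sketch below illustrates this scoring logic; the item names are hypothetical stand-ins, not the ACE Study questionnaire itself.

```python
# A minimal sketch of cumulative-risk ("ACE score") scoring: one point per
# category of adversity endorsed, regardless of type, severity, or timing.
# Item names are hypothetical illustrations, not the ACE Study items.

ADVERSITY_ITEMS = [
    "physical_abuse", "sexual_abuse", "emotional_abuse",
    "neglect", "domestic_violence", "parental_divorce",
]

def ace_score(responses: dict) -> int:
    """Count the number of adversity categories a respondent endorses."""
    return sum(bool(responses.get(item, False)) for item in ADVERSITY_ITEMS)

example = {"physical_abuse": True, "domestic_violence": True}
print(ace_score(example))  # -> 2
```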

Why has a concrete definition of childhood adversity remained elusive? As I see it, there is a relatively simple explanation for this notable gap in the literature. Childhood adversity is difficult to define but fairly obvious to most observers, making the construct an example of the classic standard of “you know it when you see it.” Although this has allowed a significant scientific knowledge base on childhood adversity to emerge within a relatively short period, the lack of an agreed-upon definition of the construct represents a significant impediment to future progress in the field.

How can we begin to build scientific consensus on the definition of childhood adversity? Critically, we must come to an agreement about what childhood adversity is and what it is not. Adversity is defined as “a state or instance of serious or continued difficulty or misfortune; a difficult situation or condition; misfortune or tragedy” (“Adversity,” 2015).

This provides a reasonable starting point. Adversity is an environmental event that must be serious (i.e., severe) or a series of events that continues over time (i.e., chronic).

Building on Scott Monroe’s (2008) definition of life stress and models of experience-expectant brain development (Baumrind, 1993; Fox, Levitt, & Nelson, 2010), I propose that childhood adversity should be defined as experiences that are likely to require significant adaptation by an average child and that represent a deviation from the expectable environment. The expectable environment refers to a wide range of species-typical environmental inputs that the human brain requires to develop normally. These include sensory inputs (e.g., variation in patterned light information that is required for normal development of the visual system), exposure to language, and the presence of a sensitive and responsive caregiver (Fox et al., 2010).

As I have argued elsewhere (McLaughlin, Sheridan, & Lambert, 2014; Sheridan & McLaughlin, 2014), deviations from the expectable environment often take two primary forms: an absence of expected inputs (e.g., limited exposure to language or the absence of a primary caregiver), or the presence of unexpected inputs that represent significant threats to the physical integrity or well-being of the child (e.g., exposure to violence).

A similar approach to classifying key forms of child adversity has been articulated by others as well (Farah et al., 2008; Humphreys & Zeanah, 2015). These experiences can either be chronic (e.g., prolonged neglect) or involve single events that are severe enough to represent a deviation from the expectable environment (e.g., sexual abuse).

Together, this provides a working definition of childhood adversity: exposure during childhood or adolescence to environmental circumstances that are likely to require significant psychological, social, or neurobiological adaptation by an average child and that represent a deviation from the expectable environment.

This definition provides some clarity about what childhood adversity is not. The clearest boundary condition involves the developmental timing of exposure; experiences classified as childhood adversity must occur prior to adulthood, either during childhood or adolescence. Most research on childhood adversity has taken a broad definition of childhood, including events occurring during either childhood or adolescence. Although the demarcation between adolescence and adulthood is itself a point of debate, relative consensus exists regarding the onset of adult roles as the end of adolescence (Steinberg, 2014).

Second, childhood adversity refers to an event or ongoing events in the environment. Childhood adversity thus refers only to specific environmental circumstances or events and not to an individual child’s response to those circumstances.

Third, childhood adversity refers to environmental conditions that are likely to require significant psychological, social, or neurobiological adaptation by an average child; therefore, events that represent transient or minor hassles should not qualify.

What types of events should be considered severe enough to warrant classification as adversity? Although there is no absolute rule or formula that can be used to distinguish circumstances or events requiring significant adaptation from those that are less severe or impactful, childhood adversity should include conditions or events that are likely to have a meaningful and lasting impact on developmental processes for most children who experience them. In other words, experiences that could alter fundamental aspects of development in emotional, cognitive, social, or neurobiological domains are the types of experiences that should qualify as adversity.

Studies of childhood adversity should clearly define the study specific decision rules used to distinguish between adversity and more normative stressors.

Finally, environmental circumstances or stressors that do not represent deviations from the expectable environment should not be classified as childhood adversity. In other words, childhood adversity should not include any and all stressors that occur during childhood or adolescence. Two examples of childhood stressors that would likely not qualify as childhood adversity based on this definition, because they do not meet the condition of representing a deviation from the expectable environment, are moving to a new school and death of an elderly grandparent. Each of these childhood stressors would require adaptation by an average child and could influence mental health and development. However, neither represents a deviation from the expectable childhood environment and therefore does not meet the proposed definition of childhood adversity.

A key question for the field is whether the definition of childhood adversity should be narrow or broad. This question will determine whether other common forms of adversity or stress should be considered as indicators of childhood adversity. For example, many population-based studies have included parental psychopathology and divorce as forms of adversity (Felitti et al., 1998; Green et al., 2010). Given the high prevalence of psychopathology and divorce in the population, consideration of any form of parental psychopathology or any type of divorce as a form of adversity results in a fairly broad definition of adversity; certainly, not all cases of parental psychopathology or all divorces result in significant adversity for children. A more useful approach might be to consider only those cases of parental psychopathology or divorce that result in parenting behavior that deviates from the expectable environment (i.e., consistent unavailability, unresponsiveness, or insensitive care) or that generate other types of significant adversity for children (e.g., economic adversity, emotional abuse, etc.) as meeting the threshold for childhood adversity. Providing these types of boundary conditions is important to prevent the construct of childhood adversity from meaning everything and nothing at the same time.

Finally, how does childhood adversity differ from related constructs, including stress, toxic stress, and trauma that can also occur during childhood? What is unique about the construct of childhood adversity that is not captured in definitions of these similar constructs?

First, how is childhood adversity different from stress? The prevailing conceptualization of life stress defines the construct as the adaptation of an organism to specific circumstances that change over time (Monroe, 2008). This definition includes three primary components that interact with one another: environment (the circumstance or event that requires adaptation by the organism), organism (the response to the environmental stimulus), and time (the interactions between the organism and the environment over time; Monroe, 2008). In contrast, childhood adversity refers only to the first of these three components, the environmental aspect of stress.

Second, how is adversity different from toxic stress, a construct recently developed by Jack Shonkoff and colleagues (Shonkoff & Garner, 2012)? Toxic stress refers to the second component of stress just described, the response of the organism. Specifically, toxic stress refers to exaggerated, frequent, or prolonged activation of physiological stress response systems in response to an accumulation of multiple adversities over time in the absence of protection from a supportive caregiver (Shonkoff & Garner, 2012). The concept of toxic stress is conceptually similar to the construct of allostatic load as defined by McEwen (2000) and focuses on a different aspect of stress than childhood adversity.

Finally, how is childhood adversity distinct from trauma? Trauma is defined as exposure to actual or threatened death, serious injury, or sexual violence, either by directly experiencing or witnessing such events or by learning of such events occurring to a close relative or friend (American Psychiatric Association, 2013). Traumatic events occurring in childhood represent one potential form of childhood adversity, but not all types of childhood adversity are traumatic. Examples of adverse childhood experiences that would not be considered traumatic are neglect; poverty; and the absence of a stable, supportive caregiver.

The first concrete recommendation for future research is that the field must utilize a consistent definition of childhood adversity. A useful definition must have clarity about what childhood adversity is and what it is not, provide guidance about decision rules for applying the definition in specific contexts, and increase consistency in the measurement and application of childhood adversity across studies. The definition proposed here, that childhood adversity involves experiences that are likely to require significant adaptation by an average child and that represent a deviation from the expectable environment, represents a starting point in this endeavor, although consideration of alternative definitions and scholarly debate about the relative merits of different definitions is encouraged.

Integrating Studies of Typical and Atypical Development

A developmental psychopathology perspective emphasizes the reciprocal and integrated nature of our understanding of normal and abnormal development (Cicchetti, 1996; Cicchetti & Lynch, 1993; Lynch & Cicchetti, 1998). Normal developmental patterns must be characterized to identify developmental deviations, and abnormal developmental outcomes shed light on the normal developmental processes that lead to maladaptation when disrupted (Cicchetti, 1993; Sroufe, 1990). Maladaptive outcomes, including psychopathology, are considered to be the product of developmental processes (Sroufe, 1997, 2009). This implies that in order to uncover mechanisms linking childhood adversity to psychopathology, the developmental trajectory of the candidate emotional, cognitive, social, or neurobiological process under typical circumstances must first be characterized before examining how exposure to an adverse environment alters that trajectory. This approach has been utilized less frequently than would be expected in the literature on childhood adversity.

Recent work from Nim Tottenham’s lab on functional connectivity between the amygdala and medial prefrontal cortex (mPFC) highlights the utility of this strategy. In an initial study, Gee, Humphreys, et al. (2013) demonstrated age-related changes in amygdala-mPFC functional connectivity in a typically developing sample of children during a task involving passive viewing of fearful and neutral faces. Specifically, they observed a developmental shift from a pattern of positive amygdala-mPFC functional connectivity during early and middle childhood to a pattern of negative connectivity (i.e., higher mPFC activity, lower amygdala activity) beginning in the prepubertal period and continuing throughout adolescence (Gee, Humphreys, et al., 2013). Next, they examined how exposure to institutional rearing in infancy influenced these age-related changes, documenting a more mature pattern of negative functional connectivity among young children with a history of institutionalization (Gee, Gabard-Durnam, et al., 2013).
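In studies of this kind, functional connectivity is commonly operationalized as the correlation between fMRI time series extracted from two regions of interest. The sketch below shows that basic computation with simulated signals and assumed variable names; it is not the preprocessing or analysis pipeline used by Gee and colleagues.

```python
# A minimal sketch of ROI-to-ROI functional connectivity as the Pearson
# correlation between two regional time series. The signals are simulated
# placeholders, not data from the studies cited above.
import numpy as np

def functional_connectivity(roi_a: np.ndarray, roi_b: np.ndarray) -> float:
    """Pearson correlation between two 1-D region-of-interest time series."""
    return float(np.corrcoef(roi_a, roi_b)[0, 1])

rng = np.random.default_rng(0)
amygdala_ts = rng.standard_normal(200)                   # hypothetical amygdala signal
mpfc_ts = -0.5 * amygdala_ts + rng.standard_normal(200)  # partially anticorrelated mPFC signal

r = functional_connectivity(amygdala_ts, mpfc_ts)
print(f"amygdala-mPFC connectivity: r = {r:.2f}")  # a negative r mirrors the "mature" pattern
```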

Utilizing this type of approach is important not only to advance knowledge of developmental mechanisms underlying childhood adversity-psychopathology associations but also to leverage research on adverse environmental experiences to inform our understanding of typical development. Specifically, as frequently argued by Cicchetti (Cicchetti & Toth, 2009), research on atypical or aberrant developmental processes can provide a window into typical development not available through other means. This is particularly relevant in studies of some forms of childhood adversity that involve an absence of expected inputs from the environment, such as institutional rearing and child neglect (McLaughlin, Sheridan, & Lambert, 2014; Sheridan & McLaughlin, 2014). Examining the developmental consequences associated with deprivation in a particular type of input from the environment (e.g., the presence of an attachment figure, exposure to complex language) can provide insights into the types of environmental inputs that are required for a system or set of competencies to develop normally.

Evidence on the developmental trajectories of children raised in institutional settings provides an illustrative example. Institutions for abandoned and orphaned children vary widely, but a common feature across them is the absence of an attachment figure who provides sensitive and responsive care for each child (Smyke et al., 2007; Tottenham, 2012; Zeanah et al., 2003). Developmental research on children raised in institutional settings has provided ample evidence about the importance of the attachment relationship in early development for shaping numerous aspects of development. Unsurprisingly, most children raised in institutions fail to develop a secure attachment relationship to a caregiver; this is particularly true if children remain in institutional care past the age of 2 years (Smyke, Zeanah, Fox, Nelson, & Guthrie, 2010; Zeanah, Smyke, Koga, Carlson, & The Bucharest Early Intervention Project Core Group, 2005).

Children reared in institutional settings also exhibit social skills deficits, delays in language development, lasting disruptions in executive functioning skills, decrements in IQ, and atypical patterns of emotional processing (Almas et al., 2012; Bos, Fox, Zeanah, & Nelson, 2009; Nelson et al., 2007; Tibu et al., 2016; Tottenham et al., 2011; Windsor et al., 2011). Institutional rearing also has wide-ranging impacts on patterns of brain development, including neural structure and function (Gee et al., 2013; McLaughlin, Fox, Zeanah, & Nelson, 2011; McLaughlin, Sheridan, Winter, et al., 2014; Sheridan, Fox, Zeanah, McLaughlin, & Nelson, 2012; Tottenham et al., 2011).

Although children raised in institutional settings often experience deprivation in environmental inputs of many kinds, it is likely that the absence of a primary attachment figure in early development explains many of the downstream consequences of institutionalization on developmental outcomes. Indeed, recent evidence suggests that disruptions in attachment may be a causal mechanism linking institutional rearing with the onset of anxiety and depression in children. Specifically, in a randomized controlled trial of foster care as an intervention for orphaned children in Romania, improvements in attachment security were a mechanism underlying the preventive effects of the intervention on the onset of anxiety and depression in children (McLaughlin, Zeanah, Fox, & Nelson, 2012). By examining the developmental consequences of the absence of an expected input from the environment, namely, the presence of a primary attachment figure, studies of institutional rearing provide strong evidence for the centrality of the early attachment relationship in shaping numerous aspects of development.

Sensitive Periods

The integration of studies on typical and atypical development may be particularly useful in the identification of sensitive periods. Developmental psychopathology emphasizes the cumulative and hierarchical nature of development (Gottlieb, 1991a, 1991b; Sroufe, 2009; Sroufe, Egeland, & Kreutzer, 1990; Werner & Kaplan, 1963). Learning and acquisition of competencies at one point in development provide the scaffolding upon which subsequent skills and competencies are built, such that capabilities from previous periods are consolidated and reorganized in a dynamic, unfolding process across time. The primary developmental tasks occurring at the time of exposure to a risk factor are thought to be the most likely to be interrupted or disrupted by the experience. Developmental deviations from earlier periods are then carried forward and have consequences for children’s ability to successfully accomplish developmental tasks in a later period (Cicchetti & Toth, 1998; Sroufe, 1997). In other words, early experiences constrain future learning of patterns or associations that represent departures from those that were previously learned (Kuhl, 2004).

This concept points to a critical area for future research on childhood adversity involving the identification of sensitive periods of emotional, cognitive, social, and neurobiological development when inputs from the environment are particularly influential. Sensitive periods have been identified both in sensory development and in the development of complex social-cognitive skills, including language (Hensch, 2005; Kuhl, 2004).

Emerging evidence from cognitive neuroscience also suggests the presence of developmental periods when specific regions of the brain are most sensitive to the effects of stress and adversity (Andersen et al., 2008).

However, identification of sensitive periods has remained elusive in other domains of emotional and social development, potentially reflecting the fact that sensitive periods exist for fewer processes in these domains. Nevertheless, determining how anomalous or atypical environmental inputs influence developmental processes differently based on the timing of exposure provides a unique opportunity to identify sensitive periods in development; in this way, research on adverse environments can inform our understanding of typical development by highlighting the environmental inputs that are necessary to foster adaptive development.

Identifying sensitive periods of emotional and social development requires detailed information on the timing of exposure to atypical or adverse environments, which is challenging to measure. To date, studies of institutional rearing have provided the best opportunity for studying sensitive periods in human emotional and social development, as it is straightforward to determine the precise period during which the child lived in the institutional setting.

Studies of institutional rearing have identified a sensitive period for the development of a secure attachment relationship at around 2 years of age; the majority of children placed into stable family care before that time ultimately develop secure attachments to a caregiver, whereas the majority of children placed after 2 years fail to develop secure attachments (Smyke et al., 2010).

Of interest, a sensitive period occurring around 2 years of age has also been identified for other domains, including reactivity of the autonomic nervous system and hypothalamic-pituitary-adrenal (HPA) axis to the environment and a neural marker of affective style (i.e., frontal electroencephalogram asymmetry; McLaughlin et al., 2011; McLaughlin, Sheridan, et al., 2015), suggesting the importance of the early attachment relationship in shaping downstream aspects of emotional and neurobiological development.

The second concrete recommendation for future research is to integrate studies of typical development with those focused on understanding the impact of childhood adversity; in particular, research that can shed light on sensitive periods in emotional, social, cognitive, and neurobiological development is needed. Identifying the developmental processes that are disrupted by exposure to particular types of adverse environments will be facilitated by first characterizing the typical developmental trajectories of the processes in question. In turn, studies of atypical or adverse environments should be leveraged to inform our understanding of the types of environmental inputs that are required, and when, for particular systems to develop normally.

Given the inherent problems in retrospective assessment of timing of exposure to particular environmental experiences, longitudinal studies with repeated measurements of environmental experience and acquisition of developmental competencies are likely to be most informative. Alternatively, the occurrence of exogenous events like natural disasters, terrorist attacks, and changes in policies or the availability of resources (e.g., the opening of a casino on a Native American reservation; Costello, Compton, Keeler, & Angold, 2003) provides additional opportunities to study sensitive periods of development. Identifying sensitive periods is likely to yield critical insights into the points in development when particular capabilities are most likely to be influenced by environmental experience, an issue of central importance for understanding both typical and atypical development. Such information can be leveraged to inform decisions about the points in time when psychosocial interventions for children exposed to adversity are likely to be maximally efficacious.

Explaining Multifinality

The principle of multifinality is central to developmental psychopathology (Cicchetti, 1993). Multifinality refers to the process by which the same risk and/or protective factors may ultimately lead to different developmental outcomes (Cicchetti & Rogosch, 1996).

It has been repeatedly demonstrated that most forms of childhood adversity are associated with elevated risk for the onset of virtually all commonly occurring mental disorders (Green et al., 2010; McLaughlin, Green, et al., 2012). As noted earlier, recent evidence suggests that child maltreatment is associated with a latent liability for psychopathology that explains entirely the associations of maltreatment with specific mental disorders (Caspi et al., 2014; Keyes et al., 2012). However, the mechanisms that explain how child maltreatment, or other forms of adversity, influence a generalized liability to psychopathology have not been specified. To date, there have been few attempts to articulate a model explaining how childhood adversity leads to the diversity of mental disorders with which it is associated (i.e., multifinality). What are the mechanisms that explain this generalized vulnerability to psychopathology arising from adverse early experiences? Are these mechanisms shared across multiple forms of childhood adversity, or are they specific to particular types of adverse experience?

Identifying general versus specific mechanisms will require changes in the way we conceptualize and measure childhood adversity. Prior research has followed one of two strategies. The first involves studying individual types of childhood adversity, such as parental death, physical abuse, neglect, or poverty (Chase-Lansdale, Cherlin, & Kiernan, 1995; Dubowitz, Papas, Black, & Starr, 2002; Fristad, Jedel, Weller, & Weller, 1993; Mullen, Martin, Anderson, Romans, & Herbison, 1993; Noble, McCandliss, & Farah, 2007; Wolfe, Sas, & Wekerle, 1994). However, most individuals exposed to childhood adversity have experienced multiple adverse experiences (Dong et al., 2004; Finkelhor, Ormrod, & Turner, 2007; Green et al., 2010; McLaughlin, Green, et al., 2012). This presents challenges for studies focusing on a single type of adversity, as it is unclear if any observed associations represent the downstream effects of the focal adversity in question (e.g., poverty) or the consequences of other co-occurring experiences (e.g., exposure to violence) that might have different developmental consequences.

Increasing recognition of the co-occurring nature of adverse childhood experiences has resulted in a shift from focusing on single types of adversity to examining the associations between a number of adverse childhood experiences and developmental outcomes, the core strategy of the ACE approach (Arata, Langhinrichsen-Rohling, Bowers, & O’Brien, 2007; Dube et al., 2003; Edwards et al., 2003; Evans et al., 2013). There has been a proliferation of research utilizing this approach in recent years, and it has proved useful in documenting the importance of childhood adversity as a risk factor for a wide range of negative mental health outcomes. However, this approach implicitly assumes that very different kinds of experiences, ranging from violence exposure to material deprivation (e.g., food insecurity) to parental loss, influence psychopathology through similar mechanisms. Although there is likely to be some overlap in the mechanisms linking different forms of adversity to psychopathology, the count approach oversimplifies the boundaries between distinct types of environmental experience that may have unique developmental consequences.

An alternative approach that is likely to meet with more success involves identifying dimensions of environmental experience that underlie multiple forms of adversity and are likely to influence development in similar ways. In recent work, my colleague Margaret Sheridan and I have proposed two such dimensions that cut across multiple forms of adversity: threat and deprivation (McLaughlin, Sheridan, & Lambert, 2014; Sheridan & McLaughlin, 2014).

Threat involves exposure to events involving harm or threat of harm, consistent with the definition of trauma in the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; American Psychiatric Association, 2013). Threat is a central dimension underlying multiple commonly studied forms of adversity, including physical abuse, sexual abuse, some forms of emotional abuse (i.e., that involve threats of physical violence and coercion), exposure to domestic violence, and other forms of violent victimization in home, school, or community settings.

Deprivation, in contrast, involves the absence of expected cognitive and social inputs from the environment, resulting in reduced opportunities for learning. Deprivation in expected environmental inputs is common to multiple forms of adversity, including emotional and physical neglect, institutional rearing, and poverty. Critically, we do not propose that exposure to deprivation and threat occurs independently for children, as these experiences are highly co-occurring, or that these are the only important dimensions of experience involved in childhood adversity.

Instead we propose, first, that these are two important dimensions that can be measured separately and, second, that the mechanisms linking these experiences to the onset of psychopathology are likely to be at least partially distinct (McLaughlin, Sheridan, & Lambert, 2014; Sheridan & McLaughlin, 2014). I describe some of these key mechanisms in the transdiagnostic model presented later. Recently, others have argued for the importance of taking this type of dimensional approach as well (Hamby & Grych, 2013; Humphreys & Zeanah, 2015).

Specific recommendations are for future research to (a) identify key dimensions of environmental experience that might differentially influence developmental outcomes and (b) measure multiple such dimensions in studies of childhood adversity to distinguish between general and specific underlying mechanisms linking different forms of adversity to psychopathology. Fine-grained measurement of the dimensions of threat and deprivation has often not been conducted within the same study.

Studies focusing on specific types of exposure (e.g., abuse) without measuring or adjusting for co-occurring exposures (e.g., neglect) are unable to distinguish between common and specific mechanisms linking different dimensions of adverse experiences to psychopathology. The only way to determine whether such specificity exists is to measure and model these dimensions of experience together in future studies.
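As an illustration of what measuring and modeling these dimensions together might look like, the sketch below enters separate threat and deprivation scores as simultaneous predictors of a symptom outcome, so that each association is adjusted for the co-occurring dimension. The data are simulated and the variable names hypothetical; this is not an analysis from the studies cited here.

```python
# A minimal sketch of the dimensional approach: threat and deprivation entered
# together as predictors of a symptom outcome, so each estimate is adjusted
# for the other, co-occurring dimension. Simulated data, hypothetical names.
import numpy as np

rng = np.random.default_rng(1)
n = 500
threat = rng.normal(size=n)
deprivation = 0.4 * threat + rng.normal(size=n)           # the dimensions co-occur
symptoms = 0.5 * threat + 0.1 * deprivation + rng.normal(size=n)

# Ordinary least squares with an intercept and both dimensions entered jointly.
X = np.column_stack([np.ones(n), threat, deprivation])
betas, *_ = np.linalg.lstsq(X, symptoms, rcond=None)
print(f"adjusted threat estimate:      {betas[1]:.2f}")
print(f"adjusted deprivation estimate: {betas[2]:.2f}")
```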

Characterizing the Interplay of Risk and Protective Factors

Although psychopathology is common among children exposed to a wide range of adverse environments, many children exhibit adaptation and resilience following adversity (Masten, 2001; Masten, Best, & Garmezy, 1990). For example, studies of resilience suggest that children who have a positive relationship with a caring and competent adult; are good at learning, problem solving, and self-regulation; are socially engaging; and have a positive self-image are more likely to exhibit positive adaptation after exposure to adversity than children without these characteristics (Luthar, Cicchetti, & Becker, 2000; Masten, 2001; Masten et al., 1990).

However, in contrast to the consistent pattern of associations between childhood adversity and psychopathology, evidence for protective factors varies widely across studies, and in most cases children exposed to adversity exhibit adaptive functioning in some domains but not others; even within a single domain, children may be functioning well at one point in time but not at others (Luthar et al., 2000). This is not surprising given that the degree to which a particular factor is protective depends heavily upon context, including the specific risk factors with which it is interacting (Cicchetti & Lynch, 1993; Sameroff, Gutman, & Peck, 2003).

For example, authoritative parenting has been shown to be associated with adaptive outcomes for children raised in stable contexts that are largely free of significant adversity (Steinberg, Elmen, & Mounts, 1989; Steinberg, Lamborn, Dornbusch, & Darling, 1992; Steinberg, Mounts, Lamborn, & Dornbusch, 1991); in contrast, authoritarian parenting appears to be protective for children being raised in environments characterized by low resources and/or high degrees of violence and other threats (Flouri, 2007; Gonzales, Cauce, Friedman, & Mason, 1996).

The degree to which variation in specific genetic polymorphisms moderates the impact of childhood adversity on developmental outcomes is also highly variable across studies; although genetic variation clearly contributes to developmental trajectories of adaptation and maladaptation following childhood adversity, this topic has been reviewed extensively elsewhere (Heim & Binder, 2012; McCrory, De Brito, & Viding, 2010; Uher & McGuffin, 2010) and is not discussed further. This complexity has contributed to the widely variable findings regarding protective factors and resilience.

Progress in identifying protective factors that buffer children from maladaptive outcomes following childhood adversity might be achieved by shifting the focus from downstream outcomes to more proximal mechanisms known to underlie the relationship between adverse childhood experiences and psychopathology. Research on resilience has often focused on distal outcomes, such as the absence of psychopathology, the presence of high-quality peer relationships, or good academic performance, as markers of adaptive functioning in children with exposure to adversity (Bolger, Patterson, & Kupersmidt, 1999; Collishaw et al., 2007; Fergusson & Lynskey, 1996; Luthar, 1991).

Just as there are numerous mechanisms through which exposure to adverse environments leads to psychopathology and other downstream outcomes, there are likely to be a wide range of mechanisms through which protective factors buffer children from maladaptation following childhood adversity. Indeed, modern conceptualizations of resilience describe it as a developmental process that unfolds over time as an ongoing transaction between a child and the multiple contexts in which he or she is embedded (Luthar et al., 2000).

Rather than examining protective factors that buffer children from developing psychopathology following adverse childhood experiences, an alternative approach is to focus on factors that moderate the association of childhood adversity with the developmental processes that serve as mechanisms linking adversity with psychopathology (e.g., emotion regulation, executive functioning) or that moderate the link between these developmental processes and the onset of psychopathology. Deconstructing the pathways linking childhood adversity to psychopathology allows moderators to be examined separately at different stages of these pathways and may yield greater information about how protective factors ultimately exert their effects on downstream outcomes, including psychopathology.

Accordingly, a fourth recommendation is that future research should focus on identifying protective factors that buffer children from the negative consequences of adversity at two levels: (a) factors that modify the association between childhood adversity and the maladaptive patterns of emotional, cognitive, social, and neurobiological development that serve as intermediate phenotypes linking adversity with psychopathology, and (b) factors that moderate the influence of intermediate phenotypes on the emergence of psychopathology, leading to divergent trajectories of adaptation across children.

To understand resilience, we first need to understand the developmental processes that are disrupted following exposure to adversity and how certain characteristics either prevent or compensate for those developmental disruptions or reduce their impact on risk for psychopathology.

A TRANSDIAGNOSTIC MODEL OF CHILDHOOD ADVERSITY AND PSYCHOPATHOLOGY

The remainder of the article outlines a transdiagnostic model of mechanisms linking childhood adversity with youth psychopathology. Two core developmental mechanisms are proposed that, in part, explain patterns of multifinality: emotional processing and executive functioning.

The model builds on a framework described by Nolen-Hoeksema and Watkins (2011) for identifying transdiagnostic processes. Of importance, the model is not intended to be comprehensive in delineating all mechanisms linking childhood adversity with psychopathology but rather focuses on two candidate mechanisms linking childhood adversity to multiple forms of psychopathology. At the same time, these mechanisms are also specific in that each is most likely to emerge following exposure to specific dimensions of adverse early experience.

The model is specific with regard to the underlying dimensions of adverse experience considered and identifies several key moderators that might explain divergent developmental trajectories among children following exposure to adversity. Future research is needed to expand this framework to incorporate other key dimensions of the adverse environmental experience, developmental mechanisms linking those dimensions of adversity with psychopathology, and moderators of those associations.

Distal Risk Factors

Within the proposed model, core dimensions of environmental experience that underlie multiple forms of adversity are conceptualized as distal risk factors for psychopathology. Specifically, experiences of threat and deprivation constitute the first component of the proposed transdiagnostic model of childhood adversity and psychopathology.

Experiences of threat and deprivation meet each of Nolen-Hoeksema and Watkins’s (2011) criteria for a distal risk factor. They represent environmental conditions largely outside the control of the child that are linked to the onset of psychopathology only through intervening causal mechanisms that represent more proximal risk factors. Although they are probabilistically related to psychopathology, exposure to threat and deprivation does not invariably lead to mental disorders. These experiences influence proximal risk factors primarily through learning mechanisms that ultimately shape patterns of information processing, emotional responses to the environment, and higher order control processes that influence both cognitive and emotional processing.

Proximal Risk Factors

The developmental processes that are altered following exposure to adverse environmental experiences represent proximal risk factors, or intermediate phenotypes, linking them to the onset of psychopathology. These proximal risk factors represent the second component of the proposed transdiagnostic model. Nolen-Hoeksema and Watkins (2011) argued that proximal risk factors are within-person factors that mediate the relationship between distal risk factors, including aspects of environmental context that are difficult to modify, such as childhood adversity, and the emergence of psychopathology. Proximal risk factors directly influence symptoms and are temporally closer to symptom onset and often easier to modify than distal risk factors (Nolen-Hoeksema & Watkins, 2011).

Identifying modifiable within-person factors that link adverse environmental experiences with the onset of symptoms is the key to developing interventions to prevent the onset of psychopathology in children who have experienced adversity.

The model includes two primary domains of proximal risk factors: emotional processing and executive functioning.

Emotional processing refers to information processing of emotional stimuli (e.g., attention, memory), emotional reactivity, and both automatic (e.g., habituation, fear extinction) and effortful (e.g., cognitive reappraisal) forms of emotion regulation. These processes all represent responses to emotional stimuli, and many involve interactions of cognition with emotion.

Executive functions comprise a set of cognitive processes that support the ability to learn new knowledge and skills; hold in mind goals and information; and create and execute complex, future-oriented plans. Executive functioning comprises the ability to hold information in mind and focus on currently relevant information (working memory), inhibit actions and information not currently relevant (inhibition), and switch flexibly between representations or goals (cognitive flexibility; Miyake & Friedman, 2012; Miyake, Friedman, Rettinger, Shah, & Hegarty, 2001).

Together these skills allow the creation and execution of future-oriented plans and the inhibition of behaviors that do not serve these plans, providing the foundation for healthy decision making and self-regulation. Many of the diverse mechanisms linking childhood adversity to psychopathology are subsumed within these two broad domains.

Emotional processing

Stable patterns of emotional processing, emotional responding to the environment, and emotion regulation represent the first core domain of proximal risk factors. Experiences of uncontrollable threat are associated with strong learning of specific contingencies and overgeneralization of that learning to novel contexts, which facilitates the processing of salient emotional cues in the environment (e.g., biased attention to threat). Given the importance of quickly identifying potential threats in the environment for children growing up in environments characterized by legitimate danger, these learning processes should produce information processing biases that promote rapid identification of potential threats. Indeed, evidence suggests that children with abuse histories (i.e., an environment characterized by high levels of threat) exhibit attention biases toward facial displays of anger, identify anger with little perceptual information, have difficulty disengaging from angry faces, and display anticipatory monitoring of the environment following interpersonal displays of anger (Pollak, Cicchetti, Hornung, & Reed, 2000; Pollak & Sinha, 2002; Pollak & Tolley-Schell, 2003; Pollak, Vardi, Putzer Bechner, & Curtin, 2005; Shackman, Shackman, & Pollak, 2007).

Given the relevance of anger as a signal of potential threat, these findings suggest that exposure to threatening environments results in stable patterns of information processing that facilitate threat identification and maintenance of attention to threat cues. These attention biases are specific to children who have experienced violence; for example, children who have been neglected (i.e., an environment characterized by deprivation in social and cognitive inputs) experience difficulty discriminating facial expressions of emotion but do not exhibit attention biases toward threat (Pollak, Klorman, Thatcher, & Cicchetti, 2001; Pollak et al., 2005).
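Attention bias toward threat in this literature is often quantified with dot-probe-style difference scores: reaction times when a probe replaces a neutral face minus reaction times when it replaces an angry face. The sketch below illustrates that standard difference score with invented reaction times; it is not the specific paradigm used in the studies cited above.

```python
# A minimal sketch of a dot-probe attention bias score: mean reaction time (ms)
# when the probe replaces the neutral face minus mean reaction time when it
# replaces the angry face. Positive scores indicate vigilance toward threat.
# All reaction times below are invented for illustration.
from statistics import mean

rt_probe_at_neutral = [512, 498, 530, 521, 505]  # probe appears at the neutral face location
rt_probe_at_angry = [471, 484, 466, 479, 470]    # probe appears at the angry face location

bias_score = mean(rt_probe_at_neutral) - mean(rt_probe_at_angry)
print(f"attention bias toward threat: {bias_score:.1f} ms")  # positive -> faster to threat locations
```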

In addition to attention biases, children who have been the victims of violence are also more likely to generate attributions of hostility to others in socially ambiguous situations (Dodge, Bates, & Pettit, 1990; Dodge, Pettit, Bates, & Valente, 1995; Weiss, Dodge, Bates, & Pettit, 1992), a pattern of social information processing tuned to be overly sensitive to potential threats in the environment. Finally, some evidence suggests that exposure to threatening environments is associated with memory biases for overgeneral autobiographical memories in both children and adults (Crane et al., 2014; Williams et al., 2007).

Children with trauma histories also exhibit meaningful differences in patterns of emotional responding that are consistent with these patterns of information processing. For example, children who have experienced interpersonal violence exhibit greater activation in the amygdala and other nodes of the salience network (e.g., anterior insula, putamen, thalamus) to a wide range of negative emotional stimuli (McCrory et al., 2013; McCrory et al., 2011; McLaughlin, Peverill, Gold, Alves, & Sheridan, 2015), suggesting heightened salience of information that could predict threat.

These findings build on earlier work using evoked response potentials documenting amplified neural response to angry faces in children who were physically abused (Pollak, Cicchetti, Klorman, & Brumaghim, 1997; Pollak et al., 2001) and suggest that exposure to threatening experiences heightens the salience of negative emotional information, due to the potential relevance for detecting novel threats.

Heightened amygdala response to negative emotional cues could also reflect fear learning processes, whereby previously neutral stimuli that have become associated with traumatic events begin to elicit conditioned fear responses, or the result of deficits in automatic emotion regulation processes like fear extinction and habituation, which are mediated through connections between the ventromedial prefrontal cortex and amygdala. Recent findings of poor resting state functional connectivity between the ventromedial prefrontal cortex and amygdala among female adolescents with abuse histories provide some evidence for this latter pathway (Herringa et al., 2013).

In addition to heightened neural responses in regions involved in salience processing, consistent associations between exposure to threatening environments and elevations in self-reported emotional reactivity to the environment have been observed in our lab and elsewhere (Glaser, Van Os, Portegijs, & Myin-Germeys, 2006; Heleniak, Jenness, Van Der Stoep, McCauley, & McLaughlin, in press; McLaughlin, Kubzansky, et al., 2010).

Atypical physiological responses to emotional cues have also been documented consistently among children who have experienced trauma, although the specific pattern of findings has varied across studies depending on the specific physiological measures and emotion-eliciting paradigms employed. We recently applied a theoretical model drawn from social psychology on adaptive and maladaptive responses to stress to examine physiological responses to stress among maltreated youths. We observed a pattern of increased vascular resistance and blunted cardiac output reactivity among youths who had been physically or sexually abused relative to participants with no history of violence exposure (McLaughlin, Sheridan, Alves, & Mendes, 2014). This pattern of autonomic nervous system reactivity reflects an inefficient cardiovascular response to stress that has been shown in numerous studies to occur when individuals are in a state of heightened threat and is associated with threat appraisals and maladaptive cognitive and behavioral responses to stress (Jamieson, Mendes, Blackstock, & Schmader, 2010; Jamieson, Nock, & Mendes, 2012; Mendes, Blascovich, Major, & Seery, 2001; Mendes, Major, McCoy, & Blascovich, 2008). Using data from a large population-based cohort of adolescents, we recently replicated the association between childhood trauma exposure and blunted cardiac output reactivity during acute stress (Heleniak, Riese, Ormel, & McLaughlin, 2016).
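In this literature, cardiovascular reactivity is typically expressed as a change from a resting baseline, with total peripheral resistance derived from mean arterial pressure and cardiac output. The sketch below illustrates those conventional formulas with invented values; it is not the scoring procedure from the studies cited above.

```python
# A minimal sketch of conventional cardiovascular reactivity scores: total
# peripheral resistance (TPR, dyne·s·cm^-5) is derived from mean arterial
# pressure (MAP, mmHg) and cardiac output (CO, L/min) as (MAP / CO) * 80,
# and reactivity is the task value minus the resting baseline value.
# All physiological values are invented for illustration.

def total_peripheral_resistance(map_mmhg: float, co_l_per_min: float) -> float:
    """TPR in dyne·s·cm^-5 from MAP (mmHg) and cardiac output (L/min)."""
    return (map_mmhg / co_l_per_min) * 80.0

baseline = {"map": 85.0, "co": 5.5}
stress_task = {"map": 100.0, "co": 5.6}   # pressure rises while CO barely changes

co_reactivity = stress_task["co"] - baseline["co"]
tpr_reactivity = (total_peripheral_resistance(stress_task["map"], stress_task["co"])
                  - total_peripheral_resistance(baseline["map"], baseline["co"]))

# A threat-like profile: blunted cardiac output reactivity alongside
# increased vascular resistance.
print(f"CO reactivity:  {co_reactivity:+.2f} L/min")
print(f"TPR reactivity: {tpr_reactivity:+.1f} dyne·s·cm^-5")
```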

Together, converging evidence across multiple levels of analysis indicates that exposure to trauma is associated with a persistent pattern of information processing involving biased attention toward potential threats in the environment, heightened neural and subjective responses to negative emotional cues, and a pattern of autonomic nervous system reactivity consistent with heightened threat perception. This heightened reactivity to negative emotional cues may make it more difficult for children who have been exposed to threatening environments to regulate emotional responses. Indeed, a recent study from my lab found that when trying to regulate emotional responses using cognitive reappraisal, children who had been abused recruited regions of the prefrontal cortex involved in effortful control to a greater degree than children who had never experienced violence (McLaughlin, Peverill, et al., 2015). This pattern suggests that attempts to modulate emotional responses to negative cues require more cognitive resources for children with abuse histories, meaning that effective regulation may break down more easily in the face of stress. Evidence that the negative emotional effects of stressful events are heightened among those with maltreatment histories is consistent with this possibility (Glaser et al., 2006; McLaughlin, Conron, et al., 2010).

In addition to alterations in patterns of emotional reactivity to environmental cues, child trauma has been associated with maladaptive patterns of responding to distress. For example, exposure to threatening environments early in development is associated with habitual engagement in rumination, a response style characterized by passive focus on feelings of distress along with their causes and consequences without attempts to actively resolve the causes of distress (Nolen-Hoeksema, Wisco, & Lyubomirsky, 2008). High reliance on rumination as a strategy for responding to distress has been observed in adolescents and adults who were abused as children (Conway, Mendelson, Giannopoulos, Csank, & Holm, 2005; Heleniak et al., in press; Sarin & Nolen-Hoeksema, 2010), in adolescents who experienced victimization by peers (McLaughlin, Hatzenbuehler, & Hilt, 2009), and in both adolescents and adults exposed to a wide range of negative life events (McLaughlin & Hatzenbuehler, 2009; Michl, McLaughlin, Shepherd, & Nolen-Hoeksema, 2013), although the latter findings are not specific to threat per se.

Although evidence for disruptions in emotional processing comes primarily from studies examining children exposed to environments characterized by high degrees of threat, deprived environments are also likely to have downstream effects on emotional development that are at least partially unique from those associated with threat. As noted previously, children who have been neglected experience difficulties discriminating facial displays of emotion (Pollak et al., 2001; Pollak et al., 2005), although some studies of neglected children have found few differences in neural responses to facial emotion in early childhood (Moulson, Fox, Zeanah, & Nelson, 2009; Slopen, McLaughlin, Fox, Zeanah, & Nelson, 2012). However, recent work suggests that children raised in deprived early environments exhibit elevated amygdala response to facial emotion and a mature pattern of functional connectivity between the amygdala and mPFC during emotional processing tasks (Gee et al., 2013; Tottenham et al., 2011). Finally, children who were neglected or raised in deprived institutions tend to exhibit blunted physiological responses to stress, including in the autonomic nervous system and HPA axis (Gunnar, Frenn, Wewerka, & Van Ryzin, 2009; McLaughlin, Sheridan, et al., 2015).

Much of the existing work on childhood adversity and emotional responding has focused on responses to negative emotional cues. However, a growing body of evidence also suggests that responses to appetitive and rewarding cues are disrupted in children exposed to adversity. For example, children raised in deprived early environments exhibit blunted ventral striatal response to the anticipation of reward (Mehta et al., 2010), and a similar pattern has been observed in a sample of adults exposed to abuse during childhood (Dillon et al., 2009). In a recent study, an increase in ventral striatum response to happy emotional faces occurred from childhood to adolescence in typically developing children but not in children reared in deprived institutions (Goff et al., 2013). In recent work in our lab, we have also observed blunted reward learning among children exposed to institutional rearing (Sheridan, McLaughlin, et al., 2016).

Although the mechanisms underlying the link between diverse forms of childhood adversity and responsiveness to reward have yet to be clearly identified, it has been suggested that repeated activation of the HPA axis in early childhood can attenuate expression of brain-derived neurotrophic factor, which in turn regulates the mesolimbic dopamine system that underlies reward learning (Goff & Tottenham, 2014). These reductions in brain-derived neurotrophic factor expression may contribute to a pattern of blunted ventral striatum response to reward anticipation or receipt.

Alternatively, given the central role of the mesolimbic dopamine system in attachment related behavior (Strathearn, 2011), the absence or unpredictability of an attachment figure in early development may reduce opportunities for learning about the rewarding nature of affiliative interactions and social bonds; the absence of this type of stimulus reward learning early in development, when sensitive and responsive caregiving from a primary attachment figure is an expected environmental input, may ultimately contribute to biased processing of rewarding stimuli later in development. If social interactions in early life are either absent or unrewarding, expectations about the hedonic value of social relationships and other types of rewards might be altered in the long term, culminating in attenuated responsiveness to anticipation of reward. Future research is needed to identify the precise mechanisms through which adverse early environments ultimately shape reward learning and responses to rewarding stimuli.

Links between emotional processing and psychopathology

An extensive and growing body of work suggests that disruptions in emotional processing, emotional responding, and emotion regulation represent transdiagnostic factors associated with virtually all commonly occurring forms of psychopathology (Aldao, Nolen-Hoeksema, & Schweizer, 2010). Specifically, attention biases to threat and overgeneral autobiographical memory biases have been linked to anxiety and depression, respectively, in numerous studies (Bar-Haim, Lamy, Bakermans-Kranenburg, Pergamin, & van IJzendoorn, 2007; Williams et al., 2007), and attributions of hostility and other social information processing biases associated with trauma exposure are associated with risk for the onset of conduct problems and aggression (Dodge et al., 1990; Dodge et al., 1995; Weiss et al., 1992).

Heightened emotional responses to negative environmental cues are associated with both internalizing and externalizing psychopathology in laboratory-based paradigms examining self-reported emotional and physiological responses to emotional stimuli (Boyce et al., 2001; Carthy, Horesh, Apter, Edge, & Gross, 2010; Hankin, Badanes, Abela, & Watamura, 2010; McLaughlin, Kubzansky, et al., 2010; McLaughlin, Sheridan, Alves, et al., 2014; Rao, Hammen, Ortiz, Chen, & Poland, 2008), MRI studies examining neural response to facial emotion (Sebastian et al., 2012; Siegle, Thompson, Carter, Steinhauer, & Thase, 2007; Stein, Simmons, Feinstein, & Paulus, 2007; Suslow et al., 2010; Thomas et al., 2001), and experience sampling studies that measure emotional responses in real-world situations (Myin-Germeys et al., 2003; Silk, Steinberg, & Morris, 2003).

Habitual engagement in rumination has also been linked to heightened risk for anxiety, depression, eating disorders, and problematic substance use (McLaughlin & Nolen-Hoeksema, 2011; Nolen-Hoeksema, 2000; Nolen-Hoeksema, Stice, Wade, & Bohon, 2007). Together, evidence from numerous studies examining emotional processing at multiple levels of analysis suggests that disruptions in emotional processing are a key transdiagnostic factor in psychopathology that may explain patterns of multifinality following exposure to threatening early environments.

Executive functioning

Disruptions in executive functioning represent the second key proximal risk factor in the model. A growing body of evidence suggests that environmental deprivation is associated with lasting alterations in executive functioning skills. Poor executive functioning, including problems with working memory, inhibitory control, planning ability, and cognitive flexibility, has consistently been documented among children raised in deprived environments ranging from institutional settings to low socioeconomic status (SES) families.

Children raised in institutional settings exhibit a range of deficits in cognitive functions including general intellectual ability (Nelson et al., 2007; O’Connor, Rutter, Beckett, Keaveney, & Kreppner, 2000), expressive and receptive language (Albers, Johnson, Hostetter, Iverson, & Miller, 1997; Windsor et al., 2011), and executive function skills (Bos et al., 2009; Tibu et al., 2016). In contrast to other domains of cognitive ability, however, deficits in executive functioning and marked elevations in the prevalence of attention deficit hyperactivity disorder (ADHD), which is characterized by executive functioning problems, are persistent over time even after placement into a stable family environment (Bos et al., 2009; Tibu et al., 2016; Zeanah et al., 2009).

Similar patterns of executive functioning deficits have also been observed among children raised in low SES families, including problems with working memory, inhibitory control, and cognitive flexibility (Blair, 2002; Farah et al., 2006; Noble et al., 2007; Noble, Norman, & Farah, 2005; Raver, Blair, Willoughby, & the Family Life Project Key Investigators, 2013), as well as deficits in language abilities (Fernald, Marchman, & Weisleder, 2013; Weisleder & Fernald, 2013). Poor cognitive flexibility among children raised in low SES environments has been observed as early as infancy (Clearfield & Niman, 2012). Relative to children who have been abused, children exposed to neglect are at greater risk for cognitive deficits (Hildyard & Wolfe, 2002) similar to those observed in poverty and institutionalization (Dubowitz et al., 2002; Spratt et al., 2012).

The lateral PFC is recruited during a wide variety of executive functioning tasks, including working memory (Wager & Smith, 2003), inhibition (Aron, Robbins, & Poldrack, 2004), and cognitive flexibility (Rougier, Noelle, Braver, Cohen, & O’Reilly, 2005), and is one of the brain regions most centrally involved in executive functioning. In addition to exhibiting poor performance on executive functioning tasks, children from low SES families also have different patterns of lateral PFC recruitment during these tasks as compared to children from middle class families (Kishiyama, Boyce, Jimenez, Perry, & Knight, 2009; Sheridan, Sarsour, Jutte, D’Esposito, & Boyce, 2012). A similar pattern of poor inhibitory control and altered lateral PFC recruitment during an inhibition task has also been observed in children raised in institutional settings (Mueller et al., 2010).

These studies provide some clues about where to look with regard to the types of environmental inputs that might be necessary for the development of adaptive executive functions. In particular, environmental inputs that are absent or atypical among children raised in institutional settings, as well as among children raised in poverty, are promising candidates. Institutional rearing is associated with the absence of numerous kinds of environmental inputs, including the presence of an attachment figure, variation in daily routines and activities, access to age-appropriate, enriching cognitive stimulation from books, toys, and interactions with adults, and complex language exposure (Smyke et al., 2007; Zeanah et al., 2003).

Some of these dimensions of environmental experience have also been shown to be lacking among children raised in poverty, including access to cognitively enriching activities such as books, toys, and puzzles; learning opportunities outside the home (e.g., museums) and within the context of the parent-child relationship (e.g., parental encouragement of learning colors, words, and numbers, and reading to the child); and variation in environmental complexity and stimulation, as well as the amount and complexity of language input (Bradley, Corwyn, Burchinal, McAdoo, & Coll, 2001; Bradley, Corwyn, McAdoo, & Coll, 2001; Dubowitz et al., 2002; Garrett, Ng’andu, & Ferron, 1994; Hart & Risley, 1995; Hoff, 2003; Linver, Brooks-Gunn, & Kohen, 2002).

Together, these distinct lines of research suggest that enriching cognitive activities and exposure to complex language might provide the scaffolding that children require to develop executive functions. Some indirect evidence supports this notion. For example, the degree of environmental stimulation in the home and the amount and quality of maternal language each predict the development of language skills in early childhood (Farah et al., 2008; Hoff, 2003), and children raised in both institutional settings and low SES families exhibit deficits in expressive and receptive language (Albers et al., 1997; Hoff, 2003; Noble et al., 2007; Noble et al., 2005; Windsor et al., 2011), in addition to problems with executive functioning skills. Moreover, a recent study found that atypical patterns of PFC activation during executive function tasks among children from low SES families are explained by the degree of complex language exposure in the home (Sheridan et al., 2012). Finally, children raised in bilingual environments appear to have improved performance on executive function tasks (Carlson & Meltzoff, 2008).

These findings suggest that the environmental inputs that are required for language development (i.e., complex language directed at the child) may also be critical for the development of executive function skills. Language provides an opportunity to develop multiple such skills, including working memory (e.g., holding in mind the first part of a sentence while waiting for the speaker to finish), inhibitory control (e.g., waiting your turn in a conversation), and cognitive flexibility (e.g., switching between grammatical and syntactic rules).

Lack of consistent rules, routines, structure, and parental scaffolding behaviors may be another mechanism explaining deficits in executive functioning among children from low SES families. This lack of environmental predictability is more common among low SES than middle-class families (Deater-Deckard, Chen, Wang, & Bell, 2012; Evans, Gonnella, Marcynyszyn, Gentile, & Salpekar, 2005; Evans & Wachs, 2009). The absence of consistent rules, routines, and contingencies in the environment may interfere with children’s ability to learn abstract rules and to develop the capacity for self-regulation. Indeed, higher levels of parental scaffolding, or the provision of support that allows the child to solve problems autonomously, have been prospectively linked with the development of better executive function skills in early childhood (Bernier, Carlson, & Whipple, 2010; Hammond, Muller, Carpendale, Bibok, & Liebermann-Finestone, 2012; Landry, Miller-Loncar, Smith, & Swank, 2002).

These findings suggest that environmental unpredictability is an additional mechanism linking low SES environments to poor executive functioning in children. However, given the highly structured and routinized nature of most institutional settings, environmental unpredictability is an unlikely explanation for executive functioning deficits among institutionally reared children.

Deficits in executive functioning skills have sometimes been observed in children exposed to trauma (DePrince, Weinzierl, & Combs, 2009; Mezzacappa, Kindlon, & Earls, 2001) as well as in children with high levels of exposure to stressful life events (Hanson et al., 2012), although some studies have found associations between trauma exposure and working memory but not inhibition or cognitive flexibility (Augusti & Melinder, 2013).

There are two possible explanations for these findings.

First, for children exposed to threat, deficits in executive functions may emerge primarily in emotional contexts: heightened perceptual sensitivity and reactivity to emotional stimuli draw attention to those stimuli (Shackman et al., 2007), making it more difficult to hold other stimuli in mind, to effectively inhibit responses to emotional stimuli, or to flexibly allocate attention to nonemotional stimuli. Indeed, in a recent study in my lab, we observed that exposure to trauma (both maltreatment and community violence) was associated with deficits in inhibitory control only in the context of emotional stimuli (i.e., a Stroop task involving emotional faces) and not when stimuli were neutral (i.e., shapes), and had no association with cognitive flexibility (Lambert, King, Monahan, & McLaughlin, 2016). In contrast, deprivation exposure was associated with deficits in inhibition to both neutral and emotional stimuli and with poor cognitive flexibility. Although this suggests there may be specificity in the association of trauma exposure with executive functions, further research is needed to understand these links.

Second, studies examining exposure to trauma seldom measure indices of deprivation, nor do they adjust for deprivation exposure (just as studies of deprivation rarely assess or control for trauma exposure). Disentangling the specific effects of these two types of experiences on executive functioning processes is a critical goal for future research.

Links between executive functioning and psychopathology

Executive functioning deficits are a central feature of ADHD (Martinussen, Hayden, Hogg-Johnson, & Tannock, 2005; Sergeant, Geurts, & Oosterlaan, 2002; Willcutt, Doyle, Nigg, Faraone, & Pennington, 2005). Problems with executive functions have also been observed in children with externalizing psychopathology, including conduct disorder and oppositional defiant disorder, even after accounting for comorbid ADHD (Hobson, Scott, & Rubia, 2011). They are also associated with elevated risk for the onset of substance use problems and other types of risky behavior (Crews & Boettiger, 2009; Patrick, Blair, & Maggs, 2008), including criminal behavior (Moffitt et al., 2011) and the likelihood of becoming incarcerated (Yechiam et al., 2008).

Although executive functioning deficits figure less prominently in theoretical models of the etiology of internalizing psychopathology, when these deficits emerge in the context of emotional processing (e.g., poor inhibition of negative emotional information), they are more strongly linked to internalizing problems, including depression (Goeleven, De Raedt, Baert, & Koster, 2006; Joormann & Gotlib, 2010). Executive functioning deficits also contribute to other proximal risk factors, such as rumination (Joormann, 2006), that are well-established risk factors for depression and anxiety disorders. Patterns of executive functioning in childhood have lasting implications for health and development beyond effects on psychopathology. Recent work suggests that executive functioning measured in early childhood predicts a wide range of outcomes in adulthood in the domains of health, SES, and criminal behavior, over and above the effects of IQ (Moffitt et al., 2011).

Mechanisms Linking Distal Risk Factors to Proximal Risk Factors

How do experiences of threat and deprivation come to influence proximal risk factors? Learning mechanisms are the most obvious pathways linking these experiences with changes in emotional processing and executive functioning, although other mechanisms (e.g., the development of stable beliefs and schemas) are also likely to play an important role. Specifically, the impact of threatening and deprived early environments on the development of patterns of emotional processing and emotional responding may be mediated, at least in part, through emotional learning pathways. The associative learning mechanisms and neural circuitry underlying fear learning and reward learning have been well characterized in both animals and humans and reviewed elsewhere (Delgado, Olsson, & Phelps, 2006; Flagel et al., 2011; Johansen, Cain, Ostroff, & LeDoux, 2011; O’Doherty, 2004).

Exposure to threatening or deprived environments early in development results in the presence (i.e., in the case of threats) or absence (i.e., in the case of deprivation) of opportunities for emotional learning; these learning experiences, in turn, have lasting downstream effects on emotional processing. Specifically, early learning histories can influence the salience of environmental stimuli as either potential threats or incentives, shape the magnitude of emotional responses to environmental stimuli, particularly those that represent either threat or reward, and alter motivation to avoid threats or pursue rewards. Thus, fear learning mechanisms and their downstream consequences explain, in part, the association of threatening environments with alterations in emotional processing (McLaughlin et al., 2014; Sheridan & McLaughlin, 2014).

Similarly, the effects of deprived early environments on emotional processing are likely to be partially explained through reward learning pathways. Pathways linking threatening early environments to habitual patterns of responding to distress, such as rumination, may also involve learning mechanisms including both observational (e.g., modeling responses utilized by caregivers) and instrumental (e.g., reinforcement of passive responses to distress when emotional displays are met with dismissive or punishing reactions from caregivers) learning.

Learning mechanisms may also be central to the association between deprived early environments and the development of executive functioning. In particular, deprived environments such as institutional rearing, neglect, and poverty are characterized by an absence of learning opportunities, which is thought to contribute directly to later difficulties with complex higher-order cognition. Specifically, reduced opportunities for learning, due to the absence of complex and varied stimulus-response contingencies or of the consistent rules, routines, and structures that allow children to learn concrete and abstract rules, may influence the development of both cognitive and behavioral aspects of self-regulation.

Moderators of the Link Between Distal and Proximal Risk Factors

Children vary markedly in their sensitivity to environmental context. Advances in theoretical conceptualizations of individual differences in sensitivity to context can be leveraged to understand variability in developmental processes among children exposed to adverse environments. A growing body of evidence suggests that certain characteristics make children particularly responsive to environmental influences; such factors confer not only vulnerability in the context of adverse environments but also benefits in the presence of supportive environments (Belsky, Bakermans-Kranenburg, & van IJzendoorn, 2007; Belsky & Pluess, 2009; Boyce & Ellis, 2005; Ellis, Essex, & Boyce, 2005). Highly reactive temperament, vagal tone, and genetic polymorphisms that regulate the dopaminergic and serotonergic systems have been identified as markers of plasticity and susceptibility to both negative and positive environmental influences (Belsky & Pluess, 2009). These plasticity markers represent potential moderators of the link between childhood adversity and disruptions in emotional processing and executive functioning.

Developmental timing of exposure to adversity also plays a meaningful role in moderating the impact of childhood adversity on emotional processing and executive functioning. For example, in recent work we have shown that early environmental deprivation has a particularly pronounced impact on the development of stress response systems during the first 2 years of life (McLaughlin et al., 2015). These findings suggest the possibility of an early sensitive period during which the environment exerts a disproportionate effect on the development of neurobiological systems that regulate responses to stress. As noted at the beginning of this article, additional research is needed to identify developmental periods of heightened plasticity in specific subdomains of emotional processing and executive functioning and to determine the degree to which disruptions in these domains vary as a function of the timing of exposure to childhood adversity.

Moderators of Trajectories From Proximal Risk Factors to Psychopathology

A key component of Nolen-Hoeksema and Watkins’s (2011) transdiagnostic model of psychopathology involves moderators that determine the specific type of psychopathology that someone with a particular proximal risk factor will develop. Specifically, their model argues that ongoing environmental context and neurobiological factors can moderate the impact of proximal risk factors on psychopathology by raising concerns or themes that are acted upon by proximal risk factors and by shaping responses to and altering the reinforcement value of particular types of stimuli.

For example, the nature of ongoing environmental experiences might determine whether someone with an underlying vulnerability (e.g., neuroticism) develops anxiety or depression. Specifically, a person with high neuroticism who experiences a stressor involving a high degree of threat or danger (e.g., a mugging or a car accident) might develop an anxiety disorder, whereas a person with high neuroticism who experiences a loss (e.g., an unexpected death of a loved one) might develop major depression (Nolen-Hoeksema & Watkins, 2011).

Neurobiological factors that influence the reinforcement value of certain stimuli (e.g., alcohol and other substances, food, social rejection) can also serve as moderators. For example, individual differences in rejection sensitivity might determine whether a child who is bullied develops an anxiety disorder. Although a review of these factors is beyond the scope of the current article, greater understanding of the role of ongoing environmental context as a moderator of the link between proximal risk factors and the emergence of psychopathology has relevance for research on childhood adversity. In particular, environmental factors that buffer against the emergence of psychopathology in children with disruptions in emotional processing and executive functioning can point to potential targets for preventive interventions for children exposed to adversity.

CONCLUSION

Exposure to childhood adversity represents one of the most potent risk factors for the onset of psychopathology. Recognition of the strong and pervasive influence of childhood adversity on risk for psychopathology throughout the life course has generated a burgeoning field of research focused on understanding the links between adverse early experience, developmental processes, and mental health. This article provides recommendations for future research in this area. In particular, future research must develop and utilize a consistent definition of childhood adversity across studies, as it is critical for the field to agree upon what the construct of childhood adversity represents and what types of experiences do and do not qualify.

Progress in identifying developmental mechanisms linking childhood adversity to psychopathology requires integration of studies of typical development with those focused on childhood adversity in order to characterize how experiences of adversity disrupt developmental trajectories in emotion, cognition, social behavior, and the neural circuits that support these processes, as well as greater efforts to distinguish between distinct dimensions of adverse environmental experience that differentially influence these domains of development. Greater understanding of the developmental pathways linking childhood adversity to the onset of psychopathology can inform efforts to identify protective factors that buffer children from the negative consequences of adversity by allowing a shift in focus from downstream outcomes like psychopathology to specific developmental processes that serve as intermediate phenotypes (i.e., mechanisms) linking adversity with psychopathology.

Progress in these domains will generate clinically useful knowledge regarding the mechanisms that explain how childhood adversity is associated with a wide range of psychopathology outcomes (i.e., multifinality) and identify moderators that shape divergent trajectories following adverse childhood experiences. This knowledge can be leveraged to develop and refine empirically informed interventions to prevent the long-term consequences of adverse early environments on children’s development. Greater understanding of modifiable developmental processes underlying the associations of diverse forms of childhood adversity with psychopathology will provide critical information regarding the mechanisms that should be specifically targeted by intervention. Determining whether these mechanisms are general or specific is essential, as it is unlikely that a one-size-fits-all approach to intervention will be effective for preventing the onset of psychopathology following all types of childhood adversity. Identifying processes that are disrupted following specific forms of adversity, but not others, will allow interventions to be tailored to address the developmental mechanisms that are most relevant for children exposed to particular types of adversity. Identification of moderators that buffer children either from disruptions in core developmental domains or from developing psychopathology in the presence of developmental disruptions, for example, among children with heightened emotional reactivity or poor executive functioning, will provide additional targets for intervention.

Finally, uncovering sensitive periods when emotional, cognitive, and neurobiological processes are most likely to be influenced by the environment will provide key information about when interventions are most likely to be successful. Together, these advances will help the field to generate innovative new approaches for preventing the onset of psychopathology among children who have experienced adversity.

New Zealand’s moose hunt: A century-long quest for a forest’s final secret – Charlie Mitchell.

The idea that moose roam the most remote corner of New Zealand sounds like an urban legend. But the New Zealand moose is no ‘Bigfoot’: it’s far more plausible than one might think.

It was listed on the map as “unexplored territory”. A dim cove in the mist, separating the fiord from the colossal forests that cloak the steep valleys of Fiordland.

The famous government steamship, the Hinemoa, had rescued shipwreck survivors in the sub-Antarctic and dropped supplies to the lonely lighthouses dotting the southern coast. But when it crept into the gorge at Dusky Sound, past the waterfalls and the caves and the steep, rolling ridges, it had entered truly inhospitable territory.

Eight men stepped off the ship at Supper Cove, a small arc of sand at the end of the sound. More than a century earlier, Captain James Cook had anchored his ship, the Resolution, nearer the beginning of the fiord for repairs. Cook was struck by the feeling of utter isolation: “In this bay we are all strangers,” he wrote in his journal.

The Hinemoa’s men hauled 10 large, wooden crates from the steamship, dragged them through the shallows and onto the sand. There were six females and four males, all less than a year old, about a metre and a half in height at the shoulder. The animals stepped carefully into the dim light.

They were here because the governor of Saskatchewan, Canada, had received a request from New Zealand’s Prime Minister, Sir Joseph Ward, for assistance in completing a grand vision: New Zealand as the world’s largest game reserve, collecting the Earth’s most prized living trophies in one place.

The animals were duly caught in the frozen wilds and raised in captivity. They were fed cow’s milk from a bottle. They were docile and thought capable of surviving the treacherous boat trip across the world, through the tropics and into the cold, perpetual rain.

It was the beginning of autumn in 1910 and the air was thick with sandflies. When the animals stepped onto the beach, some were scared and returned to their crates, but the men upended the boxes and they toppled out. One animal, in a panic, attacked another, breaking its leg.

The men returned to the Hinemoa. They sailed back down the fiord, away from the darkness and the cargo they’d left behind.

And so the moose, young, small and afraid, were alone. They dissolved into the mist and the Fiordland bush, strangers in a strange land.

THE PAUSE OF AN ERA

One of the last verified photographs of a Fiordland moose, taken in 1952.

There are millions of trees in Fiordland, and Ken Tustin, a biologist, had them all to choose from when setting up his surveillance network.

He’s had cameras in the bush for more than 20 years, hoping they will capture a glimpse of the ghosts of the forest. As the years progressed, so did his cameras: his latest ones automatically triggered upon sensing movement, taking photographs of deer, possums, and the occasional tramper. The cameras took many thousands of photos and videos, weathering some of the world’s harshest conditions, where it rains 20 days a month and tremendous storms emerge from the quiet, rattling the trees and turning paths into creeks and creeks into torrents.

He caught one on video, once. In 1995, a deer-like animal wandered into frame; the camera was in time-lapse mode, so the image was blurry, but the animal’s shape was distinctive. It was nearly black and had a curved back, a thick neck and a beaked nose, swaying through the bush with the lumbering gait of a large animal, unlike a deer but suspiciously like a moose.

It was too blurry to convince everyone, though. The camera was a “monstrous arrangement,” Tustin says, powered by car batteries and primitive by modern standards. It took a photo every four seconds and would only record video when the animal came close, which happened just as it moved out of frame. Since then, the cameras had caught nothing.

Having failed to capture his target, Tustin decided to retire his cameras late last year.

“That’s it. The end of an era”, he told the local newspaper.

“Well, the pause of an era.”

More than a century after the animals disappeared into the forest, the strange tale of the Southern Hemisphere’s only moose population has entered the realm of New Zealand folklore. The moose have encouraged intrepid explorers seeking sizeable bounties and inspired tall tales told in southern pubs.

There have been blurry photos and stray hairs, suspicious droppings and sinister hoaxes. The gossip circle of the West Coast bush still spits out the occasional story of huge antlers glimpsed in the dark, or a strange, cloying smell disrupting the thick smell of deer.

What there hasn’t been is clear, undeniable proof that the descendants of those 10 moose still roam the forest somewhere in the mist, even as the body of circumstantial evidence has continued to grow.

“We’re just talking about a remnant population, hanging on by the skin of their teeth”, Tustin says in an interview.

“The scale of Fiordland is just monstrous. They’re not living in the open, and there’s very few people who frequent the places under the canopy.”

On its face, it sounds completely implausible. A fully grown Canadian bull moose would be 6ft tall at the shoulder and weigh 350kg, roughly the size of a large horse, with giant, sprawling antlers. How could one creature that size, let alone dozens of them, remain unseen for more than 60 years?

But moose are famously elusive, and the Fiordland bush is a uniquely superb landscape for disappearing. Legendary hunting guide Jim Muir, who hunted Fiordland moose in the 1920s and 1930s, once said he could tell a moose was just metres away by its tracks, but he could not see it through the trees. They are silent and solitary and move like shadows.

“They’ve got all the senses that make humans seem rather clumsy,” Tustin says.

“I can think of half a dozen times where I’ve been within a step or two. You can smell them and you’re surrounded by sign… You feel the hair stand on the back of your neck. Out of all those years, only half a dozen times.”

He began his search for moose in the early 1970s at the behest of his then employer, the Forestry Service. During their 70 days in the bush, his team found a cast antler, what was then the most convincing evidence of a live moose in decades.

At the time, he believed the moose would soon become extinct, as they would struggle to compete with deer for food. But shortly afterwards, helicopter deer hunting became popular and mass deer culls greatly reduced the population. It was a respite for the moose.

In the time since, Tustin has spent the equivalent of several years in the bush, much of it joined by his wife, Marg, searching for moose. Although he took his cameras down, he is not capitulating: He had been trying to track one particular moose since 2002, which he believed roamed through Herrick Creek every July up to about 2011. It stopped leaving physical signs, leading Tustin to assume it was dead. The cameras were pointless.

He still ventures into the forest for weeks at a time, despite his advancing age, hoping to map the route of another moose.

“I’m 72 now, which is a pain in the arse, being this old,” he says.

“It’s demanding, and I like it like that. If it was soft and easy you wouldn’t feel you were having such an adventure. I’m still on the case. Maybe not with the same intensity as a few years ago, but we’re still out there.”

‘FOLLOW YOUR NOSE’

The sheep farmer was tramping through the forest when he smelled something unusual, a cloying, honey-like scent, clinging to the wind. An animal, but not a deer, and not any of the plants familiar to him from his previous expeditions into the bush.

Steve Jones had a tarpaulin and a week of food, but chose to walk on. The sun was sinking and the hut was some ways away. He realised later what he had sensed: The elusive moose, likely bathing in a small stream near him in the Hauroko Burn.

“There was a moose not 200 metres upwind from me, and I walked on,” he says. He had ignored his own advice: “Follow your nose”.

The Australian has made several trips to Fiordland in search of the moose. His quest began when he picked up a copy of Australian Deer magazine in the 1990s, which featured a photo of famed Hastings moose hunter Eddie Herrick carrying a bull moose’s head on his back, trudging through the creek which now bears his name, where many historical moose sightings took place.

Only three moose trophies were ever obtained in New Zealand; two were shot by Herrick, including the first bull moose killed under licence, in 1929. One of the moose was old and weak and missing one of its legs, likely as a result of gangrene; it was thought to be the original moose that had broken its leg in a panic 20 years earlier.

Jones recreated that trip, an arduous slog through the wilderness. He enjoys the enormity of the landscape, the sense of wilderness: “It is somehow deeply reassuring and invigorating to be alone with all that silence, moss and vastness,” he says in an email.

He says it wasn’t the first time he had come close. On one trip, he was crawling through a stream when “something very large and dark surged up and thundered off in a cloud of spray further up the stream, giving me just the barest glimpse of it”, he recalled.

It was not a bull, as he could not see its antlers; he followed it to a patch of sand, where he saw its large, fresh prints. The animal ventured into a swamp, where he circled it for an hour, catching occasional glimpses of its leg through bush. He had his gun but refused to take the shot and so was conquered by the coming darkness.

“It simply could not have been anything else,” he says.

“I would never shoot at something I could not see clearly, it would be dangerous and unethical. I’m glad I didn’t though as they are rare and special and it would have been just a waste.”

Jones, who has hunted deer for more than 40 years, has detailed his years-long hunt for the moose on his blog. Like others who have gone searching, he says the evidence is unmistakeable: only a moose could feed on branches three metres high or leave footprints that large.

Around this time every year, Jones yearns for Fiordland. He plans to come back next year to finally capture a moose on camera. He says he has a strategy, which he declines to reveal, but may arrange helicopter supply drops ahead of time so he can stay in the bush for some time, searching.

He’s not sure if he would make the photos public; he seeks personal, not public, triumph.

“The herd might be better off if I did not publicise it, so they might just be enlarged on my wall”.

The Deepest Well: Healing the Long-Term Effects of Childhood Adversity – Dr Nadine Burke Harris.

What put Evan at increased risk for waking up with half of his body paralyzed (and for numerous other diseases as well) is not rare. It’s something two-thirds of the nation’s population is exposed to, something so common it’s hiding in plain sight.

So what is it? Lead? Asbestos? Some toxic packing material?

It’s Childhood Adversity.

“Well, her asthma does seem to get worse whenever her dad punches a hole in the wall. Do you think that could be related?”

Twenty years of medical research has shown that childhood adversity literally gets under our skin, changing people in ways that can endure in their bodies for decades. It can tip a child’s developmental trajectory and affect physiology. It can trigger chronic inflammation and hormonal changes that can last a lifetime. It can alter the way DNA is read and how cells replicate, and it can dramatically increase the risk for heart disease, stroke, cancer, diabetes, even Alzheimer’s.

At five o’clock on an ordinary Saturday morning, a forty-three-year-old man, we’ll call him Evan, wakes up. His wife, Sarah, is breathing softly beside him, curled in her usual position, arm slung over her forehead. Without thinking much about it, Evan tries to roll over and slide out of bed to get to the bathroom, but something’s off. He can’t roll over and it feels like his right arm has gone numb.

Ugh, must have slept on it too long, he thinks, bracing himself for those mean, hot tingles you get when the circulation starts again.

He tries to wiggle his fingers to get the blood flowing, but no dice. The aching pressure in his bladder isn’t going to wait, though, so he tries again to get up. Nothing happens.

What the. . .

His right leg is still exactly where he left it, despite the fact that he tried to move it the same way he has been moving it all his life without thinking.

He tries again. Nope.

Looks like this morning, it doesn’t want to cooperate. It’s weird, this whole body-not-doing-what-you-want-it-to thing, but the urge to pee feels like a much bigger problem right now.

“Hey, baby, can you help me? I gotta pee. Just push me out of bed so I don’t do it right here,” he says to Sarah, half joking about the last part.

“What’s wrong, Evan?” says Sarah, lifting her head and squinting at him. “Evan?”

Her voice rises as she says his name the second time.

He notices she’s looking at him with deep concern in her eyes. Her face wears the expression she gets when the boys have fevers or wake up sick in the middle of the night. Which is ridiculous because all he needs is a little push. It’s five in the morning, after all. No need for a full-blown conversation.

“Honey, I just gotta go pee,” he says.

“What’s wrong? Evan? What’s wrong?”

In an instant, Sarah is up. She’s got the lights on and is peering into Evan’s face as though she is reading a shocking headline in the Sunday paper.

“It’s all right, baby. I just need to pee. My leg is asleep. Can you help me real quick?” he says.

He figures that maybe if he can put some pressure on his left side, he can shift position and jump-start his circulation. He just needs to get out of the bed.

It is in that moment that he realizes it isn’t just the right arm and leg that are numb; it’s his face too.

In fact, it’s his whole right side.

What is happening to me?

Then Evan feels something warm and wet on his left leg.

He looks down to see his boxers are soaked. Urine is seeping into the bed sheets.

“Oh my God!” Sarah screams. In that instant, seeing her husband wet the bed, Sarah realizes the gravity of the situation and leaps into action. She jumps out of bed and Evan can hear her running to their teenage son’s bedroom. There are a few muffled words that he can’t make out through the wall and then she’s back. She sits on the bed next to him, holding him and caressing his face.

“You’re okay,” Sarah says. “It’s gonna be okay.” Her voice is soft and soothing.

“Babe, what’s going on?” Evan asks, looking at his wife. As he gazes up at her, it dawns on him that she can’t understand anything he’s saying. He’s moving his lips and words are coming out of his mouth, but she doesn’t seem to be getting any of it.

Just then, a ridiculous cartoon commercial with a dancing heart bouncing along to a silly song starts playing in his mind.

F stands for face drooping. Bounce. Bounce.

A stands for arm weakness. Bounce. Bounce.

S stands for speech difficulty.

T stands for time to call 911. Learn to identify signs of a stroke. Act FAST!

Holy crap!

Despite the early hour, Evan’s son Marcus comes briskly to the doorway and hands his mom the phone. As father and son lock eyes, Evan sees a look of alarm and worry that makes his heart clench in his chest. He tries to tell his son it will be okay, but it’s clear from the boy’s expression that his attempt at reassurance is only making things worse. Marcus’s face contorts with fear, and tears start streaming down his cheeks.

On the phone with the 911 operator, Sarah is clear and forceful.

“I need an ambulance right now, right now! My husband is having a stroke. Yes, I’m sure! He can’t move his entire right side. Half of his face won’t move. No, he can’t speak. It’s totally garbled. His speech doesn’t make any sense. Just hurry up. Please send an ambulance right away!”

The first responders, a team of paramedics, make it there inside of five minutes. They bang on the door and ring the bell. Sarah runs downstairs and lets them in. Their younger son is still in his bedroom asleep, and she’s worried that the noise will wake him, but fortunately, he doesn’t stir.

Evan stares up at the crown molding and tries to calm down. He feels himself starting to drift off, getting further away from the current moment. This isn’t good.

The next thing he knows, he is on a stretcher being carried down the stairs. As the paramedics negotiate the landing, they pause to shift positions. In that slice of a second, Evan glances up and catches one of the medics watching him with an expression that makes him go cold. It’s a look of recognition and pity. It says, Poor guy. I’ve seen this before and it ain’t good.

As they are passing through the doorway, Evan wonders whether he will ever come back to this house. Back to Sarah and his boys. From the way that medic looked at him, Evan thinks the answer might not be yes.

When they get to the emergency room, Sarah is peppered with questions about Evan’s medical history. She tells them every detail of Evan’s life she thinks might be relevant. He’s a computer programmer. He goes mountain biking every weekend. He loves playing basketball with his boys. He’s a great dad. He’s happy. At his last checkup the doctor said everything looked great. At one point, she overhears one of the doctors relating Evan’s case to a colleague over the phone: “Forty-three-year-old male, nonsmoker, no risk factors.”

But unbeknownst to Sarah, Evan, and even Evan’s doctors, he did have a risk factor. A mighty big one. In fact, Evan was more than twice as likely to have a stroke as a person without this risk factor. What no one in the ER that day knew was that, for decades, an invisible biological process had been at work, one involving Evan’s cardiovascular, immune, and endocrine systems. One that might very well have led to the events of this moment. The risk factor and its potential impact never came up in all of the regular checkups Evan had had over the years.

What put Evan at increased risk for waking up with half of his body paralyzed (and for numerous other diseases as well) is not rare. It’s something two-thirds of the nation’s population is exposed to, something so common it’s hiding in plain sight.

So what is it? Lead? Asbestos? Some toxic packing material?

It’s childhood adversity.

Most people wouldn’t suspect that what happens to them in childhood has anything to do with stroke or heart disease or cancer. But many of us do recognize that when someone experiences childhood trauma, there may be an emotional and psychological impact. For the unlucky (or, some say, the “weak”), we know what the worst of the fallout looks like: substance abuse, cyclical violence, incarceration, and mental health problems. But for everyone else, childhood trauma is the bad memory that no one talks about until at least the fifth or sixth date. It’s just drama, baggage.

Childhood adversity is a story we think we know.

Children have faced trauma and stress in the form of abuse, neglect, violence, and fear since God was a boy. Parents have been getting trashed, getting arrested, and getting divorced for almost as long. The people who are smart and strong enough are able to rise above the past and triumph through the force of their own will and resilience.

Or are they?

We’ve all heard the Horatio Alger-like stories about people who have experienced early hardships and have either overcome or, better yet, been made stronger by them. These tales are embedded in Americans’ cultural DNA. At best, they paint an incomplete picture of what childhood adversity means for the hundreds of millions of people in the United States (and the billions around the world) who have experienced early life stress. More often, they take on moral overtones, provoking feelings of shame and hopelessness in those who struggle with the lifelong impacts of childhood adversity. But there is a huge part of the story missing.

Twenty years of medical research has shown that childhood adversity literally gets under our skin, changing people in ways that can endure in their bodies for decades. It can tip a child’s developmental trajectory and affect physiology. It can trigger chronic inflammation and hormonal changes that can last a lifetime. It can alter the way DNA is read and how cells replicate, and it can dramatically increase the risk for heart disease, stroke, cancer, diabetes, even Alzheimer’s.

This new science gives a startling twist to the Horatio Alger tale we think we know so well; as the studies reveal, years later, after having “transcended” adversity in amazing ways, even bootstrap heroes find themselves pulled up short by their biology. Despite rough childhoods, plenty of folks got good grades and went to college and had families. They did what they were supposed to do. They overcame adversity and went on to build successful lives, and then they got sick. They had strokes. Or got lung cancer, or developed heart disease, or sank into depression. Since they hadn’t engaged in high-risk behavior like drinking, overeating, or smoking, they had no idea where their health problems had come from. They certainly didn’t connect them to the past, because they’d left the past behind. Right?

The truth is that despite all their hard work, people like Evan who have had adverse childhood experiences are still at greater risk for developing chronic illnesses, like cardiovascular disease, and cancer.

But why? How does exposure to stress in childhood crop up as a health problem in middle age or even retirement? Are there effective treatments? What can we do to protect our health and our children’s health?

In 2005, when I finished my pediatrics residency at Stanford, I didn’t even know to ask these questions. Like everyone else, I had only part of the story. But then, whether by chance or by fate, I caught glimpses of a story yet to be told. It started in exactly the place you might expect to find high levels of adversity: a low-income community of color with few resources, tucked inside a wealthy city with all the resources in the world. In the Bayview Hunters Point neighborhood of San Francisco, I started a community pediatric clinic. Every day I witnessed my tiny patients dealing with overwhelming trauma and stress; as a human being, I was brought to my knees by it. As a scientist and a doctor, I got up off those knees and began asking questions.

My journey gave me, and I hope this book will give you, a radically different perspective on the story of childhood adversity, the whole story, not just the one we think we know. Through these pages, you will better understand how childhood adversity may be playing out in your life or in the life of someone you love, and, more important, you will learn the tools for healing that begins with one person or one community but has the power to transform the health of nations.

Chapter 1

Discovery

Something’s Just Not Right

As I walked into an exam room at the Bayview Child Health Center to meet my next patient, I couldn’t help but smile. My team and I had worked hard to make the clinic as inviting and family friendly as possible. The room was painted in pastel colors and had a matching checkered floor. Cartoons of baby animals paraded across the wall above the sink and marched toward the door. If you didn’t know better, you’d think you were in a pediatric office in the affluent Pacific Heights neighborhood of San Francisco instead of in struggling Bayview, which was exactly the point. We wanted our clinic to be a place where people felt valued.

When I came through the door, Diego’s eyes were glued to the baby giraffes. What a super-cutie, I thought as he moved his attention to me, flashed me a smile, and checked me out through a mop of shaggy black hair. He was perched on the chair next to his mother, who held his three-year-old sister in her lap. When I asked him to climb onto the exam table, he obediently hopped up and started swinging his legs back and forth. As I opened his chart, I saw his birth date and looked up at him again: Diego was a cutie and a shorty.

Quickly I flipped through the chart, looking for some objective data to back up my initial impression. I plotted Diego’s height on the growth curve, then I double checked to be sure I hadn’t made a mistake. My newest patient was at the 50th percentile for height for a four year old.

Which would have been fine, except that Diego was seven years old.

That’s weird, I thought, because otherwise, Diego looked like a totally normal kid. I scooted my chair over to the table and pulled out my stethoscope. As I got closer I could see thickened, dry patches of eczema at the creases of his elbows, and when I listened to his lungs, I heard a distinct wheezing. Diego’s school nurse had referred him for evaluation for attention deficit hyperactivity disorder (ADHD), a chronic condition characterized by hyperactivity, inattention, and impulsivity. Whether or not Diego was one of the millions of children affected by ADHD remained to be seen, but already I could see his primary diagnoses would be more along the lines of persistent asthma, eczema, and growth failure.

Diego’s mom, Rosa, watched nervously as I examined her son. Her eyes were fixed on Diego and filled with concern; little Selena’s gaze was darting around the room as she checked out all the shiny gadgets.

“Do you prefer English o Español?” I asked Rosa.

Relief crossed her face and she leaned forward.

After we talked in Spanish through the medical history that she had filled out in the waiting room, I asked the same question I always do before jumping into the results of the physical exam: Is there anything specific going on that I should know about?

Concern gathered her forehead like a stitch.

“He’s not doing well in school, and the nurse said medicine could help. Is that true? What medicine does he need?”

“When did you notice he’d started having trouble in school?” I asked.

There was a slight pause as her face morphed from tense to tearful.

“¡Ay, Doctora!” she said and began the story in a torrent of Spanish.

I put my hand on her arm, and before she could get much further, I poked my head out the door and asked my medical assistant to take Selena and Diego to the waiting room.

The story I heard from Rosa was not a happy one. She spent the next ten minutes telling me about an incident of sexual abuse that had happened to Diego when he was four years old. Rosa and her husband had taken in a tenant to help offset the sky-high San Francisco rent. It was a family friend, someone her husband knew from his work in construction. Rosa noticed that Diego became more clingy and withdrawn after the man arrived, but she had no idea why until she came home one day to find the man in the shower with Diego.

While they had immediately kicked the man out and filed a police report, the damage was done. Diego started having trouble in preschool, and as he moved up, he lagged further and further behind academically. Making matters worse, Rosa’s husband blamed himself and seemed angry all the time. While he had always drunk more than she liked, after the incident it got a lot worse. She recognized the tension and drinking weren’t good for the family but didn’t know what she could do about it. From what she told me about her state of mind, I strongly suspected she was suffering from depression.

I assured her that we could help Diego with the asthma and eczema and that I’d look into the ADHD and growth failure. She sighed and seemed at least a little relieved.

We sat in silence for a moment, my mind zooming around. I believed, ever since we’d opened the clinic in 2007, that something medical was happening with my patients that I couldn’t quite understand. It started with the glut of ADHD cases that were referred to me. As with Diego’s, most of my patients’ ADHD symptoms didn’t just come out of the blue. They seemed to occur at the highest rates in patients who were struggling with some type of life disruption or trauma, like the twins who were failing classes and getting into fights at school after witnessing an attempted murder in their home or the three brothers whose grades fell precipitously after their parents’ divorce turned violently acrimonious, to the point where the family was ordered by the court to do their custody swaps at the Bayview police station.

Many patients were already on ADHD medication; some were even on antipsychotics. For a number of patients, the medication seemed to be helping, but for many it clearly wasn’t. Most of the time I couldn’t make the ADHD diagnosis. The diagnostic criteria for ADHD told me I had to rule out other explanations for ADHD symptoms (such as pervasive developmental disorders, schizophrenia, or other psychotic disorders) before I could diagnose ADHD. But what if there was a more nuanced answer? What if the cause of these symptoms (the poor impulse control, inability to focus, difficulty sitting still) was not a mental disorder, exactly, but a biological process that worked on the brain to disrupt normal functioning? Weren’t mental disorders simply biological disorders? Trying to treat these children felt like jamming unmatched puzzle pieces together; the symptoms, causes, and treatments were close, but not close enough to give that satisfying click.

I mentally scrolled back, cataloging all the patients like Diego and the twins that I’d seen over the past year. My mind went immediately to Kayla, a ten year old whose asthma was particularly difficult to control. After the last flare-up, I sat down with mom and patient to meticulously review Kayla’s medication regimen. When I asked if Kayla’s mom could think of any asthma triggers that we hadn’t already identified (we had reviewed everything from pet hair to cockroaches to cleaning products), she responded,

“Well, her asthma does seem to get worse whenever her dad punches a hole in the wall. Do you think that could be related?”

Kayla and Diego were just two patients, but they had plenty of company. Day after day I saw infants who were listless and had strange rashes. I saw kindergartners whose hair was falling out. Epidemic levels of learning and behavioral problems. Kids just entering middle school had depression. And in unique cases, like Diego’s, kids weren’t even growing. As I recalled their faces, I ran an accompanying mental checklist of disorders, diseases, syndromes, and conditions, the kinds of early setbacks that could send disastrous ripples throughout the lives to come.

If you looked through a certain percentage of my charts, you would see not only a plethora of medical problems but story after story of heart-wrenching trauma. In addition to the blood pressure reading and the body mass index in the chart, if you flipped all the way to the Social History section, you would find parental incarcerations, multiple foster-care placements, suspected physical abuse, documented abuse, and family legacies of mental illness and substance abuse. A week before Diego, I’d seen a six year old girl with type 1 diabetes whose dad was high for the third visit in a row. When I asked him about it, he assured me I shouldn’t worry because the weed helped to quiet the voices in his head.

In the first year of my practice, seeing roughly a thousand patients, I diagnosed not one but two kids with autoimmune hepatitis, a rare disorder that typically affects fewer than three children in one hundred thousand. Both cases coincided with significant histories of adversity.

I asked myself again and again: What’s the connection?

If it had been just a handful of kids with both overwhelming adversity and poor health outcomes, maybe I could have seen it as a coincidence. But Diego’s situation was representative of hundreds of kids I had seen over the past year. The phrase statistical significance kept echoing through my head. Every day I drove home with a hollow feeling. I was doing my best to care for these kids, but it wasn’t nearly enough. There was an underlying sickness in Bayview that I couldn’t put my finger on, and with every Diego that I saw, the gnawing in my stomach got worse.

For a long time the possibility of an actual biological link between childhood adversity and damaged health came to me as a question that lingered for only a moment before it was gone. I wonder… What if… It seems like… These questions kept popping up, but part of the problem in putting the pieces together was that they would emerge from situations occurring months or sometimes years apart. Because they didn’t fit logically or neatly into my worldview at those discrete moments in time, it was difficult to see the story behind the story. Later it would feel obvious that all of these questions were simply clues pointing to a deeper truth, but like a soap-opera wife whose husband was stepping out with the nanny, I would understand it only in hindsight. It wasn’t hotel receipts and whiffs of perfume that clued me in, but there were plenty of tiny signals that eventually led me to the same thought: How could I not have seen this? It was right in front of me the whole damn time.

I lived in that state of not-quite-getting-it for years because I was doing my job the way I had been trained to do it. I knew that my gut feeling about this biological connection between adversity and health was just a hunch. As a scientist, I couldn’t accept these kinds of associations without some serious evidence. Yes, my patients were experiencing extremely poor health outcomes, but wasn’t that endemic to the community they lived in? Both my medical training and my public health education told me that this was so.

That there is a connection between poor health and poor communities is well documented. We know that it’s not just how you live that affects your health, it’s also where you live. Public health experts and researchers refer to communities as “hot spots” if poor health outcomes on the whole are found to be extreme in comparison to the statistical norm. The dominant view is that health disparities in populations like Bayview occur because these folks have poor access to health care, poor quality of care, and poor options when it comes to things like healthy, affordable food and safe housing. When I was at Harvard getting my master’s degree in public health, I learned that if I wanted to improve people’s health, the best thing I could do was find a way to provide accessible and better health care for these communities.

Straight out of my medical residency, I was recruited by the California Pacific Medical Center (CPMC) in the Laurel Heights area of San Francisco to do my dream job: create programs specifically targeted to address health disparities in the city. The hospital’s CEO, Dr. Martin Brotman, personally sat me down to reinforce his commitment to that. My second week on the job, my boss came into my office and handed me a 147-page document, the 2004 Community Health Assessment for San Francisco. Then he promptly went on vacation, giving me very little direction and leaving me to my own ambitious devices (in hindsight, this was either genius or crazy on his part). I did what any good public health nerd would do: I looked at the numbers and tried to assess the situation. I had heard that Bayview Hunters Point in San Francisco, where much of San Francisco’s African American population lived, was a vulnerable community, but when I looked at the 2004 assessment, I was floored. One way the report grouped people was by their zip code. The leading cause of early death in seventeen out of twenty-one zip codes in San Francisco was ischemic heart disease, which is the number-one killer in the United States. In three zip codes it was HIV/AIDS. But Bayview Hunters Point was the only zip code where the number one cause of early death was violence. Right next to Bayview (94124) in the table was the zip code for the Marina district (94123), one of the city’s more affluent neighborhoods. As I ran my finger down the rows of numbers, my jaw dropped. What they showed me was that if you were a parent raising your baby in the Bayview zip code, your child was two and a half times as likely to develop pneumonia as a child in the Marina district. Your child was also six times as likely to develop asthma. And once that baby grew up, he or she was twelve times as likely to develop uncontrolled diabetes.

I had been hired by CPMC to address disparities. And, boy, now I saw why.

Looking back, I think it was probably a combination of naiveté and youthful enthusiasm that spurred me to spend the two weeks that my boss was gone drawing up a business plan for a clinic in the heart of the community with the greatest need. I wanted to bring services to the people of Bayview rather than asking them to come to us. Luckily, when my boss and I gave the plan to Dr. Brotman, he didn’t fire me for excessive idealism. Instead, he helped me make the clinic a reality, which still kind of blows my mind.

The numbers in that report had given me a good idea of what the people of Bayview were up against, but it wasn’t until March of 2007, when we opened the doors to CPMC’s Bayview Child Health Center, that I saw the full shape of it. To say that life in Bayview isn’t easy would be an understatement. It’s one of the few places in San Francisco where drug deals happen in plain sight of kindergartners on their way to school and where grandmas sometimes sleep in bathtubs because they’re afraid of stray bullets coming through the walls. It’s always been a rough place and not only because of violence. In the 1960s, the U.S. Navy decontaminated radioactive boats in the shipyard, and up until the early 2000s, the toxic byproducts from a nearby power plant were routinely dumped in the area. In a documentary about the racial strife and marginalization of the neighborhood, writer and social critic James Baldwin said, “This is the San Francisco that America pretends does not exist.”

My day-to-day experience working in Bayview tells me that the struggles are real and ever present, but it also tells me that’s not the whole story. Bayview is the oily concrete you skin your knee on, but it’s also the flower growing up between the cracks. Every day I see families and communities that lovingly support each other through some of the toughest experiences imaginable. I see beautiful kids and doting parents. They struggle and they laugh and then they struggle some more. But no matter how hard parents work for their kids, the lack of resources in the community is crushing. Before we opened the Bayview Child Health Center, there was only one pediatrician in practice for over ten thousand children. These kids face serious medical and emotional problems. So do their parents. And their grandparents. In many cases, the kids fare better because they are eligible for government assisted health insurance. Poverty, violence, substance abuse, and crime have created a multigenerational legacy of ill health and frustration. But still, I believed we could make a difference. I opened my practice there because I wasn’t okay with pretending the people of Bayview didn’t exist.

from

The Deepest Well: Healing the Long-Term Effects of Childhood Adversity

by Dr Nadine Burke Harris

get it at Amazon.com

Epigenetics: The Evolution Revolution – Israel Rosenfield and Edward Ziff * The Epigenetics Revolution – Nessa Carey.

So something that happened in one pregnant population affected their children’s children. This raised the really puzzling question of how these effects were passed on to subsequent generations.

These effects arise from a newly recognized genetic mechanism called epigenesis, which enables the environment to make long lasting changes in the way genes are expressed.

That’s what happens when cells read the genetic code that’s in DNA. The same script can result in different productions.

Why is it that humans contain trillions of cells in hundreds of complex organs, and microscopic worms contain about a thousand cells and only rudimentary organs, but we and the worm have the same number of genes?

We are finally starting to unravel the missing link between nature and nurture; how our environment talks to us and alters us, sometimes forever.

Israel Rosenfield and Edward Ziff

At the end of the eighteenth century, the French naturalist Jean-Baptiste Lamarck noted that life on earth had evolved over long periods of time into a striking variety of organisms. He sought to explain how they had become more and more complex. Living organisms not only evolved, Lamarck argued; they did so very slowly, “little by little and successively.” In Lamarckian theory, animals became more diverse as each creature strove toward its own “perfection,” hence the enormous variety of living things on earth. Man is the most complex life form, therefore the most perfect, and is even now evolving.

In Lamarck’s view, the evolution of life depends on variation and the accumulation of small, gradual changes. These are also at the center of Darwin’s theory of evolution, yet Darwin wrote that Lamarck’s ideas were “veritable rubbish.” Darwinian evolution is driven by genetic variation combined with natural selection, the process whereby some variations give their bearers better reproductive success in a given environment than other organisms have. Lamarckian evolution, on the other hand, depends on the inheritance of acquired characteristics. Giraffes, for example, got their long necks by stretching to eat leaves from tall trees, and stretched necks were inherited by their offspring, though Lamarck did not explain how this might be possible.

When the molecular structure of DNA was discovered in 1953, it became dogma in the teaching of biology that DNA and its coded information could not be altered in any way by the environment or a person’s way of life. The environment, it was known, could stimulate the expression of a gene. Having a light shone in one’s eyes or suffering pain, for instance, stimulates the activity of neurons and in doing so changes the activity of genes those neurons contain, producing instructions for making proteins or other molecules that play a central part in our bodies.

The structure of the DNA neighboring the gene provides a list of instructions, a gene program, that determines under what circumstances the gene is expressed. And it was held that these instructions could not be altered by the environment. Only mutations, which are errors introduced at random, could change the instructions or the information encoded in the gene itself and drive evolution through natural selection. Scientists discredited any Lamarckian claims that the environment can make lasting, perhaps heritable alterations in gene structure or function.

But new ideas closely related to Lamarck’s eighteenth century views have become central to our understanding of genetics. In the past fifteen years these ideas, which belong to a developing field of study called epigenetics, have been discussed in numerous articles and several books, including Nessa Carey’s 2012 study The Epigenetics Revolution and The Deepest Well, a recent work on childhood trauma by the physician Nadine Burke Harris.

The developing literature surrounding epigenetics has forced biologists to consider the possibility that gene expression could be influenced by some heritable environmental factors previously believed to have had no effect on it, like stress or deprivation. “The DNA blueprint,” Carey writes,

isn’t a sufficient explanation for all the sometimes wonderful, sometimes awful, complexity of life. If the DNA sequence was all that mattered, identical twins would always be absolutely identical in every way. Babies born to malnourished mothers would gain weight as easily as other babies who had a healthier start in life.

That might seem a commonsensical view. But it runs counter to decades of scientific thought about the independence of the genetic program from environmental influence. What findings have made it possible?

In 1975, two English biologists, Robin Holliday and John Pugh, and an American biologist, Arthur Riggs, independently suggested that methylation, a chemical modification of DNA that is heritable and can be induced by environmental influences, had an important part in controlling gene expression. How it did this was not understood, but the idea that through methylation the environment could, in fact, alter not only gene expression but also the genetic program rapidly took root in the scientific community.

As scientists came to better understand the function of methylation in altering gene expression, they realized that extreme environmental stress, the results of which had earlier seemed self explanatory, could have additional biological effects on the organisms that suffered it. Experiments with laboratory animals have now shown that these outcomes are based on the transmission of acquired changes in genetic function. Childhood abuse, trauma, famine, and ethnic prejudice may, it turns out, have long term consequences for the functioning of our genes.

These effects arise from a newly recognized genetic mechanism called epigenesis, which enables the environment to make long lasting changes in the way genes are expressed.

Epigenesis does not change the information coded in the genes or a person’s genetic makeup (the genes themselves are not affected) but instead alters the manner in which they are “read,” by blocking access to certain genes and preventing their expression.

This mechanism can be the hidden cause of our feelings of depression, anxiety, or paranoia. What is perhaps most surprising of all, this alteration could, in some cases, be passed on to future generations who have never directly experienced the stresses that caused their forebears’ depression or ill health.

Numerous clinical studies have shown that childhood trauma, arising from parental death or divorce, neglect, violence, abuse, lack of nutrition or shelter, or other stressful circumstances, can give rise to a variety of health problems in adults: heart disease, cancer, mood and dietary disorders, alcohol and drug abuse, infertility, suicidal behavior, learning deficits, and sleep disorders.

Since the publication in 2003 of an influential paper by Rudolf Jaenisch and Adrian Bird, we have started to understand the genetic mechanisms that explain why this is the case. The body and the brain normally respond to danger and frightening experiences by releasing a hormone, a glucocorticoid that controls stress. This hormone prepares us for various challenges by adjusting heart rate, energy production, and brain function; it binds to a protein called the glucocorticoid receptor in nerve cells of the brain.

Normally, this binding shuts off further glucocorticoid production, so that when one no longer perceives a danger, the stress response abates. However, as Gustavo Turecki and Michael Meaney note in a 2016 paper surveying more than a decade’s worth of findings about epigenetics, the gene for the receptor is inactive in people who have experienced childhood stress; as a result, they produce few receptors. Without receptors to bind to, glucocorticoids cannot shut off their own production, so the hormone keeps being released and the stress response continues, even after the threat has subsided.

“The term for this is disruption of feedback inhibition,” Harris writes. It is as if “the body’s stress thermostat is broken. Instead of shutting off this supply of ‘heat’ when a certain point is reached, it just keeps on blasting cortisol through your system.”

It is now known that childhood stress can deactivate the receptor gene by an epigenetic mechanism, namely, by creating a physical barrier to the information for which the gene codes. What creates this barrier is DNA methylation, by which methyl groups known as methyl marks (composed of one carbon and three hydrogen atoms) are added to DNA.

DNA methylation is long-lasting and keeps chromatin, the DNA-protein complex that makes up the chromosomes containing the genes, in a highly folded structure that blocks access to select genes by the gene expression machinery, effectively shutting the genes down. The long-term consequences are chronic inflammation, diabetes, heart disease, obesity, schizophrenia, and major depressive disorder.

Such epigenetic effects have been demonstrated in experiments with laboratory animals. In a typical experiment, rat or mouse pups are subjected to early-life stress, such as repeated maternal separation. Their behavior as adults is then examined for evidence of depression, and their genomes are analyzed for epigenetic modifications. Likewise, pregnant rats or mice can be exposed to stress or nutritional deprivation, and their offspring examined for behavioral and epigenetic consequences.

Experiments like these have shown that even animals not directly exposed to traumatic circumstances, those still in the womb when their parents were put under stress, can have blocked receptor genes. It is probably the transmission of glucocorticoids from mother to fetus via the placenta that alters the fetus in this way. In humans, prenatal stress affects each stage of the child’s maturation: for the fetus, a greater risk of preterm delivery, decreased birth weight, and miscarriage; in infancy, problems of temperament, attention, and mental development; in childhood, hyperactivity and emotional problems; and in adulthood, illnesses such as schizophrenia and depression.

What is the significance of these findings?

Until the mid-1970s, no one suspected that the way in which the DNA was “read” could be altered by environmental factors, or that the nervous systems of people who grew up in stress free environments would develop differently from those of people who did not. One’s development, it was thought, was guided only by one’s genetic makeup.

As a result of epigenesis, a child deprived of nourishment may continue to crave and consume large amounts of food as an adult, even when he or she is being properly nourished, leading to obesity and diabetes. A child who loses a parent or is neglected or abused may have a genetic basis for experiencing anxiety and depression and possibly schizophrenia.

Formerly, it had been widely believed that Darwinian evolutionary mechanisms, variation and natural selection, were the only means for introducing such long lasting changes in brain function, a process that took place over generations. We now know that epigenetic mechanisms can do so as well, within the lifetime of a single person.

It is by now well established that people who suffer trauma directly during childhood or who experience their mother’s trauma indirectly as a fetus may have epigenetically based illnesses as adults. More controversial is whether epigenetic changes can be passed on from parent to child.

Methyl marks are stable when DNA is not replicating, but when it replicates, the methyl marks must be introduced into the newly replicated DNA strands to be preserved in the new cells. Researchers agree that this takes place when cells of the body divide, a process called mitosis, but it is not yet fully established under which circumstances marks are preserved when cell division yields sperm and egg, a process called meiosis, or when mitotic divisions of the fertilized egg form the embryo. Transmission at these two latter steps would be necessary for epigenetic changes to be transmitted in full across generations.

The most revealing instances for studies of intergenerational transmission have been natural disasters, famines, and atrocities of war, during which large groups have undergone trauma at the same time. These studies have shown that when women are exposed to stress in the early stages of pregnancy, they give birth to children whose stress response systems malfunction. Among the most widely studied of such traumatic events is the Dutch Hunger Winter. In 1944 the Germans prevented any food from entering the parts of Holland that were still occupied. The Dutch resorted to eating tulip bulbs to overcome their stomach pains. Women who were pregnant during this period, Carey notes, gave birth to a higher proportion of obese and schizophrenic children than one would normally expect. These children also exhibited epigenetic changes not observed in similar children, such as siblings, who had not experienced famine at the prenatal stage.

During the Great Chinese Famine (1958-1961), millions of people died, and children born to young women who experienced the famine were more likely to become schizophrenic, to have impaired cognitive function, and to suffer from diabetes and hypertension as adults. Similar studies of the 1932-1933 Ukrainian famine, in which many millions died, revealed an elevated risk of type II diabetes in people who were in the prenatal stage of development at the time. Although prenatal and early childhood stress both induce epigenetic effects and adult illnesses, it is not known if the mechanism is the same in both cases.

Whether epigenetic effects of stress can be transmitted over generations needs more research, both in humans and in laboratory animals. But recent comprehensive studies by several groups using advanced genetic techniques have indicated that epigenetic modifications are not restricted to the glucocorticoid receptor gene. They are much more extensive than had been realized, and their consequences for our development, health, and behavior may also be great.

It is as though nature employs epigenesis to make long lasting adjustments to an individual’s genetic program to suit his or her personal circumstances, much as in Lamarck’s notion of “striving for perfection.”

In this view, the ill health arising from famine or other forms of chronic, extreme stress would constitute an epigenetic miscalculation on the part of the nervous system. Because the brain prepares us for adult adversity that matches the level of stress we suffer in early life, psychological disease and ill health persist even when we move to an environment with a lower stress level.

Once we recognize that there is an epigenetic basis for diseases caused by famine, economic deprivation, war related trauma, and other forms of stress, it might be possible to treat some of them by reversing those epigenetic changes. “When we understand that the source of so many of our society’s problems is exposure to childhood adversity,” Harris writes,

the solutions are as simple as reducing the dose of adversity for kids and enhancing the ability of caregivers to be buffers. From there, we keep working our way up, translating that understanding into the creation of things like more effective educational curricula and the development of blood tests that identify biomarkers for toxic stress, things that will lead to a wide range of solutions and innovations, reducing harm bit by bit, and then leap by leap.

Epigenetics has also made clear that the stress caused by war, prejudice, poverty, and other forms of childhood adversity may have consequences both for the persons affected and for their future unborn children, not only for social and economic reasons but also for biological ones.

The Epigenetics Revolution

Nessa Carey

DNA.
Sometimes, when we read about biology, we could be forgiven for thinking that those three letters explain everything. Here, for example, are just a few of the statements made on 26 June 2000, when researchers announced that the human genome had been sequenced:

Today we are learning the language in which God created life. – US President Bill Clinton

We now have the possibility of achieving all we ever hoped for from medicine. – UK Science Minister Lord Sainsbury

Mapping the human genome has been compared with putting a man on the moon, but I believe it is more than that. This is the outstanding achievement not only of our lifetime, but in terms of human history. – Michael Dexter, The Wellcome Trust

From these quotations, and many others like them, we might well think that researchers could have relaxed a bit after June 2000 because most human health and disease problems could now be sorted out really easily. After all, we had the blueprint for humankind. All we needed to do was get a bit better at understanding this set of instructions, so we could fill in a few details. Unfortunately, these statements have proved at best premature. The reality is rather different.

We talk about DNA as if it’s a template, like a mould for a car part in a factory. In the factory, molten metal or plastic gets poured into the mould thousands of times and, unless something goes wrong in the process, out pop thousands of identical car parts.

But DNA isn’t really like that. It’s more like a script. Think of Romeo and Juliet, for example. In 1936 George Cukor directed Leslie Howard and Norma Shearer in a film version. Sixty years later Baz Luhrmann directed Leonardo DiCaprio and Claire Danes in another movie version of this play. Both productions used Shakespeare’s script, yet the two movies are entirely different. Identical starting points, different outcomes.

That’s what happens when cells read the genetic code that’s in DNA. The same script can result in different productions.

The implications of this for human health are very wide ranging, as we will see from the case studies we are going to look at in a moment. In all of them it’s really important to remember that nothing happened to the DNA blueprint of the people involved. Their DNA didn’t change (mutate), and yet their life histories altered irrevocably in response to their environments.

Audrey Hepburn was one of the 20th century’s greatest movie stars. Stylish, elegant and with a delicately lovely, almost fragile bone structure, her role as Holly Golightly in Breakfast at Tiffany’s has made her an icon, even to those who have never seen the movie. It’s startling to think that this wonderful beauty was created by terrible hardship. Audrey Hepburn was a survivor of an event in the Second World War known as the Dutch Hunger Winter. This ended when she was sixteen years old but the after effects of this period, including poor physical health, stayed with her for the rest of her life.

The Dutch Hunger Winter lasted from the start of November 1944 to the late spring of 1945. This was a bitterly cold period in Western Europe, creating further hardship in a continent that had been devastated by four years of brutal war. Nowhere was this worse than in the Western Netherlands, which at this stage was still under German control. A German blockade resulted in a catastrophic drop in the availability of food to the Dutch population. At one point the population was trying to survive on only about 30 per cent of the normal daily calorie intake. People ate grass and tulip bulbs, and burned every scrap of furniture they could get their hands on, in a desperate effort to stay alive. Over 20,000 people had died by the time food supplies were restored in May 1945.

The dreadful privations of this time also created a remarkable scientific study population. The Dutch survivors were a well defined group of individuals all of whom suffered just one period of malnutrition, all of them at exactly the same time. Because of the excellent healthcare infrastructure and record keeping in the Netherlands, epidemiologists have been able to follow the long term effects of the famine. Their findings were completely unexpected.

One of the first aspects they studied was the effect of the famine on the birth weights of children who had been in the womb during that terrible period. If a mother was well fed around the time of conception and malnourished only for the last few months of the pregnancy, her baby was likely to be born small. If, on the other hand, the mother suffered malnutrition for the first three months of the pregnancy only (because the baby was conceived towards the end of this terrible episode), but then was well fed, she was likely to have a baby with a normal body weight. The foetus ‘caught up’ in body weight.

That all seems quite straightforward, as we are all used to the idea that foetuses do most of their growing in the last few months of pregnancy. But epidemiologists were able to study these groups of babies for decades and what they found was really surprising. The babies who were born small stayed small all their lives, with lower obesity rates than the general population. For forty or more years, these people had access to as much food as they wanted, and yet their bodies never got over the early period of malnutrition. Why not? How did these early life experiences affect these individuals for decades? Why weren’t these people able to go back to normal, once their environment reverted to how it should be?

Even more unexpectedly, the children whose mothers had been malnourished only early in pregnancy had higher obesity rates than normal. Recent reports have shown a greater incidence of other health problems as well, including poorer performance on certain tests of mental activity. Even though these individuals had seemed perfectly healthy at birth, something had happened to their development in the womb that affected them for decades after. And it wasn’t just the fact that something had happened that mattered; it was when it happened. Events that take place in the first three months of development, a stage when the foetus is really very small, can affect an individual for the rest of their life.

Even more extraordinarily, some of these effects seem to be present in the children of this group, i.e. in the grandchildren of the women who were malnourished during the first three months of their pregnancy.

So something that happened in one pregnant population affected their children’s children. This raised the really puzzling question of how these effects were passed on to subsequent generations.

Let’s consider a different human story. Schizophrenia is a dreadful mental illness which, if untreated, can completely overwhelm and disable an affected person. Patients may present with a range of symptoms including delusions, hallucinations and enormous difficulties focusing mentally. People with schizophrenia may become completely incapable of distinguishing between the ‘real world’ and their own hallucinatory and delusional realm. Normal cognitive, emotional and societal responses are lost. There is a terrible misconception that people with schizophrenia are likely to be violent and dangerous. For the majority of patients this isn’t the case at all, and the people most likely to suffer harm because of this illness are the patients themselves. Individuals with schizophrenia are fifty times more likely to attempt suicide than healthy individuals.

Schizophrenia is a tragically common condition. It affects between 0.5 per cent and 1 per cent of the population in most countries and cultures, which means that there may be over fifty million people alive today who are suffering from this condition. Scientists have known for some time that genetics plays a strong role in determining if a person will develop this illness. We know this because if one of a pair of identical twins has schizophrenia, there is a 50 per cent chance that their twin will also have the condition. This is much higher than the 1 per cent risk in the general population.

Identical twins have exactly the same genetic code as each other. They share the same womb and usually they are brought up in very similar environments. When we consider this, it doesn’t seem surprising that if one of the twins develops schizophrenia, the chance that his or her twin will also develop the illness is very high. In fact, we have to start wondering why it isn’t higher. Why isn’t the figure 100 per cent? How is it that two apparently identical individuals can become so very different? An individual has a devastating mental illness, but will their identical twin suffer from it too? Flip a coin: heads they win, tails they lose. Variations in the environment are unlikely to account for this, and even if they did, how would these environmental effects have such profoundly different impacts on two genetically identical people?

Here’s a third case study. A small child, less than three years old, is abused and neglected by his or her parents. Eventually, the state intervenes and the child is taken away from the biological parents and placed with foster or adoptive parents. These new carers love and cherish the child, doing everything they can to create a secure home, full of affection. The child stays with these new parents throughout the rest of its childhood and adolescence, and into young adulthood.

Sometimes everything works out well for this person. They grow up into a happy, stable individual indistinguishable from all their peers who had normal, non abusive childhoods. But often, tragically, it doesn’t work out this way. Children who have suffered from abuse or neglect in their early years grow up with a substantially higher risk of adult mental health problems than the general population. All too often the child grows up into an adult at high risk of depression, self-harm, drug abuse and suicide.

Once again, we have to ask ourselves why. Why is it so difficult to override the effects of early childhood exposure to neglect or abuse?

Why should something that happened early in life have effects on mental health that may still be obvious decades later?

In some cases, the adult may have absolutely no recollection of the traumatic events, and yet they may suffer the consequences mentally and emotionally for the rest of their lives.

These three case studies seem very different on the surface. The first is mainly about nutrition, especially of the unborn child. The second is about the differences that arise between genetically identical individuals. The third is about long term psychological damage as a result of childhood abuse.

But these stories are linked at a very fundamental biological level. They are all examples of epigenetics. Epigenetics is the new discipline that is revolutionising biology. Whenever two genetically identical individuals are non-identical in some way we can measure, this is called epigenetics. When a change in environment has biological consequences that last long after the event itself has vanished into distant memory, we are seeing an epigenetic effect in action.

Epigenetic phenomena can be seen all around us, every day. Scientists have known about many examples of epigenetics, just like the ones described above, for many years. When scientists talk about epigenetics they are referring to all the cases where the genetic code alone isn’t enough to describe what’s happening; there must be something else going on as well.

This is one of the ways that epigenetics is described scientifically, where things which are genetically identical can actually appear quite different to one another. But there has to be a mechanism that brings out this mismatch between the genetic script and the final outcome. These epigenetic effects must be caused by some sort of physical change, some alterations in the vast array of molecules that make up the cells of every living organism. This leads us to the other way of viewing epigenetics, the molecular description.

In this model, epigenetics can be defined as the set of modifications to our genetic material that change the ways genes are switched on or off, but which don’t alter the genes themselves.

Although it may seem confusing that the word ‘epigenetics’ can have two different meanings, it’s just because we are describing the same event at two different levels. It’s a bit like looking at the pictures in old newspapers with a magnifying glass, and seeing that they are made up of dots. If we didn’t have a magnifying glass we might have thought that each picture was just made in one solid piece and we’d probably never have been able to work out how so many new images could be created each day. On the other hand, if all we ever did was look through the magnifying glass, all we would see would be dots, and we’d never see the incredible image that they formed together and which we’d see if we could only step back and look at the big picture.

The revolution that has happened very recently in biology is that for the first time we are actually starting to understand how amazing epigenetic phenomena are caused. We’re no longer just seeing the large image, we can now also analyse the individual dots that created it.

Crucially, this means that we are finally starting to unravel the missing link between nature and nurture; how our environment talks to us and alters us, sometimes forever.

The ‘epi’ in epigenetics is derived from Greek and means at, on, to, upon, over or beside. The DNA in our cells is not some pure, unadulterated molecule. Small chemical groups can be added at specific regions of DNA. Our DNA is also smothered in special proteins. These proteins can themselves be covered with additional small chemicals. None of these molecular amendments changes the underlying genetic code. But adding these chemical groups to the DNA, or to the associated proteins, or removing them, changes the expression of nearby genes. These changes in gene expression alter the functions of cells, and the very nature of the cells themselves. Sometimes, if these patterns of chemical modifications are put on or taken off at a critical period in development, the pattern can be set for the rest of our lives, even if we live to be over a hundred years of age.

There’s no debate that the DNA blueprint is a starting point. A very important starting point and absolutely necessary, without a doubt. But it isn’t a sufficient explanation for all the sometimes wonderful, sometimes awful, complexity of life. If the DNA sequence was all that mattered, identical twins would always be absolutely identical in every way. Babies born to malnourished mothers would gain weight as easily as other babies who had a healthier start in life. And as we shall see in Chapter 1, we would all look like big amorphous blobs, because all the cells in our bodies would be completely identical.

Huge areas of biology are influenced by epigenetic mechanisms, and the revolution in our thinking is spreading further and further into unexpected frontiers of life on our planet. Some of the other examples we’ll meet in this book include why we can’t make a baby from two sperm or two eggs, but have to have one of each. What makes cloning possible? Why is cloning so difficult? Why do some plants need a period of cold before they can flower? Since queen bees and worker bees are genetically identical, why are they completely different in form and function? Why are all tortoiseshell cats female?

Why is it that humans contain trillions of cells in hundreds of complex organs, and microscopic worms contain about a thousand cells and only rudimentary organs, but we and the worm have the same number of genes?

Scientists in both the academic and commercial sectors are also waking up to the enormous impact that epigenetics has on human health. It’s implicated in diseases from schizophrenia to rheumatoid arthritis, and from cancer to chronic pain. There are already two types of drugs that successfully treat certain cancers by interfering with epigenetic processes. Pharmaceutical companies are spending hundreds of millions of dollars in a race to develop the next generation of epigenetic drugs to treat some of the most serious illnesses afflicting the industrialised world. Epigenetic therapies are the new frontiers of drug discovery.

In biology, Darwin and Mendel came to define the 19th century as the era of evolution and genetics; Watson and Crick defined the 20th century as the era of DNA, and the functional understanding of how genetics and evolution interact. But in the 21st century it is the new scientific discipline of epigenetics that is unravelling so much of what we took as dogma and rebuilding it in an infinitely more varied, more complex and even more beautiful fashion.

The world of epigenetics is a fascinating one. It’s filled with remarkable subtlety and complexity, and in Chapters 3 and 4 we’ll delve deeper into the molecular biology of what’s happening to our genes when they become epigenetically modified. But like so many of the truly revolutionary concepts in biology, epigenetics has at its basis some issues that are so simple they seem completely self evident as soon as they are pointed out. Chapter 1 is the single most important example of such an issue. It’s the investigation which started the epigenetics revolution.

Notes on nomenclature

There is an international convention on the way that the names of genes and proteins are written, which we adhere to in this book.

Gene names and symbols are written in italics. The proteins encoded by the genes are written in plain text. The symbols for human genes and proteins are written in upper case. For other species, such as mice, the symbols are usually written with only the first letter capitalised.

This is summarised for a hypothetical gene in the following table.

As with all rules, however, there are a few quirks in this system, and while these conventions apply in general we will encounter some exceptions in this book.

Chapter 1

An Ugly Toad and an Elegant Man

Like the toad, ugly and venomous, wears yet a precious jewel in his head. William Shakespeare

Humans are composed of about 50 to 70 trillion cells. That’s right, 50,000,000,000,000 cells. The estimate is a bit vague but that’s hardly surprising. Imagine we somehow could break a person down into all their individual cells and then count those cells, at a rate of one cell every second. Even at the lower estimate it would take us about a million and a half years, and that’s without stopping for coffee or losing count at any stage. These cells form a huge range of tissues, all highly specialised and completely different from one another. Unless something has gone very seriously wrong, kidneys don’t start growing out of the top of our heads and there are no teeth in our eyeballs.

This seems very obvious but why don’t they? It’s actually quite odd, when we remember that every cell in our body was derived from the division of just one starter cell. This single cell is called the zygote. A zygote forms when one sperm merges with one egg.


This zygote splits in two; those two cells divide again and so on, to create the miraculous piece of work which is a full human body. As they divide the cells become increasingly different from one another and form specialised cell types. This process is known as differentiation. It’s a vital one in the formation of any multicellular organism.

If we look at bacteria down a microscope then pretty much all the bacteria of a single species look identical. Look at certain human cells in the same way (say, a food absorbing cell from the small intestine and a neuron from the brain) and we would be hard pressed to say that they were even from the same planet. But so what? Well, the big ‘what’ is that these cells started out with exactly the same genetic material as one another. And we do mean exactly; this has to be the case, because they came from just one starter cell, that zygote. So the cells have become completely different even though they came from one cell with just one blueprint.

One explanation for this is that the cells are using the same information in different ways and that’s certainly true. But it’s not necessarily a statement that takes us much further forwards. In a 1960 adaptation of H. G. Wells’s The Time Machine, starring Rod Taylor as the time travelling scientist, there’s a scene where he shows his time machine to some learned colleagues (all male, naturally) and one asks for an explanation of how the machine works. Our hero then describes how the occupant of the machine will travel through time by the following mechanism:

In front of him is the lever that controls movement. Forward pressure sends the machine into the future. Backward pressure, into the past. And the harder the pressure, the faster the machine travels.

Everyone nods sagely at this explanation. The only problem is that this isn’t an explanation, it’s just a description. And that’s also true of that statement about cells using the same information in different ways: it doesn’t really tell us anything, it just re-states what we already knew in a different way.

What’s much more interesting is the exploration of how cells use the same genetic information in different ways. Perhaps even more important is how the cells remember and keep on doing it. Cells in our bone marrow keep on producing blood cells, cells in our liver keep on producing liver cells. Why does this happen? One possible and very attractive explanation is that as cells become more specialised they rearrange their genetic material, possibly losing genes they don’t require. The liver is a vital and extremely complicated organ. The website of the British Liver Trust states that the liver performs over 500 functions, including processing the food that has been digested by our intestines, neutralising toxins and creating enzymes that carry out all sorts of tasks in our bodies. But one thing the liver simply never does is transport oxygen around the body. That job is carried out by our red blood cells, which are stuffed full of a particular protein, haemoglobin. Haemoglobin binds oxygen in tissues where there’s lots available, like our lungs, and then releases it when the red blood cell reaches a tissue that needs this essential chemical, such as the tiny blood vessels in the tips of our toes. The liver is never going to carry out this function, so perhaps it just gets rid of the haemoglobin gene, which it simply never uses.

It’s a perfectly reasonable suggestion: cells could simply lose genetic material they aren’t going to use. As they differentiate, cells could jettison hundreds of genes they no longer need. There could of course be a slightly less drastic variation on this: maybe the cells shut down genes they aren’t using. And maybe they do this so effectively that these genes can never ever be switched on again in that cell, i.e. the genes are irreversibly inactivated. The key experiments that examined these eminently reasonable hypotheses (loss of genes, or irreversible inactivation) involved an ugly toad and an elegant man.

Turning back the biological clock

The work has its origins in experiments performed many decades ago in England by John Gurdon, first in Oxford and subsequently Cambridge. Now Professor Sir John Gurdon, he still works in a lab in Cambridge, albeit these days in a gleaming modern building that has been named after him. He’s an engaging, unassuming and striking man who, 40 years on from his ground breaking work, continues to publish research in a field that he essentially founded.

John Gurdon cuts an instantly recognisable figure around Cambridge. Now in his seventies, he is tall, thin and has a wonderful head of swept back blonde hair. He looks like the quintessential older English gentleman of American movies, and fittingly he went to school at Eton. There is a lovely story that John Gurdon still treasures, a school report from his biology teacher at that institution which says, ‘I believe Gurdon has ideas about becoming a scientist. In present showing, this is quite ridiculous.’ The teacher’s comments were based on his pupil’s dislike of mindless rote learning of unconnected facts. But as we shall see, for a scientist as wonderful as John Gurdon, memory is much less important than imagination.

In 1937 the Hungarian biochemist Albert Szent-Gyorgyi won the Nobel Prize for Physiology or Medicine, his achievements including the discovery of vitamin C. In a phrase that has various subtly different translations but one consistent interpretation, he defined discovery as ‘To see what everyone else has seen but to think what nobody else has thought’. It is probably the best description ever written of what truly great scientists do. And John Gurdon is truly a great scientist, and may well follow in Szent-Gyorgyi’s Nobel footsteps.

In 2009 he was a co-recipient of the Lasker Prize, which is to the Nobel what the Golden Globes are so often to the Oscars. John Gurdon’s work is so wonderful that when it is first described it seems so obvious that anyone could have done it. The questions he asked, and the ways in which he answered them, have that scientifically beautiful feature of being so elegant that they seem entirely self-evident.

John Gurdon used non-fertilised toad eggs in his work. Any of us who has ever kept a tank full of frogspawn and watched this jelly-like mass develop into tadpoles and finally tiny frogs, has been working, whether we thought about it in these terms or not, with fertilised eggs, i.e. ones into which sperm have entered and created a new complete nucleus. The eggs John Gurdon worked on were a little like these, but hadn’t been exposed to sperm.

There were good reasons why he chose to use toad eggs in his experiments. The eggs of amphibians are generally very big, are laid in large numbers outside the body and are see-through. All these features make amphibians a very handy experimental species in developmental biology, as the eggs are technically relatively easy to handle. Certainly a lot better than a human egg, which is hard to obtain, very fragile to handle, is not transparent and is so small that we need a microscope just to see it.

John Gurdon worked on the African clawed toad (Xenopus laevis, to give it its official title), one of those John Malkovich ugly-handsome animals, and investigated what happens to cells as they develop and differentiate and age. He wanted to see if a tissue cell from an adult toad still contained all the genetic material it had started with, or if it had lost or irreversibly inactivated some as the cell became more specialised. The way he did this was to take a nucleus from the cell of an adult toad and insert it into an unfertilised egg that had had its own nucleus removed. This technique is called somatic cell nuclear transfer (SCNT), and will come up over and over again. ‘Somatic’ comes from the Greek word for ‘body’.

After he’d performed the SCNT, John Gurdon kept the eggs in a suitable environment (much like a child with a tank of frogspawn) and waited to see if any of these cultured eggs hatched into little toad tadpoles.

The experiments were designed to test the following hypothesis: ‘As cells become more specialised (differentiated) they undergo an irreversible loss/inactivation of genetic material.’ There were two possible outcomes to these experiments:

Either

The hypothesis was correct and the ‘adult’ nucleus has lost some of the original blueprint for creating a new individual. Under these circumstances an adult nucleus will never be able to replace the nucleus in an egg and so will never generate a new healthy toad, with all its varied and differentiated tissues.

Or

The hypothesis was wrong, and new toads can be created by removing the nucleus from an egg and replacing it with one from adult tissues.

Other researchers had started to look at this before John Gurdon decided to tackle the problem, two scientists called Briggs and King using a different amphibian, the frog Rana pipiens. In 1952 they transplanted the nuclei from cells at a very early stage of development into an egg lacking its own original nucleus and they obtained viable frogs. This demonstrated that it was technically possible to transfer a nucleus from another cell into an ‘empty’ egg without killing the cell. However, Briggs and King then published a second paper using the same system but transferring a nucleus from a more developed cell type and this time they couldn’t create any frogs. The difference in the cells used for the nuclei in the two papers seems astonishingly minor: just one day older and no froglets. This supported the hypothesis that some sort of irreversible inactivation event had taken place as the cells differentiated. A lesser man than John Gurdon might have been put off by this. Instead he spent over a decade working on the problem.

The design of the experiments was critical. Imagine we have started reading detective stories by Agatha Christie. After we’ve read our first three we develop the following hypothesis: ‘The killer in an Agatha Christie novel is always the doctor.’ We read three more and the doctor is indeed the murderer in each. Have we proved our hypothesis? No. There’s always going to be the thought that maybe we should read just one more to be sure. And what if some are out of print, or unobtainable? No matter how many we read, we may never be entirely sure that we’ve read the entire collection. But that’s the joy of disproving hypotheses. All we need is one instance in which Poirot or Miss Marple reveals that the doctor was a man of perfect probity and the killer was actually the vicar, and our hypothesis is shot to pieces. And that is how the best scientific experiments are designed: to disprove, not to prove, an idea.

And that was the genius of John Gurdon’s work. When he performed his experiments what he was attempting was exceptionally challenging with the technology of the time. If he failed to generate toads from the adult nuclei this could simply mean his technique had something wrong with it. No matter how many times he did the experiment without getting any toads, this wouldn’t actually prove the hypothesis. But if he did generate live toads from eggs where the original nucleus had been replaced by the adult nucleus he would have disproved the hypothesis. He would have demonstrated beyond doubt that when cells differentiate, their genetic material isn’t irreversibly lost or changed. The beauty of this approach is that just one such toad would topple the entire theory, and topple it he did.

John Gurdon is incredibly generous in his acknowledgement of the collegiate nature of scientific research, and the benefits he obtained from being in dynamic laboratories and universities. He was lucky to start his work in a well set-up laboratory which had a new piece of equipment which produced ultraviolet light. This enabled him to kill off the original nuclei of the recipient eggs without causing too much damage, and also ‘softened up’ the cell so that he could use tiny glass hypodermic needles to inject donor nuclei.

Other workers in the lab had, in some unrelated research, developed a strain of toads which had a mutation with an easily detectable, but non-damaging effect. Like almost all mutations this was carried in the nucleus, not the cytoplasm. The cytoplasm is the thick liquid inside cells, in which the nucleus sits. So John Gurdon used eggs from one strain and donor nuclei from the mutated strain. This way he would be able to show unequivocally that any resulting toads had been coded for by the donor nuclei, and weren’t just the result of experimental error, as could happen if a few recipient nuclei had been left over after treatment.

John Gurdon spent around fifteen years, starting in the late 1950s, demonstrating that in fact nuclei from specialised cells are able to create whole animals if placed in the right environment, i.e. an unfertilised egg. The more differentiated/specialised the donor cell was, the less successful the process in terms of numbers of animals, but that’s the beauty of disproving a hypothesis: we might need a lot of toad eggs to start with, but we don’t need to end up with many live toads to make our case. Just one non-murderous doctor will do it, remember?

Sir John Gurdon showed us that although there is something in cells that can keep specific genes turned on or switched off in different cell types, whatever this something is, it can’t be loss or permanent inactivation of genetic material, because if he put an adult nucleus into the right environment (in this case an ‘empty’ unfertilised egg) it forgot all about this memory of which cell type it came from. It went back to being a naive nucleus from an embryo and started the whole developmental process again.

Epigenetics is the ‘something’ in these cells. The epigenetic system controls how the genes in DNA are used, in some cases for hundreds of cell division cycles, and the effects are inherited when cells divide. Epigenetic modifications to the essential blueprint exist over and above the genetic code, on top of it, and program cells for decades. But under the right circumstances, this layer of epigenetic information can be removed to reveal the same shiny DNA sequence that was always there. That’s what happened when John Gurdon placed the nuclei from fully differentiated cells into the unfertilised egg cells.

Did John Gurdon know what this process was when he generated his new baby toads? No. Does that make his achievement any less magnificent? Not at all. Darwin knew nothing about genes when he developed the theory of evolution through natural selection. Mendel knew nothing about DNA when, in an Austrian monastery garden, he developed his idea of inherited factors that are transmitted ‘true’ from generation to generation of peas. It doesn’t matter. They saw what nobody else had seen and suddenly we all had a new way of viewing the world.

The epigenetic landscape

Oddly enough, there was a conceptual framework that was in existence when John Gurdon performed his work. Go to any conference with the word ‘epigenetics’ in the title and at some point one of the speakers will refer to something called ‘Waddington’s epigenetic landscape’.

from

The Epigenetics Revolution

by Nessa Carey

get it at Amazon.com

Getting Off: One Woman’s Journey Through Sex and Porn Addiction – Erica Garza.

He suggested I go to Sex and Love Addicts Anonymous (SLAA) meetings, but I destroyed our relationship instead. It was easier.


This book is for the wankers, the loners, the weirdos, the perverts, the outcasts, the bullied, the flawed, the awkward, the shunned, and the shamed.

This guy I kind of know named Clay, who has a neck tattoo and sells arty photographs to tourists, is on top of me and he’s not wearing a condom. I don’t care. I’m completely sober. He’s not.

I’m not sure what time it is. It is so dark outside that I can barely see Clay’s neck tattoo, his condomless dick, or his mouth full of crooked teeth. I hear him grunting; I feel his body’s weight, his six-foot-eight frame on my five-foot-two, and I know he’s almost finished. I’m too tired to have an orgasm, so I wait for the inevitable end.

It’s not that I don’t enjoy this. Enjoy is not big enough a word. I have come to crave these nights with Clay.

Sometimes he calls during the day and we make plans to go out for drinks, never dinner, because what would we talk about? But then I don’t hear from him until the middle of the night, when he’s drunk or high and knocking at my front door. I don’t care. I can’t even picture him in a bar ordering drinks, sliding dollar bills over to the bartender, or making conversation with me fully clothed. It’s true that I met him in a bar many months before, so I must have seen these things, but I was so drunk and heartbroken from my last breakup that I’m not sure exactly how that night went and what things he said to get me to swallow his cum.

He called me in the morning, and even though we made plans that I knew we wouldn’t keep, I got dressed anyway and put on my mascara and took a small swig of the vodka I keep in the freezer to prepare myself for an awkward date, imagining the questions we’ll trudge through out of politeness until the drinks we’ve ordered make us courageous enough to suggest the next move, to someone’s bed, likely mine.

After the time we’d chosen to meet had long passed, I wiped off my makeup, slipped on my pajamas, and fell asleep. Sometimes he shows up in the middle of the night; sometimes he doesn’t. Either way I won’t get another call for a few days, or a week, until he’s bored and horny and we play this game again.

Tonight when I heard him knocking I woke up straightaway, but I stayed in bed a little longer than usual. For a fleeting moment I considered that letting him in might not be the best thing for me, which isn’t so much of an aha! moment, but the usual common sense that I choose to ignore. I thought about the sensation of his hips against mine; his heavy breath on my neck; the fullness that sex gives me, like having feasted on a hearty meal; but I also thought about the immediate emptiness that follows my nights with him or men like him.

I weighed the options like a sensible person. I did the expected. I took off my pajamas, opened the door naked, and led him back to my bedroom.

He turns me over, which is his favorite way to finish. My eyes, fully adjusted to the darkness now, focus on the dent forming between my headboard and the wall. I think about spackling. Then I see my reflection just above that, in the large mirror with a rattan frame that hangs above the bed.

I hold eye contact with myself while he fucks me, slipping into some sort of twisted meditation. I’m someone else, a queen or a goddess. He is just some lowly subject I use for fun. There are guards in armor waiting outside my door and maidens who will bathe me and rub me with sweet-smelling oils before putting me to bed.

But when Clay pulls out, he flips my body back over like a rag doll and comes all over my tits and stomach so a pool forms in my belly button and rolls out onto the bedspread.

Afterward, we lie there, our elbows touching. I am less sleepy than I was when I opened the door, so the awkwardness sets in fast. He asks how my day was, and then I wait in desperate anticipation for the Call you tomorrow or See you in a few days, which may or may not be true. I don’t care. I dread the nights when he tries at intimacy, holds me in the sweaty crook of his arm for a few minutes before he retreats to the farthest corner of the bed to sleep while I lie there for hours, unable to sleep beside a stranger.

Finally he feeds me his lines and gets dressed and goes, and I give myself two orgasms in the wet spot of the bed. Once, to a three-minute clip of a teenage cheerleader fucking her stepdad on the kitchen counter while her mom showers upstairs, and then again to the thought of what a miserable slut I am to allow a guy like Clay to use me for sex.

There’s nothing unique about this singular moment in bed with Clay. I can reach into my arsenal of memories and easily pick out another story just like it, sometimes not even including a man. Because what I got from Clay was more than just his penis inside of me. What I got was an elaborate mix of shame and sexual excitement I had come to depend on since I was twelve years old. And my methods of getting this only became darker and more intense so that it wreaked havoc on all aspects of my life until I became a shell of a person, isolated, on a path to certain destruction.

With Clay gone and my two orgasms over, I steep in the afterglow of having gotten what I needed.

And, by now, I’m too exhausted to consider answering the overwhelming question echoing inside of me, where he and the cheerleader and the stepdad just were.

Why am I doing this?

What I block out of my mind, because it doesn’t fit the sad story I’m devising in my head, is that I’m using Clay too. He’s probably caught up in the same emptiness I am, desperately filling it with any warm body available. For what little conversation we have, Clay and I are actually quite similar, and we could probably have a genuine connection if we talked about these things. But we don’t talk about these things because, well, it isn’t sexy. I’d rather stick with the one thing that always manages to get me off, I’m bad, bad, bad.

Introduction

THE SAME ADDICT

My favorite porn scene of all time involves two sweaty women, fifty horny men, a warehouse, a harness, a hair dryer, and a taxicab. You can put it all together in a dozen different ways and I bet you still can’t imagine just how revolting the scene actually is.

Revolting. I’ve been using this word and many adjectives like it to describe the things that have brought me to orgasm for more than two decades. I’m not just referring to porn scenes either. I’m also referring to those scenes from my own life, costarring semiconscious men in dark bedrooms and sex workers in cheaply rented rooms, where I prioritized the satisfaction of sexual release over everything else screaming inside of me, Please stop.

Revolting: that summer after college when, after downing too many shots of tequila at a party, I stripped naked and took a bubble bath in front of a group of men.

Disgusting: slipping a few twenty-dollar bills to a woman who called me “baby” on the other side of a semen-stained pane of glass at a Times Square peep show.

Sickening: letting daylight dissipate and with it all my plans and obligations for the day because I’d rather stay in bed with high-definition clips of naughty secretaries, busty nurses, incestuous cheerleaders, drunk frat party girls, and sad Thai hookers.

I was thirty years old when I watched Steve McQueen’s provocative film Shame, which stars Michael Fassbender as Brandon, a New Yorker whose sex addiction leads him to reject intimacy and seek fulfillment through sex with prostitutes and extensive porn watching.

There was something familiar in his story. But that wouldn’t be a turning point for me. Not yet. It was more like an aura or a premonition, because over the next few years I would make many of the same mistakes I had made before, and I would make some new and more painful mistakes too, but right beside those mistakes there would be the hint of a growing awareness that can only come when you are in the midst of great change.

In 2008, three years before Shame was released, I was living in New York City with a man a decade older than me. We were engaged. He was a recovering alcoholic and went to meetings daily, sometimes twice a day, and I began to suspect that the primary reason for this frequency was to get away from me. And why wouldn’t he want to get away? At that time in life I was racked with insecurity and relentlessly jealous. On top of that I was out of work and intimidated by his successful career as a filmmaker. He paid for everything, which seemed to make both of us increasingly uncomfortable over time. When I began to question his whereabouts and raid his journals for evidence of his presumed infidelities, he began to resent me. Eventually we fell apart. But one of the things I remember most vividly about our breakdown was his accusation that I was a sex addict.

“You’re just saying that because you don’t fuck me enough!” was all I could say, though I knew then, and I had known for a long time, that I did have a problem with sex. I just didn’t know what to do about it.

He suggested I go to Sex and Love Addicts Anonymous (SLAA) meetings, but I destroyed our relationship instead. It was easier.

I wouldn’t go to SLAA for another five years, and when I did, I still wasn’t sure that I belonged there. When people talked about the emptiness that came when they watched porn and how isolated they felt, I shifted in my seat and held my breath, feeling that same sense of recognition I had watching Shame. Maybe these are my people, I thought. But when an attractive and uneasy woman admitted to picking up a “few new STDs” at her latest orgy, I thought, Well, I’m not that bad. And I judged her and judged them and went home and masturbated.

At thirty years old, at twenty-four, even at twelve, it was impossible for me to think about sexual pleasure without immediately feeling shame. I felt bad about the type of porn I watched. I felt bad sleeping with people I didn’t like. I felt bad because of the thoughts I feasted on when I was having sex with people I genuinely loved.

For as far back as I can remember this is just the way it was. My sexual habits were sick and shameful. My thoughts were sick and shameful. I was sick and shameful.

But nothing would stop me from getting off. Even though I had a suspicion for a long time that this combination of pleasure and shame probably wasn’t good for me, the satisfaction I felt in acting out was worth it.

That’s why I was willing to do things like stick it out for six months with an alcoholic bartender even when he’d repeatedly piss the bed and forget to hide other women’s clothes in his apartment. I didn’t want to lose the easy, consistent access to sex and affection that being in a relationship guaranteed.

I would break plans with people who needed me, family members, friends, or not make plans at all, because I didn’t want to miss out on any potential opportunity to have sex.

In Barcelona, suffering from what felt like the worst bout of strep throat I’ve ever had (which turned out to be mono), I chose to go home with the fifth guy in the space of a few weeks. It was the only thing I could do to stop thinking about the fact that I had just ruined a three-year relationship with the man I dated after the filmmaker, someone I truly loved and felt loved by, over a hand job with a Colombian man on vacation.

Instead of attempting to repair the damage, I slept with a French waiter who fucked me so hard I bled on his bed as if I were a virgin.

And then another French waiter, who took me to his friend’s house instead of his own because his wife was there.

And then a Spanish guy, a German guy, and another Spanish guy. And I did it with the last one without a condom because who really cared at that point? Not him. Not me. I couldn’t even moan or speak to him, my throat was so flared up.

In those few weeks, it didn’t matter who approached me. All that mattered was that I was approached. I didn’t need an aphrodisiac-infused dinner, a long conversation spent bonding over our favorite writers of the twentieth century, or a glimmer of a potential future. All I needed was an invitation.

Don’t get me wrong: judging someone based on the number of people they’ve slept with is absurd, and I know there are plenty of healthy, intelligent, and honorable men and women with strong sexual appetites. In some moments, with some partners, “sexually liberated” was exactly what I felt. But those moments were rare.

I’m much more familiar with the sad, anxious mess of a girl alone in her dark bedroom, hot laptop balanced on her chest, turning the volume down low, scrolling, scrolling, choosing, watching, escaping, coming.

I’m far too familiar with the girl who can’t keep her hands from shaking or her throat from clenching, the girl who is just waiting for an invitation. Waiting for someone to show her some interest so she can put the loneliness away for a few hours and find some release.

Sometimes I wonder, if there had been more research and more discussion about sexual addiction in women, would I have changed my behavior? Had there been more available examples of vulnerable, open, honest women sharing their journeys, would I have been more willing to embrace the possibility that I wasn’t alone and unfixable? It’s hard to know for sure.

What I do know is that isolation is damaging. Silence is damaging. And when you are isolated and silenced, all sorts of ideas, however twisted they may seem, can begin to seem real because they aren’t ever dealt with properly. I’ll also admit that, while my misery was very real to me for a long time, I was willing to suffer the repercussions because the gratification of acting out was too good and I was hooked on a culture of chaos.

My adolescent years were convoluted with ideas that chaos was good, that depression meant you were a creative person. My heroes were Kurt Cobain, Courtney Love, Nancy Spungen. Sylvia fucking Plath. Little seemed cooler than Van Gogh cutting off his ear, than Virginia Woolf drowning herself. I romanticized brokenness as a means of resisting change, isolating myself, drinking too much, throwing tantrums, and playing Russian roulette with various dicks to make a point that I just didn’t fucking care. I was a mess. I was interesting.

I filled journals with my depressed thoughts about my behavior, my loneliness, the hole I felt growing bigger inside myself, but I made no efforts to stop. If anything, all the brooding I did only intensified my habits, entrenched them. I would do everything I could to tear a relationship apart if the flip side meant having to deal with any real problem.

What began with harmless masturbation at twelve quickly became something more sinister. I wonder now if my parents suspected what I was up to all those hours behind closed doors with my computer. If they could tell by my exhaustion and dazed look that I had just binged for hours. But they never hinted at knowing. Do any parents confront their children about this?

When I was living at home I’d take my laptop to my closet because I was afraid someone would bust through the lock on the door and catch me, or see me through the window that faced the street, even though I had blackout curtains and knew that was impossible.

Porn made me paranoid, but it was free and accessible and always effective. From watching softcore on cable TV at twelve, to downloading photos at a snail’s pace on AOL at fourteen, to tuning in to streaming sites with broadband forever after, my habit became more immediate, more intense, and harder to escape.

But what was I trying to escape? I had lived a pretty normal life, I thought. I had good parents who loved me the best they could, and I’d suffered no sexually traumatic events. Was I fundamentally flawed? This question led me, over the years, to a frantic investigation of my childhood journals, desperately trying to uncover some repressed sexual trauma that I could not find. I threw my money at hypnotherapy, past-life regression, and other alternative treatments to find the missing link, eyeing my brother, my cousins, my uncles, my father, thinking, Which one of you did it? Which one of you made me this way? But when no such traumatic event could be found, the only thing left was that same unanswered emptiness and the conviction that I was inherently bad.

It wasn’t until my early thirties when I finally started to realize that this problem wasn’t just ruining my romantic relationships but all of my relationships, most notably, my relationship with myself. Because I had failed to examine all the reasons I had wanted to escape in the first place, the roots of my shame, I never developed the basic skill we all need to handle life’s twists and turns: how to cope.

Chapter One

THE GOOD GIRL

I grew up in the early eighties in Montebello, California, Southeast LA, where teenage pregnancy was on the rise and every Mexican restaurant claimed to have the best tacos north of the border. Living rooms were adorned with framed pictures of Jesus or the Virgin, and everyone believed in heaven and hell, not as abstract ideas, but as very real places. It was the kind of place where you could pick up your holy candles with your milk and bread at the local supermarket and you always knew someone celebrating a baptism or First Communion soon, giant events requiring ornate outfits and tres leches cake and a sense of relief on everyone’s part that things were good with God, no one was going to hell just yet.

I rarely met anyone who wasn’t Catholic. When it did happen, it was whispered about. Did you know Mrs. Gonzalez is a Jehovah’s Witness? Isn’t that weird? If you weren’t Catholic, to whom would you turn for help? No priest? No Bible? It was unclear how a person could distinguish right from wrong without the Commandments. And I didn’t even want to think of what happened to them after death. I imagined babies dying before they were baptized and shuddered at their unfortunate fates.

I often tell people now that I come from LA, or sometimes East LA if I want to hint at my Latino roots. LA is Hollywood glamour, money, and prestige; East LA screams danger, gangs, and irrefutable street cred. In truth, my life had neither. Montebello and all Southeast LA, home to cities like Bell Gardens, Pico Rivera, and Norwalk, were small, mediocre, boring.

My dad, a mortgage broker, helped low-income Mexicans buy first homes, while my mom, a housewife, made sure our home was intact. They balanced their checkbooks, and we bought clothes at Ross, and the only place we traveled to outside of the country was Tijuana, which my mom often said “didn’t count” since it was only two hours south. My brother, Gabe, and I ran through sprinklers in the summer or laid down giant plastic trash bags for slipping and sliding. Katie Wilkins, a white girl, lived next door to us, which was rare in a predominately Mexican neighborhood, and I’d often peer at the swimming pool in her backyard from my bedroom window with envy. Mediocrity, which I felt was directly connected to my heritage, was my first source of shame.

But, in retrospect, we seem more privileged than I realized. I vacationed in Hawaii and Walt Disney World. I attended private Catholic school, from kindergarten through high school. My dad owned and ran a mortgage company for nearly twenty years until he sold it for a large sum and bought himself his dream car, a flashy Corvette that looked like the Batmobile, and a vacation condo in Maui. And by the time I entered high school we had moved into a house with a pool. I never knew what it was to go to bed hungry or face eviction, but shame has a way of being irrational. I looked at our life and I wanted more.

I simply couldn’t understand why my parents would want to live in such a boring place. There seemed to be nothing but strip malls and taco stands, nail salons and bail bonds. But to them, and to other Mexicans, Montebello was a big deal. In the late sixties and early seventies, when they were growing up, Montebello was nicknamed “the Mexican Beverly Hills.” Housing prices were more expensive and the streets were safer than those in nearby East LA, where my mom spent her formative years. Tomas Benitez, the Chicano author and activist, said in an interview with LA’s KCET, “Montebello was mythic when I was growing up in the 1970s. It was the place where middle-class Mexican-Americans lived and came from. It had that quality, if you could get out of East LA, Montebello was Nirvana, the promised land and Beverly Hills East all rolled into one location.”

For my dad, who was born under modest circumstances in Mexico City and whose own father was an orphan, to be able to live in the Mexican Beverly Hills as an adult was a big step up. He played golf at the city’s country club every weekend and served as an important figure in the city’s Rotary International organization. We often ran into people who knew and respected him wherever we went, restaurants, the bank, the supermarket, and they’d shake his hand with sincerity, reassuring me and my older brother, “Your dad’s a good man,” in case we ever doubted it.

My mom, on the other hand, was less interested in the community. She often complained about the city’s lack of good stores and its seemingly endless pavement. Sometimes she even complained about its propensity for attracting wetbacks, always laughing after this admittance, especially if my dad was around, before she’d lovingly touch his arm and coo, “Aww, I married a wetback.”

That term wetback, coined for the Mexicans who illegally crossed the Rio Grande to get to America, was not an accurate description of my dad, who had crossed the border legally and traveled by road, not river. But that didn’t stop my mom from muttering the word whenever she was feeling playful, or worse, when she was feeling wicked. Even though she has Mexican roots herself, I always thought that her teasing meant she considered natural-born citizens superior to those who had been naturalized. She would have likely picked this idea up from her own dad, a WWII veteran whose own parents were immigrants, and whose dark skin made him feel inferior in a country that was even harsher toward Mexicans than it is today.

The problem, for me, was that my neighborhood and my place inside it didn’t resemble my preconceived notions of power. It didn’t matter that my classmates at school shared the same Spanish-sounding last names and most of their grandmas didn’t speak English either. I took note of the Mexican guy selling oranges on the corner, and the busboy picking up our dishes topped with messes of ketchup and crumbs, and I thought, No, that’s not me. I even convinced myself now and again that I was superior to those kinds of Mexicans because my parents hadn’t taught me Spanish. We were outgrowing our Mexican-ness, I thought to myself. Pretty soon it would be gone completely, forgotten like a dream.

My feelings of superiority never lasted long. I knew my classmates and I were part of a minority, and I didn’t like the sound of that word, sitting heavy in my mouth and mind. I wanted to be like the blond-haired, blue-eyed Tanner girls on Full House. I wanted the calm, sensible family talks like the Seavers had on Growing Pains. I wanted a family tree that stretched back to Europe. Maybe England or Ireland, France even. But not Spain.

I got hooked on TV at a young age, marking the beginning of my intense bond with screens, and TV served as a window into the exciting world out there. I became obsessed with the families and neighborhoods I saw that were different from my own, which is to say, white. There was no George Lopez on TV then, no Sofia Vergara or America Ferrera. And I deemed the world “out there,” on the TV screen and in the heart of glittering Hollywood, to be far superior to the Mexican Beverly Hills with its baldheaded gangsters, its teenage mothers, and its paleta men making their living selling sweet treats to kids on clean, suburban pavement.

Unlike my dad, who seemed perfectly content with his roots and his chosen city of Montebello, I leaned more toward my mom’s chronic dissatisfaction and her fondness for escape. Like me, my mom also found herself captivated by screens. She loved foreign films (Cinema Paradiso; Tie Me Up! Tie Me Down!; Shirley Valentine), and I’d cuddle up with her on the couch for countless cinematic escapes, placing myself in the films and imagining the adventures waiting for me in adulthood.

Sometimes I would imagine taking trips with my mom. It’s not that I didn’t love my dad or that I wanted her to leave him forever, but maybe a few months? A year? I picked up on the tension that arose between my parents if my dad was working late again or on another client call. He usually returned from the office when we were already tucked into bed and was gone in the morning before we’d had a chance to get up, always trying to get ahead at the expense of my mom’s growing resentment. My brother and I got used to having my dad around only on the weekends. But even then there were always more phone calls, more stacked files in front of him, and my mom found this difficult to accept, alternating between giving him the silent treatment and erupting in angry outbursts, depending on her mood.

My mom’s moodiness became more pronounced as I grew older. Some days she’d park herself in front of the TV, bored eyes glazed over by some daytime talk show or murder mystery. Other days she’d take me to the mall to try on clothes and feast at the food court, deep-fried corn dogs with mustard and curly french fries. And yet other days she’d be annoyed by everything, the dirty dishes, the piles of laundry, her lazy children, and I’d think to myself, She just needs a break. If we go away for a little while, she’ll feel better.

When my mom was upset, I sought solace in playing video games with Gabe, who was three years my senior. We spent hours toting machine guns in Contra, gobbling up mushrooms in Super Mario Bros., and scouring mythic lands for Zelda. I became obsessed with trying to beat him, frantically studying video game magazines to learn the latest cheats, training myself not to blink, lest I miss a bullet or fireball and lose. When I wasn’t playing, I was thinking of playing. When I was playing, I was thinking of what I’d play next.

When we weren’t saving princesses in front of the TV screen, Gabe and I were putting ourselves on the screen, acting out short films he wrote and directed; he’d decided early on he was going to be a famous filmmaker when he grew up. Gabe’s gift for screenwriting and his skillfulness with my parents’ camcorder earned him a lot of admiration. Family parties invariably involved the screening of a movie Gabe was making with my cousins and me as actors, typically toward the end of the night when our parents were tipsy and jovial.

I came to resent what I saw as Gabe’s creative genius, even when my mom encouraged my rising interest in writing, buying me books and journals that I filled mostly with complaints about my brother intermingled with praise for all the boys I had crushes on at school. And whenever she caught me complaining about something being unfair, she’d murmur, smiling, “All the best writers had rough childhoods.”

When Gabe didn’t want to play with me, I’d terrorize him with kicks and shoves until he’d eventually shove me back, at which point I’d run crying to my mom in hopes he’d get punished. When she’d reprimand him, her long, curly hair shaking wildly around her face, I’d stand behind her, laughing and waving my hands at him, thinking, I win! She loves me more than you!

Despite my mom and dad’s persistent praise of Gabe, I clung desperately to the idea that my mom loved me more. We were girls, after all, and this meant something. That’s why she let me skip school and lie lazily in bed with her some days, watching movies and eating popcorn, saying, Don’t tell Dad I let you skip again. I loved keeping secrets with her.

One night, when I was ten years old, my parents told us kids we’d have dinner in our fancy dining room. I was confused. My dad rarely made it to dinner. I thought the dining room was reserved for Thanksgiving and Christmas only. And were those candles?

I had heard the word divorce slither out of my mom’s mouth on a few occasions when she was gossiping about my uncle’s ex-wife, or when she was talking about certain kids’ parents at school, and I wondered to myself if this was what was happening. Were my parents treating my brother and me to one final moment of togetherness before my dad packed up his suitcase? Would we be divided between them? Clearly I’d stay with my mom. And then, immediately after, I wondered where we’d travel to first.

“Your dad and I have some big news,” my mom said, an excited smile on her face. A glass tumbler sat in her hand, filled to the brim with Pepsi and ice cubes, and she took gentle sips, letting suspense build around the table.

I looked at my dad, and he was smiling too.

“What is it?” Gabe said.

I kept my mouth shut, feeling excited yet guilty. Had I actually willed this into happening? Did all my imaginings of traveling the world with my mom come to fruition simply because I thought them? I considered the gravity of what this meant, that I had the power to destroy my parents’ marriage with my mind. I pictured myself as some kind of witch, a source of power and wickedness.

“You want to tell them?” my mom asked my dad.

“OK,” he said, and I held my breath.

“Your mom’s going to have a baby!”

My mom exploded in giggles, the ice cubes in her Pepsi clanking against the glass while she stood up to give me and my brother kisses and hugs. But when she pulled me close to her, my face pressed against her cotton blouse, I burst into tears.

“Oh, baby, why are you crying? What’s wrong?” She tried to pull away to look at my face, but I clung tight, digging my nails into her arm, refusing to let go. “Erica, what’s wrong?”

I heard my brother laugh, confused by my reaction. And I felt my dad come over beside us and put his warm hand on my quivering head. But I didn’t know how to explain the panic I felt at being cast aside, overshadowed by Gabe’s talent and the importance of a brand new baby, and so I lied when they asked again, “Erica, why are you crying?” Finally, I answered, “Because I’m happy.”

I can remember, vividly, the sexual fantasies that bubbled in my brain, seemingly out of nowhere, during my mom’s pregnancy. To distract myself from thinking about my new sibling, I turned my attention to other, more captivating places and daydreamed constantly. There’s nothing like the bulging belly and emotional intensity of a pregnant woman to inspire curiosity about how it all works: babies, sex, the origin of life.

All I had ever heard about sex from my parents came from my mom when, passing the local high school, she pointed out a few pregnant girls who couldn’t have been older than sixteen and said, “Don’t ever let that happen to you”, and then, pointing to my crotch, “don’t let anyone ever touch you down there.”

My mom and dad both seemed uncomfortable when it came to addressing sex, and they were equally aggressive about hiding it from me. When things got hot and heavy in whatever movie we were watching, the response was immediate: “Close your eyes until we say,” and I complied, listening to the indistinct sounds of what I was not allowed to see until it was all over.

I can understand my parents’ reluctance to talk about sex with me at ten years old, but “the talk” never came. Sex was something dirty and sinful, something to blush about, something to hide. These were obviously inherited ideas. My grandparents on both sides had the same reactions when a love scene unexpectedly danced across the TV screen: a shriek of discomfort followed by covered eyes and the demand that somebody change the damn channel. Whether it was a Latino thing or a Catholic thing, I couldn’t be sure. Even my teachers laughed uncomfortably and avoided eye contact when they explained that sex was something that happened “between two married people who loved each other,” for one reason alone: procreation.

Though I had limited knowledge of how sex worked, I began gradually piecing it together when my parents weren’t around. I’d been making lists of boys I wanted to kiss in my journal for a few years, but the lists became longer during my mom’s pregnancy, and sometimes even included rudimentary drawings of body atop body next to the lists. There seemed to be no one I didn’t find attractive in my fifth-grade class; I wanted all the boys and some of the girls too, and even our teacher, Mr. Rivera.

In class, I’d stare at Mr. Rivera’s crotch, trying to imagine what he looked like under his clothes. I stared at my female teachers’ breasts and long legs. I stared at my classmates’ bodies with such unquenchable curiosity and thirst, but I had no idea what to do with this desire except to try and ignore it, though the bubbling in my brain proved difficult to control. And since no other girls were talking about this kind of thing, and I wanted desperately to be a good Catholic girl, I figured something terrible was happening to me.

Though I had attended Catholic school since kindergarten and weekly Mass was part of the curriculum, I didn’t pray much. I made the sign of the cross with holy water, I closed my eyes and folded my hands so I looked like I was deep in prayer, and I confessed to the priest when required (always the same sins: Bless me, Father, for I have sinned. I fought with my brother and I said bad words), but these rarely felt like real acts of faith. They were obligations. My parents didn’t pray much either. Not publicly, at least. For a short time, we attended Sunday Mass a few times a month, but then we turned into what my mom called “part-time Catholics,” attending only during the holiest events, like Christmas and Easter. Pretty soon, we stopped going completely, so Mass felt like another school period. Despite this lack of practice, when I found out my mom and dad were having the baby, I started praying for one thing daily: Please let the baby be a boy.

I had to maintain my specialness somehow, and being the only girl seemed the best route. I was already used to being the only girl, not only of my immediate family but also among all my cousins. When Gabe wrote a new screenplay, I naturally got all the female parts and I was the sole recipient of the kind of oohs and aahs that come with being the only kid wearing a pretty dress or sporting a new perm or having sparkly nails or whatever other girlie thing my mom bought for me that my aunts loved. I had a few female younger cousins, but they were too little to prove what good and pretty and polite little girls they were. I had that covered.

I wrote down my favorite little brother names in my journal, Freddy or Jason, because I loved horror movies, and I knelt at the foot of my bed in tireless devotion to God, whom I thought of as a magic genie then, thinking, I will be a good girl forever if you grant me this one wish.

But God showed me what he thought of my wishes when my mom brought home the shadowy sonogram print of her new baby girl.

“Look at your little sister, Erica,” my mom said, handing over the picture. “Her name is Ashley.”

I held the print in my hand, terror rising in my throat as I tried to make sense of the black and white blob, before somehow emitting a sound of false recognition. “I see her now. She’s cute,” I lied.

Mixed up in my feelings of jealousy, I also found myself contradictorily excited at the prospect of a protégé. If my brother didn’t want to play with me, it wouldn’t matter anymore because I would have my very own sister. I wrote letters to her, trying to psych myself up, but the clashing nature of my feelings only ever resulted in shame. I wanted desperately to silence my fears and be a good big sister, but I couldn’t keep this mounting anxiety from getting in my way.

I tried to keep things as they were before, asking to skip school so I could lie in bed with my mom and watch movies all day. She’d sometimes let me, but I felt our bubble already significantly altered and her attention hard to place. In bed with her it was hard to ignore the growing belly between us, the place where my sister now lived. And I couldn’t help measuring myself against all the wonderful qualities I worried she’d have.

My body also experienced some scary changes around this time. I failed the vision test at school, and despite my desperate pleas that my eyes weren’t that bad, my mom bought me glasses anyway. I also saw that I now had dark brown hair on my arms, where other girls in class had smooth, pretty arms. My mom then noticed that I was often coming home from school with scraped knees and elbows from falling. When she took me to an orthopedic doctor and had me examined, his best diagnosis was clumsiness. All these things seemed serious to me. I felt as if my body were breaking down. I would be the ugly, nerdy, clumsy sister, and thoughts of self-loathing filled my head.

When Ashley finally climbed out of my mother’s womb that September afternoon, my growing fears intensified. A baby needs attention, after all, and as much as I tried to understand, my young mind was shattered at how much attention she actually demanded. My mom became fond of the camcorder, filming Ashley’s every move. My dad left work earlier to pitch in, and he spent lazy afternoons with her in the hammock that swung freely in the backyard sunshine. When I’d shop with my mom and the baby, I might end up with a blouse or pair of shoes, but if I noticed Ashley had more items than I, I fumed. Everyone at the mall fussed over her chubby cheeks and happy grin.

“Don’t you just love your little sister?” they’d exclaim, and I’d nod and produce an overly enthusiastic Yes!

Angry with my mom, my new sister, my brother, and my dad, I decided to throw myself into my academics. I excelled in all my subjects, especially language arts, and even found myself on the spelling bee team, studying lists of words all day and often before bed. I imagined myself becoming a national champ, my face on the cover of Time magazine. I would become the family genius.

Placing myself under enormous pressure, I became restless and squirmy. And I was nervous all the time. Nervous I would get bad grades and be held back another year, which meant being held back from the big, beautiful life I had planned for myself. Nervous I would make my parents mad about something and be banished to my bedroom without TV or books to suffer their worst punishment: Go to your room and think about what you did. But I was most nervous about upsetting God, the mighty ruler of the sky, more Nome King from Return to Oz than magic genie. If I upset God, then he would send me to hell, which was looking less like a fiery underworld and more like my bedroom in Montebello. For eternity.

My mom and dad were both impressed with my academic achievements, hanging up my Honor Roll and Student of the Month certificates on the fridge and proudly displaying my spelling bee trophies in the living room. I looked forward to after school sessions with my fellow spelling bee enthusiasts, where we tested one another on words we didn’t even know the meaning of, eating burgers and drinking chocolate malts while we nursed lofty dreams of academic stardom. I belonged to a clique of smart, sensible achievers, and I felt comfortable there. For a while.

It wasn’t long before I noticed the intimacy of this clique and how the majority of the kids in class had no interest in words like pirouette or precipice. Their looks of boredom and back-row snickering were too intimidating to ignore, and so I purposely misspelled words and ignored my spelling bee comrades, becoming increasingly attached to a girl named Leslie, a popular tomboy who had blond hair and a surfer-dude inflection, despite her parents both being from Guadalajara, Mexico.

As Ashley grew into a little ball of energy and destruction around the house, tearing apart magazines and emptying drawers and cupboards out of curiosity while demanding every ounce of my mom’s wearying attention, Gabe spent more time out of the house with his friends and returned only to retreat to his room or roll his eyes at any of us should we try to interact with him. I tried to stay out of everyone’s way, continuing to excel at my academics in the most subtle way possible, so I didn’t receive any loud praise from teachers. I threw myself into my friendship with Leslie full force. Everything my mom did annoyed me, and I mimicked the way Leslie talked to her mom, rolling my eyes and protesting at simple chores, to her dismay.

We spent weekends watching movies like The Texas Chainsaw Massacre and riding skateboards through the quiet residential streets of Whittier, where Leslie lived high up in the hills among big houses and hardly anyone spoke Spanish besides her family. I loved spending time in Whittier, and I wanted to hang out there all the time. Unlike Montebello, her neighborhood had antiques shops, a college, and white people. It wasn’t long before I started listening to the same music as Leslie, Nirvana, Pearl Jam, and Smashing Pumpkins, dressing like her, and talking like her. When I spent the night at her house, we stayed up late watching MTV before falling asleep in her bed, our bodies close and warm like conjoined twins.

When I think about the term first love, it’s difficult not to think of Leslie. My attachment to her was so intense, magnified by the urgency of youth, that the relationship still sticks out for me as one of the bigger ones in my life.

But I also recognize something dangerous and foreboding. I can’t help but realize that this relationship became a model of unhealthy love. With Leslie, I learned what it was to rely too heavily on another person, besides my mother, for security and comfort. I felt, for the first time, what it was to be completely enamored of a person, how being enamored can trick the brain into thinking it’s “in love,” and how being in love can sometimes feel the same as being completely swallowed up by that love until all that’s left when it’s over is a gaping hole just waiting to be filled again.

Chapter Two

THE WEIRD GIRL

My newfound interest in rock music led me to LA’s alternative radio station KROQ, which I listened to all day. If I stayed up late enough, I could also listen to the radio show Loveline at night. Hosted by Dr. Drew Pinsky and Adam Carolla, the syndicated program offered medical and relationship advice to listeners, and often had actors and musicians as guests.

Dr. Drew would, in later years, be applauded for his work on sex addiction, but it was Loveline that first introduced me to masturbation, which would soon become my primary method of acting out.

I was twelve years old when a caller fascinated with water faucets, a woman, called in and gave me an outlet for all my pent-up sexual frustration. She’d discovered this new and gratifying way in which to have mind-blowing orgasms. I had no idea what an orgasm was, but hearing the way she talked about it, I now needed to know. She said all she had to do was sit in the bathtub, spread her…


from

Getting Off: One Woman’s Journey Through Sex and Porn Addiction

by Erica Garza

get it at Amazon.com

A Primer: A Conversation about Economics – Richard Werner, CMA/CFM.

Today most nations focus on managing the balance of trade rather than seeking out ways to increase trade in a fair and sustainable way. Sustainable trade is critical to the long-term success of our modern society.


We have all had them: that conversation at work around the coffee station or with family on a holiday visit. We discuss, we listen, and we learn. Certainly it’s not like a school setting, but nonetheless we are immersed in an interesting conversation that often ends up teaching us something. The conversation might be about politics, cars, restaurants, or why your employer’s business is or is not doing so well.

You may have conversations that regularly touch on the subject of economics, or maybe you have overheard others discussing economic concepts, or perhaps you just have a natural and healthy curiosity about a subject that impacts every aspect of your life. Whatever your reason, and in the hope of developing a better understanding of how economics impacts your world, you scoured the earth (or internet) to find this informative, yet entertaining, book.

On the other hand, you may have accidentally tripped and stumbled nose first into this book and you may be thinking… “I might like to read up on economics sometime after I have finished watching the paint dry or have counted all the sand grains on the beach”. If you are harboring any thoughts along these lines, then I suspect I had best pique your interest quickly.

Much like a car enthusiast might want to engage another person in talking about what makes their car exceptional, or someone who has a medical issue may want to delve into a discussion about their condition, someone who is affected by the economy (and that would be all of us) might want to spend some time trying to understand what makes our world tick. All of us have a vested interest in the economics of our world; we are all involved in our economy, and through our decisions (including our choices at the polls) we all play a leadership role in how effectively our local, national, and world economies perform.

Most of us, at one time or another, have played a game (be it football, golf, or a board game) and at some point in time we chose to learn the rules and strategies of that game. The reason we made the effort and found the time to learn is obvious… we wanted to understand what we needed to do in order to be successful. No reasonable person would want to spend their time doing something without knowing the rules or at least having some semblance of what it takes to win the game. And, just like you wouldn’t play a board game without reading the rules first, it’s important to develop an understanding of how economics affects you in the game of life, especially since you’re already playing the game… every day when you go to work, make a trip to the store, or choose where to invest your retirement savings, you are interacting with the economy.

Economics is fundamental to living in a free society, and it’s important to understand it in order to know how and why you should try to preserve it. Every day there are reports in the news citing economic concepts and no shortage of talking heads engaging in debates related to them, yet the concepts remain a mystery to many of us. We hear terms on the news such as “the economy grew 0.5 percent in March” or “consumer confidence is down, which forebodes a potential recession” but few truly understand what they mean and how, or in what ways, they affect us. My hope is to help you understand basic economic concepts, help you put them into context, and help you understand how any number of factors in our economy often, but not always, lead to certain economic conditions.

If you are still toying with counting the sand on the beach, you may be asking yourself about now, “Why should I, or anyone else, take the time to learn more about economics?” or more importantly “Why did I buy a book on the subject?” Well, if you watch any amount of news or engage in political debates with friends and family, you will not be able to avoid the topic of economics for very long, and chances are pretty good that you would like to come across as knowledgeable and well informed. While economics is too complicated to glean a competent understanding of from a talk show or a commercial, you don’t need a Ph.D. in the subject to be an informed citizen. Going a step further, a significant part of our political decision-making process is driven by economics, which means understanding the basic processes of economics is essential to performing that most important of civic responsibilities… informed voting.

Economics, at least the parts we expect our government to influence, is in many ways like tinkering with an old car or developing a better cooking recipe. More of the same is unlikely to fix a problem; like adding more salt to a favorite dish, at some point more becomes too much and you need to reverse direction. A mechanic working with an old car might start by adjusting the fuel mixture to make it richer, but at some point it becomes too rich. Economic topics like taxes, jobs, and entitlement programs (to name just a few) cannot be addressed by the same answer every time. But by understanding the interplay of factors within the economy and how one affects the others, you will be better able to make sense of the news and intelligently engage in political debates about economic matters.

Like many complex subjects, economics is one that most of us tend to develop opinions about based on what experts tell us. As an example, one such expert, a noted economist, made a comment that he thought it was impossible to effectively prosecute white-collar crime. If you know more than a little about business, or have worked in situations where you have come into direct or indirect contact with any type of white-collar crime, you might be appalled by the thought that an expert would propose that we cannot hold white-collar workers to any kind of legal standard. As a society, whether we realize it or not, we all support or oppose beliefs, such as the one put forward by this economist, and through our voting we either support or reject these concepts. Knowing a little more about economics can help you make informed judgments, and the better informed our voting population is on matters affecting economics, the more likely we are to enjoy a better economy. Among the objectives of this book will be to explore the concepts embodied in statements such as the one about regulations and white-collar crime, in order to give you a better understanding of the workings of our economy.

Why should you or anyone else take the time to learn more about how the economy works? There are many reasons but I am of the opinion that the two most important are (1) to be a more competent consumer and (2) to be able to make better decisions as a member of the voting population.

The reason for the first is fairly obvious (if you don’t know, just smile and nod your head; I’ll explain it in a second), and the reasons for the second are as varied as the population (I’ll get to this too, just give me a minute).

What does being a more competent consumer mean?

If you, as a consumer, are aware of how prices are set and why prices rise, you can better judge the value you are giving up (your cash) in exchange for the value in the good or service you are buying. Educating yourself on what works and does not work in economics can clue you in to what is real value and what is hyperbole. I will provide an example of this later in the book (chapter sixteen) when we delve into why marketing people earn very good livings convincing consumers (us) to spend money buying what may be just an image of greater value. This image of value conflicts with what is best for the consumer, whose focus should be on the more tangible attributes of the good being purchased. For example, as a consumer buying a chair, would you be better served to buy a product based on a sexy commercial or based upon careful research of the chair manufacturer’s quality record? (And don’t say “sexy commercial”.)

Throughout this book I will use examples to demonstrate how understanding economics can help shine light on what may be faulty thinking. So as not to keep you in suspense, let’s get started with my first example. Suppose you are shopping for a can of chicken soup and you have two choices: a nationally recognized brand name and a lesser-known brand that costs 50% less. When you do your homework you may find that both products have been made at the exact same factory using the same formulas and quality controls. So in this case what are you getting for the added cost? Nothing, right? Or maybe you will find that the cheaper soup is exactly that: a cheaper, lower-quality, less satisfying product. Now while you might not want to spend a lot of time researching soups, there are purchases that do warrant a consumer taking the time to evaluate which choice is a better use of their financial resources, as the short sketch below illustrates.
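To make that kind of comparison concrete, here is a minimal unit-price check of the sort a careful shopper might run. The can sizes and prices below are hypothetical, chosen only for illustration, and are not claims about any real product.

# A minimal unit-price comparison; all prices and can sizes are hypothetical.
def price_per_ounce(price, ounces):
    return price / ounces

name_brand = price_per_ounce(price=2.40, ounces=10.5)   # nationally recognized brand
store_brand = price_per_ounce(price=1.20, ounces=10.5)  # lesser-known brand at 50% less

print(f"Name brand:  ${name_brand:.3f} per ounce")
print(f"Store brand: ${store_brand:.3f} per ounce")
# If both cans come off the same line with the same formula, the extra cost
# per ounce buys only the label; if quality really does differ, that difference
# has to be weighed against the price gap.

The numbers do not decide the question by themselves; they only make the trade-off explicit so the shopper can weigh it.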

Making better decisions as a member of the voting population

Earlier I mentioned that we each play a leadership role at the polls… as voters we choose which leaders to hire to manage our society. Being a knowledgeable person in the voting booth is probably the best way we can avoid our country becoming another Nazi Germany, Soviet Union, or some other failed society. This truth is probably something that extends well beyond economic knowledge but, within the confines of this text, we will stick to the economic reasons.

Opinions are strong and varied on this point, but arguably many voters do not go to the polls armed with sufficient facts and therefore cannot reflect and make decisions based upon the facts. Having moved around a bit in my career, I have had the benefit of living in districts that leaned in opposing political party directions. I have personally thought that in some overwhelmingly Christian districts, Jesus himself could run against Satan and, if Jesus were of the district’s minority party, he would maybe generate the closest race in the district’s history but would still lose by a comparative landslide. Look, it’s normal to become indoctrinated into one party or the other due to the leanings of your family or of the city, town, or county in which you live, and it’s just human nature to adopt the values of those close to you. The goal of this book is not to try to make independents out of Democrats or Republicans, but instead to help develop an understanding of what economics is and how the system works. In this way, as a voter, you can come to a more informed decision when listening to a candidate’s position on the various economic issues.

Further to the point, let me share a brief personal observation about politics, economics, and some possibly less informed voters I have known. A relative of mine is very vocal about his political leanings; his history has been one of frequent unemployment, lengthy stints on workers’ compensation, and a tendency to end up on public assistance. Another family member, who is also very politically vocal, is churchgoing, hardworking, has never taken a dime of the government’s money, and staunchly defends the other political party. If you think the first one leans Democratic and the second Republican, guess again. By all accounts both of these individuals are good people, never ones to cheat another and equally likely to help a stranger who has a disabled car. The point is not that either one is wrong but that neither one really understands fully what their party of choice tends to support as economic or social policy.

I am not trying to say that strong supporters of one party or the other are ignorant or that they blindly vote based on what they grew up with as children; I’m just saying that many subjects, including economics, are more mysterious than they need to be. Just as the cave men of 10,000 BC did not understand the movements of the Sun and the Moon because the knowledge was not yet available to them, if we don’t have a knowledge of basic economics, the workings of our society will be just as mysterious to us. So let’s focus on taking the mystery out of economic concepts.

My intention throughout this book is to steer away from political arguments and stands on the many economic issues, such as taxes (on whom and how much) and whether government should or should not have a role in healthcare. Instead my goal will be to put into view how economics, within a free nation, can work in a pure state and provide insight into what really happens and why. With that said, I believe everyone is subject to some form of intellectual blindness on any subject and, while I will try to keep my prejudices from tainting the descriptions here, you should keep in mind that I, like every other writer, have preconceived notions as to what is correct, and as a result it is possible that the facts I present may be twisted by those influences. That warning not only applies here; it also applies every time you tune into MSNBC, Fox News, or CNN, and while you may not frequently hear that warning elsewhere, please remember it as you go forward through this book and beyond.

In providing insight into the world of economics I will share with you stories from my life where I was confronted by economic concepts in action, descriptions of historic events and how they demonstrate an economic concept, and the story of a fictional island where the economic concepts come to light through working examples. Often seeing the humor in a situation can be an aid to understanding, and hopefully you will be amused occasionally along the way.

In terms of opinion, I will share many but in doing so will endeavor to keep my explanations closely aligned with recognized economic theory. I will only intentionally try to sell the reader on one concept… that free enterprise is a powerful economic model, one that has beaten out all challengers thus far on the planet Earth. Understanding free enterprise economics is everyone’s responsibility whether you are a voter, a parent, or a participant in an economy. I hope you will find reading this book both enjoyable and informative.

Chapter 1

The Beginning

Most Americans do not have a good understanding of the workings of our economy or how the global economy interacts with the United States’ economy. This is unfortunate because the important concepts of economics do not require an advanced degree in this science; our basic education, as provided in the United States, along with a little outside reading will provide what we need to know about economics. This book will help put the concepts in terms that will take away a good part of the mystery.

While understanding economics, or at least the basics of economics, is within the grasp of most Americans, developing that understanding does require a little study and the willingness to consider how different activities in the economy depend on one another.

Much of economics in action is just simple common sense… a matter of considering what logical choices people will make when trying to fulfill their wants and needs. If you keep in mind that the root of economics is based on that simple concept you will have set for yourself the foundation for understanding economics.

Each day you play a role in the United States’ economy whether you are a producer or a consumer of products and/or services. Your contribution to the economy can be obvious (as in a worker producing a product in a factory) or less obvious (as in an ad designer who, though not directly connected to making a good, works to facilitate the sale of that good). The economy is a complex web of people producing goods and services for each other’s consumption.

Obvious contributions to the economy can be easy for us to understand, such as how the person who makes the proverbial indispensable widget adds something of value for the rest of the population to consume. However the value of other roles in the economy, such as finance and marketing, can be more difficult to understand because the connection between what is being produced and the end value to the consumer is not as obvious. I will attempt in this text to provide the context necessary to understand what makes the economy go, what causes the economy to not work so well at times, and how interconnected jobs and resources all work together.

Economic terms, such as inflation and productivity, are used every day in the news and are as much misunderstood as they are understood. Throughout this text we will attempt to help you develop a working knowledge of what is meant by many basic economic terms. The approach we will use is to provide both real-world and fictional examples, using easily understood language to walk you through a demonstration of each economic concept.

Since we all participate in the economy (by helping to guide our economy through the choices we make at the polls and by producing and consuming resources) it is crucial that we understand economics and develop the knowledge we need to make wise decisions. As a contributor to the economy, understanding your role will enable you to make better choices in your life whether you are a top level executive or an entry level worker.

Anyone who has ever spent time among entry level employees knows that there is no shortage of stories describing mind-boggling missteps by management. These perceived missteps may simply be a case of the storyteller not understanding the bigger picture, but it is just as likely that management, not having a clear understanding of what happens on the front line, made decisions that led to a waste of economic resources.

Why should we care about economics as it relates to our work? Does it really matter if entry level workers mistake good management decisions for foolish actions? Among the selfish reasons to care about economics is that any waste of economic resources diminishes the quality of life for everyone.

After graduating from high school I spent my last year as a teenager working in a high-end, small-lot production furniture factory, which meant most of the jobs we ran consisted of anywhere from a couple dozen pieces up to a few hundred pieces. This factory used a job costing system that required each employee who performed a step in the production process to fill out a time card for that job. To explain by example, let’s say there is a job to make 30 chairs. One person in the process has the responsibility for making the chair legs; this person retrieves a rough cut board from the lumber pile and cuts it to the appropriate length. For 30 chairs this person will make 120 cuts. In our example it takes this person 30 minutes to complete the 120 cuts (four chair leg pieces per minute). This employee then fills out a time card for the job reporting that he spent 30 minutes on the 30-chair project. This data leads factory management to assign a cost of one minute of work by this employee to each chair. As the chair passes through all the steps in the factory, the dollar cost of each of these manufacturing operations is added together, providing a cumulative labor cost per chair. The cumulative labor cost plus the cost of materials (i.e., wood, fabric, finishing chemicals, etc.) are then totaled to give us the final cost of the chair.
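
To make the mechanics of that job costing system concrete, here is a minimal sketch in Python. Only the 30-chair job and the 30 reported minutes of leg cutting come from the example above; the other operation times, the wage rate, and the material cost are hypothetical figures added purely for illustration.

```python
# Minimal sketch of the factory's job costing logic for one 30-chair job.
# The leg-cutting time (30 minutes) comes from the example above; the other
# operation times, the wage rate, and the material cost are hypothetical.

chairs_in_job = 30
reported_minutes = {                 # one time card per operation on this job
    "cut legs to length": 30,        # 120 cuts in 30 minutes, as described
    "sticker machine": 45,           # hypothetical downstream operations
    "assembly": 90,
    "finishing": 60,
}
wage_per_minute = 0.20               # hypothetical labor rate, dollars per minute
materials_per_chair = 20.00          # hypothetical wood, fabric, finish, etc.

labor_minutes_per_chair = sum(reported_minutes.values()) / chairs_in_job
labor_cost_per_chair = labor_minutes_per_chair * wage_per_minute

print(f"labor minutes per chair: {labor_minutes_per_chair:.1f}")
print(f"final cost per chair:    ${labor_cost_per_chair + materials_per_chair:.2f}")
```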

I, as the factory newbie unschooled in the ways of the factory world, was assigned to work with a very large man who went by the nickname of Mule. Mule ran one of the most complex machines in the factory, called a sticker machine. This machine took in the cut-to-length and -width pieces of wood and made all four sides smooth and uniform and, in some cases, applied an additional shape to one or more sides of the wood. My job was to take the finished pieces coming off the machine and stack them on a factory production cart. In this arrangement I could only work as hard as Mule chose to run the machine. Mule was very good at what he did, so he could get a lot of production out in a short time. I grew up on a family farm, so for me work meant getting the work done as quickly as possible and staying at the job until it was complete. Mom and Dad drilled this approach to work into the heads of all of the kids in my family, and I was soon to find out that it conflicted with this factory’s generally accepted approach to work.

From our first day in kindergarten we have had to learn to adapt to our social environment, and I quickly learned that a big part of fitting in and of the socialization process of my new job was learning the “art” of time card completion. As you might suspect, what was reported on the cards was not a true reflection of what actually happened on the factory floor. On any given day, Mule and I would run ten to twenty jobs, each requiring a separate time card. Our day would start at 7 am, and for an hour and fifteen minutes we would run several jobs. Mule would then sit down with me to fill out our time cards and would account for a full two hours of work (including the 45 minutes between 8:15 and 9 am that we hadn’t worked yet). Mule would assign that 45 minutes to the various jobs we had just completed and then head off to the restroom for about a half hour, spend the next 15 minutes visiting some of his buddies around the factory, and then start his 9 am general factory break.

Mule was an intimidating guy who was nearly twice my size, so I guess I could maintain that it was out of fear that I adopted a work style that was very foreign to me, but, in all honesty, my true motivation came from the desire to fit in with this small segment of the local society. Unfortunately, this work style was not unique to Mule (and now me); it was a plant-wide behavior that varied only in the creative ways individuals could come up with to waste time. The cumulative result was that 25% to 50% of each day’s work was, in effect, wasted time.

What I didn’t realize, and I am sure Mule didn’t either, was that by our actions we were making the furniture produced by that factory more expensive. For simplicity’s sake, let’s assume that one chair had a manufacturing cost of $100 and that the final cost included 20% materials and 80% labor (a good part of which was wasted time). Taking the math forward, some $20 to $40 of each chair’s cost was due to this socially enforced “art” of time card completion.
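
The $20-to-$40 range follows directly from those assumptions, as this quick calculation shows (using the $100 chair cost and the 20/80 materials-to-labor split from the paragraph above):

```python
# Share of each chair's cost attributable to wasted labor, using the figures above.
chair_cost = 100.00
labor_share = 0.80                      # 80% labor, 20% materials
for wasted_fraction in (0.25, 0.50):    # the 25%-50% range of wasted time
    wasted_cost = chair_cost * labor_share * wasted_fraction
    print(f"{wasted_fraction:.0%} wasted time -> ${wasted_cost:.0f} of each chair's cost")
```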

Because of these shenanigans you might think that the company where Mule and I worked would have gone out of business fairly quickly, but it continued producing expensive, high quality furniture for more than twenty more years. Now you may be thinking, “This was probably a union plant where the union was the cause of the bad work habits.” Well, you would be partly right: it was a union plant but, from the best that I could tell, the union had nothing to do with these costly work habits. I can say this because this particular union was pretty weak and all it seemed capable of achieving was securing the lowest pay and worst benefits in the area… I might go so far as to say that, at its best, this union could not get its members sunlight on a sunny day. So to blame the union environment for the inefficient use of the factory’s labor is most likely wrong.

What causes this type of mentality among workers, management, and others within our society? The most likely cause is a lack of understanding among workers of the connection between low productivity and employee rewards; the next most likely reason is poor management. At the root of both is a lack of understanding of economics.

Let’s focus on the workers first. Is it reasonable to assume that the workers in this plant, in 1975, were deliberately trying to put the plant out of business or to lessen the company’s ability to give raises to its workers? Obviously neither of these goals was driving the workers to act in this wasteful way. So why did it happen?

In order to answer this question let us first examine the motivations of management; we can assume that management had more information than the plant’s nonmanagement workers on the effects of lower productivity. Anyone who has ever managed people knows how difficult it can be to motivate people to work harder. So how do we do it? Well, the easiest means of motivating people is to get them invested in the results. This does not necessarily mean invested in the company in terms of stock ownership but, in general terms, invested in working to ensure its success. This brings us to a basic and well-known principle: the carrot-and-stick approach to motivation.

Almost every company seeks to employ the carrot-and-stick approach in order to achieve company objectives. During my short youthful stint with this furniture company I was able to see both techniques in practice, with a heavy focus on the “stick”.

On a daily basis the plant manager and foreman would look to discipline poor performance; verbal and written warnings, suspensions without pay, and firings happened frequently. One of the most talented employees, we’ll call him Jerry, went through all the discipline processes twice during his career, including being fired. Jerry was brought back about the time I was hired, so I got to see him start the process all over.

Jerry taught me how to run a machine that was probably forty years old at the time, and in doing so I was able to learn more than a little about what made him tick. Jerry primarily wanted to take care of Jerry. He was a man of action; however, his actions seldom benefited the company or his standing within the company. Jerry would prioritize his work based on which jobs he liked to run the most, he was as adept as anyone at taking long bathroom breaks during the day, and he felt entitled to use company equipment, time, and materials to complete personal projects. At different times, Jerry used the company equipment to convert company lumber into finished pieces for home projects. At one point he found some rosewood in the lumberyard and decided to use it for knife handles, and we spent the better part of a couple of days using the wood, along with some metal from the factory, to make ourselves two knives.

Jerry had the skills and intelligence to be a major contributor at this plant. He was talented, capable of doing great work, and could be very productive when he wanted to be; he just seldom wanted to, unless of course it was for one of his personal projects. Jerry never stayed focused on using his skills, talent, and intelligence to the benefit of the company, and ultimately he chose to leave and work for another local company, one that was known for offering the best wages and benefits.

At this point in my career I was already studying accounting, and I found it curious that this free-spirited and intelligent man went to work for a company that had a reputation for running a tight ship (an employer that would, on the surface, seem ill-suited for Jerry and his propensity to avoid productive work) while at the same time the furniture company, which desperately needed a person with his skills, seemed incapable of retaining him.

The underlying reason for an individual and company mismatch, as in the case of Jerry and the furniture company, is a lack of understanding of what each needs from the other. Just as in the workplace, our role in the economy is often not clear to us, and we are not able to see a clear correlation between doing well at work and being more financially successful. Understanding the workings of economics is the key to remedying this. For example, if Jerry had clearly understood how working hard and efficiently affected his pay and/or job security, he would likely have performed better. Additionally, if management had been better educated in the psychology of the worker and the workings of economics, they might have been better able to construct an effective method of managing and motivating Jerry. Absent this, Jerry consistently underperformed, and the company lost valuable production output and ultimately a very skillful employee.

As an example of this, let’s say that Jerry, working at his optimum capacity, could generate an extra chair’s worth of production each day. If that chair sells for $200, then the economic cost of this lost productivity is $200 minus the cost of any direct materials going into the chair, which in this case let’s say is $50. If you told the workers that their lack of effort was costing the company $150 each day per employee, I doubt you would have heard much of an outcry from the workers. If, however, you could convince the company and the employees that the $150, if earned, would benefit both the company and the employees, then the collective team would view this lost productivity differently.

The company, at one point, contracted with outside consultants to create the proverbial “carrot” to provide motivation for employees to improve productivity. Unfortunately, the “carrot” that the consultants developed and implemented, a gain-sharing plan, was so complex that no one, other than a finance guru, had any chance of understanding it. At that point I was well into completing an accounting degree, and though still far short of a guru, I would like to think I had enough knowledge to at least see some vague connection between my performance and a potential financial reward, but the dots just weren’t there for me to connect. But let’s not digress into a commentary on the design and implementation of gain-sharing plans. The important point here is that even though management knew, to some degree, that it needed the cooperation of the workforce in order to drive better performance, and despite attempting to do the right things, it was still unsuccessful in gaining that cooperation.

The struggle described here is a common one within working environments across our planet, but, even though the goal is simple enough, you seldom hear of companies that are successful in gaining optimal employee participation in the company’s success. You can make an argument that there are simply not enough workers with a strong work ethic in our economy. It’s certainly true that some people would not work hard on a consistent basis even under the threat of physical harm, but this does not accurately describe all or even most American workers. Generally speaking, people in this country desire to work for successful companies and want to be a success at what they do.

We all have a universal desire for many of the same basic things: food, security, and love. Within the sciences of psychology and economics these are described as a hierarchy of needs. Each person has their own weighting for these needs, which makes finding a single solution to universal motivation difficult. For our purposes here it is simply important to recognize that most people desire the fulfillment of these needs and are willing to work to satisfy them.

Opposing desires and needs coexist within people. Among these is the need for leisure time, and it’s this need for leisure that often comes into conflict with the need to be productive. It is this desire for leisure that prompted workers at the aforementioned furniture factory to spend hours each week in the bathroom sleeping, resting, or reading rather than working. Most would agree that few people would choose to spend hours in a restroom as their first leisure-time destination. Given the choice, Mule, Jerry, or I would have worked our butts off to get to go home twenty minutes early or to make an extra ten bucks. The collective failure of employees and employers to align their economic and social desires caused the lost productivity to be spent, among other places, in the company restrooms, resulting in a waste of resources. Given the incentive of going home early or making a few extra dollars, versus taking the half-hour, all-expenses-paid vacation in the john, most employees (there will always be exceptions) would have willingly forgone extended bathroom breaks in order to put out the production of an extra job or two.

Our purpose in this text is not to do an exposé on the waste at a factory several decades past but, instead, to enable you to develop an understanding of how the science of economics works. Perhaps the next time you read a news story regarding a drop in factory productivity in the United States, you will be able to envision what that loss looks like and, more importantly, what it means to our economy. (I offer apologies to any reader who has just had the disturbing vision pop into their head of several thousand workers sleeping, seated, in innumerable company bathrooms throughout the United States.) If, after finishing this book, you can better understand the connection between higher productivity and a better lifestyle for society, then the time you spent reading and the time I spent writing this book will have been worthwhile for both of us.

Chapter 2

The Island Economy

Imagine there is a small island; we will call it Adam’s Island, named for a famous Scotsman, Adam Smith, who visited the island while it was first being settled in the eighteenth century. On Adam’s Island live three families, Farmer, Sheppard, and Fisher, who, in the beginning, made up the entire population of the island. These families split the island into three parcels: Farmer’s parcel is rich farmland, Sheppard’s parcel is made up of rolling grassland, and Fisher’s parcel has a bay with the best fishing access on the island.

This island represents a complete economy that has everything needed to sustain the simple needs of these three families. For a number of years each family has subsisted only on what they have been able to produce on their part of the island. Each family has 180 hours a week that they are able to devote towards producing products for their family’s consumption and, by working very hard, each family has been able to meet their basic needs without any dependence on the other two families.

Each family’s part of the island has very different capabilities in terms of producing the basic goods needed to survive.

The Farmer parcel has six hundred acres of land, the best of which can produce forty bushels of wheat per acre, requiring eight hours of work per week per acre to farm. The parcel can also be used to raise sheep; however, the land is not well drained, so it cannot be used for pasture much of the year and, in order to raise sheep, the land must be used to grow hay, which then needs to be harvested, stored, and fed to the sheep. The family can raise three sheep per acre, and each acre used for sheep requires twelve hours of care per week. The Farmer parcel also has access to the sea, but reaching it is a challenging walk down a steep path. The Farmer family can catch one-fifth of a pound of fish per hour spent fishing.

The Sheppard family also has six hundred acres of land, most of which they can use either to raise sheep or to grow wheat. The family can raise five sheep per acre, and the labor required to care for the sheep is twelve hours per week per acre. The land is well drained, and the family must irrigate the land part of the year in order to produce wheat. Even with irrigation the best yield possible is thirty bushels of wheat per acre, which requires ten and a half hours of work per week per acre to produce. There is also access to water for fishing; however, the waters have difficult currents, resulting in poor fishing, and the family is only able to catch one-sixth of a pound of fish per hour spent fishing.

The Fisher parcel, also six hundred acres, is mostly in a low-lying area of which only forty acres are suitable for farming or pasturing sheep, and another twenty acres can only be used for pasture. The forty acres can produce thirty-two bushels of wheat per acre, but it requires fourteen hours per week per acre to grow the wheat. The land, if used for sheep pasture, can support four sheep per acre and requires eleven hours of labor per week per acre. The family has the best fishing access on the island, enabling them to catch one-quarter of a pound of fish per hour spent fishing.

To survive on this island each family needs to have a minimum of two hundred bushels of wheat, wool from twenty sheep, and five hundred pounds of fish each year. Each family can, on their own, produce enough of each of these essential goods. The following table shows the number of hours each family will spend to obtain the minimum amount of wool, fish, and wheat needed to survive.
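
The table itself is not reproduced in this excerpt, but the weekly hours can be estimated from the production rates described above. The sketch below assumes 50 working weeks per year when converting the annual fish requirement into weekly hours; that assumption is mine, not the book’s, so the figures are only approximate.

```python
# Rough estimate of the weekly hours each family needs to meet the minimums
# on its own, using the production rates described above.
# Assumption (not stated in the book): 50 working weeks per year.

NEED_WHEAT, NEED_SHEEP, NEED_FISH = 200, 20, 500   # bushels, sheep, pounds per year
WEEKS_PER_YEAR = 50                                # assumed, for the fishing hours

families = {
    # wheat bu/acre, wheat hrs/wk/acre, sheep/acre, sheep hrs/wk/acre, fish lbs/hr
    "Farmer":   (40, 8.0, 3, 12, 1 / 5),
    "Sheppard": (30, 10.5, 5, 12, 1 / 6),
    "Fisher":   (32, 14.0, 4, 11, 1 / 4),
}

for name, (bu_per_acre, wheat_hrs, sheep_per_acre, sheep_hrs, fish_lbs_hr) in families.items():
    wheat = (NEED_WHEAT / bu_per_acre) * wheat_hrs       # hours/week growing wheat
    sheep = (NEED_SHEEP / sheep_per_acre) * sheep_hrs    # hours/week tending sheep
    fish = (NEED_FISH / fish_lbs_hr) / WEEKS_PER_YEAR    # hours/week fishing
    total = wheat + sheep + fish
    print(f"{name}: {wheat:.1f} + {sheep:.1f} + {fish:.1f} = {total:.1f} hours/week")
```

Under these assumptions each family’s total lands close to the 180-hour weekly budget mentioned above, which is consistent with the observation that follows; the exact figures in the book’s table may differ slightly depending on the number of working weeks assumed.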

Each family, based upon their need for the three products, is working almost every available hour and still is only just meeting their minimum needs for each of these essentials.

Much of the early history of man was spent as subsistence hunter-gatherers, where each person, or family, sought to find what they needed in order to feed and clothe only themselves. Trading, which no doubt developed over time, most likely came about when one family, finding themselves with an excess of one type of good, offered to exchange the excess for another good held by another family. You can perhaps imagine a family ten thousand years ago who successfully hunted a woolly mammoth offering their near-term excess supply of meat to a family who had an excess supply of dried berries.

Later in the history of mankind, humans began making a habit of being inter-reliant. For example, tool makers provided hunting supplies to hunters, and in return the hunters provided meat to the tool makers. This reciprocal type of relationship led the way for individuals to focus on improving their skill set and to become experts at producing a specific commodity that they could then trade to an expert who specialized in another type of commodity. So, as in our example, our expert tool maker could craft not only more, but more effective, weapons and then trade them to our hunters, who could then spend more of their time hunting. This arrangement allowed both groups to live better than when they lived their lives completely independently of each other. Even our ancient ancestors were likely to have had special skills that set them apart from one another. If you lived in the days of our cave-dwelling forefathers, you may have been the person who could rapidly make a very sharp and deadly arrowhead but could not hit a deer with a bow and arrow if it was standing still ten feet in front of you. If this was your situation, to survive in ten thousand BC you would have needed to find a great hunter whose ability to quickly make a quality arrowhead was less than stellar.

As humans our natural tendency is to continually work towards improving our lifestyles and the same is true for our islanders.

On our little island, the heads of the Farmer and Fisher families met one day to discuss the struggle to meet their families’ respective needs. Farmer recognized that Fisher was able to catch fish faster and in greater quantities because his family’s parcel offered easy access to great fishing areas that were only available on the Fisher side of the island. Fisher, on the other hand, realized that the Farmer family was able to produce greater quantities of wheat far more easily than his family could on their parcel. To take advantage of each family’s core commodity, Farmer offered to raise an extra eighty bushels of wheat in trade for five pounds of fish per week. Fisher gladly accepted, and the two heads of family shook on the deal. Farmer returned home to her family to boast of her deal-making prowess: the deal would have the family working an additional sixteen hours per week producing wheat but, in exchange, would save them the twenty-five hours per week previously spent fishing. At the same time, Fisher told his family that by spending only twenty extra hours per week fishing they would save the thirty-two hours the family would have spent each week raising wheat. Both families were very pleased with this new arrangement.

You can probably think of a time when you struck a good deal where you thought you got the better end of the bargain because the other party overvalued the good being sold or traded to them. In this case did Farmer take advantage of Fisher or vice versa? You can do the math because you have the benefit of knowing the intimate details of both families’ production capabilities but, in most trading situations, those trading or selling goods would generally not have knowledge of those details. In the case of the Fisher and Farmer families, both believed they cut a great deal and neither knew exactly how good the deal was for the other party. The important thing in trading is not that one or the other got the better deal but that both parties are better off following the trade than either was before the deal.

After the first year under this arrangement the head of the Sheppard family inquired how the other two families could have so much free time without any apparent reduction in their quality of life. After a little prodding Farmer could no longer resist telling Sheppard of the shrewd deal she had made with the Fisher family. Sheppard, being particularly sharp, recognized that Farmer had better farmland and Fisher had better fishing access and that somehow the two families were capitalizing on their respective strengths. After some thought Sheppard suggested to Farmer that his family had some unused pasture land which was superior to Farmer’s and they would be willing to use a portion of this to produce additional wool to exchange for wheat. After a bit of haggling, Sheppard agreed to produce wool from five sheep in exchange for eighty bushels of wheat from Farmer each year.

Farmer once again told her family of her negotiating prowess. By working two additional acres of wheat, adding sixteen wheat-production hours per week, they could have Sheppard raise five sheep for their family, saving the Farmer family the twenty hours per week they had been spending on sheep. Sheppard, too, thrilled his family with the news that they would save the twenty-eight hours per week they would have spent growing wheat, and to do so they would only need to work an extra twelve hours per week raising the five additional sheep, for a net savings of sixteen hours of work per week.

At this point all three families are trading with one another, resulting in a reduction in the time each spends working every week… the Farmer family has saved thirteen hours each week, the Fisher family twelve hours, and the Sheppard family sixteen hours. These savings will initially become an increase in the amount of leisure time each family gets to enjoy, increasing the quality of life for everyone on the island.

Trade on the island continued to develop until Farmer grew all the wheat, Fisher caught all the fish, and Sheppard raised all the sheep. Under this arrangement, the following table shows the average hours per week each of the families worked in order to meet the islanders’ needs.
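
As with the earlier table, the figures are not reproduced in this excerpt, but they can be estimated from the same production rates, with each family now supplying the whole island’s requirement of its specialty (600 bushels of wheat, 60 sheep, and 1,500 pounds of fish per year). The 50-working-weeks-per-year assumption is again mine, not the book’s.

```python
# Rough estimate of the weekly workload once each family specializes fully,
# producing the entire island's needs of its own specialty.
# Assumption (not stated in the book): 50 working weeks per year.

WEEKS_PER_YEAR = 50
ISLAND_WHEAT = 3 * 200      # bushels per year for all three families
ISLAND_SHEEP = 3 * 20       # sheep for all three families
ISLAND_FISH = 3 * 500       # pounds of fish per year for all three families

farmer_hours = (ISLAND_WHEAT / 40) * 8             # 15 acres of wheat at 8 hrs/wk/acre
sheppard_hours = (ISLAND_SHEEP / 5) * 12           # 12 acres of sheep at 12 hrs/wk/acre
fisher_hours = (ISLAND_FISH / (1 / 4)) / WEEKS_PER_YEAR   # 6,000 fishing hours per year

print(f"Farmer: {farmer_hours:.0f} hrs/week, "
      f"Sheppard: {sheppard_hours:.0f} hrs/week, "
      f"Fisher: {fisher_hours:.0f} hrs/week")
```

Under these assumptions the rough totals come out near 120, 144, and 120 hours per week, which lines up with the twenty-four-hour gap between the Sheppard family and the other two noted in the next paragraph.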

The families each reduced their work week by more than thirty hours, so, at least initially, all three families were very happy with the trade agreements and the resulting lifestyle improvements. However, despite the universal improvement, not every family benefited equally. As you can see from the table, the Sheppard family is working twenty-four hours more per week than either the Farmer or Fisher families. Later we will see how this disparity in the benefits from the trading arrangement comes to cause problems for our island families.

Once trading on the island had fully evolved each family began to use a portion of their free time to produce more of what they were most efficient at. They used some of their additional production for their own consumption and the remainder they traded to the other families. Farmer grew more wheat, Sheppard produced more wool, and Fisher caught more fish. Each family began to consume more, worked less than before, and had a higher quality of life. The result was growth in the collective wealth of the island and this is exactly how the world economy has benefited since the dawn of trade.

Unfortunately throughout history most nations, and more importantly the individuals within those nations, failed to grasp the benefits of trading. If you consider what has happened on the island and how logical this move to trading appears to be, why then in the real world do people resist trading? Doesn’t it seem obvious to have the people who can create goods most efficiently do so and then trade those goods for other products the people need? Let’s return to the island to consider why this kind of change might not be welcomed.

One of Fisher’s sons, Sam, was the family expert in raising wheat on their property and as such was held in high regard, up until the families began trading. Once the Fisher family began to trade for its wheat instead of growing it, Sam had to join the family on the boat each day. While Sam enjoyed a shorter work week and more food and clothing as a result of the trading between the families, he did not feel it was worth it to him personally. Sam enjoyed working the land; out on the fishing boat he would often get seasick, and dockside at the end of the day he found cleaning fish disgusting. Sam lobbied his father constantly to allow him to resume growing some of the family’s wheat and to get out of the fishing work. Ultimately Fisher relented and allowed Sam to grow wheat, thereby reducing the trading with Farmer and forcing the Farmer family to do some of their own fishing.

What happened on our island is what happens in the real world when a good, let’s say shoes, is imported from a foreign economy at a lower cost. Most people are happy to be able to buy shoes at a lower price. But what about the shoemakers; how does this benefit them? Unfortunately they quickly lose their jobs and lobby their representatives to put a stop to the foreign shoe imports. You, as a well-educated economist, meet with the shoemakers in order to convince them not to resist the new shoe import agreement because they, along with everyone else, will benefit from the increase in trade. You explain to them, using well-laid-out charts and graphs, how all shoemakers will have the opportunity to go to work in a new industry and that this will allow our country to operate more efficiently. If you are successful in doing so, stop studying economics and begin your career in sales; you will make a fortune! The reality is that the shoemakers will not see how importing shoes will do them any good. In the real world, changing careers is scary, costly, and difficult, and those forced into changing careers as a result of cheaper imports are almost never happy about having to do so.

Today we have the benefit of some education in economics being included as part of our high school and/or college experience; however, early in the history of trade, economics education was not prevalent in most of the population and, honestly, even the best educated people did not understand how the mutual benefit of trade worked. Most trade occurred based solely on the desire to achieve a profit, and the benefit to the respective economies was accidental. Even today most nations focus on managing the balance of trade rather than seeking out ways to increase trade in a fair and sustainable way. Later in this book we will delve deeper into why sustainable trade is critical to the long-term success of our modern society.

Throughout this book I will try to show the various sides of the arguments around economic concepts, especially those concepts which are controversial. Let’s start by looking at an example of a managed trade program and the negative aspects of a long-term imbalance in trade. At the start of the twenty-first century, China rose to be the preeminent location of low-cost manufacturing. China has been accused of trading unfairly with the West by manipulating the exchange rate of its currency (which appears to be a fair criticism, in that China has managed to grow its exports and minimize its imports). This manipulation resulted in the accumulation of foreign funds rather than allowing Chinese consumers to use the currency received in trading to purchase goods and services from the West. By doing this, China has held down the cost of goods coming out of China, thereby enticing further foreign investment and accelerating the movement of manufacturing from the western economies into China. The result of China’s managed trading program has been a more rapid development of the Chinese economy and industrial base at the short-term expense of the quality of life of consumers in China. Opponents of free trade are often quick to point to this example and others like it where trade with a foreign partner hurts one partner and benefits the other (very unlike the example I used with Adam’s Island).

One of the goals of this text is to examine why overly manipulating trade hurts the overall world economy in the long term. Admittedly, trade and its underlying economic effects are not simple to understand without some education in how this complex exchange works best. As we proceed, keep in mind how sensible the trade gains achieved by Farmer and Sheppard were and how that concept often works for us every day in the real world.

Chapter 3

The Introduction of Money

….

from

A Conversation about Economics

by Richard Werner CMA/CFM

get it at Amazon.com

Neoliberal Education: a faux crisis, an erroneous ‘solution’ and capital wins again – Bill Mitchell.

From an MMT (Modern Monetary Theory) perspective, there are no financial limits on the support governments can provide public education. There is also no sense to the notion that public education should “make profits” in a competitive market.


One of the ways in which the neoliberal era has entrenched itself and, in this case, will perpetuate its negative legacy for years to come is to infiltrate the educational system.

This has occurred in various ways over the decades as the corporate sector has sought to have more influence over what is taught and researched in universities. The benefits of this influence to capital are obvious. They create a stream of compliant recruits who have learned to jump through hoops to get delayed rewards.

In the period after full employment was abandoned, firms also realised they no longer had to offer training to their staff in the same way they did when vacancies outstripped available workers. As a result, they have increasingly sought to impose their ‘job specific’ training requirements onto universities, which, under pressure from government funding constraints, have erroneously seen this as a way to stay afloat. So traditional liberal arts programs have come under attack; they don’t have a ‘product’ to sell as the market paradigm has become increasingly entrenched. There has also been an attack on ‘basic’ research as the corporate sector demands universities innovate more. That is code for privatising public research to advance profit.

But capital can still see more rewards coming if it can further dictate curriculum and research agendas. So how to proceed? Invent a crisis. If you can claim that universities will become irrelevant in the next decade unless they do what capital desires of them, then the policy debate becomes further skewed away from where it should be. That ‘crisis invention’ happened this week in Australia.

This is a case of a vested interest starting with a series of false assumptions and a non-problem and then creating a series of ‘solutions’ to that problem which have no meaning if the actual situation is correctly understood and appraised.

It is just assumed that education has to be provided on a competitive basis in a market for profit. It is never questioned whether that is an applicable paradigm in which to operate.

Then it is just assumed that within that ‘market’ some ‘firms’ (universities) will go out of ‘business’. Why? Because it is just assumed that governments will not be able to fund them any longer because it has limited ‘money’.

See the trend. One myth creates a construction that leads to further deductions that are equally false and so it goes.

That is public policy formation neoliberal style.

The so-called ‘professional services’ firm Ernst and Young began life as an accounting firm and morphed into something much more comprehensive and neoliberal.

Its recent history is littered with scandals involving accounting and audit fraud, including being associated with the collapses of Akai Holdings (2009), Sons of Gwalia (2009), Moulin Global Eyecare (2010), and Lehman Brothers (2010), along with many other incidents where EY (as it is now known) was forced into paying settlements.

It was eviscerated by the US government for its part in “criminal tax avoidance” schemes in 2013. In 2010, it paid “$10 million to settle a New York lawsuit accusing the accounting firm of helping Lehman Brothers Holdings Inc deceive investors in the years leading up to its 2008 collapse” and facilitating a “massive accounting fraud”. This unsavoury firm has established a long list of ‘deals’ with various authorities to avoid criminal prosecution. The question is why its executives have not served time for their part in these scandals.

The 2017 book by Jesse Eisinger, The Chickenshit Club: Why the Justice Department Fails to Prosecute Executives, is worth a read in that regard. He says that increased political lobbying, a decline in culture at the US Department of Justice and the networking of defense lawyers resulted in a “blunting and removal of prosecutorial tools in white-collar corporate investigations.”

He wrote that there was a ‘revolving door’ between government justice officials and the major law firms representing these banksters and financial fraudsters which meant that the Justice Department was skewed to producing outcomes that were “ultimately to the benefit of corporations”.

As the Slate review noted (July 18, 2017), “government lawyers have too often decided they’re satisfied shaking down companies for settlement money paid for by shareholders, instead of taking on the much harder task of bringing charges against individual executives”.

We are facing a similar situation to that outlined in his book in Australia at present with the Royal Commission on Banking. Whether the criminal behaviour being revealed almost daily as the hearings continue will result in jail time is yet to be seen. In 2015, though, Australian authorities did lock up a former EY executive for 14 years for his part in a “tax fraud and money laundering” racket.

So there is hope.

So, overall, I would assess that this firm has been an entrenched part of the neoliberal machine, providing services to all manner of questionable and criminal behaviour all around the world.

Anyway, as we have seen in history, these characters have no shame and re-emerge from scandal with new names (EY rather than Ernst and Young), new logos, flash new WWW sites and mountains of bluster and push.

In the last week (May 1, 2018), its Oceania office released a report, The university of the future, which outlines how insidious these types of outfits really are.

The main claims made by the company in this report are:

“The dominant university model in Australia, a broad-based teaching and research institution, supported by a large asset base and a large, predominantly in-house back office will prove unviable in all but a few cases over the next 10-15 years.”

Why?

Because universities will have to “merge parts of the education sector with other sectors venture capital” etc.

Why?

Increased “Contestability of markets and funding” and the fact that “governments face tight budgetary environments” mean that “Universities will need to compete for students and government funds as never before”.

The globalisation argument is wheeled out. Why not? It has worked as a smokescreen for some decades now. So, “global mobility will grow for students, academics, and university brands. This will not only intensify competition, but also create opportunities for much deeper global partnerships and broader access to student and academic talent.”

And then the actual agenda is unveiled:

Universities will need to build significantly deeper relationships with industry in the decade ahead to differentiate teaching and learning programs, support the funding and application of research, and reinforce the role of universities as drivers of innovation and growth.

Instrumentalism to the fore.

A spokesperson for the Report told the press that:

We should not underestimate the challenge, it’s not clear that all institutions will be able to make the leap. Universities are faculty focused and prioritise the needs of teaching and research staff over students.

And was quoted as saying:

A lot of the content of degrees no longer matches the actual work that students will be doing.

The neoliberal era has attempted to define every aspect of society in terms of the stylised free market paradigm.

Imposing a mainstream economics textbook model of the market as the exemplar of how we should value things is deeply flawed.

Even within its own logic the model succumbs to “market failure”. The existence of external effects (effects outside the transaction) means that the private market over-allocates resources to an activity when social costs exceed private costs, and under-allocates resources to an activity when social benefits exceed private benefits.

But they persist in championing the concept and primacy of ‘consumer sovereignty’, which in textbooks is held out as being the force that delivers the optimal allocation of resources because competitive firms provide goods and services at the lowest cost to satisfy the desires of the consumers.

Even in these simplistic textbook stories the dominance of the ‘supply-side’ is ignored (advertising, collusion, etc).

If ever we needed a reminder of how firms can monopolise information and break laws (consumer protections, etc.), we just have to think about the behaviour of the banksters in the lead-up to the GFC and beyond.

While the demand-side sovereignty story is compromised by supply-side dominance, in the area of education, it is totally inapplicable, given the nature of the process.

Education cannot be reduced to being a ‘product’ that consumers choose. Education is a process of transferring knowledge that the ‘Master’ possesses to the ‘Apprentice’ who has no knowledge (in the area). By definition, the Apprentice doesn’t know what they do not know and cannot be in a position to ‘choose’ optimal outcomes. That has to be the prerogative of the ‘Master’, who has spent years amassing knowledge and craft.

In the case of education, how can the child know what is best? How can they meaningfully appraise what is a good quality education and what is a poor quality education?

The fact that the funding cuts have led to a stream of fly-by-night education providers in Australia who have left thousands of students stranded when they have gone broke is evidence of the failure of a market model.

The reality is that children do not demand programs. The universities are increasingly pressured by politicians (via funding) and corporations (via grants etc) to tailor the programs to the “market” agenda.

Higher education can only ever be a supply-determined activity and at that point the “market model” breaks down irretrievably.

But notwithstanding all this, the neo-liberal era has imposed a very narrow conception of value in relation to our consideration of human activity and endeavour. We have been hectored and bullied into thinking that value equals private profit and that public life has to fit this conception. In doing so we severely diminish the quality of life.

In the education sphere, the bean-counters have no way of knowing what these social costs or benefits are, and so the decision-making systems become cruder: how much money will an academic program make relative to how much it costs in dollars?

In some cases, this is drilled down to how much money an individual academic makes relative to his/her cost. This is a crude application of the private market calculus. It is a totally unsuitable way of thinking about education provision. It has little relevance to deeper meaning and the sort of qualities which bind us as humans to ourselves, into families, into communities, and as nations. It imposes a poverty on all of us by diminishing our concept of knowledge and forcing us to appraise everything as if it should be “profitable”.

So constructing educational activity in terms of “what students will be doing” is a fundamentally flawed way of thinking about it.

This is really what the agenda is. The Ernst and Young spokesperson claimed that:

There will most likely be much more work-integrated learning in tertiary courses, which is not necessarily students doing work experience but firms co-developing the curriculum and actually getting students to work through complex real-life problems under the mentorship of academic and industry leaders.

So the firms want to set what students are exposed to.

Education becomes training, and specific, profit-oriented training at that. This is anathema to a progressive future. It is the exemplar of the complete infiltration of neoliberal values into our core social institutions. The neoliberal era has created a conflict in the schooling and higher education sector between traditional liberal approaches and the so-called instrumentalist paradigm.

The assault on public education is one of the neoliberal battlefronts along with labour and product market regulations, public ownership, trade rules, etc. This conflict has come from three sources:

First, governments have become infested with the neoliberal myths and have imposed various cutbacks to school and higher education spending in the misguided attempt to ‘save’ money and cut fiscal deficits.

Second, this fiscal attack has been accompanied by an elevation in the view that education should be more market oriented and models of ‘consumer-driven’ structures have been imposed on educational institutions.

Schooling system administrators and a new breed of university managers took up the neo-liberal agenda with relish, not least because their own pay sky-rocketed and the previous relativities within the academic hierarchy, between the staff who taught and researched and those who took management roles, lost all sense of proportion.

Instead of rebelling and making the funding cuts and the increased demand for STEM-type activity (and a disregard for liberal arts/humanities curricula) a political issue, which in Australia at least would have seen the government back down, the higher education managers embraced the new agenda without fail.

Come in, the bean-counters! The over-paid managers then created a phalanx of managerial bean-counters who have become obsessed with KPIs and ‘busy work’, harassing staff with ever expanding lists of requirements and measurements. The bean-counters (for example, Finance divisions within universities) are largely unproductive drains on institutional revenue and are increasingly drawn from the corporate sector with little experience in education. This trend has then dovetailed with the third source of conflict between liberal and instrumentalist views on education.

Third, capitalists have always tried to embrace the educational system as a tool for their own advancement but social democratic movements have, in varying ways, resisted the sheer instrumentalism that the business sector seeks. The education system is continually pressured by the dominant elites to act as a breeder for ‘capitalist values’ and to reproduce the hierarchical and undemocratic social relationships that are required to keep the workers at bay and expand the interests of capital.

So there is an overlap between the way education is organised and the way the workplace is organised.

Capital also sees education as being primarily involved in the development of job-specific skills (vocational, instrumental) rather than serving any broader goals. The neo-liberal era has seen this type of corporate instrumentalism within education advanced to new heights. The revolving door between profit-seeking corporations and senior management positions within the educational sector is testimony of how corporate values are being elevated above traditional educational aspirations.

You only have to consider Ernst and Young’s “Framework for Assessing and Designing a University Future Model”, which they summarise in a graphic.

Consider the language: “Customers” (not students), “Products” (not knowledge creation), “Role within Value Chain” (not pure knowledge), “Brand and market position”, etc.

I don’t consider this graphic to be remotely relevant to the educational process where knowledge is imparted in a heterodox environment and critical reasoning capacities are developed. The idea that education is a product sold in a market is as far from a progressive ideal as you can get.

From an MMT (Modern Monetary Theory) perspective, there are no financial limits on the support governments can provide public education. There is also no sense to the notion that public education should “make profits” in a competitive market.

The only way that these sorts of debates will progress, however, is to take them out of the fiscal policy realm, where they are largely inapplicable, and start talking about rights and higher human values, and about what implications different interpretations of these rights and value concepts have for real resource allocation and redistribution.

Conclusion

Apart from their scandalous history, Ernst and Young are, in my view, disqualified from being taken seriously as a result of their inputs to the public macroeconomics debate.

In their Feeding the animal spirits Budget 2018 report, the spokesperson claims that:

There are good reasons to worry about persistent budget deficits and the national finances do need to be fixed.

And that summarises how stupid and venal the company is.

The Golden Age of Macro Historical Sociology – Randall Collins * What is Historical Sociology? – Richard Lachmann.

“A new political science is needed for a totally new world.” Alexis de Tocqueville, 1835

“The most compelling reason for the existence of historical sociology is embarrassingly obvious, embarrassingly because so often ignored. This is the importance of studying social change.” Craig Calhoun


Randall Collins

History, Durkheim remarked, should be sociology’s microscope. Not that it should magnify the tiny, he meant, but that it should be the instrument by which structures are discovered invisible to the unaided eye. Durkheim’s program in the Année Sociologique did not go far with this research, sketching static structures more than the dynamics of structural change. The charge still remains.

Whatever is large and widely connected can be brought into focus within no perspective but one larger still. Political and economic patterns, especially as they encompass states and the strains of war, property systems and markets, can best be seen in the study of many interconnected histories over a long period of time. What Durkheim wanted for sociological theory was not a microscope, but might well be called a macroscope.

Two opposing views on history have dominated the twentieth century of the Christian calendar still in use in the post-Christian West. On one hand, this has been the century of macro history par excellence, the first in which a comprehensive history of the world has become possible. Hegel, writing in the generation when professional historiography was being established, had known just enough about the cycle of Chinese dynasties to posit that only the West had a history. By the time of the First World War, Spengler, Weber, and a little later Toynbee were surveying the civilizations of China and India, Egypt and Mesopotamia, Persia and the Arab world, sometimes Mexico, Peru and Polynesia, along with the more familiar comparison of Greco-Roman antiquity with medieval and modern Europe.

The opposing view of 20th century intellectuals has been to recoil from these global vistas in favor of the argument that history shows us no more than ourselves hopelessly contextualized in patternlessness. In the epistemological version of a familiar phrase, all that we learn from history is that it is impossible to learn from history. Let us briefly explore the two sides of this century of historical consciousness.

Cumulating Strands of Analytical Macro-History

Early recognition of patterns crystallized in the ambiguous insight “history repeats itself”. Toynbee began his search for the pattern of all civilizations because the world wars of Britain and Germany reminded him of the death struggle of liberal Athens and authoritarian Sparta. Spengler collated evidence of repeated sequences of cultural efflorescence and decadence throughout the world, each distinguished by its unique mentality, like a melody played in different keys. Marx, whose knowledge of non-European history was not so far beyond Hegel’s, depicted its static nature in materialist form as Oriental despotism, a model elaborated in the 1950s by Wittfogel. Bracketing the non-Western world, Marx started from the insight that the class conflict of the Roman world was repeated by analogous classes in medieval feudalism and in modern capitalism.

The Marxian school of historical scholarship is largely an intellectual movement of the 20th century; it presents a materialist parallel to Spengler, discerning abstract sequences repeated in distinctive modalities for each run-through. Instances of history repeating do not necessarily imply cycles like the turning of a wheel; later generations of scholars began to see that what repeats can be treated more analytically, and that multiple processes can combine to weave a series of historical tapestries each peculiar in its details.

Of all the macro-historians of the pioneering period, Weber has survived best. In part this has been because it has taken most of the twentieth century to appreciate the scope of his work. His Protestant Ethic argument was famous by the 1930s, while only in the 1950s and 60s was there much recognition of his comparisons among the world religions designed to show why Christianity, continuing certain patterns of ancient Judaism, gave rise to the dynamism of modern capitalism whereas the civilizations of Confucianism, Buddhism, Hinduism and Islam did not.

Weber’s method of showing how multiple dimensions of social causality intertwine has also grown gradually influential. It is now conceded by scholars almost everywhere that the three dimensions of politics, economics and culture must be taken into account in every analysis, although, as structuralist Marxists of the 1970s argued, one of them may be given primacy “in the last instance”.

There is a negative side as well to Weber’s preeminence. Peeling the layers of Weber’s concepts has provided a field rich in scholarly niches, and the opportunities for developing Weberian ideas in one direction after another have given him the great classic reputation of sociological macro-history. The very process of uncovering Weber as a multi-sided icon has made it difficult for many decades to see just what it takes to go beyond him. Only now that we are becoming able to see Weber’s full achievements are we able as well to see his limits in full daylight. These limits are not so much in his analytical apparatus as in his view of world history. For all his disagreement with Hegel and Marx, Weber shares with them a Eurocentric view: for all important purposes, the histories of what lies eastwards of Palestine and Greece are taken as analytically static repetitions, while the only dynamic historical transformations are those of the West. In some of the papers collected here, I will suggest how Weber’s analytical tools can be used to take us beyond Weber’s Eurocentrism.

The period of scholarship from the mid-1960s onward, continuing into the present, can appropriately be called the Golden Age of macrohistory. The crudities of the generation of pioneers have been passed; fruitful leads have been taken up, and a generation of scholars has done the work to build a set of new paradigms. Analytically, the principal style of this period is an interplay of Weberian and Marxian ideas. Although dogmatic loyalty to one or another of the classics exists in some scholarly camps, across the creative core of this Golden Age the attitude has been pragmatic. The Marx/Weber blend has earned its prominence because a series of key ideas from these traditions have proven fruitful in unanticipated directions.

The most striking accumulation of knowledge has taken place on Marx’s favorite topic, revolution. Beginning by broadening the focus on economic causality, the result has been a paradigm revolution in the theory of revolution. Barrington Moore and Arthur Stinchcombe, followed by Jeffrey Paige and Theda Skocpol, noted that the epoch of revolutions was not so much industrial capitalism as the preceding period of agrarian capitalism. Agricultural production for the market has been the locus of class conflicts from the English revolution to the Vietnamese revolution, and the varying work relations and property patterns of agricultural capitalism have set modern political transformations on paths to left, right or center.

Going further, Skocpol and Jack Goldstone have shown that class conflict alone is insufficient for revolution, and must be accompanied by a fiscal crisis of the state and an accompanying split between state elite and property owners over the repair of state finances. Skocpol marks the paradigm shift to what might be called the state breakdown theory of revolution. Skocpol and Goldstone elaborate a common model of state breakdown into alternate chains of causes further back, focusing respectively upon geopolitical strains and population-induced price changes.

Another direction of research has continued a purer Marxian line. Here the premise of economic primacy has been preserved by shifting the arena of application from the traditional focus upon a nation state to a capitalist world system. This resuscitation of Marxism has been helped by a diplomatic marriage with the Annales school. Braudel’s 1949 work, The Mediterranean and the Mediterranean World in the Age of Philip II, built up a grand historical tapestry out of the patient accumulation of scholarship on the material conditions of everyday life and the flows of trade and finance. Braudel depicted the first of the European world-system hegemonies, the Spanish/Mediterranean world of the 16th century. Wallerstein, in a multi-volume series beginning in 1974 and still in progress, theorized Braudel’s world in a Marxian direction. Wallerstein has spearheaded a world-system school describing successive expansions of the European world-system around the globe, through successive crises and transfers of hegemony.

World-system scholarship has served as a central clearing house for the scholarship of the world, giving a theoretical resonance to work by regional specialists ranging from the trade of the Malacca Straits to commodity chains in Latin America. Like the Annales School, the world-system camp is a strategic alliance of detailed and specialized histories; the Golden Age of grand historical vision has come about by putting together researches by a century of professional historians. The expanding mass population of universities and historians within them has been the base for the Marxian revival in mid-20th century scholarship; world-system Marxism has provided the vehicle by which otherwise obscure specialities could join in a grand march towards paradigm revolution.

All active intellectual movements have their inner conflicts and unexpected lines of innovation. The world-system camp has not remained conceptually static. The earliest period, epitomized by André Gunder Frank’s dependency theory, stressed that underdevelopment, the world-system equivalent of the immiseration of the proletariat, is created by and grows apace with the penetration of world capitalism. This assertion has been attacked on factual grounds, and dependency theory has retreated to the stance of dependent development, that development can occur under capitalist dependency although the relative gap between metropole and periphery continually widens.

Moreover, there are cases of upward mobility in the world-system, from periphery to semi-periphery into the core, sometimes (like the North American region which eventually became the United States) even into hegemony within the core. On a structuralist interpretation, a capitalist world-system is a set of positions that can be filled by different geographical regions. There is room only for a small hegemonic zone surrounded by a limited core region where capital, entrepreneurial innovation and the most privileged workers are concentrated; there are always relative gaps in wealth between this region and the semiperiphery and periphery subservient to the capital flows and technical and labor relations shaped at the center. The structuralist version of world-system theory holds that social mobility may occur upwards and downwards within the system but the relative privilege or subordination of the several zones always remains.

As I write in the late 1990s, this remains a hypothesis without conclusive evidence one way or the other; on similar grounds stands the suggestive theorizing about the dynamics of expansive and contractive waves of the world economy, and the pattern of hegemonic wars and shifts in hegemony. (Sanderson 1995, Arrighi 1994 and Chase-Dunn 1989 provide useful overviews.) Even more speculative remains the old Marxian prediction recast in world-system guise, that the future holds a crisis of such proportions that the capitalist system itself will be transformed into world socialism.

For all these uncertainties, world-system research contributes energy and vividness to the activity of this Golden Age of macro-history; it broadens and integrates the many strands of specialized and regional histories, even if the conceptual model does not rest on ground as firm as the developments which have taken place within the narrower compass of the state-breakdown model of revolutions.

Another direction of creative development from the world-system model has come from questioning its Eurocentric starting point. Wallerstein, like Marx, conceptually distinguished large regional structures which are structurally static and incapable of self-driven economic growth (referred to as world-empires) from capitalist world-systems, balance-of-power regions among contending states which allow a maneuvering space in which capitalism becomes dominant. In practice, the latter category is European capitalism, while the structural stasis of world-empires brackets the ancient Mediterranean and the non-Western world.

Wallerstein’s starting point for the capitalist world-system is the same as Weber’s, Europe in the 16th century. Other scholars have taken the model of a capitalist world-system and applied it backwards in time, or further afield to zones of trade independent at least initially of the European world system. Janet Abu-Lughod depicts a superordinate world system of the Middle Ages, linkages among a series of world system trading zones, strung together like sausages from China through Indonesia; thence to India; to the Arab world centered on Egypt; and finally connecting to the European zone at the tail end of the chain. Abu-Lughod reverses the analytical question, asking how we can explain not so much the rise of the West as the fall of the East. Braudel, too, in his later work, describes a series of separate world systems in the period 1400-1800, including not only those in Abu-Lughod’s medieval network but also Turkey and Russia. Braudel suggests there was a rough parallel in economic level among all of them before the industrial revolution, until they were upset by a late European intrusion.

Other scholars have applied the logic of world system models further back in time. Chase-Dunn and Hall (1991, 1997) argue that even in regions of stateless tribes, and in the period of the earliest states known through the archeological record, there was never a question of isolated units undergoing their own development through local circumstances, but rather of regional world systems with cores and peripheral trading zones.

The analytical emphasis of world systems has shifted in these various efforts to extend the model backwards in time. For some, the specifically capitalist character of world systems becomes unessential; for others, trade relations become the crucial feature rather than property, labor relations, or modes of production. What have increasingly come to be seen as central in the model are its dynamic properties: the Kondratieff-like waves of expansion and contraction over periods of approximately one to two centuries, punctuated by hegemonic crises and shifts in core dominance. Gills and Frank (1991) have schematized such cycles from 3000 BC to the present. Generalizing world system models to all times and places defocuses other questions, above all what causes changes in the character of economic and political systems as different as stateless kin-based tribal networks, agrarian production coerced by military elites, and the several kinds of capitalism. This recent phase of omni-world-system theorizing is bound to be supplemented by other models.

These controversies occupy the immediate foreground of attention. More significant for the trend of contemporary thought has been a permanent gestalt switch in the way we do macrohistory. The subject of analysis can no longer be taken as the isolated unit, whether it is the isolated tribe of structural-functionalist anthropology, the isolated civilizations of Spengler’s era, or the nation-states beloved of national historians. These units exist in a world of like and unlike units; their pattern of relations with each other makes each of them what they are. This is not to say that for analytical purposes we cannot focus upon a single tribe, or cultural region, or national state. But explanations of what happens inside these units, abstracted from their world-system context, are not only incomplete; that might be of relatively small consequence, since explanations always abstract out of a mass of detail in order to focus on what is most important.

The world-system viewpoint makes a stronger theoretical claim: to abstract away from this external context is to miss the most important determinants of their political and economic structures. In crucial respects, all social units are constituted from the outside in.

This gestalt switch to an outside-in causality, pioneered by contemporary neo-Marxism, has been paralleled on the neo-Weberian side. This is my way of referring to the primacy which has been given, during the contemporary Golden Age of macrohistory, to explaining states by their interstate relations, which is to say, by geopolitics. Here too there is a pre-history. The concept of geopolitics began at the turn of the 20th century, in an atmosphere associated with nationalistic military policies. Mackinder in Britain, Mahan in the US, Ratzel and Haushofer in Germany argued over the importance of land or sea power, and about the location of strategic heartlands upon the globe whose possession gave dominance over other states. The topic of geopolitics acquired a bad odour with the Nazis, and still more in the period of postwar decolonization. But gradually the historical sociology of the state made it apparent that geopolitics cannot be analytically overlooked. The old confusion between recognizing geopolitical processes and advocating military aggrandizement has dissolved; contemporary analytical geopolitics is more likely to emphasize the costs and liabilities of geopolitical overextension.

The old geopoliticians tended to particularize their subject, as in Mackinder’s assertion that hegemony depends upon controlling a geographical heartland lying at the center of Eurasia. Contemporary geopolitics shows instead that the expansion and contraction of state borders is determined by the relations among the geopolitical advantages and disadvantages of neighbouring states, wherever they might lie upon the globe.

One influence in the revival of geopolitical theory has been the world history of William McNeill. McNeill’s The Rise of the West (a deliberately anti-Spenglerian title), appearing in 1963, represents the maturity of world historiography, the point at which enough scholarship had been accumulated so that the history of the globe could be written in conventional narrative form, without resort to metaphor.

In comparison to the flamboyant efforts of the generation of pioneers, McNeill’s world history is that of the professional historian, extending routine techniques and building on knowledge that had accumulated to the point where a world history was no longer a miraculous glimpse. This maturing of world historiography can be seen too in the contemporaneous appearance of other monumental works covering huge swaths of non-Western history: Joseph Needham’s multi-volumed Science and Civilization in China (1954-present), and Marshall Hodgson’s The Venture of Islam (1974).

McNeill succeeds in decentering world history from a European standpoint, giving pride of place to the process by which “ecumenes” of inter-civilizational contact have been gradually widening for several thousand years. McNeill shows the significance of geopolitical relationships in the expansion of empires, their clashes and crises; he presents a wealth of instances ranging from the far east to the far west of states undergoing invasions from their marchlands, overextending their logistics to distant frontiers, or disintegrating in internal fragmentation. The military side of the state may have been a passing concern in McNeill’s early work, but it grows into explicit importance in his later works, especially The Pursuit of Power (1982) which documents the world history of the social organization of armaments and their impact upon society.

Another type of compendium, alongside McNeill’s world history, has fostered the modern scholarship on geopolitics. This is the development of comprehensive historical atlases, such as the series edited by McEvedy (1961, 1967, 1972, 1978, 1982). This too is an indication of the synthesis now possible by the accumulation of historical scholarship. The endless complexities of state histories come into a visual focus when we can examine them as a series of maps allowing us to see the changing territories of states in relation to one another. The difficulty of comprehending all this material in purely verbal form is one reason why older narrative histories either fragmented into specialized narratives or glossed over the general pattern by reference to an unrealistically small number of great empires. Historical atlases appearing in the 1960s and 70s marked the phase of consolidating information upon which more explicit theorizing could take place.

The geopolitically-oriented or military-centered view of the state has become increasingly important through the convergence of three areas of scholarship: geopolitical theory; the state-breakdown theory of revolution; and the historical sociology of the modern state as an expanding apparatus of military organization and tax extraction.

In the 1960s through the 80s, an analytical theory of geopolitics began to take shape. Stinchcombe, Boulding, Modelski, van Creveld, Paul Kennedy and others developed a coherent set of geopolitical principles. In my synthetic account, these comprise a set of causes concerning the dynamics of relative economic and material resources of contending states; geographical configurations affecting the number of potential enemies upon their borders; and the logistical costs and strains of exercising the threat of force at varying distances from resource centers. In contrast to the older geopolitical theories of the pioneering age, contemporary geopolitical theory has become multi-dimensional: there is no single overriding cause of state expansion or decline, but a combination of processes which can produce a wide range of outcomes. Although there remains a natural tendency to concentrate on the fate of the great hegemonic states, geopolitics applies analytically not merely to single states but to regions of state interrelations, and encompasses times and places where small states and balances of power exist as well as hegemonies and major wars. Since war and peace are analytically part of the same question, geopolitics implies a theory of peace as well as its opposite.

A second strand of research elevating the importance of geopolitics is the state breakdown theory of revolutions, especially in Skocpol’s formulation. The fiscal crisis at the heart of major revolutionary situations has most commonly been brought about by the accumulation of debts through the largest item of state expense, the military. The next step back in the chain of causes is the geopolitical conditions which determine how much a state has been fighting, with what costs, what destruction and what recouping of resources through military success. I have argued that the Skocpolian model of state breakdown meshes not only with geopolitical theory, but also with a neo-Weberian theory of legitimacy. The state breakdown theory is resolutely material, emphasizing hardnosed military and economic conditions.

There remains the realm of belief and emotion, the cultural and social realities which many sociologists argue are primary in human experience, a realm of lived meanings through which material conditions are filtered in affecting human action. In my argument, the theoretical circle is closed by taking up the Weberian point that the power-prestige of the state in the external arena, above all the experience of mobilization for war, is the most overwhelming of all social experiences.

The legitimacy of state rulers comes in considerable part from their people’s sense of geopolitics as it affects their own state. Militarily expanding states and prestigeful actors on the world scene increase their domestic legitimacy and even help create it out of whole cloth. Conversely, states in geopolitical straits not only go down the slope towards fiscal crisis and state breakdown, but also follow an emotional devolution which brings about delegitimation. Geopolitics leads to revolution by both material and cultural paths.

A third strand of contemporary research has shown that the modern state developed primarily through ramifications of its military organization. Historians and social scientists have documented the “military revolution”, the huge increase in scale of armies that began in the 16th and 17th centuries. In its train came organizational changes; weapons became increasingly supplied centrally by the state instead of through local provision; logistics trains became larger and more expensive; armies converted to close-order drill and bureaucratic regimentation.

Two summary works may be singled out. Michael Mann’s The Sources of Social Power shows how prominently military spending, along with debts incurred from previous wars, has loomed in the budgets of modern states. Mann goes on to show that a series of increases in the scale of military expense, the first at the time of the military revolution and the second around the Napoleonic wars, have successively motivated the penetration of the state into civil society: in part to secure funding, in part to mobilize economic resources and military manpower. This distinctively modern penetration of society by the state has proven a two-edged sword, creating national identities and loyalties, but also mobilizing classes to participate with the full weight of their numbers in an overarching arena and to struggle for political representation and other concessions in response to fiscal demands.

Mann plays a neo-Weberian trump card upon the Marxist theory of class mobilization; in the state-centered model, the development of the state, through the expansion of its own specific resource, the organization of military power, determines whether classes can be mobilized at all as political and cultural actors. The same process of state penetration into society simultaneously mobilizes nationalist movements. We could add here another Weberian point: once the military-instigated penetration of society has occurred, processes of bureaucratization and interest mobilization are both set in motion; the organizational resources of the modern state now become an instrument to be turned to uses far removed from the original military ones, ranging from the welfare state to experiments in socialism or cultural reform.

The other modern classic summarizing the military centered theory of state development is Charles Tilly’s Coercion, Capital and European States, 990-1990. Marshalling the wealth of scholarship now available, Tilly shows how the pathways of states diverged as they underwent the military revolution. Depending upon which kinds of economic organization were in range of their forces, states relied upon extraction from urban merchants or from the conquest of agrarian territories; these several bases determined the difficulty of the fiscal task and the kinds of opposition rulers faced in raising funds for their armies.

As the large number of small medieval states winnowed down to a few through geopolitical processes, modern states crystallized into a range of democratic or autocratic polities, shaped by these differing fiscal bases. The historical pathways of state military organization mesh with their external geopolitical experiences and their internal struggles over taxation and representation; the result has been to instigate revolutions, and to shape the constitution of the various kinds of modern states.

The areas of scholarship I have just reviewed are prime evidence for my claim that we are living in a Golden Age of macro-history. Obviously not all problems have been solved; but no period of creative work ever solves all its problems (to do so would bring innovation to a standstill), and creative scholars always generate new issues as they go along. What we can say is that the range and depth of our vision of world history has permanently widened. Analytically, I believe we have the firm outlines of some important features: the state breakdown theory of revolutions, the world-system gestalt in the most generic sense of looking for causal processes from the outside in, the elements of geopolitical processes, the military-resource trajectory of the development of the modern state.

I have given pride of place to political and economic topics of macrohistorical sociology, because these are the topics which have seen the most sustained research and the most cumulative theorizing. I must neglect, in a discussion of this scale, many other areas in which the maturing of modern social history has reached a critical mass, or at least passed the threshold into works of considerable sophistication. Let me just mention a few of the advances which have been made in the historical study of the family (the Laslett school; the comparative works of Jack Goody); the history of civilizing manners (Elias, Mennell, Goudsblom); the macro-history of diseases and the environment (McNeill again, Alfred Crosby); the macro-history of art (Arnold Hauser; André Malraux).

Other work has been proceeding apace in the history of gender, of sexualities, and of material culture. There is every indication that the Golden Age of macro-history is continuing. Approaches pioneered for European societies are just now being used in depth elsewhere (such as Ikegami’s work on the civilizing process in Japan). Durkheim’s sociological microscope, on becoming a macroscope, has accumulated a first and second round of discoveries; another round surely lies ahead.

Critics of Macro History

Having viewed one side of the 20th century’s love affair with macro-history, let us turn back now to the opposing side. Alongside the developing vistas of world-encompassing and analytically illuminating history, there has been a persistent countertheme attacking its misuses and denouncing its epistemology. Here too we can schematize the account into two waves, corresponding to the pioneering generation of macro-historians, and the late 20th century wave of sophisticated reflexivity.

In the 1930s and 40s, grand historical visions were repudiated on many grounds. Spengler’s vague poetic metaphors and Toynbee’s religious pronouncements were taken as the sort of flaws that are inevitable in works of this pretentious scope. Popper, in revulsion to Nazism and Soviet totalitarianism, claimed that what his idiosyncratic terminology labelled the “historicist” mentality (i.e. the search for historical laws) was at the roots of anti-democratic movements. In a narrower professional sphere, anthropologists reacted against the earlier generation which had approached ethnographic materials in a comparative and historical light, construing items of culture against the template of what kind of ”survival” they represented from the earlier track of evolutionary development. Against this approach, the structural functional program held that an entire society must be studied in depth as a kind of living organism, revealing how its various institutions meshed with one another as an integrated system operating in the present.

The first wave of objections to macro-history proved ephemeral, and a newer generation of historians and comparative sociologists began to publish the works that I have referred to as the Golden Age.

On the anthropological side, the tide turned again as well. Beginning already in 1949, and with increasing prominence in a series of works in the 1950s and 60s, Lévi-Strauss took a new approach to writing the history of “peoples without histories”, i.e. tribal societies without written records and hence without the explicit consciousness of an historical frame of reference. Lévi-Strauss proposed to read their implicit historical memory by cracking the symbolic code in which mythologies are recollected. The method led him to reconstruct events of epoch-making importance such as the practice of cooking, which divides humans from the animals that they eat. Lévi-Strauss’s Mythologiques parallels his earlier work on the structural patterns of kinship, in which he attempted to reconstruct the pattern of a kinship revolution by which some family lineages constituted themselves as an elite, breaking with primitive reciprocity and leading towards the stratification of the state. Lévi-Strauss’s structuralism had an ambiguous relation to history; its affinities to structural-functionalism and to other static structuralist theories like Chomskyan linguistics gave the impression that it too dealt with unchanging structural relationships. At the same time, structures were depicted as dynamic relations, systems in disequilibrium, which both motivated historical changes and left symbolic residues by which we can memorialize them. Lévi-Straussian structures are both historical and supra-historical in much the same way language is.

Via this ambiguity, the receding wave of enthusiasm for structuralism flowed directly into a wave of post-structuralism. Lévi-Strauss had shown no reliable way either to decode symbolic history or to correlate symbols in a straightforward Durkheimian way with social structures. In the French intellectual world, the failure of Lévi-Strauss’s project was taken as a warrant for historicizing all the codes. The notion was retained that we live in a world structured by codes, and that we see the world only through the lenses of our codes. But what we see through them is shifting and unreliable, like using eyeglasses made of flowing water.

The movement attacking macro-history, and along with it any substantive sociological theorizing of wide analytical scope, has been fed by several streams. These include the influence of later generations of phenomenological philosophy; the extension of Hegelian reflexivity in Foucault’s expansion of the history of psychiatry contextualizing and relativizing Freud; the 1960s’ generation combining mind-blowing psychedelic “cultural revolution” with political radicalism tied no longer to industrial workers but to movements of student intellectuals; the anti-westernism of ethnic insurgencies; rebellion by feminist intellectuals against the dominance of male textual canons. The result has been a formidable alliance of political and intellectual interests. To these we might add an implicit rivalry inside the world of scholars, between specialists concerned with their own niches, and synthesizers drawing specialized researches into broader statements.

A common denominator of this contemporary wave of attack upon macro-history is the priority of contextuality and particularism. This anti-historical consciousness nevertheless arises from the same circumstances as its opposite. Today’s antihistorians arise from a surfeit of history. Postmodernist thinking might perhaps be described as a kind of vomiting up of history, a choking fit that began in disillusionment with Marxism and to some extent with Freudianism, which in certain fashionable circles had been considered the only Grand Narratives worth knowing about.

Both the macro-historians of the current Golden Age, and the anti-grand-historians who are their contemporaries, are products of a rising tide of consciousness of our location within history. All of us, those who write history and those who write against it, exist and think within history; a future intellectual history will doubtless be written about the late 20th century, just like everything else. Our ideas, our very language, are part of history. There is no standard outside of history by which anything can be judged. Does this recognition weigh in favor of macro-historians, or condemn them? There is no escape from the prison of contextuality. What follows?

Theory and Analytical Particularism

Let us bring the two positions into close confrontation. I have emphasized that the Golden Age of macro-history in which we are living rests upon the accumulation of scholarly work by generations of historians. In today’s fashionable philosophies, is this not warrant for dismissing macro-history as nothing but naive empiricism? My response would be simply: we are intellectually constituted by the brute fact that a community comprising thousands of historians and social scientists has been working for several centuries, and that their accumulated archives have been tapped by McNeill, Wallerstein, Mann, Tilly, and others, just as the spottier archives were tapped years ago by Weber and Toynbee. It is a polemical simplification to suppose that attending to empirical research makes one guilty of obliviousness to theoretical activity.

It is equally arbitrary to assume that the development of theoretical interpretations proceeds by reference to nothing but other ideas, much less by mysterious ruptures in the history of consciousness. In the social reality of the intellectual world, today’s hyper-reflexive philosophies and advocates of narrow contextuality are products of the same accumulation of historical archives as the macro-historians; the only difference is that one group specializes in the history of intellectual disciplines, of literary criticism and linguistics, whereas the other has drawn upon the histories of economies, polities and religions.

The answer to conceptual embeddedness in historical contexts is not less theory, but more. Falling back on local contextuality is often a way of begging questions, leaving us not with greater sophistication but with implicit dependence upon unexamined theories encoded in the very language one uses. All history is theory-laden. The effort to disguise this fact results in bad history and bad theory.

There is no such thing as purely narrative history. It is impossible to recount particulars without reference to general concepts. Nouns and verbs contain implicit generalizations (“another one of those again”). Even proper names are not as particularistic as they might seem, for they pick out some entity assumed to have enduring contours over time, and contain an implicit theory of what holds that “thing” together: an innocuous reference to “France” or to “Paris” is laden with assumptions. To impose a name, whether abstract or particular, is to impose a scheme of what hangs together and what is separated from what; by this route, rhetorical devices become reified, and multi-dimensional processes are construed as unitary. And narrative is always selection; from the various things that could be told, some are focussed upon as significant, and their sequence implies what is supposed to cause what consequences.

Let us take an example from what is usually regarded as the most mindlessly event-driven of particularistic narratives, traditional military-diplomatic history. “Napoleon marched his battle-hardened veterans all day, surprising the Austrians in the late afternoon with 6000 men; by the end of the battle, Austrian control of Italy had been lost.” This has the sound of a narrative in which history is made by heroic individuals, but its effects are achieved by abstracting the individual from the organizational context. It assumes a world in which troops are organized into disciplined armies, and in such a fashion that a commander can exercise centralized control over rapid organizational response; it further assumes a theory of combat, such that the sheer number of troops amassed on certain kinds of terrain wins victories; that previous combat experience makes troops more capable of such manoeuvers; that the speed and timing of troop movements determine battlefield outcomes. These assumptions may or may not be generally true; there now exists an extensive military sociology which explains the social and historical conditions under which such things do or do not come about. Napoleon’s organizational preconditions would not have existed at the time of the Gauls, and they would fail again in several particulars by World War I. The narrative also assumes a theory of the state, in which decisions are driven straightforwardly by military outcomes; again this may be true under certain conditions, but only if we specify the organizational context: victory by Visigothic armies in 410 did not result in a Visigothic empire taking control of Italy, unlike the way Napoleon’s victory in 1800 resulted in building a French empire.

The extent to which the narrated sequence of events makes a coherent story, and an adequate explanation, cannot be judged merely from examining one single narrative. My point is not that narrative histories of the Napoleonic type are inherently wrong, but that we only know why and to what extent they are right in the light of our more general theoretical knowledge. Such knowledge does not come out of thin air. It comes in part from having studied a wide enough range of other histories so that we can tell which are the central conditions, and which are local concomitants with no important effect upon the particular outcome.

What sociological theory does is to cumulate what we have learned from histories.

Specialized, locally contextual histories are not immune to theory; their atheoretical assertions mean that the theories they implicitly assume are only those old enough to have passed into common assumptions. Histories of democracy are particularly vitiated by unconsciously accepting popular ideological categories. Sociological macro-historians have the advantage of consciously checking whether their models of large-scale processes in time and space are coherent with what we have learned from any other areas of sociological research. The battlefield processes, mentioned above, are more securely understood to the extent that we find them consistent with analysis of organizations and their breakdowns, of face-to-face violence, of emotional solidarity within groups. The sociologist devoted to bringing out the explicit dynamics underlying historical narratives generates more confidence in being on the right track to the extent that s/he can cross-integrate historical patterns with other parts of sociology.

The end product need not be theory as a concern in itself. In the light of such cumulation of sociological knowledge via explicit theory, we are better able to produce new histories. These are not necessarily new comparisons or new cases (which is fortunate, since the amount of history is finite and the distinct cases of macro-phenomena are soon exhausted), but studies which select new facets of our previously studied narratives for analysis with greater depth and fresh insight. [For instance, there is considerable overlap among the cases studied by Moore 1966, Skocpol 1979, Goldstone 1991 and Downing 1993.] It is an old story that theory and research recycle through each other; but it is true nevertheless, and indispensable advice even when fashionable metatheories hold that one or the other pole is irreducibly autonomous. When history or general theory goes its own way without the other, it is really shadowed by what it has vaguely and unconsciously accepted from the other. The result is bad history and bad theory.

Let Fernand Braudel have the last word on the relation between the deeper currents of abstract theory charted by macro-history, and the details that fill the eyes of contemporaries in the form of:

“l’histoire événementielle, the history of events: surface disturbances, crests of foam that the tides of history carry on their strong backs. A history of brief, rapid, nervous fluctuations, by definition ultrasensitive; the least tremor sets all the antennae quivering. But as such it is the most exciting of all, the richest in human interest, and also the most dangerous. We must learn to distrust this history with its still burning passions, as it was felt, described, and lived by contemporaries whose lives were as short and as short-sighted as ours…

A dangerous world, but one whose spells and enchantments we shall have exorcised by making sure first to chart those underlying currents, often noiseless, whose direction can only be discerned over long periods of time. Resounding events are often only momentary outbursts, surface manifestations of these larger movements and explicable only in terms of them.” (Braudel, 1949/1972: 21, Preface to the First Edition).

Deeper currents, for today’s sociological macrohistorians, are analytically deep, not merely descriptively broad. Metaphor should not lead us to conclude that they are far beneath the surface, but rather that they mesh together to generate the endless array of patterns which are what we mean by the surface of events.

What is Historical Sociology?

Richard Lachmann

The Sense of a Beginning

Sociology was created to explain historical change. Sociology’s founders were convinced they were living through a social transformation that was unprecedented in human history, and that a new discipline was needed to describe and analyze that change, explain its origins, and explore its implications for human existence. As Tocqueville ([1835] 2003, p. 16) put it, “A new political science is needed for a totally new world.”

The founders disagreed over the nature of that change and over how their discipline should go about studying it. They also were not sure if the theories they developed to explain their own epoch of change could be used to develop a general “science of society.” Nevertheless, all of them, Marx, Weber, Durkheim, and their less illustrious contemporaries, saw the new discipline of sociology as historical. Sociology at its beginning was historical because of the questions its founders asked.

For Marx the key questions were: What is capitalism, why did it supplant other social systems, and how is it transforming the ways in which people work, reproduce themselves biologically and socially, and gain knowledge and exploit the natural world? What effect do those changes have on relations of power, domination, and exploitation?

Weber also asked about epochal historical shifts. He sought to explain the origins of world religions, of capitalism, and of rational action, and to see how that species of rationality affected the exercise of power, the development of science (including social science), religion, and the humanities, the organization of work, government, markets and families, and pretty much everything else humans did.

Durkheim asked how the division of labor, and the historical shift from mechanical to organic solidarity, changed the organization of workplaces, schools, families, communities, and entire societies, and affected nations’ capacities to wage wars.

Since its beginnings as a historical discipline concerned with epochal social transformation, sociology has become increasingly focused on the present day and on trying to explain individual behavior. Like the children’s book All About Me (Kranz 2004), in which pages are set aside for their young owners to write about what they like to do in their “favorite place,” to describe their hobby, or to “name three things that make you feel important,” many sociologists, especially in the United States, look to their personal biographies or their immediate environs to find research topics.

Take a look at the program of the annual meeting of the American Sociological Association. It contains sociology’s version of the ages of man. First we are born, and legions of demographers explain why our mothers had us when they were 26.2 instead of 25.8 years old. We become sexually aware and active, and there are sociologists who keep on reliving their teen years in research on losing virginity or coming out of the closet. As adults, we have criminologists to tell us which ghetto youth will mug us and which will become a nerd in his failed urban school. The medical sociologists can tell us why we will be overmedicated and overbilled in our dotage. And most of this research is ahistorical and noncomparative, focused on the United States in the last five minutes.

Meanwhile, in the larger world, fundamental transformations are underway: the world’s population grew to unprecedented levels in the past century, even as those billions of people consumed resources at a pace the global ecosystem cannot sustain. Soon whole countries will run out of water or be submerged under rising seas. Global warming will force mass migrations on a scale never seen in human history. Governments lack the organizational capacity and almost certainly the desire to accommodate those refugees; many, however, will have the military means and popular support to repel needy migrants.

Today service jobs are following manufacturing and agriculture in being replaced by machines, creating the possibility that most human labor will no longer be needed to sustain current or future levels of production. The nature of war also is being transformed. Mass conscription, which originated at the end of the eighteenth century, made possible wars between armies with millions of soldiers, and encouraged states to develop weapons capable both of killing thousands of enemy fighters at a time and of targeting the civilian populations that manufactured the weapons and provided the recruits for those armies; over the past half-century it has been abolished in almost all Western nations, which now either no longer fight wars or attempt to rely on high-tech weaponry.

Inequality within the wealthiest countries of the world has risen rapidly in the last three decades after declining for the previous four decades, while at the same time some of the countries that before World War II had been dominated by the US and Europe and were mired in poverty have achieved high levels of geopolitical autonomy and are rapidly closing the economic gap with the West. Ever fewer people on this planet live in communities that are isolated from the rest of the world, and the population of farmers that dwindled to a tiny fraction of the people living in rich countries is now rapidly declining in most of the rest of the world. For the first time in human history a majority of the world’s population lives in urban areas. Links of exploitation that were established, as Marx first explained, with the advent of capitalism now are joined with various sorts of communicative links that hold the potential for more egalitarian relations within and among nations.

Sociology is especially equipped, analytically and methodologically, to analyze the implications of these early twenty-first-century transformations, just as it was created to explain the complex of disruptive and unprecedented changes that accompanied the advent of modern capitalist societies. But sociology can help us understand what is most significant and consequential about our contemporary world only when it is historical sociology. As Craig Calhoun rightly notes: “The most compelling reason for the existence of historical sociology is embarrassingly obvious (embarrassingly because so often ignored). This is the importance of studying social change.”

My goal in this book is to turn our attention away from the sort of solipsistic and small-bore research that is presented in sociology textbooks, and which dominates too many of the major academic journals, and focus instead on understanding how sociological analyses of historical change can allow us to understand both the origins of our contemporary world and the scope and consequences of current transformations. Since much of that research is confined today to the subfield of historical sociology, this then has to be a book that examines what historical sociology is. My hope is that historical sociology’s concerns, methods, and understandings can invigorate the broader discipline of sociology, making it once again a discipline about social change rather than one that confines itself to models and ethnographic descriptions of static social relations.

This book, and historical sociology, will not help you learn all about you. Historical sociology can help you understand the world in which you will live your life. It provides context to determine the magnitude and significance of present-day changes in gender relations, family structure, and demographic patterns, and in the organization and content of work, the economy, culture, politics, and international relations. Because historical sociology is inherently comparative, it allows us to see what is unusual about any particular society, including our own, at each moment in time and to distinguish mere novelties from fundamental social change.

If the sociology envisioned by its founders is very different from much of contemporary sociology, that early sociology was also distinguished from the history written by historians. Since Marx, Weber, and Durkheim were trying to explain a single unprecedented social transformation, they ended up slighting and even ignoring the bulk of the world’s history that occurred before the modern era. They also decided what history to study, and how to understand the historical evidence they examined, deductively in terms of the metatheories and master concepts they advanced. That led them to rummage through the works of numerous historians, often taking the latter’s findings out of context to construct broad arguments about social change. Professional historians, not surprisingly, found it easy to ignore sociological theories that floated above, and failed to engage, the archival evidence and the specific times and places around which historians define themselves and engage with one another. As a result, Weber and Durkheim and their theories have had little influence on historians.

Durkheim has been easy for historians to ignore, since he almost never referred to or engaged specific historical events. Weber, who drew on a vast range of historical research, has suffered because virtually every contemporary historian of the Reformation rejects his most famous work, The Protestant Ethic and the Spirit of Capitalism. Fernand Braudel (1977) accurately summarizes his profession’s judgment: “All historians have opposed this tenuous theory, although they have not managed to be rid of it once and for all. Yet it is clearly false.” As a result, historians are not inclined to look to Weber for theoretical or empirical guidance on other historical changes.

Marx has fared better among historians, perhaps because they do not regard him as a sociologist. Yet, historians who define themselves as Marxist, or who seek to draw on elements of Marxism, for the most part use Marx to inform their studies of specific historical eras and problems. Few historians see themselves as contributors to Marx’s overarching project of explaining the origins of capitalism or tracing the dynamics of capitalism on a global or even a national scale.

Marx, Weber, and Durkheim’s theories also have been challenged by non-European scholars (and by Western scholars aware of the histories and intellectual traditions of the rest of the world) who doubt that the transformation those theories are designed to explain was “anything like a ‘universal human history’ ” (Chakrabarty 2007). Instead, Chakrabarty, like other “post-colonial” scholars, sees those early sociological theories and much of what Europeans and North Americans have written since as “histories that belonged to the multiple pasts of Europe drawn from very particular intellectual and historical traditions that could not claim any universal validity”. Or, as Michael Dutton (2005) puts it, “Why is it that, when it comes to Asian area studies, whenever ‘theory’ is invoked, it is invariably understood to mean ‘applied theory’ and assumed to be of value only insofar as it helps tell the story of the ‘real’ in a more compelling way?” One of my goals in this book is to explore the extent to which “Western” historical sociology can address social change elsewhere in the world, and also to see how theories and research from the “rest” of the world can inform, deepen, and challenge sociology from and about Europe.

Historical sociologists in recent decades have worked to narrow the distance between their scholarship and that of historians. Yet, the two disciplines have not merged. An aspiring academic’s decision to study and pursue a career in historical sociology rather than history still has implications for what sort of intellectual they will become and what sort of research they will undertake. While historical sociologists and historians do interact with each other, they still spend most of their time learning from and seeking to address scholars in their own discipline. That matters because history and sociology have their own histories, and the past intellectual, institutional, and career decisions made by historians and sociologists shape the questions asked, the methods employed, the data analyzed, and the arguments offered within each discipline today. While there are many historians whose work influences sociologists, and some historical sociologists who have won the respect of sociologists, in practice scholars in the two disciplines study history in quite different ways. Often undergraduate and even graduate students are not much aware of those differences and may decide which field to pursue without considering all the implications of their choice. I wrote this book in part to clarify what it means to do historical sociology so that readers who are considering studying that field will have a clear idea of what it is like to pursue an academic career as a historical sociologist.

Charles Tilly offers an apt and accurate generalization of historians: they share an “insistence on time and place as fundamental principles of variation” (1991); the eighteenth-century French Revolution, for example, is very different, because it was earlier and in a different part of the world, from the twentieth-century Chinese Revolution. As a result most historians are recognized and define themselves by the particular time and place they study, and organize their careers around that temporal and geographic specialization. The boundaries of those specializations coincide with and “are firmly embedded in institutional practices that invoke the nation-state at every step: witness the organization and politics of teaching, recruitment, promotions, and publication in history departments” worldwide (Chakrabarty 2007).

Today, most academic historians everywhere in the world are hired as historians of nineteenth-century US history, Renaissance Italian history, twentieth-century Chinese history, or some other such temporal-geographic specialization. Usually, history departments will hire more specialists, and make finer distinctions, for the history of their own country than for the rest of the world. Thus a US history department might have a specialist in the military history of the Civil War among a dozen Americanists along with a single historian of China, while in China a department might have one or two Americanists along with a dozen historians who each specialize in a single dynasty.

Historians’ country specializations make sense because they “anchor most of [their] dominant questions in national politics,” which leads historians to use “documentary evidence [for the] identification of crucial actors [and the] imputation of attitudes and motives to those actors” (Tilly 1991). Historians’ country specializations, in turn, influence and limit when and how they go about making comparisons across time periods and geographic spaces. “Historians are not accustomed, or indeed trained, to make grand comparisons or even to work with general concepts, and they often view the whole past through the lens of the particular period in which they have specialized” (Burke 2003).

Immanuel Wallerstein offers a wonderful example of how national categories shape historical thinking in an essay entitled “Does India Exist?” (1986). Wallerstein notes that what today is India was an amalgamation of separate territories, created by British colonization in the eighteenth and nineteenth centuries. India’s political, and also cultural, unity is an artifact of Britain’s ability to colonize the entire subcontinent. Wallerstein poses a counterfactual proposition. Suppose the British had colonized primarily the old Mughal Empire, calling it Hindustan, and the French had simultaneously colonized the southern (largely Dravidian) zones of the present-day Republic of India, giving it the name Dravidia. Would we today think that Madras was “historically” part of India? Would we even use the word “India”? Instead, probably, scholars from around the world would have written learned tomes demonstrating that from time immemorial “Hindustan” and “Dravidia” were two different cultures, peoples, civilizations, nations, or whatever. India’s present-day unity is a combined creation of British colonization, the nationalist resistance to British rule, and the inability of other imperial powers (such as France, which tried and failed) to grab part of the subcontinent for themselves.

Wallerstein’s point is that a contingent series of events, and non-events that failed to occur, created both a political unit and an academic terrain (the study of India) that affects not just scholarship about the era that began with British colonization but also historical and cultural studies of the centuries before then, when a unified Indian polity or culture did not yet exist. Had the contingencies of the past three centuries played out differently, not only would the present-day reality be different, but so would historians’ retrospective reading of the distant past.

Historical sociologists, in contrast, organize their research and careers around theoretical questions: for example, what are the causes of revolutions, what explains the variation in social benefits offered by governments to their citizens, how and why have family structures changed over time? These questions, like Marx, Weber, and Durkheim’s questions about social change in the modern era, cannot be answered with a focus on a single era in a single nation. History itself, thus, matters in very different ways in historians’ and sociologists’ explanations. For example, historians are skeptical that knowledge gained about how French people acted during their revolution in 1789 is of much help in understanding how the Chinese acted in 1949 during their revolution. Historical sociologists instead see each revolution as the culmination of a chain of events that open certain opportunities for action while foreclosing others. Thus, to a sociologist, both the French in 1789 and the Chinese in 1949 gained the opportunity to make their revolutions as a result of previous events that created certain social structures and social relations and ended others.

Historical sociologists focus their attention on comparing the structures and events of those, and other, revolutions. What is distinctive about each is secondary, in sociological analysis, to what is similar. Sociologists analyze differences systematically in an effort to find patterns that can account for each outcome. The goal, for sociologists, is to construct theories that can explain ever more cases and account for both similarities and variations.

*

from

What is Historical Sociology?

by Richard Lachmann

get it at Amazon.com

Al Walaja: the Palestinian village being slowly squeezed off the map – Oliver Holmes * The Biggest Prison on Earth. The History of the Occupied Territories – Ilan Pappe. 

As the 70th anniversary of the Nakba approaches, when 700,000 Palestinians lost their homes in the wake of the creation of Israel, farming families on the West Bank recount their struggle to survive.

In the middle part of the last century the inhabitants of the village of Al Walaja, not far from Jerusalem, considered themselves very lucky.

Fertile hills, terraced for growing vegetables and fruit, led down to a valley where an Ottoman-era railway line connected Jerusalem with the Mediterranean port of Jaffa. Close to a station, Al Walaja’s farmers always had buyers for their lentils, peppers, and cucumbers. Mohammed Salim, who estimates he is approaching 80 as he was born “sometime in the 40s”, remembers vast fields owned by Al Walaja families. “There was nothing else here.”

Today, Salim lives in what has fast become an enclave. In 2018, Al Walaja sits on a tiny fraction of the land it commanded when he was a child. During his lifetime, two wars have displaced all of the village’s residents and swallowed most of its land. More was later confiscated for Jewish settlements. And in the past two decades a towering concrete wall and barbed wire have divided what remains of the community as Israel claims more territory.

Every year on 15 May, Palestinians mark the anniversary of the Nakba, or “catastrophe”, when hundreds of thousands were forced out of their homes or fled amid the fighting that accompanied the creation in 1948 of the state of Israel after the end of the British Mandate. For the residents of Al Walaja, the Nakba was the beginning of a seven-decade struggle to survive.

Salim and his cousin, Umm Mohammed, remember it was dusk when the fighting flared in 1948. A civil war between Jewish forces and Arab militia raged as the British sought to withdraw, with surrounding states joining the fight. Residents had heard rumours of a massacre of hundreds of Arab villagers in Deir Yassin at the hands of Zionist paramilitaries. Determined not to suffer the same fate, they fled in October when they heard gunfire.

“As a child, the shells looked to me like watermelons flying through the sky,” said Umm Mohammed. Her father, she recalls, held her in one arm and her brother in the other as they headed across the train tracks and up the hill on the other side.

“We built wooden houses there,” said Umm Mohammed, who can see the crumbled homes of the village from her balcony. “We thought we would return after the fighting stopped.”

According to UNRWA, the United Nations body responsible for Palestinian refugees, about 70% of Al Walaja’s land was lost after Israel and Arab states drew demarcation lines in 1949. Of the original 1,600 people from Al Walaja, most fled to neighbouring countries. About 100, like Umm Mohammed, settled nearby.

After the six-day war in 1967, when the young Israeli state captured the West Bank from Jordan, Al Walaja found itself occupied. Salim remembers a message that filtered through the village, purportedly from an Israeli commander. “He said, ‘Be aware, and don’t resist.’”

Israel later annexed east Jerusalem, expanding the city’s boundary and essentially cutting the village in two. Israeli laws, including strict building restrictions, were imposed, although a few people in Al Walaja were given residency rights.

At the top of the new village was an Ottoman base, subsequently taken over by the British, the Jordanians and eventually the Israeli military. During the 1970s the site was transformed into a Jewish settlement named Har Gilo, considered illegal under international law, which, together with another settlement, blocks Al Walaja on two sides. Israeli flags flutter from the balconies.

Salim says the communities rarely talk. “So far, they are nice people,” he said, looking up at the fortified wall that surrounds the settlement.

In the early 2000s, Israel began construction of a barrier in response to violence across the country, including suicide bombings. Al Walaja was squeezed again, finding itself further isolated by the concrete wall. The original route of the barrier would have split the existing village in two, but Israel’s high court granted it a stay. The wall now surrounds Al Walaja on three sides and isolates about 30% of its remaining land.

“It has become a siege around the village,” said Khader Al Araj, 47, president of the village council. He rummaged through a metal filing cabinet full of annotated maps. “All our land has been taken.”

Now comprising 2,600 people, Al Walaja still exists but its future is, to say the least, precarious. In the past decade, Israeli police have placed a checkpoint in the valley that most residents cannot pass. Isolated fields remain uncultivated, while the Jerusalem municipality has bulldozed dozens of homes. Many more have pending demolition orders. Once famous for its springs, Al Walaja is losing them, too. A wire fence surrounds the largest one at the bottom of the hill. Farmers’ goats can no longer drink there.

The latest threat is ostensibly benign: an Israeli national park in the valley. The EU says national parks in the occupied territories are used to prevent Palestinians from building. The parks authority says it supports agricultural work but will not allow “illegal construction”. Over the past year, the barrier has been added to, with a four-metre-high fence covered in barbed wire. A police checkpoint will be erected further into Al Walaja’s territory, cutting residents off from the rest of their land. Legal challenges have stalled Israeli plans, but ultimately most have gone through.

Yet Al Walaja looks like one of the Holy Land’s most charming villages. Apricot trees and flowers line its winding roads, planted out of pride, residents say, for the small spot of land they still have. A symbol of the destruction of Palestinian life, Al Walaja has attracted funding from foreign states sympathetic to what it represents. Its streets are covered in plaques, thanking various governments for freshly paved walkways and new roads.

Al Araj looks exhausted but believes that self-respect is part of the battle: “We try very hard to keep the village beautiful.”

*

See also:

The Biggest Prison on Earth. The History of the Occupied Territories

by Ilan Pappe

The ‘Shacham Plan’, ‘The Organization of Military Rule in the Occupied Territories’.

get it at Amazon.com

Does Capitalism Have a Future?

Ways of reestablishing social order in the midst of extreme conflict might include those reminiscent of fascism, but also the possibility of a much broader democracy.

Does Capitalism Have a Future?

By Immanuel Wallerstein, Randall Collins, Michael Mann, Georgi Derluguian and Craig Calhoun.

Coming decades will deliver surprising shocks and huge challenges. Some of them will look new and some quite old. Many will bring unprecedented political dilemmas and difficult choices. This may well begin to happen soon and will certainly shape the adult lives of those who are young at present.

But that, we contend, is not necessarily or only bad. Opportunities to do things differently from past generations will also be arising in the decades ahead. In this book we explore and debate, on the basis of our sociological knowledge of world history, what those challenges and opportunities will most likely be. At bottom, most troubling is that, with the end of the Cold War almost three decades ago, it has become unfashionable, even embarrassing, to discuss possible world futures and especially the prospects of capitalism.

Our quintet gathered to write this unusual book because something big looms on the horizon: a structural crisis much bigger than the recent Great Recession, which might in retrospect seem only a prologue to a period of deeper troubles and transformations.

Immanuel Wallerstein explains the rationale for predicting the breakdown of the capitalist system. Over the next three or four decades capitalists of the world, overcrowding the global markets and hard pressed on all sides by the social and ecological costs of doing business, may find it simply impossible to make their usual investment decisions. In the last five centuries capitalism has been the cosmopolitan and explicitly hierarchical world-market economy where the elite operators, favorably located at its geographical core, were in a position to reap large and reasonably secure profits. But, Wallerstein argues, this historical situation, however dynamic, will ultimately reach its systemic limitations, as do all historical systems. In this hypothesis, capitalism would end in the frustration of capitalists themselves.

Randall Collins focuses on a more specific mechanism challenging the future of capitalism: the political and social repercussions of as many as two-thirds of the educated middle classes, both in the West and globally, becoming structurally unemployed because their jobs are displaced by new information technology. Economic commentators recently discovered the downsizing of the middle class, but they tend to leave the matter with a vague call for policy solutions. Collins systematically considers the five escapes that in the past have saved capitalism from the social costs of its drive for technological innovation. None of the known escapes appears strong enough to compensate for the technological displacement of service and administrative jobs.

Nineteenth and twentieth century capitalism mechanized manual labor but compensated with the growth of middle class positions. Now the twenty-first century trajectory of high-tech is to push the middle class into redundancy. This leads us to another hypothesis: Might capitalism end because it loses its political and social cushion of the middle class?

Craig Calhoun argues to the contrary that a reformed capitalism might be saved. Calhoun elaborates on the point, recognized by all of us, that capitalism is not merely a market economy, but a political economy. Its institutional framework is shaped by political choice. Structural contradictions may be inherent in the operation of complex markets but it is in the realms of politics that they may be remedied, or left to go unchecked to destruction.

Put differently:

Either a sufficiently enlightened faction of capitalists will face their systemic costs and responsibilities, or they will continue to behave as careless free riders, which they have been able to do since the waning of liberal/left challenges a generation ago. Just how radical the shift from contemporary capitalism to a revamped future system will be is an open question.

A centralized socialist economy is one possibility, but Chinese style state capitalism may be even more likely. Markets can exist in the future even while specifically capitalist modes of property and finance have declined. Capitalism may survive but lose some of its ability to drive global economic integration.

Michael Mann favors a social democratic solution for the problems of capitalism, but he also highlights even deeper problems that arise from the multicausal sources of power. Besides capitalism, these include politics, military geopolitics, ideology, and the multiplicity of world regions. Such complexity, in Mann’s view, renders the future of capitalism unpredictable. The overriding threat, which is entirely predictable, is the ecological crisis that will grow throughout the twenty-first century.

This is likely to spill over into struggles over water and food, and to result in pollution and massive population migrations, thus raising the prospect of totalitarian reactions and even warfare using nuclear weapons. Mann connects this to the central concern of this book: the future of capitalism. In Mann’s analysis, the crisis of climate change is so hard to stop because it derives from all of today’s dominant institutions gone global: capitalism as unbridled pursuit of profit, autonomous nation-states insisting on their sovereignty, and individual consumer rights legitimating both modern states and markets. Solving the ecological crisis will thus have to involve a major change in the institutional conditions of today’s life.

All these are structural projections akin to “stress tests” in civil engineering or, as we have all now heard, in banking. None of us bases our prognoses of capitalism in terms of condemnation or praise. We have our own moral and political convictions. But as historical sociologists, we recognize that the fortunes of human societies, at least in the last ten thousand years beyond the elementary level of hunter-gatherer bands, have not turned on how much good or evil they produced. Our debate is not whether capitalism is better or worse than any hitherto existing society. The question is: Does it have a future?

This question echoes an old prediction. The expectation of capitalism’s collapse was central to the official ideology of the Soviet Union that itself collapsed. Yet does this fact ensure the prospects of capitalism? Georgi Derluguian shows the actual place of the Soviet experiment in the larger picture of world geopolitics, which in the end caused its self-destruction. He also explains how China avoided the collapse of communism while becoming the latest miracle of capitalist growth. Communism was not a viable alternative to capitalism. Yet the way in which the Soviet bloc suddenly ended after 1989 in broad mobilizations from below and blinding panic among the elites may suggest something important about the political future of capitalism.

Doomsday scenarios are not what this book is about. Unlike business and security experts projecting short-run futures by changing the variables in existing set-ups, we consider specific scenarios futile. Events are too contingent and unpredictable because they turn on multiple human wills and shifting circumstances. Only the deeper structural dynamics are roughly calculable. Two of us, the same Collins and Wallerstein who now see no escape for capitalism, already in the 1970s predicted the end of Soviet communism. But nobody could predict either the date or the fact that it would be the former members of the Central Committee irrationally tearing apart their erstwhile industrial and superpower positions. This outcome was unpredictable because it did not have to happen that way.

We find hope against doom exactly in the degree to which our future is politically underdetermined. Systemic crisis loosens and shatters the structural constraints that are themselves the inheritance of past dilemmas and the institutional decisions of prior generations. Business as usual becomes untenable and divergent pathways emerge at such historical junctures. Capitalism, along with its creative destruction of older technologies and forms of production, has also been a source of inequality and environmental degradation.

Deep capitalist crisis may be an opportunity to reorganize the planetary affairs of humanity in a way that promotes more social justice and a more livable planet.

Our big contention is that historical systems can have more or less destructive ways of going extinct while morphing into something else. The history of human societies has passed through bursts of revolution, moments of expansive development, and painfully long periods of stagnation or even involution. However unwanted by anybody, the latter remains among the possible outcomes of global crisis in the future. The political and economic structures of present-day capitalism could simply lose their dynamism in the face of rising costs and social pressures. Structurally, this could lead to the world’s fragmentation into defensive, internally oppressive, and xenophobic blocs.

Some might see it as the clash of civilizations, others as the realization of an Orwellian “1984” anti-utopia enforced by the newest technologies of electronic surveillance.

Ways of reestablishing social order in the midst of extreme conflict might include those reminiscent of fascism, but also the possibility of a much broader democracy. It is what we wanted to stress above all in this book.

In recent decades the prevalent opinion in politics and mainstream social sciences has been that no major structural change is even worth thinking about. Neoclassical economics bases its models on the assumption of a fundamentally unchanging social universe. When crises happen, policy adjustments and technological innovation always bring renewals of capitalism. This is, however, only an empirical generalization. Capitalism’s existence as a system for 500 years does not prove that it will last forever. The cultural-philosophical critics of various postmodernist persuasions who emerged as a countermovement in the 1980s, when the utopian hopes of 1968 had receded into frustration and Soviet communism was visibly in crisis, came to share the same assumption of capitalism’s permanence, although not without a big dose of existential despair. Consequently, the cultural postmodernists left themselves with a dislocation of the will to look structural realities in the face. We will return in our concluding chapter to a more detailed discussion of the present world situation, including its intellectual climate.

We have deliberately written this book in a more accessible style because we intended to open our arguments to wider discussion. The elaboration of our arguments, with all the footnotes, can be found in the monographs that we have written individually. The area where we have done much of our professional research is usually called world-systems analysis or macrohistorical sociology. Macrohistorical sociologists study the origins of capitalism and modern society, as well as the dynamics of ancient empires and civilizations. Seeing social patterns in the longer run, they find that human history moves through multiple contradictions and conflicts, crystallizing over long periods in impermanent configurations of intersecting structures. This is where we had sufficient agreement to author collectively the first and the last chapters bracketing this book. But we also have our particular theories and areas of expertise, and the resulting opinions are reflected in the individual chapters. This short book is not a manifesto sung in one voice. It is a debate of equals arguing on the basis of our knowledge about the past and present of human societies. It is therefore an invitation to ask seriously and openly what could be the next big turn in world history.

In the end, are we prophesying some kind of socialism? The reasoned answer, rather than a futile polemic deriving from ideological faith, must have two parts. First, it is not prophecy because we insist on abiding by the rules of scientific analysis. Here this means showing with reasonable exactness why things may change and how we get from one historical situation to another. Will the end destination be socialism? Our lines of reasoning extend into the middle-range future of the next several decades.

Randall Collins asks: what could possibly avert the looming destitution of middle classes whose roles in for-profit market organization become technologically redundant? It could take the form of a socialist reorganization of production and distribution, that is, a political economy run in a conscious and collectively coordinated manner and designed to keep the majority of people relevant.

It is thus the structural extension of the problems of advanced capitalism that render socialism the most likely candidate for replacing capitalism.

But the lessons of 20th century experience with communist and social democratic states are not forgotten. Socialism had its own problems, mainly from an organizational hypercentralization that provided ample opportunities for political despotism, and the loss of economic dynamism over time. Even if the crisis of capitalism is solved along socialist lines, the problems of socialism will come back into the center of attention. Venturing even further into the long-term future, Collins suggests that socialism itself will not last forever, and the world will oscillate between various forms of capitalism and socialism as each founders on its own shortcomings.

In differently optimistic projections, Craig Calhoun and Michael Mann see the possibility of an alliance of national states uniting in the face of ecological and nuclear disasters. This, they argue, can ensure the continued vitality of capitalism in a more benign social democratic version of globalization. Whatever might come after capitalism, Georgi Derluguian argues that it would never resemble the communist pattern.

Fortunately, the historical conditions for the Soviet-style “fortress socialism” are gone, along with the geopolitical and ideological confrontations of the last century. Immanuel Wallerstein, however, considers it intrinsically impossible to tell what might replace capitalism. The alternatives are either a noncapitalist system that would nonetheless continue the hierarchical and polarizing features of capitalism, or a relatively democratic and relatively egalitarian system. Possibly several world-systems will emerge from the transition. Calhoun also argues that more loosely coupled systems may develop to deal with disruptions from external threats as well as the internal risks of capitalism. This runs against the widely shared assumption that the world has become irreversibly global. Yet, once again, what theory supports this ideological contention?

The twentieth century thinkers and political leaders of all persuasions proved to be wrong in their ideological conviction that there was a single road to the future, as passionate advocates of capitalism, communism, and fascism argued and attempted to impose. None of us subscribes to the utopian view that human will can make anything possible. Yet it is demonstrable that our societies can be put together in a certain variety of ways. The result significantly depends on the political visions and wills that prevail in the wake of major crises that produce history’s founding moments. Such moments in the past often meant political collapses and revolutions. All five of us, however, strongly doubt that the past revolutions occurring within separate states and often with considerable violence anticipate the future politics of capitalist crisis at the global level. This realization gives us hope that things can be done better in the future.

Capitalism is not a physical location like a royal palace or a financial district to be seized by a revolutionary crowd or confronted through an idealistic demonstration. Nor is it merely a set of “sound” policies to be adopted and corrected, as prescribed in the business editorials. It is an old ideological illusion of many liberals and Marxists that capitalism simply equals wage labor in a market economy. Such was the basic belief of the twentieth century, on all sides. We are now dealing with its damaging consequences.

Markets and wage labor had existed long before capitalism, and social coordination through markets will almost surely outlive capitalism.

Capitalism, we contend, is only a particular historical configuration of markets and state structures where private economic gain by almost any means is the paramount goal and measure of success. A different and more satisfying organization of markets and human society may yet become possible.

Grounds for this claim are in this book and our many prior writings. But for the moment, let us offer a short historical fable. Humans have dreamt about flying since ancient times, at least as long as they dreamt about social justice. For several millennia this was fantasy. Then arrived the age of hot air balloons and dirigibles. For about a century people experimented with these devices. The results, as we know, were mixed or downright disastrous. But now there existed engineers, scientists, and the social structure which supported and stimulated their inventiveness. The breakthrough eventually arrived with new kinds of engines and aluminum wings. We can all fly now. The majority are usually stuck in the cramped budget seats, while only the daring can experience the exhilaration of autonomous flight piloting small airplanes or paragliders. Human flight also brought the horrors of aerial bombardment and hovering drones. Technology proposes but humans dispose.

Old dreams may come true although this can also impose on us difficult new choices. Yet optimism is a necessary historical condition for mobilizing emotional energies in a world facing the choice of structurally divergent opportunities. Breakthroughs become possible when enough support and public attention go into thinking and arguing about alternative designs.

Chapter 1

STRUCTURAL CRISIS, OR WHY CAPITALISTS MAY NO LONGER FIND CAPITALISM REWARDING.

My analysis is based on two premises: The first is that capitalism is a system, and that all systems have lives; they are never eternal. The second is that to say that capitalism is a system is to say that it has operated by a specific set of rules during what I believe to be its approximately 500 years of existence, and I shall try to state these rules briefly.

Systems have lives. Ilya Prigogine expressed this succinctly: “We have an age, our civilization has an age, our universe has an age…” This means, it seems to me, that all systems from the infinitesimally small to the largest that we know (the universe), including the mid-size historical social systems, should be analyzed as consisting of three qualitatively different moments: the moment of coming into existence; their functioning during their “normal” life (the longest moment); the moment of going out of existence (the structural crisis). In this analysis of the existing situation of the modern world-system, the explanation of its coming into existence is not our subject. But the two other moments of life, the rules of capitalism’s functioning during “normal” life, and the modality of its going out of existence, are the central issues before us.

What we are arguing is that, once we have understood what the rules have been that have allowed the modern world-system to operate as a capitalist system, we will understand why it is currently in the terminal stage of structural crisis. We can then suggest how this terminal stage has been operating and is likely to continue to operate for the next 20-40 years.

What are the identifying characteristics, the sine qua non, of capitalism as a system, the modern world-system? Many analysts focus on a single institution that they consider crucial: There is wage labor. Or there is production for exchange and/or for profit. Or there is a class struggle between entrepreneurs/capitalists/bourgeoisie and wageworkers/propertyless proletarians. Or there is a “free” market. None of these definitions of defining characteristics holds much water in my opinion.

The reasons are simple. There has been some wage labor across the world for thousands of years, not only in the modern world. Furthermore, there exists much labor that is not wage labor in the modern world-system.

There has been some production for profit across the world for thousands of years. But it has never before been the dominant reality of any historical system. The “free market” is indeed a mantra of the modern world-system, but the markets in the modern world-system have never been free of government regulation or political considerations, nor could they have been.

There is indeed a class struggle in the modern world-system, but the bourgeois proletarian description of the contending classes is far too narrowly framed.

In my view, for a historical system to be considered a capitalist system, the dominant or deciding characteristic must be the persistent search for the endless accumulation of capital, the accumulation of capital in order to accumulate more capital. And for this characteristic to prevail, there must be mechanisms that penalize any actors who seek to operate on the basis of other values or other objectives, such that these nonconforming actors are sooner or later eliminated from the scene, or at least severely hampered in their ability to accumulate significant amounts of capital. All the many institutions of the modern world-system operate to promote, or at least are constrained by the pressure to promote, the endless accumulation of capital.

The priority of accumulating capital in order to accumulate still more capital seems to me a thoroughly irrational objective. To say that it is irrational, in my appreciation of material or substantive rationality, is not to say that it cannot work in the sense of being able to sustain a historical system, at least for a considerable length of time (Weber’s formal rationality). The modern world-system has lasted some 500 years, and in terms of its guiding principle of the endless accumulation of capital it has been extremely successful. However, as we shall argue, the period of its ability to continue to operate on this basis has now come to an end.

CAPITALISM DURING ITS PHASE OF “NORMAL” OPERATION

How has capitalism worked in practice? All systems fluctuate. That is, the machinery of the system constantly deviates from its point of equilibrium. The example of this with which most people are very familiar is the physiology of the human body. We breathe in and then out. We need to breathe in and out. But there are mechanisms within the human body, and within the modern world-system, to bring the operation of the system back to equilibrium, a moving equilibrium to be sure, but an equilibrium. What we think of as the moment of the “normal” operation of a system is the period during which the pressure to return to equilibrium is greater than any pressure to move away from equilibrium.

There are many such mechanisms in the modern world-system. The two most important, most important in the sense that they are most determinant of the historical development of the system, are what I shall call Kondratieff cycles and hegemonic cycles. Here is how each operates.

First, the Kondratieff cycles:

In order to accumulate significant amounts of capital, producers require a quasi-monopoly. Only if they have a quasi-monopoly can they sell their products at prices far above the costs of production. In truly competitive systems with a fully free flow of the factors of production, any intelligent buyer can find sellers who will sell the products at a profit of barely a penny, or even below the cost of production. There can be no real profit in a perfectly competitive system. Real profit requires limits on the free market, that is, a quasi-monopoly.

However, quasi-monopolies can only be established under two conditions: (1) The product is an innovation for which there exists (or can be induced to exist) a reasonably large number of willing buyers; and (2) One or more powerful states are willing to use state power to prevent (or at least limit) the entry of other producers into the market. In short, quasi-monopolies can only exist if the market is not “free” from state involvement.
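The zero-profit logic can be put in a single line of notation. The following is only a minimal illustrative sketch; the symbols p (price), c (unit cost), q (quantity sold) and \pi (profit) are shorthand introduced here for illustration, not Wallerstein’s own:

\[
\pi = (p - c)\,q, \qquad
\text{perfect competition: } p \to c \;\Rightarrow\; \pi \to 0, \qquad
\text{quasi-monopoly: } p > c \;\Rightarrow\; \pi > 0.
\]

Read this way, the two conditions above are simply the mechanisms that keep p above c: an innovation creates willing buyers at a high price, and state power limits the entry of competitors that would otherwise drive the price back down toward cost.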

*

from

Does Capitalism Have a Future?

by Immanuel Wallerstein, Randall Collins, Michael Mann, Georgi Derluguian and Craig Calhoun

get it at Amazon.com

260 million children aren’t in school. This $10bn plan could change that – Gordon Brown.

We must become the first generation in history where every child goes to school, and in doing so we can demonstrate that international cooperation can work.

A new global fund will create 200m school places and help end child marriage, trafficking and labour in developing countries.

Once a child refugee fleeing Sudan, and now a prize-winning American entrepreneur, Manyang Kher is using a lifetime of hard knocks, a never-give-up attitude and some rapidly learned skills to change the world.

At the age of just three, he was caught up in his country’s civil war. During a raid, his village was razed to the ground. His father was killed and his mother vanished, presumed dead. Terrified, Manyang ran for his life and kept running.

He met others escaping. He became one of the 20,000 Lost Boys of Sudan who made a gruelling 1,600km trek to Ethiopia then spent 13 harrowing years scrimping and surviving in refugee camps.

Finally, at the age of 17, his life changed. Having reached America as an unaccompanied minor, he learned English, and after graduating from college, he founded a remarkable project in Richmond, Virginia, called Humanity Helping Sudan.

Under its banner, his 734 Coffee Company roasts coffee beans from African owned farms in the Ethiopian Gambella province that was once Manyang’s home.

I have come across numbers symbolising uncomfortable facts before: 7:84 is a theatre company (founded by the British playwright John McGrath in 1971) that was set up to remind people that 7% of the population owned 84% of the country’s wealth. In this instance, the number 734 is equally symbolic. It exactly matches the geographical coordinates of Gambella on the map, 7 degrees north and 34 degrees east, highlighting the zone where children’s need for education is greatest.

Now, 80% of 734’s profits are going to help 200,000 refugee boys and girls living in the area.

Today, with Manyang’s support, UN secretary general Antonio Guterres will back a game-changing plan under which millions of children will be guaranteed a free education without having to depend on charity. On the same day, the World Bank and all multilateral development banks will make a joint statement to take forward what is an education revolution. The Global Partnership for Education and the refugee agency Education Cannot Wait see it as complementing their important work. A petition calling for countries to finance it has already been signed by 1.5 million young people.

The International Finance Facility for Education (IFFEd) aims to provide a brand new $10bn stream of finance which, alongside additional resources from national governments, could create 200m school places and help us to end child marriage, child trafficking and child labour by offering free universal education right across developing countries. When up and running, IFFEd will be the biggest single educational investment in history.

And there is good reason why it is urgently needed. More than 260 million children and young people are not going to school today nor will they go to school any other day in the near future.

Even more shocking is the number of children dropping out of school by the age of 12, or learning very little because of the poor quality of the education on offer.

The problem is so severe that in 2030, across all low and middle-income countries, more than half of the world’s children and young people, some 825 million, will not have the basic skills or qualifications needed for a modern workforce.

Today, 750 million people over the age of 15 are unable to read and write, and two-thirds of them are women. And in 20 countries more than half the population is illiterate.

On current projections it will take until after the year 2100 to deliver the sustainable development goal targets promised for 2030, just 12 years from now, of all girls and boys completing primary and secondary education.

Yet all overseas aid to education combined offers only $10 per child a year, not even enough to pay for a secondhand textbook, let alone a quality education suited for the 21st century. Funding for global education, which was 13% of all international aid 10 years ago, has fallen to 10%.

So it is urgent that we end the world’s biggest divide, between the half of a generation trapped without education and opportunity, the majority of them girls, and the rest.

Indeed, delivering universal education is the civil rights struggle of our generation. Leading the charge are young people themselves demanding change across the world: to stop child marriage in Bangladesh, to end child labour in India, to reduce fees for education in Latin America, and to deliver safe schools everywhere, from Nigeria, where many have been kidnapped, to America, where schoolchildren have been the victims of gun violence in their classrooms.

The next generation needs a sea-change response from this generation. The scale of the education challenge cannot be met in ordinary ways through traditional aid. Instead, by building on guarantees from aid donors, and incorporating a buydown facility to reduce the costs of free education for 700 million children in lower middle-income countries, every £100m of aid can deliver £400m of new educational investment.

Our recent history has shown that innovative and concerted international efforts can have a profound impact. Fifteen years ago, heightened cooperation helped create the Global Fund to Fight Aids, Tuberculosis and Malaria, and Gavi, the Vaccine Alliance, both of which have channelled billions of dollars into healthcare and saved millions of lives.

Neglected for too long, global education now warrants a moment of its own. We must become the first generation in history where every child goes to school, and in doing so we can demonstrate, at the end of a week in which conflict seems to be the order of the day, that international cooperation can work.

Gordon Brown is the UN special envoy for global education and a former UK prime minister

ANTITRUST, High Time for a Revival – Robert Reich.

THE MONOPOLIZATION OF AMERICA: The Biggest Economic Problem You’re Hearing Almost Nothing About.

Not long ago I visited some farmers in Missouri whose profits are disappearing. Why? Monsanto alone owns the key genetic traits to more than 90 percent of the soybeans planted by farmers in the United States, and 80 percent of the corn. Which means Monsanto can charge farmers much higher prices.

Farmers are getting squeezed from the other side, too, because the food processors they sell their produce to are also consolidating into mega companies that have so much market power they can cut the prices they pay to farmers.

This doesn’t mean lower food prices to you. It means more profits to the monopolists.

Monopolies All Around

America used to have antitrust laws that stopped corporations from monopolizing markets, and often broke up the biggest culprits. No longer. The result is a hidden upward redistribution of money and power from the majority of Americans to corporate executives and wealthy shareholders.

You may think you have lots of choices, but take a closer look:

1. The four largest food companies control 82 percent of beef packing, 85 percent of soybean processing, 63 percent of pork packing, and 53 percent of chicken processing.

2. There are many brands of toothpaste, but 70 percent of all of it comes from just two companies.

3. You may think you have your choice of sunglasses, but they’re almost all from one company: Luxottica, which also owns nearly all the eyeglass retail outlets.

4. Practically every plastic hanger in America is now made by one company, Mainetti.

5. What brand of cat food should you buy? Looks like lots of brands but behind them are basically just two companies.

6. What about your pharmaceuticals? Yes, you can get low-cost generic versions. But drug companies are in effect paying the makers of generic drugs to delay cheaper versions. Such “pay for delay” agreements are illegal in other advanced economies, but antitrust enforcement hasn’t laid a finger on them in America. They cost you and me an estimated $3.5 billion a year.

7. You think your health insurance will cover the costs? Health insurers are consolidating, too. Which is one reason your health insurance premiums, copayments, and deductibles are soaring.

8. You think you have a lot of options for booking discount airline tickets and hotels online? Think again. You have only two. Expedia merged with Orbitz, so that’s one company. And then there’s Priceline.

9. How about your cable and Internet service? Basically just four companies (and two of them just announced they’re going to merge).

Why the Monopolization of America is a Huge Problem

The problem with all this consolidation into a handful of giant firms is that they don’t have to compete. Which means they can and do jack up your prices.

Such consolidation keeps down wages. Workers with less choice of whom to work for have a harder time getting a raise. When local labor markets are dominated by one major big box retailer, or one grocery chain, for example, those firms essentially set wage rates for the area.

These massive corporations also have a lot of political clout. That’s one reason they’re consolidating: Power.

Antitrust laws were supposed to stop what’s been going on. But today, they’re almost a dead letter. This hurts you.

We’ve Forgotten History

The first antitrust law came in 1890, when Senator John Sherman responded to public anger about the economic and political power of the huge railroad, steel, telegraph, and oil cartels, then called “trusts”, that were essentially running America.

A handful of corporate chieftains known as “robber barons” presided over all this, collecting great riches at the expense of workers who toiled long hours, often in dangerous conditions, for little pay. Corporations gouged consumers and corrupted politics.

Then in 1901, progressive reformer Teddy Roosevelt became president. By this time, the American public was demanding action.

In his first message to Congress in December 1901, only two months after assuming the presidency, Roosevelt warned, “There is a widespread conviction in the minds of the American people that the great corporations known as the trusts are in certain of their features and tendencies hurtful to the general welfare.”

Roosevelt used the Sherman Antitrust Act to go after the Northern Securities Company, a giant railroad trust run by J.P. Morgan, the nation’s most powerful businessman. The US Supreme Court backed Roosevelt and ordered the company dismantled.

In 1911, John D. Rockefeller’s Standard Oil Trust was broken up, too. But in its decision, the Supreme Court effectively altered the Sherman Act, saying that monopolistic restraints of trade were objectionable only if they were “unreasonable”, and that this determination was to be made by the courts. What was an unreasonable restraint of trade?

In the presidential election of 1912, Roosevelt, running again for president but this time as a third-party candidate, said he would allow some concentration of industries where there were economic efficiencies due to large scale. He’d then have experts regulate these large corporations for the public benefit.

Woodrow Wilson, who ended up winning the election, and his adviser Louis Brandeis, took a different view. They didn’t think regulation would work, and thought all monopolies should be broken up.

For the next 65 years, both views dominated. We had strong antitrust enforcement along with regulations that held big corporations in check.

Most big mergers were prohibited. Even large size was thought to be a problem. In United States v. Alcoa (1945), the court ruled that even though Alcoa hadn’t deliberately pursued a monopoly, it had become one simply by growing so large, and was therefore guilty of violating the Sherman Act.

What Happened to Antitrust?

All this changed in the 1980s, after Robert Bork (who, incidentally, I studied antitrust law with at Yale Law School, and then worked for when he became solicitor general under President Ford) wrote an influential book called The Antitrust Paradox, which argued that the sole purpose of the Sherman Act is consumer welfare.

Bork argued that mergers and large size almost always create efficiencies that bring down prices, and therefore should be legal. Bork’s ideas were consistent with the conservative Chicago School of Economics, and found a ready audience in the Reagan White House.

Bork was wrong. But since then, even under Democratic administrations, antitrust has all but disappeared.

The Monopolization of High Tech

We’re seeing declining competition even in cutting-edge, high-tech industries.

In the new economy, information and ideas are the most valuable forms of property. This is where the money is.

We haven’t seen concentration on this scale ever before.

Google and Facebook are now the first stops for many Americans seeking news. Meanwhile, Amazon is now the first stop for more than half of American consumers seeking to buy anything. Talk about power.

Contrary to the conventional view of an American economy bubbling with innovative small companies, the reality is quite different. The rate at which new businesses have formed in the United States has slowed markedly since the late 1970s.

Big Tech’s sweeping patents, standard platforms, fleets of lawyers to litigate against potential rivals, and armies of lobbyists have created formidable barriers to new entrants. Google’s search engine is so dominant, “Google” has become a verb.

The European Union filed formal antitrust charges against Google, accusing it of forcing search engine users into its own shopping platforms. And last June, it fined Google a record $2.7 billion.

But not in America.

It’s Time to Revive Antitrust

Economic and political power cannot be separated, because dominant corporations gain political influence over how markets are organized, maintained, and enforced, which enlarges their economic power further.

One of the original goals of the antitrust laws was to prevent this.

Big Tech along with the drug, insurance, agriculture, and financial giants is coming to dominate both our economy and our politics.

There’s only one answer: It is time to revive antitrust.

Music and the Mind – Anthony Storr.

“Music’s the Medicine of the Mind” John Logan (1744-88)

“Since music is the only language with the contradictory attributes of being at once intelligible and untranslatable, the musical creator is a being comparable to the gods, and music itself the supreme mystery of the science of man.” Claude Levi-Strauss

Today, more people listen to music than ever before in the history of the world. The audience has increased enormously since the Second World War. Recordings, radio, and even television, have made music available to a wider range of the population than anyone could have predicted fifty years ago. In spite of dire warnings that recordings might empty opera houses and concert halls, the audience for live performances has also multiplied.

This book reflects my personal preference in that it is primarily concerned with classical or Western ‘art’ music, rather than with ‘popular’ music. That these two varieties of music should have become so divergent is regrettable. The demand for accessible musical entertainment grew during the latter half of the nineteenth century in response to the increased wealth of the middle class. It was met by Offenbach, both Johann Strausses, Chabrier, Sullivan, and other gifted composers of light music which still enchants us today. The tradition was carried on into the twentieth century by composers of the stature of Gershwin, Jerome Kern, and Irving Berlin. It is only since the 1950s that the gap between classical and popular music has widened into a canyon which is nearly unbridgeable.

In spite of its widespread diffusion, music remains an enigma. Music for those who love it is so important that to be deprived of it would constitute a cruel and unusual punishment. Moreover, the perception of music as a central part of life is not confined to professionals or even to gifted amateurs. It is true that those who have studied the techniques of musical composition can more thoroughly appreciate the structure of a musical work than those who have not. It is also true that people who can play an instrument, or who can sing, can actively participate in music in ways which enrich their understanding of it. Playing in a string quartet, or even singing as one anonymous voice in a large choir, are both life-enhancing activities which those who take part in them find irreplaceable.

But even listeners who cannot read musical notation and who have never attempted to learn an instrument may be so deeply affected that, for them, any day which passes without being seriously involved with music in one way or another is a day wasted.

In the context of contemporary Western culture, this is puzzling. Many people assume that the arts are luxuries rather than necessities, and that words or pictures are the only means by which influence can be exerted on the human mind. Those who do not appreciate music think that it has no significance other than providing ephemeral pleasure. They consider it a gloss upon the surface of life; a harmless indulgence rather than a necessity.

This, no doubt, is why our present politicians seldom accord music a prominent place in their plans for education. Today, when education is becoming increasingly utilitarian, directed toward obtaining gainful employment rather than toward enriching personal experience, music is likely to be treated as an ‘extra’ in the school curriculum which only affluent parents can afford, and which need not be provided for pupils who are not obviously ‘musical’ by nature.

The idea that music is so powerful that it can actually affect both individuals and the state for good or ill has disappeared. In a culture dominated by the visual and the verbal, the significance of music is perplexing, and is therefore underestimated. Both musicians and lovers of music who are not professionally trained know that great music brings us more than sensuous pleasure, although sensuous pleasure is certainly part of musical experience.

Yet what it brings is hard to define. This book is an exploratory search; an attempt to discover what it is about music that so profoundly affects us, and why it is such an important part of our culture.

Chapter 1

Origins and Collective Functions

“Music is so naturally united with us that we cannot be free from it even if we so desired.” Boethius

No culture so far discovered lacks music. Making music appears to be one of the fundamental activities of mankind; as characteristically human as drawing and painting. The survival of Palaeolithic cave paintings bears witness to the antiquity of this form of art; and some of these paintings depict people dancing. Flutes made of bone found in these caves suggest that they danced to some form of music. But, because music itself only survives when the invention of a system of notation has made a written record possible, or else when a living member of a culture recreates the sounds and rhythms which have been handed down to him by his forebears, we have no information about prehistoric music. We are therefore accustomed to regarding drawing and painting as integral parts of the life of early man, but less inclined to think of music in the same way. However, music, or musical sounds of some variety, are so interwoven with human life that they probably played a greater part in prehistory than can ever be determined.

When biologists consider complex human activities such as the arts, they tend to assume that their compelling qualities are derivations of basic drives. If any given activity can be seen to aid survival or facilitate adaptation to the environment, or to be derived from behaviour which does so, it ‘makes sense’ in biological terms. For example, the art of painting may originate from the human need to comprehend the external world through vision; an achievement which makes it possible to act upon the environment or influence it in ways which promote survival.

The Palaeolithic artists who drew and painted animals on the walls of their caves were using their artistic skills for practical reasons. Drawing is a form of abstraction which may be compared with the formation of verbal concepts. It enables the draughtsman to study an object in its absence; to experiment with various images of it, and thus, at least in phantasy, to exert power over it. These artists were magicians, who painted and drew animals in order to exercise magical charms upon them. By capturing the image of the animal, early humans probably felt that they could partially control it. Since the act of drawing sharpens the perceptions of the artist by making him pay detailed attention to the forms he is trying to depict, the Palaeolithic painter did in reality learn to know his prey more accurately, and therefore increased his chances of being successful in the hunt.

The art historian Herbert Read wrote:

“Far from being an expenditure of surplus energy, as earlier theories have supposed, art, at the dawn of human culture, was a key to survival, a sharpening of the faculties essential to the struggle for existence. Art, in my opinion, has remained a key to survival.”

The art of literature probably derived from that of the primitive story-teller. He was not merely providing entertainment, but passing down to his listeners a tradition of who they were, where they had come from, and what their lives signified. By making sense and order out of his listeners’ existence, he was enhancing their feeling of personal worth in the scheme of things and therefore increasing their capacity to deal effectively with the social tasks and relationships which made up their lives. The myths of a society usually embody its traditional values and moral norms. Repetition of these myths therefore reinforces the coherence and unity of the society, as well as giving each individual a sense of meaning and purpose. Both painting and literature can be understood as having developed from activities which, originally, were adaptively useful.

But what use is music?

Music can certainly be regarded as a form of communication between people; but what it communicates is not obvious. Music is not usually representational: it does not sharpen our perception of the external world, nor, allowing for some notable exceptions, does it generally imitate it. Nor is music propositional: it does not put forward theories about the world or convey information in the same way as does language.

There are two conventional ways in which one can approach the problem of the significance of music in human life. One is to examine its origins. Music today is highly developed, complex, various and sophisticated. If we could understand how it began, perhaps we could better understand its fundamental meaning. The second way is to examine how music has actually been used. What functions has music served in different societies throughout history?

There is no general agreement about the origins of music. Music has only tenuous links with the world of nature. Nature is full of sound, and some of nature’s sounds, such as running water, may give us considerable pleasure. A survey of sound preferences amongst people in New Zealand, Canada, Jamaica and Switzerland revealed that none disliked the sounds of brooks, rivers and waterfalls, and that a high proportion enjoyed them. But nature’s sounds, with the exception of bird-song and some other calls between animals, are irregular noises rather than the sustained notes of definable pitch which go to form music. This is why the sounds of which Western music is composed are referred to as ‘tones’: they are separable units with constant auditory waveforms which can be repeated and reproduced.

Although science can define the differences between tones in terms of pitch, loudness, timbre, and waveform, it cannot portray the relation between tones which constitutes music.

Whilst there is still considerable dispute concerning the origins, purpose, and significance of music, there is general agreement that it is only remotely related to the sounds and rhythms of the natural world.

Absence of external association makes music unique amongst the arts; but since music is closely linked with human emotions, it cannot be regarded as no more than a disembodied system of relationships between sounds.

Music has often been compared with mathematics; but, as G. H. Hardy pointed out, ‘Music can be used to stimulate mass emotion, while mathematics cannot.’

If music were merely a series of artificial constructs comparable with decorative visual patterns, it would induce a mild aesthetic pleasure, but nothing more. Yet music can penetrate the core of our physical being. It can make us weep, or give us intense pleasure. Music, like being in love, can temporarily transform our whole existence. But the links between the art of music and the reality of human emotions are difficult to define; so difficult that, as we shall see, many distinguished musicians have abandoned any attempt to do so, and have tried to persuade us that musical works consist of disembodied patterns of sound which have no connection with other forms of human experience.

Can music be related to the sounds made by other species? The most obviously ‘musical’ of such sounds are those found in bird-song. Birds employ both noises and tones in their singing; but the proportion of definable tones is often high enough for some people to rate some bird-songs as ‘music’. Bird-song has a number of different functions. By locating the singer, it both advertises a territory as desirable, and also acts as a warning to rivals. Birds in search of a mate sing more vigorously than those who are already mated, thus supporting Darwin’s notion that song was originally a sexual invitation. Bird-song is predominantly a male activity, dependent upon the production of the male sex hormone, testosterone, although duets between male and female occur in some species. Given sufficient testosterone, female birds who do not usually sing will master the same repertoire of songs as the males.

Charles Hartshorne, the American ornithologist and philosopher, claims that bird-song shows variation of both pitch and tempo: accelerando, crescendo, diminuendo, change of key, and variations on a theme. Some birds, like the Wood thrush Hylocichla mustelina, have a repertoire of as many as nine songs which can follow each other in a variety of different combinations. Hartshorne argues:

“Bird songs resemble human music both in the sound patterns and in the behavior setting. Songs illustrate the aesthetic mean between chaotic irregularity and monotonous regularity. The essential difference from human music is in the brief temporal span of the bird’s repeatable patterns, commonly three seconds or less, with an upper limit of about fifteen seconds. This limitation conforms to the concept of primitive musicality. Every simple musical device, even transposition and simultaneous harmony, occurs in bird music.”

He goes on to state that birds sing far more than is biologically necessary for the various forms of communication. He suggests that bird-song has partially escaped from practical usage to become an activity which is engaged in for its own sake: an expression of avian joie de vivre.

“Singing repels rival males, but only when nearby; and it attracts mates. It is persisted in without any obvious immediate result, and hence must be largely self-rewarding. It expresses no one limited emotional attitude and conveys more information than mere chirps or squeaks. In all these ways song functions like music.”

Other observers disagree, claiming that bird-song is so biologically demanding that it is unlikely to be produced unless it is serving some useful function.

Is it possible that human music originated from the imitation of bird-song?

Géza Révész, who was a professor of Psychology at the University of Amsterdam and a friend of Béla Bartók, dismisses this possibility on two counts. First, if human music really began in this way, we should be able to point to examples of music resembling birdsong in isolated pre-literate communities. Instead, we find complex rhythmic patterns bearing no resemblance to avian music. Second, bird-song is not easily imitated. Slowing down modern recordings of birdsongs has demonstrated that they are even more complicated than previously supposed; but one only has to listen to a thrush singing in the garden to realize that imitation of his song is technically difficult.

Liszt’s ‘Légende’ for solo piano, ‘St François d’Assise: La Prédication aux oiseaux’, manages to suggest the twittering of birds in ways which are both ingenious and musically convincing. I have heard a tape of American bird-song which persuasively suggests that Dvořák incorporated themes derived from it following his sojourn in the Czech community in Spillville, Iowa. Olivier Messiaen made more use of bird-song in his music than any other composer. But these are sophisticated, late developments in the history of music. It is probable that early man took very little notice of birdsong, since it bore scant relevance to his immediate concerns.

Lévi-Strauss affirms that music is in a special category compared with the other arts, and also agrees that bird-song cannot be the origin of human music.

“If, through lack of verisimilitude, we dismiss the whistling of the wind through the reeds of the Nile, which is referred to by Diodorus, we are left with little but bird song, Lucretius’ liquidas avium voces, that can serve as a natural model for music. Although ornithologists and acousticians agree about the musicality of the sounds uttered by birds, the gratuitous and unverifiable hypothesis of the existence of a genetic relation between bird song and music is hardly worth discussing.”

Stravinsky points out that natural sounds, like the murmur of the breeze in the trees, the rippling of a brook or the song of a bird, suggest music to us but are not themselves music: ‘I conclude that tonal elements become music only by virtue of their being organized, and that such organization presupposes a conscious human act.’

It is not surprising that Stravinsky emphasizes organization as the leading feature of music, since he himself was one of the most meticulous, orderly, and obsessionally neat composers in the history of music. But his emphatic statement is surely right. Bird-song has some elements of music in it, but, although variations upon inherited patterns occur, it is too obviously dependent upon in-built templates to be compared with human music.

In general, music bears so little resemblance to the sounds made by other species that some scholars regard it as an entirely separate phenomenon. This is the view of the ethnomusicologist John Blacking, who was, until his untimely death, Professor of Social Anthropology at the Queen’s University of Belfast, as well as being an accomplished musician.

“There is so much music in the world that it is reasonable to suppose that music, like language and possibly religion, is a species-specific trait of man. Essential physiological and cognitive processes that generate musical composition and performance may even be genetically inherited, and therefore present in almost every human being.”

If music is indeed species-specific, there might seem to be little point in comparing it with the sounds made by other species. But those who have studied the sounds made by subhuman primates, and who have discovered what functions these sounds serve, find interesting parallels with human music. Gelada monkeys produce a wide variety of sounds of different pitches which accompany all their social interactions. They also use many different rhythms, accents, and types of vocalization. The particular type of sound which an individual produces indicates his emotional state at the time and, in the longer term, aids the development of stable bonds between different individuals. When tensions between individuals exist, these can sometimes be resolved by synchronizing and coordinating vocal expressions.

“Human beings, like geladas, also use rhythm and melody to resolve emotional conflicts. This is perhaps the main social function served by group singing in people. Music is the ‘language’ of emotional and physiological arousal. A culturally agreed upon pattern of rhythm and melody, ie, a song, that is sung together, provides a shared form of emotion that, at least during the course of the song, carries along the participants so that they experience their bodies responding emotionally in very similar ways. This is the source of the feeling of solidarity and good will that comes with choral singing: people’s physiological arousals are in synchrony and in harmony, at least for a brief period. It seems possible that during the course of human evolution the use of rhythm and melody for the purposes of speaking sentences grew directly out of its use in choral singing. It also seems likely that geladas singing their sound sequences together synchronously and harmoniously also perhaps experience such a temporary physiological synchrony.”

We shall return to the subject of group arousal in the next chapter. Meanwhile, let us consider some other speculations about the origin of music.

One theory is that music developed from the lalling of infants. All infants babble, even if they are born deaf or blind. During the first year of life, babbling includes tones as well as approximations to words: the precursors of music and language cannot be separated. According to the Harvard psychologist Howard Gardner, who has conducted research into the musical development of small children:

“The first melodic fragments produced by children around the age of a year or fifteen months have no strong musical identity. Their undulating patterns, going up and down over a very brief interval or ambitus, are more reminiscent of waves than of particular pitch attacks. Indeed, a quantum leap, in an almost literal sense, occurs at about the age of a year and a half, when for the first time children can intentionally produce discrete pitches. It is as if diffuse babbling had been supplanted by stressed words.”

During the next year, children make habitual use of discrete pitches, chiefly using seconds, minor thirds, and major thirds. By the age of two or two and a half, children are beginning to notice and learn songs sung by others. Révész is quite sure that the lalling melodies produced by children in their second year are already conditioned by songs which they have picked up from the environment or by other music to which they have been exposed. If lalling melodies are in fact dependent upon musical input from the environment, it is obviously inadmissible to suggest that music itself developed from infant lalling.

Ellen Dissanayake, who teaches at the New School for Social Research in New York and who has lived in Sri Lanka, Nigeria, and Papua New Guinea, persuasively argues that music originated in the ritualized verbal exchanges which go on between mothers and babies during the first year of life. In this type of interchange, the most important components of language are those which are concerned with emotional expressiveness rather than with conveying factual information. Metre, rhythm, pitch, volume, lengthening of vowel sounds, tone of voice, and other variables are all characteristic of a type of utterance which has much in common with poetry. She writes:

“No matter how important lexico-grammatical meaning eventually becomes, the human brain is first organized or programmed to respond to emotional/intonational aspects of the human voice.”

Since infants in the womb react both to unstructured noise and to music with movements which their mothers can feel, it seems likely that auditory perception prompts the baby’s first realization that there is something beyond itself to which it is nevertheless related. After birth, vocal interchange between mother and infant continues to reinforce mutual attachment, although vision soon becomes equally important. The crooning, cooing tones and rhythms which most mothers use when addressing babies are initially more significant in cementing the relationship between them than the words which accompany these vocalizations. This type of communication continues throughout childhood.

If, for example, I play with a child of eighteen months who can only utter a few words, we can communicate in all kinds of ways which require no words at all. It is probable that both of us will make noises: we will chuckle, grunt, and make the kinds of sounds which accompany chasing and hiding games. We may establish, at least for the time being, a relationship which is quite intimate, but nothing which passes between us needs to be expressed in words. Moreover, although relationships between adults usually involve verbal interchange, they do not always do so. We can establish relationships with people who do not speak the same language, and our closest physical relationships need not make use of words, although they usually do so. Many people regard physical intimacy with another person as impossible to verbalize, as deeper than anything which words can convey.

Linguistic analysts distinguish prosodic features of speech from syntactic: stress, pitch, volume, emphasis, and any other features conveying emotional significance, as opposed to grammatical structure or literal meaning. There are many similarities between prosodic communication and music. Infants respond to the rhythm, pitch, intensity, and timbre of the mother’s voice; all of which are part of music.

Such elements are manifestly important in poetry, but they can also be important in prose. As a modern example, we can consider James Joyce’s experiments with the sound of words, which are particularly evident in his later works.

“But even in his earliest stories the meaning of a word did not necessarily depend on the object it denoted but on the sonority and intonation of the speaker’s voice; for even then Joyce addressed the listener rather than the reader.”

It will be recalled that Joyce had an excellent voice and considered becoming a professional singer. He described using the technical resources of music in writing the Sirens chapter of Ulysses. Joyce portrays Molly Bloom as comprehending the hurdygurdy boy without understanding a word of his language.

One popular Victorian notion was that music gradually developed from adult speech through a separation of the prosodic elements from the syntactic. William Pole wrote in The Philosophy of Music:

“The earliest forms of music probably arose out of the natural inflections of the voice in speaking. It would be very easy to sustain the sound of the voice on one particular note, and to follow this by another sustained note at a higher or lower pitch. This, however rude, would constitute music.

We may further easily conceive that several persons might be led to join in a rude chant of this kind. If one acted as leader, others, guided by the natural instinct of their ears, would imitate him, and thus we might get a combined unison song.”

Dr Pole’s original lectures, on which his book is based, were given in 1877, and bear the impress of their time, with frequent references to savages, barbarians, and the like. Although The Philosophy of Music is still useful, Pole shows little appreciation of the fact that music amongst pre-literate peoples might be as complex as our own.

Twenty years earlier, in 1857, Herbert Spencer had advanced a similar theory of the origins of music, which was published in Fraser’s Magazine: Spencer noted that when speech became emotional the sounds produced spanned a greater tonal range and thus came closer to music. He therefore proposed that the sounds of excited speech became gradually uncoupled from the words which accompanied them, and so came to exist as separate sound entities, forming a ‘language’ of their own.

Darwin came to an opposite conclusion. He supposed that music preceded speech and arose as an elaboration of mating calls. He observed that male animals which possess a vocal apparatus generally use their voices most when under the influence of sexual feelings. A sound which was originally used to attract the attention of a potential mate might gradually be modified, elaborated, and intensified.

“The suspicion does not appear improbable that the progenitors of man, either the males or the females, or both sexes, before they had acquired the power of expressing their mutual love in articulate language, endeavoured to charm each other with musical notes and rhythm. The impassioned orator, bard, or musician, when with his various tones and cadences he excites the strongest emotions in his hearers, little suspects that he uses the same means by which, at an extremely remote period, his half-human ancestors aroused each other’s ardent passions during their mutual courtship and rivalry.”

*

from

Music and the Mind

by Anthony Storr

get it at Amazon.com

NZ is balancing a mortgage debt time bomb. Will it blow? – Liam Dann.

We kid ourselves we’re wealthier because of capital gains on our homes but in reality our collective balance sheet is looking worse than ever.

Last week I wrote about the world’s total debt hitting a record $230 trillion.

That’s a big pile of money. The rate at which it has been growing worries the International Monetary Fund which tallied it up. The IMF fears it could be a trigger for the next financial crisis.

Most of last week’s column got side-tracked by government debt and the debate about whether ours can afford to borrow more. ANZ economists made a good case for doing that.

As expected, Finance Minister Grant Robertson ruled it out last week, reiterating his pre-election commitment to fiscal responsibility.

The Government’s target for net core Crown debt (20 per cent of GDP) makes us look very conservative; the US Government owes more than 80 per cent of that country’s GDP.

But, as numerous correspondents pointed out, it’s New Zealand’s private debt that is the real problem for this country.

We are up to our necks in it, and that creates a serious risk, particularly if interest rates rise rapidly, as they did before the global financial crisis (GFC) in 2008.

The Reserve Bank’s latest tally puts the total at $433.07 billion, a whopping 160 per cent of GDP. That includes mortgages, credit cards, business borrowing and agricultural debt.

It will come as no surprise that our overcooked housing market is to blame for a big rise in mortgage debt over the past decade. That sits at $247.37b, or 91 per cent of GDP.

It has risen by 57 per cent in the past decade. As house prices have soared, so has the amount Kiwis have to borrow to buy.
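
A quick way to see how these headline figures hang together is to back out the implied GDP from the quoted ratios. The sketch below, in Python, uses only the numbers cited in this column; the implied GDP and the decade-ago mortgage figure are derived for illustration, not quoted from the Reserve Bank.

# Back-of-the-envelope check of the debt ratios quoted above.
# All dollar figures are NZ$ billions; derived values are illustrative.
total_private_debt = 433.07    # Reserve Bank tally of private debt
total_debt_to_gdp = 1.60       # 160 per cent of GDP
mortgage_debt = 247.37         # housing component of that debt

implied_gdp = total_private_debt / total_debt_to_gdp
mortgage_to_gdp = mortgage_debt / implied_gdp

print(f"Implied GDP: ${implied_gdp:.0f}b")             # roughly $271b
print(f"Mortgage debt to GDP: {mortgage_to_gdp:.0%}")  # roughly 91 per cent

# A 57 per cent rise over the decade implies a starting point of about $158b.
print(f"Mortgage debt a decade ago: ${mortgage_debt / 1.57:.0f}b")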

We kid ourselves we’re wealthier because of capital gains on our homes but in reality our collective balance sheet is looking worse than ever.

This is no revelation, of course. To be fair, it is the issue that probably tops the Reserve Bank’s long list of things to worry about. It is one of the reasons the Bank moved to introduce tough loan-to-value ratio (LVR) restrictions between 2013 and 2016 as annual growth in mortgage lending neared 10 per cent (it peaked at 9.3 per cent in December 2016).

High private debt levels are one of the reasons the Government can’t afford to be reckless on the borrowing front.

New Zealand isn’t unique in this.

As the IMF pointed out, throughout the developed world we have seen debt mount rapidly in an environment of easy money and super-cheap credit, essentially due to the radical policies put in place by central banks to avoid total meltdown in the GFC.

The next crunch will come when we find out how serviceable that debt mountain is, when interest rates rise to more normal levels.
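
To make “serviceability” concrete, here is a minimal sketch using the standard amortising-loan repayment formula; the $500,000 loan, the 25-year term and the interest rates are hypothetical numbers chosen for illustration, not figures from this column.

# Illustrative only: how the repayment on a hypothetical amortising mortgage
# changes as interest rates rise. Loan size, term and rates are assumptions.
def monthly_payment(principal, annual_rate, years):
    # Standard annuity formula: P * r / (1 - (1 + r)^-n), compounded monthly.
    r = annual_rate / 12
    n = years * 12
    return principal / n if r == 0 else principal * r / (1 - (1 + r) ** -n)

principal = 500_000  # hypothetical loan size
for rate in (0.045, 0.06, 0.075):
    print(f"{rate:.1%} -> ${monthly_payment(principal, rate, 25):,.0f} per month")

On those assumptions the repayment climbs from roughly $2,800 a month at 4.5 per cent to about $3,700 at 7.5 per cent, which is the kind of squeeze economists have in mind when they worry about serviceability.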

That process is under way now and it worries many economists. They see this as a time bomb. Some even predict another massive financial crisis coming our way.

I’m not going to argue this couldn’t happen. But I think it is important to keep the relative scale of the risk in perspective. Debt will almost certainly be at the centre of the next financial mess however it unfolds. But at a certain point that becomes about as meaningful as saying the next crisis will be caused by money.

Debt is effectively a form of currency that enables value transactions to take place in the future rather than just the present. Like money it works as long as there is confidence in the system that accounts for it and enforces payment.

So could the whole thing come tumbling down? Sure.

But let’s look at some reasons why it might not, at least anytime soon.

What’s happening with interest rates is not a shock for markets; in fact, it’s a slow, orderly process. New Zealand’s official cash rate is 1.75 per cent and it is not expected to go up for at least a year.

Mortgage rates could still rise because local banks need offshore funds to cover their lending costs.

But the proportion they need has fallen. Ten years ago, when the GFC hit, about 40 per cent of bank funding was sourced offshore. Now it’s less than 30 per cent.

We have learned and made some progress since the GFC.

There are plenty of headlines about US rates rising right now. Even then, the US Fed’s forecasts are for 2.9 per cent by the end of 2019 and 3.4 per cent by the end of 2020. That is hardly apocalyptic. In Europe they are still extremely low. Their forecasts suggest they’ll still be just 0.75 per cent by the end of 2020.

The other positive is that local house prices have flattened out without crashing. That has meant the annual rate of growth in the nation’s mortgage debt has stabilised at about 5.8 per cent.

If rates rise slowly and the growth in housing debt stays steady, if the Government pays down debt and if New Zealand keeps a top-grade credit rating, then we should be okay.

That is a lot of “ifs”.

It is not a formula likely to reassure many of the gloomier economy watchers.

But it’s about as much optimism as I can muster on the issue. The risks are real and this country can’t afford to relax about its private debt levels.

How to Be Alone – Sara Maitland.

What changed was that I got fascinated by silence; by what happens to the human spirit, to identity and personality when the talking stops, when you press the off-button, when you venture out into that enormous emptiness.

Sara Maitland: ‘My subconscious was cleverer than my conscious in choosing to live alone’

The author of How to Be Alone on the joys of solitude, Skyping and why having a dog isn’t really cheating…

How did you come to live the solitary life? Was it a sudden decision or did it evolve gradually?

I didn’t seek solitude, it sought me. It evolved gradually after my marriage broke down. I found myself living on my own in a small country village. At first I was miserable and cross. It took me between six months and a year before I noticed that I had become phenomenally happy. And this was about being alone not about being away from my husband. I found out, for instance, how much I liked being in my garden. My subconscious was cleverer than my conscious in choosing to live alone. The discovery about solitude was a surprise in waiting.

Yet isn’t writing a book such as How to Be Alone a way of communicating with others, of not being alone?

It is. Anthony Storr [author of Solitude: A Return to the Self] is right about companionship through writing and creative work. In my book about silence [A Book of Silence, 2008] I conclude that complete silence and writing are incompatible.

How would you distinguish between solitude and loneliness?

Solitude is a description of a fact: you are on your own. Loneliness is a negative emotional response to it. People think they will be lonely, and that is the problem; the expectation is also now a cultural assumption.

If someone has not chosen to be alone, is bereaved or divorced, do you think they can make solitude feel like a choice?

It is possible. That has been my autobiography. They need more knowledge about it, to read about the lives of solitaries who have enjoyed it, to take it on, see what is good in it. Since I wrote about silence, many bereaved people have written asking: how do I do it? The largest groups of people living alone are women over 65 and separated men in their 40s. A lot of solitude is not chosen. It may come to any of us.

Do you ever feel lonely?

Very seldom because I have good friends and there are telephones and Skype. But broadband was down for a week over Christmas. I couldn’t Skype the kids and did find myself asking: why didn’t I go to my brother who had warmly invited me?

So what was Christmas like on your own in rural Galloway?

It was bliss. On Christmas Eve the tiny village five miles away has a nativity play. Young adults come home, it’s a very happy event. On the day itself I drank a little bit more than I should have done sitting in front of my fire. I had a long walk. It was lovely…

How much do you use the internet and social media?

Social media not at all. But when broadband went I realised how excessively I use it. Without it, I read more. I’m making a big patchwork quilt. I did more that week than in the past three months. It made me realise I have got to get this online thing under control. When I first came here I had it switched off three days a week but that has slipped.

You seem to lead a non-materialistic life. What three things would you most hate to lose from your shepherd’s cottage?

Last Christmas my son gave me a dragon hoodie, bright green with pink spikes. I’d be sad to lose it. I’d hate to lose photos of my children. And I’d be seriously sad to lose Zoe, my border collie. I took her on because she got out of control in an urban community. She was seeking a wilder, freer life.

Yet in the book you suggest it’s cheating on the solitary life to have a dog when you walk…?

The pure soul probably doesn’t have a dog. I have a dog but no television.

You mention having suffered depression earlier in your life. Was this related to lack of solitude?

That is a correct reading, although I would not use it diagnostically. I’m deeply fond of my family but they put a high value on extroversion. I come from an enormous family and have spent a lot of time pretending I wasn’t introverted.

Yet deciding whether one is extrovert or introvert is not straightforward?

Everyone has a differing need for solitude. I feel we haven’t created space for children to find out what they need. I’ve never heard of being sent to your room as a reward. In my childhood I had a happy home; being alone was thought weird. I’d like people to be offered solitude as an ordinary thing.

Does being alone teach children to be alone?

Yes, just as talk is the teacher of talk.

You write: ‘Most of us have a dream of doing something in particular which we have never been able to find anyone to do with us. And the answer is simple really: do it yourself.’ What dream have you realised by yourself?

The one thing I really don’t like doing by myself is changing a double duvet… But I went up Merrick, the highest hill in the area, on my own a week after my mother died. A little voice kept saying: this is not safe, it is stupid. What happens if you break your ankle? What happens if you get lost? Doing it was a breakthrough. Another dream I am sad about. My brother and I used to sail a dinghy. He died and I wanted to sail alone. I went on a dinghy course only to discover I’m not physically strong enough to right the dinghy were it to tip over.

How does love fit into the solitary life?

How much loving are people doing if they’re socialising 24/7? And if the loving is only to be loved, what is unselfish about that? The fact you’re on your own does not mean you are not loving.

Your book is part of a self-help series. What book has helped you most?

What an interesting question. Lots of stuff. Anything good. I have just been reading Alan Garner’s phenomenally brilliant Boneland and A Voyage for Madmen [by Peter Nichols], an account of the people who sailed in the 1968 solo round-the-world race. They had the same circumstances: ill-equipped boats, not enough money, plenty of anxiety. Yet different people had different responses to the same thing. People are not righter or wronger, they’re different. I’ve struggled with this all my life and, God, it’s hard to grasp.

How to Be Alone

by Sara Maitland.

You have just started to read a book that claims, at least, to tell you how to be alone.

Why?

It is extremely easy to be alone; you do not need a book. Here are some suggestions:

Go into the bathroom; lock the door, take a shower. You are alone.

Get in your car and drive somewhere (or walk, jog, bicycle, even swim). You are alone.

Wake yourself in the middle of the night (you are of course completely and absolutely alone while you are asleep, even if you share your bed with someone else, but you are almost certainly not conscious of it, so let’s ignore that one for the moment); don’t turn your lights on; just sit in the dark. You are alone.

Now push it a bit. Think about doing something that you normally do in company: go to the cinema or a restaurant, take a walk in the country or even take a holiday abroad by yourself. Plan it out; the logistics are not difficult. You know how to do these things. You would be alone.

So what is the problem? Why are you reading this book?

And of course I do not know the answer. Not in your case, at least. But I can imagine some possible motives:

For some reason, good or bad, of which bereavement is perhaps the bitterest, your normal circle of not-aloneness has been broken up; you have to tackle unexpected isolation, you doubt your resources and are courageously trying to explore possible options. You will be a member of a fast-growing group: single-occupancy households in the UK have increased from 12 per cent of all households in 1961 to nearly 30 per cent in 2011.

Someone you thought you knew well has opted for more solitude, they have gone off alone to do something that excludes you, temporarily or for a longer period; you cannot really feel jealous, because it excludes everyone else too; you are a little worried about them; you cannot comprehend why they would do anything so weird or how they will manage. You want to understand.

You want to get something done, something that feels important to you. It is quite likely, in this case, that it is something creative. But you find it difficult to concentrate; constant interruptions, the demands of others, your own busy-ness and sociability, endless connections and contacts and conversations make it impossible to focus. You realize that you will not be able to pay proper attention unless you find some solitude, but you are not sure how this might work out for you.

You want to get something done, something that feels important to you and of its very nature has to be done alone (single-handed sailing, solo mountaineering and becoming a hermit are three common examples, but there are others). The solitude is secondary to you, but necessary, so you are looking for a briefing. This group is quite small, I think; most of the people who seriously want to do these sorts of things tend to be experienced and comfortable with a degree of aloneness before they become committed to their project.

You have come to the disagreeable awareness that you do not much like the people you are spending time with; yet you sense that you are somehow addicted to them, that it will be impossible to change; that any relationship, however impoverished, unsatisfying, lacking in value and meaning, is better than no relationship; is better than being alone. But you aren’t sure. You are worried by the very negative responses you get whenever you bring the subject up.

You are experiencing a growing ecological passion and love of nature. You want to get out there, and increasingly you want to get out there on your own. You are not sure why this new love seems to be pulling you away from sociability and are looking for explanations.

You are one of those courageous people who want to dare to live; and to do so believe you have to explore the depths of yourself, undistracted and unprotected by social conventions and norms. You agree with Richard Byrd, the US admiral and explorer, who explained why he went to spend the winter alone on the southern polar ice cap in 1934:

‘I wanted to go for experience’s sake; one man’s desire to know that kind of experience to the full . . . to be able to live exactly as I chose, obedient to no necessities but those imposed by wind and night and cold, and to no man’s laws but my own.’

You do not, of course, need to go all the way to Antarctica to achieve this, but you do need to go all the way into yourself. You feel that if you have not lived with yourself alone, you have not fully lived. You want to get some clues about what you might encounter in this solitary space.

You feel, and do not fully understand the feeling, that you are missing something. You have an inchoate, inarticulate, groping feeling that there is something else, something more, something that may be scary but may also be beautiful. You know that other people, across the centuries and from a wide range of cultures and countries, have found this something and they have usually found it alone, in solitude. You want it. Whatever it is.

You are reading this book not because you want to know how to be alone, which is perfectly easy as soon as you think about it, but because you want to know why you might want to be alone; why the whole subject fills you with both longing and deep unease. You want to know what is going on here.

But actually the most likely reason why you are reading this book (like most books) is curiosity: why would someone write this book?

And I can answer that question, so that is where I am going to begin.

I live alone. I have lived alone for over twenty years now. I do not just mean that I am single; I live in what might seem to many people to be ‘isolation’ rather than simply ‘solitude’. My home is in a region of Scotland with one of the lowest population densities in Europe, and I live in one of the emptiest parts of it: the average population density of the UK is 674 people per square mile (246 per square kilometre). In my valley, though, we have (on average) over three square miles each. The nearest shop is ten miles away, and the nearest supermarket over twenty. There is no mobile phone connection and very little through traffic uses the single-track road that runs a quarter of a mile below my house. Often I do not see another person all day. I love it.

But I have not always lived alone. I grew up in a big family, one of six children, very close together in age, and in lots of ways a bit like a litter of puppies. It was not a household much given to reflection or introversion; we were emotional, argumentative, warm, interactive. We did things together. I am still deeply and affectionately involved with all my siblings. I became a student in 1968 and was fully involved in all the excitement and hectic optimism of those years. Then I was married and had two children. I became a writer. I have friends; friendship remains one of the core values of my life. None of this looked like a life of solitude, nor a good preparation for living up a back road on a huge, austere Scottish moor.

What changed was that I got fascinated by silence; by what happens to the human spirit, to identity and personality when the talking stops, when you press the off-button, when you venture out into that enormous emptiness. I was interested in silence as a lost cultural phenomenon, as a thing of beauty and as a space that had been explored and used over and over again by different individuals, for different reasons and with wildly differing results. I began to use my own life as a sort of laboratory to test some ideas and to find out what it felt like. Almost to my surprise, I found I loved silence. It suited me. I got greedy for more. In my hunt for more silence, I found this valley and built a house here, on the ruins of an old shepherd’s cottage. I moved into it in 2007.

In 2008 I published a book about silence. A Book of Silence was always meant to be a ‘hybrid’ book: it is both a cultural history and a personal memoir and it uses the forms and conventions of both genres melded into a single narrative. But it turned out to be a hybrid in another way that I had not intended. Although it was meant to be about silence, it turned out to be also about solitude and there was extensive and, I now think, justifiable criticism of the fact that it never explicitly distinguished between the two.

Being silent and being alone were allowed to blur into each other in ways that were confusing to readers. For example, one of the things I looked at in A Book of Silence was the actual physical and mental effects of silence, ranging from a heightened sensory awareness (how good food tasted, how extreme the experiences of heat and cold were), through to some curious phenomena like voice-hearing and a profound loss of inhibition. These effects were both reported frequently by other people engaged in living silent lives and experienced by me personally in specific places like deserts or mountains. However, a number of commentators felt that these were not effects of silence per se, but of solitude, of being alone.

After the book was published I also began to get letters from readers wanting advice . . . and more often than I had anticipated, it was not advice on being silent but on being alone.

Some of this was because there are at least two separate meanings to the word silence. Even the Oxford English Dictionary gives two definitions which are mutually exclusive: silence is defined as both the absence of any sounds and the absence of language. For many people, often including me, ‘natural noises’ like wind and running water do not ‘break’ silence, while talking does. And somewhere in between is the emotional experience that human-made noises (aeroplanes overhead, cars on distant roads) do kill silence even where the same volume of natural sound does not.

But it was not just a question of definitions. I came to see that although for me silence and solitude were so closely connected that I had never really needed to distinguish them, they did not need to be, and for many people they were not by any means the same. The proof cases are the communities where people are silent together, like Trappist monasteries or Quaker meetings.

The bedrock of the Quaker way is the silent meeting for worship. We seek a communal gathered stillness, where we can be open to inspiration from the Spirit of God and find peace of mind, a renewed sense of purpose for living, and joy to wonder at God’s creation.

*

from

How to Be Alone

by Sara Maitland.

get it at Amazon.com

Scientific Facts About Mindfulness from a Recovered Ruminator – Ruby Wax.

The real reason I began to practise mindfulness seriously was because of the empirical evidence of what happens in the brain. It wasn’t good enough that mindfulness helped me deal with the depression or that it brought me calm in the storm; ever the sceptic, I demanded hard-core proof. It appeared I didn’t trust my own feelings as much as I did science.

There is so much data to show the practice doesn’t just ameliorate physical and emotional pain, it sharpens your concentration and focus and therefore gives you the edge when others are floundering in the mud. (If that’s what you’re after.)

Here is just some of the evidence that swung the jury in favour of mindfulness (for me):


Connection to Feelings

A number of studies have found mindfulness results in increased blood flow to the insula and an increased volume and density of grey matter. This is a crucial area that gives the ability to focus into your body, and connects you to your feelings, such as butterflies in your stomach, or a blow to the heart. Strengthening your insula enhances introspection, which is the key to mindfulness.

Insula


Self Control

Researchers found that, after just six 30-minute meditation sessions, increased blood flow to the anterior cingulate cortex strengthened connections to this area, which is crucial for controlling impulses. This may help explain why mindfulness is effective in helping with self-control, for example with addictions.

Cingulate Cortex


Counteracting High Anxiety

Researchers from Stanford found that after an eight week mindfulness course participants had less reactivity in their amygdala and reported feeling fewer negative emotions.

Amygdala


Quietening the Mind

The brain stem produces neurotransmitters which regulate attention, mood and sleep. Changes in this area may explain why meditators perform better on tests of attention, are less likely to suffer from anxiety and depression, and often have improved sleep patterns.

Brain Stem


Regulating Emotions

The hippocampus is involved in learning and memory and helps regulate reactivity to stress. Increased density of neurons in this area may help explain why meditators are more emotionally stable and less anxious.

Hippocampus


Regulating Thoughts

Changes in the cerebellum are likely to contribute to meditators’ increased ability to respond to life events in a positive way.

Cerebellum


Curbing Addictive Behaviour

The prefrontal cortex is involved with self regulation and decision making. Mindfulness has been found to increase blood flow to this area, which enhances self awareness and self control, helping you to make constructive choices and let go of harmful ones.

Prefrontal Cortex


Curbing OCD

PET scans were performed on 18 OCD patients before and after 10 weeks of mindfulness practice; none took medication and all had moderate to severe symptoms. PET scans after treatment showed that activity in the orbital frontal cortex had fallen dramatically, meaning the worry circuit was unwired. It was the first study to show that mindfulness-based cognitive therapy has the power to systematically change brain chemistry in a well-identified brain circuit. So intentionally making a mindful effort can alter brain function, inducing neuroplasticity: this was the first time it was established that mindfulness is a form of experience that promotes neuroplasticity.

Orbital Frontal Cortex


A Quicker Brain

Researchers from UCLA have found that meditators have stronger connections between different areas in the brain. This greater connectivity is not limited to specific regions but found across the brain at large. It also increases the ability to rapidly relay information from one area to the next giving you a quicker and more agile brain.

Training Your Brain, As Well As Your Body

A trained mind is physically different from an untrained mind. You can retain inner strength even though the world around you is frantic and chaotic. People are trying to find antidotes to suffering, so it’s time we started doing the obvious: training our brains as we do our bodies. Changing the way you think changes the chemicals in your brain. For example, the less you work out, the lower your level of acetylcholine, and the less you have of this chemical, the poorer your ability to pay attention. Even with age-related losses, almost every physical aspect of the brain can recover and new neurons can bloom.


More Positive Research on Mindfulness

Research from Harvard University suggests that we spend nearly 50% of our day mind-wandering, typically lost in negative thoughts about what might happen, or has already happened, to us. There is a mind-wandering network in the brain, which generates thoughts centred around ‘me’ and is focused in an area called the medial prefrontal cortex. Research has shown that when we practise mindfulness, activity in this ‘me’ centre decreases. Furthermore, it has been shown that when experienced practitioners’ minds do wander, monitoring areas (such as the lateral prefrontal cortex) become active to keep an eye on where the mind is going and, if necessary, bring attention back to the present, which results in less worrying and more living.

Medial Prefrontal Cortex


Researchers from the University of Montreal investigated the differences in how meditators and non-meditators experience pain and how this relates to brain structure. They found that the more experienced the meditators were, the thicker their anterior cingulate cortex and the lower their sensitivity to pain.


Researchers from Emory University found that the decline in cognitive abilities that typically occurs as we age, such as slower reaction times and speed of thinking, was not found in elderly meditators. Using fMRI, they also established that the physical thinning of grey matter that usually comes with ageing had actually been remarkably diminished.


Researchers from UCLA found that when people become aware of their anger and label it as ‘anger’ then the part of the brain that generates negative emotions, the amygdala, calms down. It’s almost as if once the emotional message has been delivered to the conscious mind it can quieten down a little.


Mindfulness activates the ‘rest and digest’ part of our nervous system, and increases blood flow to parts of our brains that help us regulate our emotions, such as the hippocampus, anterior cingulate cortex and the lateral parts of the prefrontal cortex. Our heart rate slows, our respiration slows and our blood pressure drops. A researcher from Harvard coined the term ‘relaxation response’ for the changes in the body that meditation evokes, basically the opposite of the ‘stress response’. While the stress response is extremely detrimental to the body, the relaxation response is extremely salutary and is probably at the root of the wide-ranging benefits mindfulness has been found to have, both mentally and physically.


Mindfulness and the Body

Researchers from the University of Wisconsin-Madison investigated the effects of mindfulness on immune system response. They gave participants a flu vaccination at the end of an eight-week course and found that the mindfulness group had a significantly stronger immune response compared with the others.


Scientists at UCLA found mindfulness to be extremely effective at maintaining the immune system of HIV sufferers. Over an eight-week period, the group who weren’t taught mindfulness had a 25% fall in their CD4 cells (the ‘brains’ of the immune system) whereas the group taught mindfulness maintained their levels.


Researchers from the University of California, Davis, found that improved psychological wellbeing fostered by meditation may reduce cellular ageing. People who live to more than 100 have been found to have more active telomerase, an enzyme involved in cell replication. The researchers found that the meditators had a 30% increase in this enzyme linked to longevity following a three-month retreat.

Telomerase


Skin disorders are a common symptom of stress. The University of Massachusetts taught mindfulness to psoriasis sufferers and found their skin problems cleared four times faster than those who weren’t taught the technique.


Researchers from the University of North Carolina have found mindfulness to be an effective method of treating irritable bowel syndrome. Over a period of eight weeks, participants either were taught mindfulness or they went to a support group. Three months later, they found that on a standard 500-point IBS symptom questionnaire, the support group’s score had dropped by 30 points. The mindfulness group’s score had fallen by more than 100 points.


Researchers from Emory University investigated whether training in compassion meditation could reduce physiological responses to stress. Participants were stressed by being requested to perform a public speaking task. The researchers found that the participants who had practised the most had the lowest physiological responses to stress, as measured by reduced pro-inflammatory cytokines and also reported the lowest levels of psychological distress.


Researchers investigated the physiological effects of an eight-week mindfulness programme on patients suffering from breast cancer and prostate cancer. In addition to the patients reporting reduced stress, they found significant reductions in physiological markers of stress, such as reduced cortisol levels, pro-inflammatory cytokines, heart rate and systolic blood pressure. A follow-up study a year later found these improvements had been maintained or enhanced further.


Mindfulness and Emotions

Researchers from the University of Massachusetts Medical School investigated the effects of an eight-week mindfulness course on generalized anxiety disorder. 90% of those taught the technique reported significant reductions in anxiety.


Studies from the University of Wisconsin suggest that meditators’ calmness is not a result of becoming emotionally numb; in fact, they may be able to experience emotions more fully. If asked to enter into a state of compassion, then played an emotionally evocative sound, such as a woman screaming, they showed increased activity in the emotional areas of the brain compared to novices. However, if asked to enter into a state of deep concentration, they showed reduced activity in the emotional areas of the brain compared with novices. The key is that they were better able to control their emotional reactions depending on the mental state they chose to be in.


Optimists and resilient people have been found to have more activity in the front of their brains (prefrontal cortex) on the left-hand side, whereas those more prone to rumination and anxiety have more on the right. Researchers from the University of Wisconsin found that after eight weeks of mindfulness practice, participants had been able to shift their baseline levels of activity towards left-hand activation. This suggests that mindfulness can help us change our baseline levels of happiness and optimism.


If you suffer from recurring depression, scientists suggest that mindfulness might be a way to keep you free from it. Researchers from Toronto and Exeter in the UK recently found that learning mindfulness, while tapering off anti-depressants, was as effective as remaining on medication.


Researchers from Stanford University have found that mindfulness can help with social anxiety by reducing reactivity in the amygdala, an area of the brain that is typically overactive in those with anxiety problems.


Researchers at the University of Manchester tested meditators’ response to pain, by heating their skin with a laser. They found that the more meditation the subject had done, the less they experienced pain. They also found that they had less neural activity in the anticipation of pain than controls, which is likely to be due to their increased ability to remain in the present rather than worry about the future.


A recent study from Wake Forest University found that just four daily 20-minute sessions of mindfulness training reduced pain sensitivity by 57%, an even greater reduction than that achieved by drugs such as morphine.


Numerous studies have found that mindfulness, on its own or in combination with medication, can be effective in dealing with addictive behaviours, from drug abuse through to binge eating. Recently, researchers from Yale School of Medicine found that mindfulness training of less than 20 minutes per day was more effective at helping smokers quit than the American Lung Association’s gold-standard treatment. Over a period of four weeks there was, on average, a 90% reduction in the number of cigarettes smoked, from 18 per day to two per day, with 35% of smokers quitting completely. When they checked four months later, over 30% had maintained abstinence.


Researchers investigated the impact of mindfulness on the psychological health of 90 cancer patients. After seven weeks of daily practice, the patients reported a 65% reduction in mood disturbances including depression, anxiety, anger and confusion. They also reported a 31% reduction in symptoms of stress and less stress-related heart and stomach pain.


Researchers from the University of California, San Diego investigated the impact of a four-week mindfulness programme on the psychological well-being of students, in comparison to a body relaxation technique. They found that both techniques reduced distress; however, mindfulness was more effective at developing positive states of mind and at reducing distractive and ruminative thoughts. This research suggests that training the mind with mindfulness delivers benefits over and above simple relaxation.


Mindfulness and Thoughts/Cognition

Researchers from Wake Forest University investigated how four 20-minute sessions of mindfulness practice could affect critical cognitive abilities. They found that the mindfulness practitioners were significantly better than the control group at maintaining their attention and performed especially well at stressful tasks under time pressure. [This is another study demonstrating that significant benefits can be enjoyed from relatively little practice.]


Researchers from the University of Pennsylvania wanted to investigate how mindfulness could help improve thinking in the face of stress. So, they taught it to marines prior to their deployment in Iraq. In cognitive tests, they found that the marines who practised for more than 10 minutes a day managed to maintain their mental abilities in spite of a stressful deployment period, whereas the control group and those practising less than 10 minutes could not.


Researchers from UCLA conducted a pilot to investigate the effectiveness of an eight-week mindfulness course for adults and adolescents with ADHD. Over 75% of the participants reported a reduction in their total ADHD symptoms, with about a third reporting clinically significant reductions in their symptoms of more than 30%.


Researchers conducted a pilot study to investigate the efficacy of mindfulness in treating OCD. Sixty per cent of the participants experienced clinically significant reductions in their symptoms of over 30%. The researchers suggest that the increased ability to ‘let go’ of thoughts and feelings helps stop the negative rumination process that is so prevalent in OCD.


I hope the above has not put you to sleep, but for me it makes me feel I’m in well-researched hands. If it’s good enough for Harvard, UCLA, the University of Pennsylvania, Yale School of Medicine and Stanford, it’s good enough for me.

Modern Monetary Theory. Public Broadcasters put out economic misinformation – Bill Mitchell.

On May 1, 2018, the Australian Broadcasting Commission (ABC) published this travesty – Federal budget jargon buster: Work through the waffle – trying, no doubt, to give the impression that they are clever and the average citizen is dumb – read their introduction to see that.

Some of the entries are factual such as what ‘bracket creep’ is in a progressive tax structure.

Then you get to the “Jargon buster” entry for “Debt vs. deficit” and we read that:

Politicians of all stripes are fond of comparing the budget to a family’s finances, but this often leads to confusion.

Not confusion: plain and simple deception, which creates a fictional world.

There is no comparison between the fiscal balance of a national, currency-issuing government and a household.

Households use the currency and must finance their spending. A sovereign government issues the currency and must spend first before it can subsequently tax or borrow.

A currency-issuing government can never be revenue-constrained in a technical sense and can sustain deficits indefinitely without solvency risk.

Our own personal budget experience generates no knowledge relevant to consideration of government matters.

This is related to the claims that governments have to ‘live within their means’.

Conservative politicians often claim the government will run out of money if it does not curb spending. They attempt to give this statement authority by appealing to our intuition and experience – that is, they draw on the household budget analogy, and claim that governments, like households, have to live within their means.

This analogy resonates strongly with voters because it is easily relatable: we intuitively understand that, as individuals, we cannot indefinitely live beyond our means.

A currency-issuing government has no intrinsic financial constraints: government will never run out of money to build a hospital or pay health professionals, but the materials to build the facility and the skilled workers to run it may not be available.

Fiscal space is thus more accurately defined as the real goods and services available for sale in the currency of issue. These are the ‘means’ available to the government to fulfil its socio-economic charter.

The currency-issuing government can always purchase whatever is for sale in its own currency.

The ABC then claim that:

When a politician says they are balancing the books and returning the budget to surplus, it gives the impression that they are clearing the government’s debt.

The reality is that the deficit is just the amount of money the government spends beyond what it receives in a financial year.

Just because you return the budget to surplus does not mean that the debt incurred by the previous deficits disappears.

Yes it does: as soon as each debt instrument reaches maturity, it automatically disappears.

Public debt is issued with a set maturity (a certain date on which it is paid back), irrespective of what happens to it after it is issued in the primary market.

Eventually the holder of the debt, at the time of maturity, will get their cash.

Public debt is the stock that relates to the flow of expenditure that is the deficit (the net outcome of two flows – spending and taxation).

If the government is running a fiscal surplus it is taking more out than it is putting in, which squeezes the non-government sector for liquidity, and forces that sector to shed financial wealth.
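
A minimal sketch of that stock-flow accounting, with numbers invented purely for illustration: the fiscal balance is the flow (spending minus taxation) in each period, the public debt is the accumulated stock, and a surplus drains net financial assets from the non-government sector by the same amount.

# Toy stock-flow illustration; all numbers are invented for the example.
# flow: fiscal balance = government spending - taxation in each period
# stock: public debt = the running sum of past fiscal balances
spending = [100, 100, 100, 90]   # government spending per period
taxes    = [ 90,  95, 100, 95]   # taxation per period

debt = 0.0                # opening stock of public debt
non_gov_assets = 0.0      # net financial assets of the non-government sector

for period, (g, t) in enumerate(zip(spending, taxes), start=1):
    balance = g - t       # positive = deficit, negative = surplus
    debt += balance       # the stock simply records the history of the flows
    non_gov_assets += balance   # a surplus drains the non-government sector
    print(f"Period {period}: balance {balance:+}, debt stock {debt:.0f}, "
          f"non-government net financial assets {non_gov_assets:.0f}")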

Of course, as I have previously noted, there is no sense in which the debt issuance can be related to an increase in the spending capacity of the currency-issuing government.

The Australian public have probably forgotten by now but in 2002, the Federal government created a “Review of the Commonwealth Government Securities Market” as a result of the public debt market becoming very thin (not much for sale).

This situation arose because the Government had been retiring its net debt position as it was running fiscal surpluses. They came under pressure from the big financial market institutions (particularly the Sydney Futures Exchange) to continue issuing public debt despite the increasing surpluses.

The financial markets wanted the corporate welfare embodied in the public debt – it is risk-free, a perfect vehicle in which to park funds in uncertain times, and also a benchmark for pricing private financial assets that do carry risk.

At the time, the federal government was continually claiming that it was financially constrained and had to issue debt to ‘finance’ itself.

But, given the government was generating surpluses, it was clear that, according to this logic, debt issuance should have stopped.

The upshot was that the Government agreed to continue to issue debt even though it was running surpluses as a sop to the corrupt financial sector who needed their dose of corporate welfare.

The contradiction involved in this position was not evident in the debate although I did a lot of radio interviews trying to get the ridiculous nature of the discussion into the public arena.

The ABC article continued to ‘explain’ (not!) debt and deficits by citing current Australian government debt levels and claiming that:

… $317.2 billion sounds like (and is) a lot of money, by international standards Australia’s public debt is quite low, sitting at 18 per cent of GDP — which is the value of what Australia’s economy produces each year.
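The arithmetic behind that figure is simply the debt stock divided by nominal GDP; working backwards from the article’s own numbers (the implied GDP level is my back-of-envelope inference, not something the ABC stated):

\[
\frac{\text{public debt}}{\text{nominal GDP}} = 0.18
\;\Rightarrow\;
\text{nominal GDP} \approx \frac{\$317.2\ \text{billion}}{0.18} \approx \$1{,}760\ \text{billion}
\]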

They could have said that it was irrelevant: just the record of the non-government sector’s accumulation of net financial assets, the result of the government bestowing wealth on that sector via its fiscal deficits.

But, no, they wanted us to believe it is “a lot of money” as a warning.

And then, we read:

While government debt can pose problems, it is not quite the same as personal debt and can often serve a purpose.

And the wheels have fallen off well and truly by this time.

They might have actually dared to articulate what “problems” government debt can pose rather than just assert it.

I presume they might have tried to rehearse the tedious conservative claim that the sovereign issuer of currency is at risk of default if the public debt ratio rises above some threshold (often construed as 80 per cent).

But the reality is clear: as long as the government only issues debt in its own currency and provides no assurances about convertibility into another currency, the default risk is zero. It might for political reasons decide not to pay up, but that is a different matter altogether and highly unlikely.

And in their attempt to be ‘cute’ (“not quite the same”) the ABC journalists lie. Australian government debt (which carries no credit risk) is nothing like personal debt which carries credit risk.

And we know that government debt is non-government wealth whereas private debt reduces private wealth.

Further, while private debt allows the non-government sector to spend more than it earns for a time, public debt gives the government no extra spending capacity.
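The flow counterpart of that stock statement is the standard sectoral balances identity from the national accounts; a minimal statement (the symbols are the usual textbook ones, not the ABC’s):

\[
(S - I) + (T - G) + (M - X) = 0
\]

where S is private saving, I private investment, T taxes, G government spending, X exports and M imports. A government surplus (T greater than G) must be matched, dollar for dollar, by some combination of a smaller private domestic surplus or a larger external deficit, which is why sustained surpluses drain non-government financial wealth.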

This brings us to the other side of spending more than one earns – saving.

We know that a currency-issuing government does not save in its own currency. Such a notion is nonsensical.

Fiscal surpluses do not represent ‘public saving’ that can be stored up to fund future public expenditure.

Saving is an act of foregoing current spending to enhance future spending possibilities and applies to a financially constrained non-government entity, such as a household.

Fiscal surpluses provide no greater capacity to governments to meet future needs, nor do fiscal deficits erode that capacity.

The constraints on government spending are not financial but are defined by the real resources that are available for sale in the currency the government issues.

The ABC journalists continue the downhill slide:

Governments take on debt by issuing bonds to investors who are paid a return, with ratings agencies allotting a credit rating to government debt. A high credit rating, such as the one Australia has, allows governments to borrow at very low rates, with investors trading profit for security.

Tell that to Japan, which had its credit rating downgraded in the early 2000s to near-junk levels, and whose bond yields did not blip.

Ratings agencies are irrelevant. Investors know that debt from a currency-issuing government is risk free. And the government can always control yields on its debt at any maturity if it chooses.

Later, the ABC journalists have an entry for “Structural deficit”, and readers are informed:

… structural deficit refers to a situation where the current tax structures of a country will fail to cover the expenses under normal economic conditions.

Or should I say ‘disinformed’.

A structural deficit has a very clear meaning – it is the fiscal position that would arise given the current policy parameters (tax and spending) if the economy was at full employment.

Full capacity or full employment is not the same thing as “normal economic conditions”.

You can see how language slips and concepts are lost.

In fact, the concept ‘structural balance’ was previously referred to as the ‘Full Employment or High Employment Budget’ balance.

The change in nomenclature is very telling because it occurred over the period that neo-liberal governments began to abandon their commitments to maintaining full employment and instead decided to use unemployment as a policy tool to discipline inflation.

So a ‘Full Employment Budget’ would be balanced if total outlays and total revenue were equal when the economy was operating at total capacity. If the fiscal balance was in surplus at full capacity, then we would conclude that the discretionary structure of fiscal policy was contractionary and vice versa if the fiscal balance was in deficit at full capacity.
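As a rough illustration of the distinction, here is a minimal stylised sketch (the sensitivities and output gap figures are assumptions for illustration only, not official estimates): it strips an assumed cyclical component out of the recorded balance to recover the balance that would prevail at full employment.

```python
# Stylised example: all parameters are illustrative assumptions.

def structural_balance(actual_balance, output_gap, revenue_sensitivity=0.25,
                       spending_sensitivity=0.10):
    """Return the fiscal balance (% of GDP) that would arise at full employment.

    output_gap: actual output minus full-employment output, as % of potential GDP
                (negative when the economy is below full employment).
    The cyclical component is the part of the balance driven by the gap:
    weaker activity lowers tax revenue and raises cyclical spending (e.g. benefits).
    """
    cyclical = (revenue_sensitivity + spending_sensitivity) * output_gap
    return actual_balance - cyclical

# Economy assumed 3% below full employment, recorded deficit of 2% of GDP:
print(structural_balance(actual_balance=-2.0, output_gap=-3.0))  # -> -0.95
```

On these assumed numbers, most of the recorded 2 per cent deficit is cyclical; the structural position is only what remains once the economy is evaluated at full employment.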

This is nothing like “under normal conditions”.

Further the use of the term “fail to cover” is ideologically loaded.

Finally, the ABC journalists claim that:

Governments commonly run deficits in times of economic downturn as a means of insulating the economy and ensuring services are not impacted, with the understanding that surpluses during peak times will help pay down the debt incurred by going into deficit.

The history of Australia (for as long as data have been available) and of other nations shows that governments run fiscal deficits as the norm.

Fiscal surpluses have been very rare events in our history – abnormal – and have corresponded with rising non-government sector indebtedness and recession soon after.

To offer a history in which governments oscillate between deficits and surpluses over the cycle is a revisionist exercise. They do not, even though they talk about doing so.

Ten million British jobs could be gone in 15 years. No one knows what happens next – John Harris.

The reality of automation is becoming clear, and it’s terrifying. So why is there so little thinking among politicians about those who will be affected?

Plenty of people may not have heard of the retail firm Shop Direct. Its roots go back to the distant heyday of catalogue shopping, and two giants of that era, Littlewoods and Great Universal Stores. Now it is the parent company behind the online fashion brand Very and the reinvented Littlewoods.com. All this may sound innocuous enough. But in two areas of Greater Manchester, Shop Direct is newly notorious.

Until now, what the modern corporate vernacular calls “fulfilment” (in other words, packing up people’s orders and seeing to returns) has been dealt with at three Shop Direct sites: in Chadderton and Shaw, near Oldham, and in Little Hulton, three miles south of Bolton. But the company now has plans to transfer all such tasks to a “fully automated”, 500,000 sq ft “distribution and returns centre” located in a logistics park in the east Midlands. The compulsory consultation period begins tomorrow, and the shopworkers’ union Usdaw and local politicians are up in arms: if it happens in full, the move will entail the loss of 1,177 full-time posts and 815 roles currently performed by agency workers; on the new site there will only be jobs for about 500 people.

At a time when apparently low unemployment figures blind people to the fragility and insecurity of so much work, the story is a compelling straw in the wind: probably the starkest example I have yet seen of this era of automation, and the disruption and pain it threatens.

Every time a self-service checkout trills and beeps and issues the deathly cry “unexpected item in the bagging area”, it points to the future. So does the increasing hollowing-out of British high streets. But much of the looming wave of automation in retail and so-called logistics is hidden in the increasingly vast distribution centres and the research and development departments of online giants. Inevitably, no one employed by these companies is comfortable talking about an imminent decline in human employment across the sectors in which they operate, and the accelerated growth of individual firms (exemplified by Amazon, which now employs 500,000 people around the world, up from 20,000 a decade ago) gives them plenty of cover. But the evidence is plain: the buying, selling and handling of goods is rapidly becoming less and less labour-intensive.

Carl Benedikt Frey, an automation specialist at the Oxford Martin School, says: “Retail is one industry in which employment is likely to vanish, as it has done in manufacturing, mining and agriculture.” In 2017 an exhaustive report he co-authored showed that 80% of jobs in “transportation, warehousing and logistics” are now susceptible to automation.

In both these connected fields, the replacement of human labour by technology is driven by the e-commerce that shifts retailing from town and city centres to the vast spaces where machines can take over all the picking and packing; and in the UK, shopping habits have obviously moved hugely in that direction. The extent to which this makes jobs vulnerable is obvious: the share of total British employment in the wholesale and retail trade towers over all other sectors of the economy, accounting for 15% of UK workers, as against only 7.5% in manufacturing.

Across the economy as a whole, a report last year by PricewaterhouseCoopers said that the jobs of more than 10 million workers will be at high risk from automation over the next 15 years.

Technology once considered expensive and exotic is already here. Around the world, Amazon uses about 100,000 “robot drives” to move goods around its distribution centres. The warehouse within the new John Lewis “campus” near Milton Keynes has been described as a “deserted hall of clacks and hums”. It employs 860 robots and keeps the human element of shifting stuff around to an absolute minimum. Last year Credit Suisse looked at the patents the online supermarket Ocado had recently filed and concluded that the company (which, among other things, is working on driverless delivery vehicles) was pushing into “an automated future” in which half of its staff could be gone within a decade (even now, it operates a much less labour-intensive service than its rivals).

And so to the really terrifying part. Given that it will be disproportionately concentrated on low wage, low skill employment, this wave of automation inevitably threatens to worsen inequality. There are whole swathes of Britain where, as employment in heavy industry receded, up went retail parks and distribution centres. In such places as south Wales and South Yorkshire, the result has been an almost surreal, circular economy in which people work for big retail businesses to earn money to spend in other retail businesses. What, you can only wonder, will happen next?

If we are to avoid the worst, radical plans will be necessary. The insecurity that automation brings will soon demand the comprehensive replacement of a benefits system that does almost nothing to encourage people to acquire new skills, and that works on the expectation that it can shove anyone who’s jobless into exactly the kind of work that’s under threat. In many places, the disruption of the job market will make the issue of dependable and secure housing (if not questions of basic subsistence) even more urgent. In both schools and adult education, the system will somehow have to pull as many people as it can away from the most susceptible parts of the economy and maximise the numbers skilled enough to work in its higher tiers, which, given the current awful state of technology and computing teaching, looks like a very big ask.

On the political right, there is no serious interest in anything resembling this kind of agenda: one of the many tragedies of Tory austerity is that it blitzed the chances of any meaningful response to automation just as its reality became clear, and Conservatives still seem either asleep or chained to the kind of laissez-faire Thatcherism that offers nothing beyond the usual insistence that the market is not to be bucked. Some of them should read an overlooked report by the independent Future of Work commission convened last year by the Labour deputy leader Tom Watson: it was spun with the Panglossian claim that “robots can set us free”, though a lot of its analysis was spot on. It contained bracing predictions about the future of such “low-skill, low-pay sectors” as transport, storage and retail, and warned that “Britain’s former industrial heartlands in the Midlands and north are more vulnerable because of a concentration of jobs in risk sectors. So, without policy intervention, the technological revolution is set to exacerbate regional inequalities.”

There are signs that such things are now on the radar of senior Labour politicians, as with Jeremy Corbyn’s acknowledgment last year that automation “could make so much of contemporary work redundant”. But beyond rather dreamy talk of “putting the ownership and control of the robots in the hands of those who work with them” and commendable plans for a national education service, not nearly enough flesh has been put on these bones, perhaps because the conversation about automation is often still stuck in the utopian wilds of a work-free, “post-capitalist” future.

What does the left have to say to all those Shop Direct workers in Shaw, Chadderton and Little Hulton? How will it allay the fears of the thousands of other people likely to be faced with the same experience?

Put another way, if the future is already here, what now?