
Music and empathy are psychological neighbours. Sources drawn on in this post:

* Empathetic people process music differently – Stephen Johnson
* Music, Empathy, and Cultural Understanding – Eric Clarke, Tia DeNora, Jonna Vuoskoski
* Current Disciplinary and Interdisciplinary Debates on Empathy – Eva-Maria Engelen, Birgitt Röttger-Rössler
* Neurophysiological Effects of Trait Empathy in Music Listening – Zachary Wallmark, Choi Deblieck, Marco Iacoboni
* Interpersonal Reactivity Index (IRI)

“The ‘other’ need not be a person: it can be music.” Clifton (1983)

There are two pathways to understanding each other: thinking (mind reading) and feeling (empathy).

Empirical investigations have shown that people who have a tendency to be more empathic experience more intense emotions in response to music.

Listening to sounds, even outside of a musical context, significantly activates empathy circuits in the brains of high empathy people. In particular, sounds trigger parts of the brain linked to emotional contagion, a phenomenon that occurs when one takes on the emotions of another.

Musical engagement can function as a mediated form of social encounter, even when listening by ourselves. Recent research has shown that trait empathy is linked to musical preferences and listening style. If we consider music through a social psychological lens, it is plausible that individuals with a greater dispositional capacity to empathize with others might also respond to music’s social stimulus differently on a neurophysiological level by preferentially engaging brain networks previously found to be involved in trait empathy.

Music can be conceived as a Virtual Social Agent… listening to music can be seen as a socializing activity in the sense that it may train the listener’s self in social attuning and empathic relationships. In short, musical experience and empathy are psychological neighbors.

Eisenberg et al. (1991) define empathy as “an emotional response that stems from another’s emotional state or condition and is congruent with the other’s emotional state or condition.”

Who or what do we empathize with when listening to music?

For some people music is able to represent a virtual person with whom to empathize, and whom they can experience as empathizing with their felt emotions. Studies that have investigated people’s reasons for listening to sad music when they already feel sad have found that some listeners can experience the music itself as providing empathy and understanding for the feelings that they are going through, functioning as a surrogate for an empathic friend.

Mirror neurons may be as much a consequence of a culture of inter-subjective engagement as they are a foundation for it.

It is quite conceivable that people who are inclined to imagine themselves from others’ perspectives also tend to take up the physical actions implied by others’ musical sounds, whether a smooth and gentle voice, a growled saxophone, or any other musical sound reflecting human actions.

It’s no surprise that our level of empathy shapes how we process social interactions with other people. But how might empathy affect the way we process music?

That’s the question addressed in a first-of-its-kind study published in Frontiers in Behavioral Neuroscience. The results showed that high-empathy people not only got more pleasure from listening to music, but also experienced more activity in brain regions associated with social interactions and rewards.

The implication is that empathy can make you interact with music as if it were a person, or a “virtual persona,” as described in a 2007 study:

“Music can be conceived as a virtual social agent… listening to music can be seen as a socializing activity in the sense that it may train the listener’s self in social attuning and empathic relationships.”

The researchers conducted two experiments to examine how empathy impacts the way we perceive music. In the first, 15 UCLA students listened to various sounds made by musical instruments, like a saxophone, while undergoing an fMRI scan.

Figure: Activation sites correlating with trait empathy (IRI subscales) in selected contrasts.

Some of the instrument sounds were distorted and noisy. The idea was that the brain might interpret these sounds as similar to the “signs of distress, pain, or aggression” that humans and animals emit in stressful scenarios, and these “cues may elicit heightened responses” among high-empathy people. Participants also completed the Interpersonal Reactivity Index, a self-report survey commonly used by scientists to measure one’s level of empathy.

The results confirmed what the team had hypothesized: listening to the sounds, even outside of a musical context, significantly activated empathy circuits in the brains of high-empathy people. In particular, the sounds triggered parts of the brain linked to emotional contagion, a phenomenon that occurs when one takes on the emotions of another.

But how does empathy affect the way we listen to a complete piece of music?

To find out, the researchers asked students to listen to music that they either liked or disliked, and which was either familiar or unfamiliar to them. They found that listening to familiar music triggered more activity in the dorsal striatum, a reward center in the brain, among high-empathy people, even when they listened to songs they said they hated.

Familiar music also activated parts of the lingual gyrus and occipital lobe, regions associated with visual processing, prompting the team to suggest that “empathic listeners may be more prone to visual imagery while listening to familiar music.”

In general, high-empathy people experienced more activity in brain regions associated with rewards and social interactions while listening to music than did low-empathy participants.

“This may indicate that music is being perceived weakly as a kind of social entity, as an imagined or virtual human presence,” said study author Zachary Wallmark, a professor at SMU Meadows School of the Arts. “If music was not related to how we process the social world, then we likely would have seen no significant difference in the brain activation between high-empathy and low-empathy people.”

We often conceptualize music as an abstract object for aesthetic contemplation, Wallmark said, but the new findings could help us reframe music as a way to connect to others, and to our evolutionary past.

“If music can function something like a virtual ‘other,’ then it might be capable of altering listeners’ views of real others, thus enabling it to play an ethically complex mediating role in the social discourse of music,” the team wrote.

Music, Empathy, and Cultural Understanding

Eric Clarke, Tia DeNora, and Jonna Vuoskoski

In the age of the internet and with the dramatic proliferation of mobile listening technologies, music has unprecedented global distribution and embeddedness in people’s lives. It is a source of intense experiences of both the most individual (personal stereos) and massively communal (large-scale live events, and global simulcasts) kind; and it increasingly brings together or exploits a huge range of cultures and histories, through developments in world music, sampling, the re-issue of historical recordings, and the explosion of informal and ‘bedroom’ music making that circulates via YouTube. For many people, involvement with music can be among the most powerful and potentially transforming experiences in their lives.

To what extent do these developments in music’s mediated and mediating presence facilitate contact and understanding, or perhaps division and distrust, between people? This project has pursued the idea that music affords insights into other consciousnesses and subjectivities, and that in doing so may have important potential for cultural understanding.

The project:

1) brings together and critically reviews a considerable body of research and scholarship, across disciplines ranging from the neuroscience and psychology of music to the sociology and anthropology of music, and cultural musicology, that has proposed or presented evidence for music’s power to promote empathy and social/cultural understanding through powerful affective, cognitive and social factors, and explores ways in which to connect and make sense of this disparate evidence (and counter-evidence);

2) reports the outcome of an empirical study that tests one aspect of those claims, demonstrating that ‘passive’ listening to the music of an unfamiliar culture can significantly change the cultural attitudes of listeners with high dispositional empathy.

Researchers and Project Partners

Eric Clarke, Faculty of Music, University of Oxford

Tia DeNora, Sociology, Philosophy & Anthropology, Exeter University

Jonna Vuoskoski, Faculty of Music, University of Oxford

Introduction

Music is a source of intense experiences of both the most individual (personal stereos, headphone listening) and massively communal (large-scale live events, and global simulcasts) kind; and it increasingly brings together or exploits an exceptional range of cultures and histories, through developments in ‘world music’, sampling, historical recording and hybridization. At a time when musicology and the social and cultural study of music have become far more circumspect about essentializing and romanticizing claims, it is still not uncommon to find claims being made for music as a ‘universal language’ that can overcome (or even transcend) cultural difference, break down barriers of ethnicity, age, social class, ability/disability, and physical and psychological wellbeing.

There are widespread symptoms of this belief or claim, including the activities of the West-Eastern Divan Orchestra (founded by Edward Said and Daniel Barenboim, to bring together Israeli and Palestinian musicians); and the appointment by UNICEF of classical musicians to act as ‘goodwill ambassadors’, bringing their music to people in deprived, war-torn, or disaster-hit parts of the world so as to offer emotional support, solidarity, and a kind of communion.

An extract from the UNICEF webpage about the violinist Maxim Vengerov, who in 1997 was the first classical musician to be appointed a goodwill ambassador, reads: “1997, September: For Maxim Vengerov’s first official undertaking with UNICEF, he organized a musical exchange with children from Opus 118, a violin group from East Harlem, New York. The children of Opus 118, aged 6 to 13, came from three different elementary schools in this inner-city neighbourhood. This innovative programme has spurred a whole generation to learn ‘violin culture’. Along with the youths, Mr. Vengerov not only played Bach but also southern blues and tunes such as ‘Summertime’ and ‘We Shall Overcome’.”

And from the same webpage, beneath a picture showing the violinist in jeans and T-shirt playing as he leads a line of children in the manner of a latter-day Pied Piper is the caption: “In the remote village of Baan Nong Mon Tha, children from the Karen hill tribe ethnic group follow Maxim Vengerov, in a human chain, to a school run by a UNICEF-assisted NGO. Thailand, 2000.”

Similarly, Live Aid (1985) and Live 8 (2005) were global pop music events intended not only to raise money (in the case of Live Aid) and popular pressure on politicians (in the case of Live 8) for the relief of famine and poverty, but also to galvanize a global consciousness and a united ‘voice’. As Bob Geldof, the prime mover of Live 8, put it: “These concerts are the start point for The Long Walk To Justice, the one way we can all make our voices heard in unison.”

And finally, the popular UK television series ‘The Choir’ (which has run to six series so far) documents the powerful ‘identity work’ and intense emotional experiences that accompany the formation of choirs in schools, workplaces, and military establishments out of groups of people who have had little or no previous formal musical experience, and who come from very varied walks of life (from bank executives to fire officers and military wives).

In all these very public examples of a much wider if less visible phenomenon, we see a complex mixture of implicit musical values, discourses about music’s ‘powers’, folk psychology and its sociological equivalent, and (in some cases) more or less grounded or unsupported claims about the impact of music on the brain (cf. Tame, 1984; Levitin, 2006). It would be easy to be hastily dismissive of some of these claims, but a considerable volume of research by highly regarded scientists and scholars, coming from disciplines that range from neuroscience and philosophy through psychology and sociology to anthropology and cultural studies, has also made a significant case for the capacity of music and musicking (Small, 1998) to effect personal and social change (e.g. Becker 2004; DeNora 2013; Gabrielsson 2011; Herbert 2011). If music can effect change, and speak across barriers, it can also offer a means of intercultural understanding and identity work.

As Cook (1998: 129) puts it:

“If both music and musicology are ways of creating meaning rather than just of representing it, then we can see music as a means of gaining insight into the cultural or historical other. If music can communicate across gender differences, it can do so across other barriers as well. One example is music therapy… But the most obvious example is the way we listen to the music of other cultures (or, perhaps even more significantly, the music of subcultures within our own broader culture). We do this not just for the good sounds, though there is that, but in order to gain some insight into those (sub)cultures. And if we use music as a means of insight into other cultures, then equally we can see it as a means of negotiating cultural identity.”

In different ways, these (and other) claims seem to make use of a generalized notion of empathy. Empathy has recently gained considerable currency in musicology, the psychology of music, the sociology of music, and ethnomusicology as a way to conceptualize a whole range of affiliative, identity-forming, and ‘self-fashioning’ capacities in relation to music. But what is brought together or meant by the term ‘empathy’, and is it a useful and coherent way to think about music in relation to its individual and social effects?

Our project, and this report, arise from the disparate nature of the evidence for the claims about music’s transformative power, individually and socially, and the ‘scattering’ of the case across theories and findings in a huge disciplinary range. From research on music and mirror neurons (Overy and Molnar-Szakacs 2009) to the ethnomusicology of affect (Stokes 2010), the history of musical subjectivity (Butt 2010), and sociological studies of music and collective action (Eyerman and Jamieson 1998), the case has been made for different perspectives on music’s capacity to afford compassionate and empathetic insight and affiliation, and its consequent power to change social behaviour.

These diverse research strands all point to the crucial role that musicking plays in people’s lives, to its transformational capacity, and to the insights that it can afford. There is no single window onto ‘what it is like to be human’, but musicking seems to offer as rich, diverse, and globally distributed a perspective as any, and one that engages people in a vast array of experiences located along dimensions of public and private, solitary and social, frenzied and reflective, technological and bodily, conceptual and immediate, calculated and improvised, instantaneous and timeless. The fact that music can be heard and experienced by large numbers of people simultaneously and in synchrony (orchestral concerts, stadium gigs, live simulcasts) means that the embodied experience of music can also be shared, fostering entrainment and a sense of co-subjectivity. Indeed, some theories of the evolutionary significance of music highlight the importance of music’s empathy-promoting aspects, suggesting that a fundamental adaptive characteristic of music is its capacity to promote group cohesion and affiliation (Cross & Morley, 2008).

While a whole range of studies has suggested that empathic interaction with other human beings is facilitated by musical engagement, the direct empirical evidence for this important possibility is scattered and disciplinarily disconnected. The aim of the project summarised in this report was to examine critically a substantial body of research evidence that relates to claims for music’s capacity to engender cultural understanding, primarily through the mediating construct of empathy; examine its consequences and significance, and provide a framework within which to connect its disparate elements and highlight points of interdisciplinary convergence and divergence; and carry out a focused empirical study that was designed to investigate a specific aspect of that complex case.

The report follows the general disciplinary outlines of the initial literature search, which revealed in excess of 300 items relating to the broad theme (‘Music, Empathy and Cultural Understanding’) of the project.

Empathy

The word empathy has had currency in English for little more than 100 years, listed by the Oxford English Dictionary as being first used by the psychologist Edward Titchener in 1909, and defined by the OED as:

“a. Psychol. and Aesthetics. The quality or power of projecting one’s personality into or mentally identifying oneself with an object of contemplation, and so fully understanding or appreciating it.

b. orig. Psychol. The ability to understand and appreciate another person’s feelings, experience, etc.”

Titchener’s ‘empathy’ was his attempt to translate the term Einfühlung, coined by the philosopher Robert Vischer (1873) in a book on visual aesthetics. But it was Theodor Lipps (1903) who really championed the concept of empathy, developing it from an essentially aesthetic category (the ability to ‘feel into’ an artwork) into a much more general psychological/philosophical concept to account for the human capacity to recognize one another as having minds. Laurence (2007) gives an important account of the origin and development of the idea of empathy, tracing a line back to Adam Smith’s (1759) The Theory of Moral Sentiments, and Smith’s appeal to a notion of sympathy and ‘fellow feeling’ as the basis for understanding and living a moral life that is based on imagining how we would feel in the circumstances of others. The distinction between imagining how we would feel and simply identifying with how another feels is crucial, since it places Smith’s notion of sympathy in the domain of imaginative reason rather than blind contagion, and makes clear the role of cultural artefacts (paintings, literature, drama, music) as a means of socially learning that sympathetic attitude.

Laurence also draws significantly on the work of Edith Stein (1917), a doctoral student of Edmund Husserl, whose On the Problem of Empathy also engages with the problem of how it is that we can know or experience the mental states of others, and whether this knowledge or experience is given in some direct and primordial sense; Stein’s conclusion is that empathy is dependent on the mediating role of similarity with the person (or even animal) with whom/which we attempt to empathize. Laurence ends up with a definition of empathy that emphasizes empathy as both a process, and as a social and educable achievement:

“In empathizing, we, while retaining fully the sense of our own distinct consciousness, enter actively and imaginatively into others’ inner states to understand how they experience their world and how they are feeling, reaching out to what we perceive as similar while accepting difference, and experiencing upon reflection our own resulting feelings, appropriate to our own situation as empathic observer, which may be virtually the same feelings or different but sympathetic to theirs, within a context in which we care to respect and acknowledge their human dignity and our shared humanity.” (Laurence 2007: 24)

Finally, and in significant contrast to Laurence, Baron-Cohen (2011) provides a wide-ranging account of empathy that explicitly presents it as a psychometrically measurable trait, with a genetic and environmental basis, distributed in a particular network of brain regions, and manifested in seven ‘degrees’ ranging from the zero degrees of empathy of the psychopath or autistic person to the six degrees of empathy of some ‘hyper-empathic’ individuals. Baron-Cohen regards empathy as a critically valuable human resource, and sees the erosion or loss of empathy as an issue of global importance that has the most serious consequences for social health at scales ranging from the family to international relations.

As this necessarily brief review has revealed, there is a significant range of perspectives on empathy, from which two distinctions in particular might be drawn. The first is the distinction between empathy as a skill or social achievement (acquired, educable, and in some sense fundamentally collective) and empathy as a trait (relatively fixed, individual, and with a genetic component). The second concerns the extent to which different perspectives emphasize the involuntary and inter-subjective character of empathy (sometimes expressed through the metaphor of contagion), involving identification with the other and a loss of self, as opposed to a more cognitive and deliberate view in which empathy depends upon an imaginative projection into the circumstances of the other (closer to what Smith called sympathy).

These differences in perspective affect the scope and reach of the term empathy, and are an issue to which we return towards the end of this report in the specific context of music.

Music and Empathy across Different Fields

This section critically reviews the existing literature on music and empathy under a number of different conceptual and disciplinary headings.

1. Neuroscience

An increasing body of neuroscientific evidence indicates the very close coupling of perceptual and motor functions in the central nervous system, strongly suggesting that one way to account for the human capacity to adopt the perspective of another (sometimes referred to as ‘theory of mind’ or even ‘mind reading’) is on the basis of the way in which a person’s experience of their own actions is entangled with their perception of the actions of others. At the level of brain anatomy, it has long been recognized that there are suggestive parallels between the organization of sensory and motor cortices of the human brain and this might provide at least superficial evidence for the close relationship between perception and action.

More recently, however, and particularly in the wake of the discovery of mirror neurons in the early 1990s (e.g. di Pellegrino, Fadiga, Fogassi, Gallese, & Rizzolatti, 1992), there has been a surge of interest in the ways in which perception-action relationships at the level of the central nervous system might provide a powerful way to explain a variety of intersubjective and empathic phenomena. Freedberg and Gallese (2007: 197) have argued that the activation of a variety of embodied neural mechanisms underlies a range of aesthetic responses, proposing that “a crucial element of esthetic response consists of the activation of embodied mechanisms encompassing the simulation of actions, emotions and corporeal sensation, and that these mechanisms are universal.” Freedberg and Gallese are primarily concerned with the embodied and empathic qualities of visual art, but Overy and co-authors (Molnar-Szakacs & Overy 2006; Overy & Molnar-Szakacs 2009; McGuiness & Overy 2011) have developed a persuasive model of how the embodied, emotive and empathic effects of music might be understood from a mirror neuron perspective.

(A mirror neuron is a neuron that fires both when an animal acts and when the animal observes the same action performed by another. Thus, the neuron “mirrors” the behavior of the other, as though the observer were itself acting. Such neurons have been directly observed in primate species; birds have been shown to have imitative resonance behaviors, and neurological evidence suggests the presence of some form of mirroring system. In humans, brain activity consistent with that of mirror neurons has been found in the premotor cortex, the supplementary motor area (SMA), the primary somatosensory cortex and the inferior parietal cortex.)

In simple terms, mirror neurons (or mirror systems as they are often called) are neurons in a motor area of the brain that become active when an individual merely observes an action of the kind that these neurons are usually responsible for controlling.

These ‘as if body loops’, as Damasio (1999) has called them, provide a direct identification with the actions of another, and constitute the fundamental building blocks of what Gallese (2001; 2003) has termed the ‘shared manifold’. The shared manifold is understood as a three-leveled mechanism for inter-subjective identification: i) a phenomenological level that is responsible for our sense of similarity with others, which Gallese equates with an expanded notion of empathy; ii) a functional level characterized by models of self-other interaction; and iii) a subpersonal level, instantiated by the activity of mirror-matching neural circuits (mirror neuron systems). The aim of the shared manifold hypothesis is to ground a sense of empathy and self-other identity without suggesting that human experience and neuroscience can simply be collapsed into one another: hence the distinction between phenomenological, functional and subpersonal levels.

Gallese is also at pains to point out that self-other identity is not all that there is to inter-subjectivity: mirror systems do not allow us to experience others exactly as we experience ourselves, since to do so would (ironically) preclude the possibility of experiencing others as such at all. Our capacity to experience an external reality with content and behaviours that we can understand is made possible by “the presence of other subjects that are intelligible, while preserving their alterity character.” (Gallese 2003: 177)

At times the mirror neuron idea has been presented as if it were a hardwired feature of the brain that acted rather like a magic bullet. But as Heyes (2010) has argued, while one way to see mirror neurons is as an evolutionary adaptation (and therefore present at the species level), an alternative is to see the development of mirror systems as acquired through the operation of associative processes through the lifetime of individuals. From this perspective, mirror processes originate in sensorimotor experience, much of which is obtained through interaction with others. Thus, the mirror neuron system is a product of social interaction, as well as a process that enables and sustains social interaction. One rather specific example of this kind of plasticity is the finding by Bangert et al. (2006) that trained pianists listening to the sound of piano music showed significantly more neural activity in the motor areas of their brains than did a matched group of non-musicians.

2. Perception-action coupling, Empathy and Embodiment

Mirror systems are one way to understand inter-subjective interaction and identity, with direct relevance to music, at a neural level.

At the behavioural level there is another extensive literature that has revealed the significance of mimicry and synchronization in mediating human relationships in general, and music in particular. In a review of the extensive literature, Chartrand and Dalton (2008; see also Chartrand & Bargh 1999) make the case for the importance of mimicry in social life, ranging from postural and facial to vocal and syntactic mimicry (people unconsciously mimicking one another’s accents and sentence structures), as both manifestations of existing social bonds and affiliations and as the means by which such social bonds may be established (e.g. Inzlicht, Gutsell & Legault, 2012). As Heyes (2011) has argued, such imitative behaviours may be automatic and insuppressible, and constitute a fundamental embodied basis for a critically important domain of human social interaction.

At a similarly general level, a number of authors (e.g. Valdesolo and DeSteno 2011) have demonstrated the power of synchronization in inducing altruistic and compassionate behaviours, this synchronization in many cases serving to entrain people’s behaviours upon one another.

With this general psychological literature in mind, it is easy to see that music powerfully affords these kinds of cooperative and affiliative engagements. Music has long been associated with socially coordinated work, worship and celebration, where its rhythmically entraining attributes and opportunities for controlled mimicry and complementation (such as in the ‘call and response’ character of many vernacular musical cultures) play a central role (e.g. Clayton, Sager and Will 2005).

Hove and Risen (2009) demonstrated with a tapping task that the degree of synchrony between individuals tapping together predicted how affiliated those individuals rated one another, and in a more directly musical context both Kirschner and Tomasello (2009) and Rabinowitch, Cross & Burnard (2012) have shown that over both shorter and longer timescales children involved in rhythmically synchronized music activities subsequently behaved more cooperatively and empathically than did children who were involved in an equivalent but not synchronized activity. Music is a powerfully multi-sensory, and particularly kinaesthetic (see Stuart 2012) phenomenon whose embodied character draws people into fluid and powerful social groups at a range of scales and degrees of (im)permanence, and in doing so helps to enact a kind of empathy.

3. Dispositional empathy and music

As discussed above, some authors (e.g. Baron-Cohen 2011) have understood empathy as a trait, arguing that since some people have a tendency to experience empathy more readily than others, being more or less empathic can be understood as a personality trait or a disposition.

In its broadest sense, dispositional empathy can be defined as an individual’s general responsiveness to the observed experiences of others, involving both perspective-taking capabilities or tendencies, and emotional reactivity (e.g., Davis, 1980).

Davis (1980) has suggested that dispositional empathy is a multidimensional construct comprising at least four components:

Perspective-taking (PT): the ability, as well as the tendency, to shift perspectives (i.e., to see and understand things from another’s point of view).

Fantasy: the tendency to identify oneself with fictional characters in books and films, for example.

Empathic Concern (EC): the tendency to experience feelings of compassion and concern for observed individuals.

Personal Distress: the individual’s own feelings of fear, apprehension and discomfort in response to the negative experiences of others.

Empathic Concern and Personal Distress are associated with the more emotional side of empathy.
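For concreteness, here is a minimal sketch of how dispositional empathy scores of this kind are typically computed from a self-report questionnaire such as the IRI: per-subscale sums of Likert-type item ratings, with some items reverse-keyed, plus a simple global index of the sort used as a covariate later in this report. The item-to-subscale assignments and reverse-keyed items below are placeholders for illustration only, not the published IRI scoring key.

# Illustrative scoring of IRI-style subscales (after Davis, 1980).
# NOTE: the item groupings and reverse-keyed items are placeholders,
# NOT the published scoring key; substitute the real key before any use.
from statistics import mean

SUBSCALES = {
    "Perspective-Taking": [3, 8, 11, 15, 21, 25, 28],   # placeholder items
    "Fantasy":            [1, 5, 7, 12, 16, 23, 26],    # placeholder items
    "Empathic Concern":   [2, 4, 9, 14, 18, 20, 22],    # placeholder items
    "Personal Distress":  [6, 10, 13, 17, 19, 24, 27],  # placeholder items
}
REVERSE_KEYED = {3, 7, 13, 18}  # placeholder reverse-keyed items

def score_iri(responses, scale_max=4):
    """responses: dict mapping item number (1..28) to a rating on a 0..scale_max scale."""
    def keyed(item):
        rating = responses[item]
        return scale_max - rating if item in REVERSE_KEYED else rating

    scores = {name: sum(keyed(i) for i in items) for name, items in SUBSCALES.items()}
    # A crude 'global' empathy index (e.g. for use as a covariate) can be
    # taken as the mean of the four subscale scores.
    scores["Global"] = mean(scores[name] for name in SUBSCALES)
    return scores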

Theories of music-induced emotions suggest that some form of empathy may be involved in the emotional responses induced by music (e.g., Scherer & Zentner, 2001; Juslin & Västfjäll, 2008; Livingstone & Thompson, 2009). The proposed mechanisms range from pre-conscious ‘motor resonance’ with musical features that resemble vocal and motor expression of emotion (Molnar-Szakacs & Overy, 2006; Livingstone & Thompson, 2009) and emotional contagion (Juslin & Västfjäll, 2008) to empathizing with emotions and notions that are construed in the listener’s imagination (e.g., Scherer & Zentner, 2001). Indeed, empirical investigations have shown that people who have a tendency to be more empathic experience more intense emotions in response to music (Vuoskoski & Eerola, 2012; Ladinig & Schellenberg, 2011), providing indirect evidence for the role of empathy in music-induced emotions.

As people with high dispositional empathy are more susceptible to emotional contagion in general (Doherty, 1997), it may be that highly empathic people also experience emotional contagion from music more readily (Vuoskoski & Eerola, 2012). A complementary explanation is that empathic people may be more likely to engage in some form of reflective empathy during music listening, involving visual or narrative imagery, for example (e.g., Vuoskoski & Eerola, 2012; 2013).

Dispositional empathy has been associated with music-induced sadness in particular, as highly empathic people have been found to experience more intense sadness after listening to sad instrumental music (Vuoskoski & Eerola, 2012).

Interestingly, empathic individuals also tend to enjoy sad music more than non-empathic individuals, suggesting that empathically experienced negative emotions such as sadness can be enjoyable in the context of music (Vuoskoski et al., 2012; Garrido & Schubert, 2011).

Similar findings have been made in the context of films, where the experience of empathic distress while watching a tragic film has been associated with greater enjoyment of the film (De Wied et al., 1994).

It is not yet known what the mechanisms behind such enjoyment are, although the portrayal of more positive themes such as friendship, love, and human perseverance often present in tragic films have been proposed as one potential source (De Wied et al., 1994). However, it is not clear whether this explanation could also apply in the context of music. Nevertheless, these findings do suggest that there is something inherently enjoyable in empathic engagement in an aesthetic context even when the experienced emotions could be nominally characterized as negative.

4. Music as a virtual person, music and subjectivity

People tend to describe music in terms of attributes commonly used to describe psychological attributes of people (Watt & Ash, 1998). Indeed, it has been suggested that music is capable of creating a ‘virtual person’ of sorts (Watt & Ash, 1998; Livingstone & Thompson, 2009). The musical expression of emotion bears a close resemblance to human vocal and motor expression of emotion, involving similar auditory and gestural cues (for a review, see Juslin & Laukka, 2003), and it has been proposed that listeners may respond to music as they would to the perceived emotional state of a conspecific (e.g., Livingstone & Thompson, 2009). However, music’s capacity to represent a virtual person seems to go beyond acoustic and gestural cues that resemble vocal and motor expression of emotion.

An example is provided by studies that have investigated people’s reasons for listening to sad music when they already feel sad. These studies have found that some listeners can experience the music itself as providing empathy and understanding for the feelings that they are going through, functioning as a surrogate for an empathic friend (Lee, Andrade & Palmer, 2013; Van den Tol & Edwards, 2013).

The participants in Van den Tol and Edwards’s study felt that:

“The music was empathizing with their circumstances and feelings, supporting them, making them feel understood, or making them feel less alone in the way they were feeling” (Van den Tol & Edwards, 2013, p. 14).

Thus it appears, at least for some people, that music is able to represent a virtual person with whom to empathize, and whom they can experience as empathizing with their felt emotions.

There has been considerable interest in the musicological literature in the relationship between music and human subjectivity (e.g. Cumming, 2000; McClary, 2004), pursuing the idea that music has attributes either of an idealized person, or of an idealized collection or community of people. Lawrence Kramer (e.g. 2001; 2003) has written extensively about music as the instantiation of a kind of imagined subjectivity not associated specifically with the composer, performers, or anyone else explicitly and literally engaged with the making of the music, nor simply as the mirror of a listener’s own subjectivity, but in a more abstracted and generic manner. Likewise, the philosopher and violinist Naomi Cumming, in a paper that focuses on the violin introduction to the aria ‘Erbarme Dich’ from J. S. Bach’s St. Matthew Passion, writes of how the listener does not just find her or his own subjectivity passively reflected back, but reconfigured:

“The pathos of Bach’s introduction, and its elevated style, are quite unmistakable, and recognition promotes empathy. Once involved with the unfolding of the phrase’s subjectivity, the listener does not, however, find a simple reflection of his or her own expectancies. The music forms the listener’s experience, and in its unique negotiation of the tension between striving and grief, it creates a knowledge of something that has been formerly unknown, something that asks to be integrated in the mind of the hearer.” (Cumming, 1997: 17)

And in a still more explicitly psychological manner, DeNora (2000; 2003; 2013) has written of the ways in which music acts as a technology that affords a listener the opportunity to structure and organize their identity in long-term ways, and as a way of managing their immediate emotional states and sense of identity. Writing of one of her informants, ‘Lucy’, DeNora points out how she (Lucy) uses music as a medium in which she can draw a connection between the musical material, her own identity, and a kind of social ideal. As Lucy herself expresses it, she ‘finds herself’, the ‘me in life’ within musical materials, in a manner that allows her to reflect on who she is and how she would like to be, a process that DeNora points out is not just private and individual:

“Viewed from the perspective of how music is used to regulate and constitute the self, ‘solitary and individualistic’ practices may be re-viewed as part of a fundamentally social process of self-structuration, the constitution and maintenance of self. In this sense then, the ostensibly private sphere of music use is part and parcel of the cultural constitution of subjectivity, part of how individuals are involved in constituting themselves as social agents.” (DeNora, 2000: 47-8)

Music and musicking, then, can be viewed as a rich environment in which more or less active participants (listeners and makers) can engage with the real and virtual subjectivities of other real and virtual participants, and in doing so come to experience (and perhaps increasingly understand) the cultural perspective that those others (real or virtual) inhabit. Music is in this way both a medium for empathic (and antagonistic) engagement with others, and an environment in which to explore and experiment with a range of more or less projected, fantasized and genuinely discovered subject positions.

5. Sociological Perspectives

Turning from the rather individualistic accounts that have dominated the previous sections, towards understandings of music and empathy that take an explicitly social stance, sociological perspectives that enhance understanding of empathic processes derive from what may be termed the ‘new sociology of art’ (de la Fuente 2007). This sub-disciplinary paradigm investigates aesthetic materials for the ways that they may be seen to frame, shape or otherwise have an impact in social life. It is linked in turn to perspectives within sociology that cluster around the so-called ‘strong’ program of cultural sociology (Alexander 2008), in which cultural materials are understood as active mediators of psycho-social and subjective processes and in which arts are not understood to be ‘about’ society or shaped ‘by’ society but rather ‘in’ society and constitutive of social relations (Hennion 2007).

These ‘new’ sociologies of art and culture are in turn linked to a ‘meso’ perspective in sociology devoted to groups of actors understood as networks of people, practices (conventions, operations, activities with histories of use) and things (Fine 2010). They focus on interaction orders (Fine 2012), or local actions that produce forms of ordering. The interaction order is the place where meanings are created, validated and reproduced in ways that travel to other networks.

Within this meso perspective there is no macro-micro divide since both macro and micro are mutually produced within scenes and settings of activity. The focus on this concerted activity in turn offers considerable scope for examining the question of just how cultural forms, including musical forms, actually enter into action and experience (DeNora 2003).

The impetus for these perspectives comes from various distinct but complementary developments in sociology since the middle 1980s that describe the ways that aesthetic and symbolic materials ‘anchor’ action (Swidler 2002) by presenting actors with orientational materials that can inform, focus and specify styles and trajectories of action in real time. The concern with how aesthetic materials ‘get into’ action (Acord and DeNora 2012) is one that has been associated with other developments in sociology, most notably the turn from a focus on the cognitive components of action, and models of social actors as calculating beings, to a focus on embodiment and feeling (Witkin 1994). These developments resonate well with, and are further illuminated by, developments in the philosophy of consciousness that begin with notions of the ‘extended mind’ (Clark and Chalmers 1998) and draw out that concept to embrace ‘the feeling body’ (Colombetti 2013) in which embodied conditions and sensations are understood both to take shape in relation to things outside of individuals and to inform cognitive appraisal.

Insofar as feeling and embodiment can be understood to take shape through encounters with aesthetic materials and can be understood to cultivate sensibilities or predispositions in favour of some social scenarios (and thus, contrarily, away from others), aesthetic materials have been highlighted within sociology as sources of social order. In this respect, the ‘new’ sociology of art harks back to Adam Smith’s ideas of sympathy and the capacity for fellow feeling discussed above, in which Smith suggests that the capacity for fellow feeling and being able to imagine the other is a lynchpin of mutual orientation and, thus, social stability. While Smith makes it clear that sympathy (the capacity to imagine the other) is not empathy (the capacity to feel what the other is feeling, literally to share their experience), Smith’s focus on the prerequisites for achieving sympathy highlights the importance of bodily processes. Specifically, Smith describes how, if sympathy is to be achieved, it is necessary for actors to moderate their passions (tamp down, or raise up, levels of intensity or ‘pitch’ as Smith calls it) so as to encourage mutual engagement through shared modalities of feeling (Smith 1759: I. I. 36-39). In this respect, Smith’s interest in mutual emotional calibration, understood as a prerequisite of mutual understanding, resonates with Alfred Schutz’s concept of attunement, understood as the prerequisite for ‘making music together’ (his example is the performance of a string quartet, used as a case in point of social action writ large), and with the need for mutual orientation, entrainment, calibration and the gestalt to which they give rise, namely shared feeling forms.

Classical sociology can, in short, be read as offering important leads for the study of empathy, understood as emotional and embodied mutual orientation, predisposition and preference, and in this sense it can also be read as offering an excellent basis for appreciating ‘art in action’ and the role of the arts in underwriting communicative action, or how we bind ourselves together in time, whether in conversation, with its prosodic and timing patterns (Scollon 1982), or more generally, as Trevarthen puts it, as ‘the dynamic sympathetic state of a human person that allows co-ordinated companionship to arise’ (cited in Ansdell et al 2010). As such the arts, and in the case of this project, music, can be conceptualised as offering materials for shaping up the feeling body from infancy to old age, in a wide range of roles and guises.

But if the arts and music more specifically ‘get into’ action, the question, as stated earlier, remains: how does this happen and can we trace that process? And in relation to empathy, this question can be posed in terms of how shared feeling states, sensibilities and predispositions come about, and how they can be cultivated and thus also how they may be more problematically controlled (Hesmondhalgh 2013; Born 2012; DeNora 2003). Within sociology the most fruitful paradigms have focused upon learning, mostly informal situated learning, among which the classic work on this topic is Howard S. Becker’s ‘Becoming a Marijuana User’ (1953).

Becker’s piece has been used by subsequent scholars to develop new (grounded) theories of how culture gets into action: from comparisons of how one learns to respond to musical ‘highs’ (Gomart and Hennion 1999), to how one learns to feel and respond sexually (Jackson and Scott 2007; DeNora 1997), how one learns to respond in various workplaces and forms of occupation (Pieslack 2009; DeNora 2013), and how one manages and modifies emotions and energy levels as part of everyday self-care (DeNora 2000; Batt-Rawden, DeNora and Ruud 2005; Skanland 2010) or in scene-specific settings such as retail outlets (DeNora 2000). Specifically, these studies have followed the ways that individuals and groups engage in processes of modelling, adjustment, tutoring, directing and attempted alignment with musical materials in ways that draw out emotional and embodied sensation and experience in musically guided ways. This work helps to highlight just how deeply culture can come to penetrate embodied processes and experiences, and thus dovetails with more recent work on the culturally mediated experience of health and wellbeing.

6. Music Therapeutic and Wellbeing Perspectives

The focus on music, health and wellbeing is a growing area (Koen et al 2008; MacDonald et al 2012; MacDonald 2013). It encompasses music therapeutic perspectives, community music, psychotherapeutic perspectives and more overtly medical applications as well as the history of medicine and healing. At the level of the individual, and in overtly medical contexts, research in these areas has documented music’s potential for the management of pain (Edwards 2005; Hanser 2009), anxiety (Drahota et al 2012), palliative care (Aasgaard 2002; Archie et al 2013; DeNora 2012), and immunology (Fancourt et al 2013; Chandra and Levitin 2013), all of which emphasise mind-body-culture interaction. At the broader level at which music connects with and can be seen to contribute to wellbeing, music has been described ecologically as part of salutogenic (health-promoting) space (DeNora 2013).

In all of this work, there are excellent resources for the study of empathy, in particular for investigating empathy understood as sensibility, perception and orientation as musically mediated. Specifically, the focus on the malleability of consciousness and self-perception (Clarke and Clarke 2011) points to a human capacity for entering into different modes of awareness, ones that are simultaneously sensitising (aesthetic) and desensitising (anaesthetic), and in so doing indicates the importance and power of the cultural technologies through which altered states can be achieved. The case of music and pain management illustrates many aspects of this theme.

As Hanser has described it, recent theoretical understandings of pain have moved toward a multi-dimensional conception of pain perception, one in which pain is not unmediated but rather comes to be experienced in relation to cultural and situated interventions, including music. In part, musical stimuli simply compete with neural pain messages. But more interestingly, music stimulates both oxytocin and embodied sympathetic responses (Grape 2002; Hurlemann et al 2010). Recent interdisciplinary perspectives highlight how the music, in tandem with other biographical and contextual factors, may lead a person in pain into alternative situations, ones in which she/he becomes sensitised to musically inspired associations and desensitised to the former situation of being in pain. Thus, music cannot necessarily address the cause of the pain but it can redirect the sensation of pain by capturing consciousness in ways that recalibrate it (DeNora 2013). So too, in the Bonny Method of Guided Imagery and Music (Bonde 2012) music may provide a grid or template against which knowledge-production (memory, self-and mutual understanding, historical accounts) can be elaborated and scaffolded in ways that can be used to diminish ‘negative’ emotions and associations, effectively recalibrating perception and, in this case, the self-perception of pain.

Methodologically, the music therapy index (Nordoff & Robbins 2007), a highly detailed real-time log of musical and para-musical action, can be used to display key or pivotal moments of musically instigated or musically guided action (movement, shifts in comportment, utterances). So too, the ‘musical event’ scheme can be used to track with some precision the ways in which the musical permeates the paramusical and vice versa, across time and, in keeping with the meso focus described earlier, across networks (DeNora 2003; Stige and Aarø 2012).

More generally, and in ways that draw music therapy and music and conflict resolution into dialogue, musical engagement may be used to transform psycho-social situations, again leading the actor or actors away from the perception of distressing features of body/environment and toward more positive features and scenarios, and in ways that may also contribute to hope, patience and general mental wellbeing (Ansdell et al 2010; Ansdell 2014) as well as broader forms of cross-cultural and interactional accord, linked to music and guided imagery (Jordanger 2007). Community Music Therapy has perhaps most notably described music’s role in the production of communitas, through joint improvisation and as a means of generating proto-social capital (Procter 2012).

Within the growing field of music and conflict transformation studies (Laurence 2007), a key theme has focused on the importance of shared practice and actual grass-roots (bottom-up) musicking as a prerequisite for enduring forms of change (Bergh 2011; 2010; Robertson 2010). In particular, as Bergh has described, if music is to contribute to enduringly altered practice, or altered consciousness of the other, that endurance requires continued and repeated practice, continued and repeated participation in musical activity. And as we have already indicated, music is by no means an unmitigated ‘good’ within the conflict transformation literature: as Bergh has observed (Bergh 2011), music can be and has been used to inculcate feelings of animosity, or for purposes of oppression and torture (Cusick 2008); and historically has been incorporated into military culture through drill, march music and, more recently, through psych-op motivational techniques (Gittoes 2004; Pieslack 2009). Indeed Laurence (2007: 33), even while writing of the potential for music in conflict resolution, argues that inculcating peaceful values is one of music’s rarest uses, and that “of music’s purposes, many and probably most, serve the ongoing ends of power relationships one way or another.”

7. Cross-cultural perspectives

The final category of literature that we consider in this report touches on the potentially vast question of cultural and cross-cultural understanding. Within the psychology of music there has been an interest in the relationship between possibly ‘universal’ and culturally specific aspects of musical communication dating back to the very beginnings of both the psychology of music and ethnomusicology in the work of Carl Stumpf (Stumpf & Trippett 1911/2012).

Among other more recent empirical studies, Balkwill, Thompson, & Matsunaga (2004) have shown that music can successfully communicate emotional meanings across different cultures, but ethnomusicologists, perhaps rightly suspicious of simplistic notions of inter-cultural communication, have pointed to issues of representation, and of the incommensurability of concepts (or in this case emotional meanings) across cultural contexts as factors that might undermine the validity of a naively empirical approach (Stock 2014). A number of authors have recently proposed the value of a ‘relational musicology’ that might tackle issues of inter-cultural understanding, including Cook (2012: 196) who argues for relational musicology as “a means of addressing key personal, social and cultural work that is accomplished by music in today’s world.”

One specific kind of ‘cultural work’ that has recently been addressed in ethnomusicology, and that is of direct relevance to this project, is the affective and social work that is accomplished by/within modern ‘sentimental’ cultures. Martin Stokes (2007; 2010) has provided vivid accounts of the emotional, intimate and affiliative character of contemporary sentimental musical cultures in Egypt and Turkey, and Butterworth (2014) has done so in relation to Peruvian huayno music, in which something very much like empathy (though Stokes relates it more directly to a ‘Smithian’, as in Adam Smith, notion of sympathy) is understood as a cultural construct or condition. In Stokes’s words (2010: 193), one might view “sentimentalism as a kind of civic project, a way of imagining affable relations of dependence on strangers in modern society.” This is a very different perspective on empathy, one that sees it as a social achievement rather than a personality trait; a collective skill, rather than the expression of a circuit of ten interconnected brain regions (cf. Baron-Cohen 2011). As Cook (2012) argues in relation to the relational understanding of musical methods from one domain applied to another (Schenkerian analysis and Chinese music; nineteenth-century Western transcriptions of Indian melodies that were presented as ‘authentic Hindostannie airs’), such encounters, conceived within an appropriate relational conceptual framework, offer a domain of shared experience and cross-cultural understanding.

Figure 1. Mean IAT scores (D-values) ±standard error of the mean, grouped by condition. Positive D-values indicate an unconscious preference for West African (relative to Indian) people, and negative values indicate an unconscious preference for Indian (relative to West African) people.
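For readers unfamiliar with IAT scoring, the sketch below shows the general shape of a D-value computation: the difference between mean response latencies in the two critical pairing conditions, divided by the standard deviation of all latencies. It follows the widely used ‘improved’ scoring approach in outline only; the report does not specify its exact scoring pipeline, so the function name, inputs, and sign convention are illustrative assumptions.

# Sketch of an IAT D-value of the kind plotted in Figures 1 and 2.
# Follows the general logic of the widely used "improved" IAT scoring
# (mean latency difference between critical blocks / SD of all latencies);
# illustrative only, not the project's actual analysis code.
from statistics import mean, stdev

def iat_d_value(block_a_rts, block_b_rts):
    """block_a_rts / block_b_rts: response latencies (ms) from the two critical
    pairing conditions. The sign of the result depends on which pairing is
    passed as block A; in the figure captions above, positive D-values were
    defined as an unconscious preference for West African relative to Indian people."""
    all_rts = list(block_a_rts) + list(block_b_rts)
    return (mean(block_b_rts) - mean(block_a_rts)) / stdev(all_rts)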

To investigate the effect of listening to Indian vs. West African music on participants’ D-values, as well as the hypothesized moderating effect of dispositional empathy on the effects of music, we conducted an ANCOVA with the Type of Music (Indian or West African) as a factor, and Dispositional Empathy (global IRI scores) as a covariate. We also included an interaction term of Type of Music and Dispositional Empathy in the model. There was no significant main effect of Type of Music, F(1,54) = 2.59, p = .11, although the trend was in the anticipated direction, with participants exposed to Indian music displaying a slight preference for Indian (relative to West African) people, and participants exposed to West African music displaying no apparent preference.

The mean D-values of the two groups are displayed in Figure 1. As might have been expected, Dispositional Empathy was not significantly related to IAT scores when examined across the two conditions, F(1,54) = 0.20, p = .89. However, there was a significant interaction between Type of Music and Dispositional Empathy, F(1,54) = 5.51, p = .023, partial η² = .09, suggesting that dispositional empathy indeed moderated participants’ susceptibility to the musical manipulations. The relationship between dispositional empathy and D-values in the two groups is displayed in Figure 2. We also investigated the potential contributions of musical training, sex, and subjective responses to the music (ratings of liking and felt emotional impact) to the D-values, but no statistically significant relationships were found.
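The model just described is an ordinary least-squares ANCOVA with one factor, one covariate, and their interaction. The following is a minimal sketch of such an analysis using Python’s statsmodels; the data file and column names (‘d_value’, ‘music’, ‘empathy’) are hypothetical stand-ins, not the project’s actual materials.

# Minimal sketch of the ANCOVA described above: IAT D-value predicted by
# Type of Music (factor), Dispositional Empathy (covariate), and their interaction.
# File name and column names are hypothetical; this is not the project's own code.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("iat_music_data.csv")  # columns: d_value, music, empathy (assumed)

model = smf.ols("d_value ~ C(music) * empathy", data=df).fit()

# The ANOVA table reports F and p for the main effects and the interaction,
# analogous to the F(1, 54) statistics quoted in the text.
print(sm.stats.anova_lm(model, typ=2))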

Figure 2. The relationship between dispositional empathy and IAT scores (D-values), grouped by condition. Positive D-values indicate an unconscious preference for West African (relative to Indian) people, and negative values indicate an unconscious preference for Indian (relative to West African) people.

Conclusions

The empirical study has provided preliminary evidence for the hypothesis that listening to music without any explicit semantic content (such as comprehensible lyrics) can evoke empathy and affiliation in listeners with high dispositional empathy. This interpretation is supported by the significant interaction between Type of Music and Dispositional Empathy, which revealed that people with high dispositional empathy scores were more likely to display an unconscious preference for the ethnic group to whose music they were exposed than those with low dispositional empathy scores. The fact that high dispositional empathy made participants more susceptible to the musical manipulations suggests that the observed findings cannot be explained in terms of priming or knowledge activation effects, such as those observed in the case of background music and purchasing decisions (e.g. North, Hargreaves & McKendrick, 1999). The lack of a statistically significant relationship between the IAT scores and liking ratings also indicates that our findings cannot be accounted for by a simple preference effect (cf., Nantais & Schellenberg, 1999).

Instead, we propose that the more empathic participants may have been more open to the music and more likely to entrain with it, involving internal mimicry and emotional contagion; they may also have been more likely to engage in reflective empathy, in the form of visual and/or narrative imagery and/or semantic elaboration. In the context of music, entrainment comprises both temporal and affective components (see e.g. Phillips-Silver & Keller, 2012), and in general imitation and entrainment have been found both to reflect and to elicit affiliation (Chartrand & Bargh 1999; Hove & Risen 2009). Since people with high dispositional empathy have been found to exhibit stronger motor and sensory resonance to observed actions and the pain of others (Gazzola, Aziz-Zadeh & Keysers, 2006; Avenanti et al., 2008), it is possible that empathic people are also more likely to resonate with the acoustic and gestural features of music. This stronger resonance could explain why empathic individuals are more susceptible to emotional contagion from music (cf. Vuoskoski & Eerola, 2012), and why they also appear to be more sensitive to the affiliation-inducing effects of music listening.

However, further investigation is required in order to better understand the phenomenon, and to distinguish between the potential contributions of pre-reflective motor and affective resonance and the more reflective empathy involving imagery, perspective-taking, and other extra-musical associations. As dispositional empathy comprises both emotional reactivity and cognitive perspective-taking attributes, either or both of these components may contribute to the observed affiliation-inducing effects of music listening. A possible way to investigate this would be to implement a distractor task during the music listening that limits participants’ capacity to conjure up imagery and other extra-musical associations. Furthermore, the failure to find a statistically significant main effect of Type of Music on participants’ implicit associations could be due either to the fact that the variation in participants’ pre-existing preferences for Indian vs. West African people was too great in relation to our sample size, or to the possibility that the participants with low dispositional empathy were simply not affected by the music.

Future studies could attempt to investigate this issue by implementing pre- as well as post-manipulation measures of implicit associations, although there may be other, more problematic issues associated with exposing participants to Indian and West African images prior to the musical manipulations.

5. General Discussion, Implications, Prospects

The result of our empirical study provides some evidence for the capacity of music, even when encountered in arguably the most passive circumstances (solitary headphone listening in a ‘laboratory’ setting), to positively influence people’s unconscious attitudes towards cultural others. Specifically, people with higher dispositional empathy scores showed more differentiated associations with images of people from the two cultural groups, favouring the group to whose music they had been exposed, than did people with lower dispositional empathy scores.

This is a striking result, and provides what might be characterized as narrow but ‘hard-nosed’ evidence for music’s positive inter-cultural potential, and we have speculated on the broad psychological mechanisms (including entrainment, mimicry, emotional contagion, and semantic elaboration) that may be responsible.

But a number of notes of caution also need to be sounded. We have no evidence for the robustness or duration of the effects that we have observed: it may be that this is a very temporary shift that is easily disrupted, casting doubt on the practical efficacy of music as an agent of change in cultural understanding. And in the light of the interaction with dispositional empathy, the result suggests that any practical efficacy might be confined to those individuals who are already predisposed to be empathic towards others: arguably those people who are (to put it simplistically) the least urgent cases.

Are we then forced to conclude that music has little or no power to change attitudes among those people who are most resistant? Perhaps more seriously, music, as we have already indicated, is arguably as capable of distinguishing (Bourdieu 1984/1979), dividing and alienating people as it is of bringing them together. Hesmondhalgh (2013: 85) points out that “Music can reinforce defensive and even aggressive forms of identity that narrow down opportunities for flourishing in the lives of those individuals who adhere to such forms of identification”, and provides a vivid anecdotal example of just such a defensive/aggressive encounter with or through music. He describes a Friday night out with friends at a pub where an Elvis impersonator is performing. Having at first dreaded the performance, Hesmondhalgh and his friends, along with a large number of strangers who are also in the pub for a night out, are quickly won over and join with one another, and the performer, with increasing intensity. The chorus of the final song “elicits an ecstasy of collective singing, women and men, all at the top of our voices. There are smiles and laughter, but there’s melancholy too. It seems that bittersweet lines from the Elvis repertory are invoking thoughts about relationships, past and present… [We] stagger out of the pub feeling we’ve had a great night, and that the working week has been obliterated by laughter and bittersweet emotion. Unwittingly, I brush against a man’s drink as I’m leaving, and he follows me out demanding an apology for his spilt beer… The power of Elvis’s music, it seems, has brought strangers and acquaintances together, and with a formidable intensity. But my pursuer has reminded me unpleasantly that there are those who feel excluded from such collective pleasures. If music-based gatherings answer to our need for sociality and attachment, and combat loneliness, might they also evoke envy when others miss out?” (Hesmondhalgh 2013: 103-4)

Are we to regard music’s affiliative and divisive attributes as two sides of the same coin, or as a more fundamental incompatibility between emancipatory and oppressive qualities? Indeed, rather than considering how music might help to make a bridge between apparently pre-existent cultural ghettos, should we not be asking in what ways music is already implicated in the establishment and maintenance of those very ghettos in the first place? These are significant challenges to the potentially starry-eyed representation of music that an uncritical attitude might project; but as Hesmondhalgh, again, puts it: “Music’s ability to enrich people’s lives [and expand their empathic understanding] is fragile, but I believe it can be defended better if we understand that fragility, and do not pretend it floats free of the profound problems we face in our inner lives, and in our attempts to live together.” (Hesmondhalgh 2013: 171)

Part of understanding that ‘fragility’ is considering what, if anything, is special about music as a force for (compromised) cultural benefit. Why not football, or food, both of which can lay claim to mass engagement and global reach? Is there anything about music that affords either particular, or particularly powerful or efficacious, kinds of intercultural engagement? One way to tackle these questions is to consider what the mechanisms for empathy and cultural understanding might be, and in what ways those mechanisms are engaged by different cultural manifestations, whether those are music, food or football. As our critical review of the literature reveals, this is a fascinating but considerable challenge, and one that turns in part on how broad or narrow a conception of empathy is entertained.

One approach might be to admit a considerable range of inter-subjective engagements as occupying different positions on an empathy spectrum, from conditions of self-other identity in the context of what might be called ‘deep intersubjectivity’ (perhaps emblematically represented by that pre-Oedipal oneness between mother and infant); through powerful experiences of compassionate fellow-feeling; to the operation of much more controlled and deliberate rational, imaginative projection into the circumstances of others. Some (such as Adam Smith, Felicity Laurence and Colwyn Trevarthen) would want to make firm distinctions between, say, empathy and sympathy. But an alternative might be to agree on an umbrella term (and empathy might do), and then focus on what distinguishes different positions under the umbrella, and what the implications (practical, functional, conceptual) of those differences might be.

A common thread that runs through most of these positions is the central role of embodiment in empathy. From the most neuroscientifically reductionist approach (e.g. a ‘fundamentalist’ mirror neuron perspective) to the position of Smith or Stokes, a capacity to feel the situation of another underpins the inter-subjective character of empathy/fellow-feeling/sympathy. And arguably it is in this respect that music has ‘special properties’: properties of enactment, of synchronization and entrainment, in situations ranging from a single individual alone with their music (the solitary headphone listener ‘lost in music’ (Clarke 2014)) to massively social contexts (pop festivals, simulcasts) where enormous numbers of people can participate in collective, synchronized, embodied engagement.

As others have pointed out (e.g. Cross 2012), music is a uniquely widespread, emotionally and physically engaging, social, participatory and fluidly communicative cultural achievement, a powerful (cultural) ecological niche that affords extraordinary possibilities for participants, and which both complements and in certain respects surpasses those other global cultural achievements in which human beings participate (language, religion, visual culture, craft). There is little, perhaps, to be gained by attempting to set any one of these up on a uniquely high pedestal, but equally it is important not to flatten the terrain by failing to recognize music’s particular combination of affordances in this rich cultural mix: cognitive and emotional complexity, from solitary to mass-social engagement, compelling embodiment, floating intentionality (Cross 2012), synchronization/entrainment, flexible mimicry, temporal and ambient character, and digital-analog mix.

As our critical review of the literature has revealed, the empathy-affording character of this mix of affordances has been explored and theorized across an astonishing range of disciplines, invoking mechanisms that range from mirror neurons to semiotics and the cultural history of sentimentalism. Are these kinds of explanation in any way compatible with one another, and is there a way to avoid a simplistic and potentially reductionist ‘layers of an onion’ approach in which supposedly ‘fundamental’ biological attributes (whether genetic, in the case of a narrowly ‘trait’ perspective on empathy, or neurological, in the case of sensorimotor contingency theory) underpin progressively more ramified and arbitrary cultural constructs? We have already seen (e.g. Heyes 2010) that from within the scientific literature itself, as well as from outside it, there is ample evidence for the plasticity of so-called fundamental properties, and for the reciprocal relationship between biology and culture. Mirror neurons may be as much a consequence of a culture of inter-subjective engagement as they are a foundation for it. But it clearly remains a considerable challenge to develop in detail the more flexible and relational approach that we point towards in this report.

Finally, there is the question of the utility of the concept or term ‘empathy’ itself. Perhaps rather like the word ‘meaning’, it both enables and suffers from the capacity to bring together a wide range of phenomena, which critics may find unhelpful in its heterogeneity. We share the concern not to confuse chalk with cheese, but against a drive to compartmentalize we are persuaded of the long-term value of sticking with a word and its associated conceptual field which, although still just a century old, offers a rich and powerful way to try to understand a central element of human sociality. The debates about whether to understand empathy as a genetic predisposition, a personality trait, an emergent attribute of perception-action coupling, a skill, or a social achievement are symptomatic of the conceptual reach of the term.

Engelen and Röttger-Rössler (2012), in a brief overview of a special issue of the journal Emotion Review devoted to empathy, declare in their first sentence that “there is no accepted standard definition of empathy, either among the sciences and humanities or in the specific disciplines”, but nonetheless emphatically endorse the importance of continuing to develop better understandings of that fundamentally social capacity to “feel one’s way into others, to take part in the other’s affective situation, and adopt the other’s perspective, to grasp the other’s intentions and thus to engage in meaningful social interaction.”

We, too, are committed to the value of that enterprise, and to the specific role that music may play in understanding empathy, and as itself a ‘medium’ for empathy. In addressing the complex network of relationships between neighbouring terms (sympathy, compassion, contagion, entrainment, ‘theory of mind’, attunement…) we see the prospect of a more nuanced and differentiated understanding of what Baron-Cohen (2011: 107) has characterized as “the most valuable resource in our world” and “an important global issue related to the health of our communities.”

Current Disciplinary and Interdisciplinary Debates on Empathy

Eva-Maria Engelen, Birgitt Röttger-Rössler, University of Konstanz, Department of Philosophy & Freie Universität Berlin, Social and Cultural Anthropology

Almost anybody writing in the field would declare that there is no accepted standard definition of empathy, either among the sciences and humanities or in the specific disciplines. However, even when accepting that there can be no all-time and universally valid definition, one can still try to clarify some aspects and establish a few landmarks that will help to ensure that the phenomenon with which various researchers are dealing is the same, or at least has important features in common.

Although there is no established concept, several topics and discussions have proved to be crucial for the phenomenon that was once given the specially made-up label ‘empathy’ by Edward Titchener, who introduced the word into English at the beginning of the 20th century in order to translate the German term Einfühlung.

The idea behind this special issue on empathy is to present a range of the currently most lively topics and discussions to be found not only within several disciplines but also across several disciplinary boundaries. This makes it interdisciplinary. Authors from different disciplines were asked to contribute to the field in a style that would be accessible for a broader range of interested readers. These contributions come from the following disciplines in which empathy is either an ongoing or an upcoming topic of academic interest: neuropsychology, developmental psychology, philosophy, literary studies, and anthropology. The commentators giving their views on the articles are sometimes experts on empathy from the same discipline as the authors and sometimes from adjoining ones. We tried as far as possible to introduce crossovers, but these did not always fit.

Points of Discussion and Open Questions

Roughly speaking, there are two pathways when it comes to understanding each other: thinking or mind reading and feeling or empathy. Nonetheless, one of the ongoing debates in psychology and philosophy concerns the question whether these two abilities, namely, understanding what the other is thinking and “understanding“ what the other is feeling, are separate or not.

Other debates refer to the best theoretical model for empathy and ask whether it makes sense to assume just one kind of empathy or whether one should differentiate between at least two kinds: cognitive and affective.

Further questions are: Does a living being have to be able to make a self-other distinction in order to be empathic? How far do emotional contagion or sympathy and pity differ from empathy? Is empathy necessarily an affective ability and does it have to be conscious? Does it occur in face to face relationships between two persons or more? And can it also occur between a reader and a fictive character in a novel (Coplan 2004)?

These are just some of the questions currently being discussed. But before addressing them in detail in the following six articles and twelve commentaries, we shall survey the different definitions of empathy presented and defended in this special issue.

A Starting Point for the Discussions

We start off with the concept of empathy in the social cognitive neurosciences. The major growth of interest in empathy is largely due to a recent debate in this field. Previously, in the late nineteenth and first half to middle of the twentieth century, it was an important term in psychology, hermeneutics, and phenomenology. Later on, interest in the concept spread to developmental psychology as well. But the currently ongoing debate received its initial impetus from the question how far mind reading and empathizing are different faculties and how far they may not be completely separable (Singer 2006).

Basically speaking, both faculties are about understanding the other, either cognitively or emotionally. What are the intentions of the other? What are his or her wishes, beliefs, or deductions? These questions belong to the mind reading side, whereas understanding the other‘s emotional state belongs to the other side: the capacity of empathy.

Nonetheless, despite these clear-cut definitions, there are also concepts such as the affective theory of mind, which is also called cognitive empathy. The rationale for this label is that this form of empathy consists in understanding the affective states of others.

Another question that one might consider before reading the assembled articles on empathy is whether empathy has to be a process leading to a conscious state. We advise the reader to bring to mind the definition of empathy in his or her own research perspective before reading the articles presented here. Whether one agrees or disagrees with many of the arguments exchanged and discussed in the following articles and commentaries will depend on which definition of empathy one already has in mind. Hence, a reflection on one’s own implicit or explicit definition might lead one to reconsider one’s initial assumptions. Whatever the case, it will certainly help one to understand how different disciplines take divergent approaches to the subject.

One might also bear in mind that the notions of understanding and empathy to be found in the long lasting philosophical hermeneutic tradition have been used to differentiate between the sciences and the humanities. Explaining was considered to be the method of the sciences, whereas understanding and empathy were the methods of the humanities. This involves the assumption of a deep dualism, and one should be cautious about claiming a particular term for one or the other discipline and tradition without thoughtful reflection if one wishes to avoid stepping into the footprints of such dualisms.

Empathy as Embodied Capacity for Social Orientation

Coming from the humanities, we propose the following definition for empathy:

Empathy is a social feeling that consists in feelingly grasping or retracing the present, future, or past emotional state of the other; thus empathy is also called a vicarious emotion. (Vicarious: experienced in the imagination through the feelings or actions of another person.)

As a social feeling, empathy is always shaped through cultural codes, which differently emphasize, modulate and train the capacity to “feel into” another person’s emotions. The main function of this feelingly grasping is, we assume, orientation in social contexts. This can mean taking part in the precise emotional state that the other is in at a certain moment, namely: being happy when she is happy, scared when she is scared, and so forth.

But this does not have to be the case. Grasping the other’s emotional state, that is, adopting the other’s emotional perspective, could also produce a different feeling or emotion in me than the one currently being experienced by the other. And even when the empathic adoption of the other’s perspective produces in me the same emotion as the other is having (or is fictively experiencing) at that very moment, it would not be the same emotion, because the self-other differentiation has not been overcome.

We want to make sure that we do not take empathy to mean the same as sympathy or pity. Both are, in our opinion, special forms of empathy that cover only a certain aspect of empathic processes. Whereas pity is the mode of feeling sorry for the other, sympathy is the mode of being in favor of the other. Both these feelings are ways of adopting an emotional perspective (as empathy is), but they cover only a special form of emotional perspective taking that is structured by the social bond or relation between the persons involved. Thus in social life, pity and sympathy are most likely to occur toward persons one is related to or who belong to one’s own ingroup, but less often toward outgroup members who are mostly perceived as being totally different, strange, or even malevolent, in short, as persons one can scarcely identify with.

Pity and compassion as particular kinds of empathy are deeply connected to social attachment. Frans de Waal (2009) conceives empathy as an evolved concern for others that is triggered through identification with these others. “Empathy’s chief portal is identification,” he argues, meaning that close social bonds increase, in a quasi-automatic way, the emotional responsiveness to others and thus the readiness to help and support fellow beings (de Waal, 2009, 213).

Continuing his line of argument, he stresses that empathy also needs a “turn off switch,” a mechanism to override and regulate automatic empathic responses. He considers that what constitutes this turn-off switch of empathic processes is a lack of identification. What becomes evident here is that de Waal is implicitly equating pity and compassion with empathy, or he is conceiving them as the evolutionary basis of empathy. If fellow beings harm or violate each other, as is often the case in social reality, they must, according to de Waal’s model, have switched off their empathic capacity.

We deliberately take another position here: We conceive empathy as an evolutionarily grounded capacity to adopt an emotional perspective, to implicitly “feel into” the other regardless of the behavioral outcome. This may be directed toward ingroup members and be prosocial and supporting, or toward outgroup members and be destructive and harming.

We put the emphasis on affectively grasping the emotional state of another, but that does not mean drawing a definite line between cognitive understanding and emotional grasping. There are good reasons to stick to a narrower notion when it comes to defining empathy as a “feelingly grasping”, if one wants to make sense of notions such as vicarious emotion or of the history of the notion that started with Einfühlung (feeling into). However, the specific conceptual perspective one takes depends very strongly on one’s research traditions and research interests.

When it comes to the relation between empathic perspective taking and the cognitive perspective taking that is related to theory of mind (ToM), we cannot judge the discussions among neuropsychologists regarding whether or not these are completely different kinds of perspective taking, and whether or not these processes take place in different brain areas. However, defining the term according to an established tradition, we take empathy to be emotional perspective taking, and mind reading (in ToM) to be cognitive perspective taking. Nonetheless, on a purely conceptual level, one might have to admit that the two faculties cannot be separated altogether, because in cognitive perspective taking, the subject who is taking the perspective of another being has to be at least interested in the other being, and that means caring for the other in some way. First, you have to consider the other as an equal in a certain way, as a fellow human being, for instance, or at least as a creature able to feel. Second, you have to consider the other and the other’s actions as relevant to yourself. You have to be somehow interested in order to be either emotionally involved or curious about the other’s intentions. Therefore, both cases, empathy and ToM, start with the same precondition:

You have to consider the other as being the same as you and of being your counterpart in a particular situation; there has to be a tacit analogy between the subject adopting the other‘s perspective and the other whose perspective is being taken, be it emotional or cognitive.

When specifying what we meant by empathy, we wrote of feelingly grasping or feelingly retracing something; this already suggests that the processes of feeling and of comprehending cannot always be separated clearly. And this makes empathic acts particularly interesting, because they resist the artificial dualisms in the philosophy of mind that still emboss philosophical, scientific, and everyday speech.

To recap briefly: empathy, as the embodied (or bodily grounded) capacity to feel one’s way into others, to take part in the other’s affective situation, and to adopt the other’s perspective, is a fundamentally social capacity. It allows one to grasp the other’s intentions and thus to engage in meaningful social interaction. Empathy is a crucial means of social communication. It is not just an emotional contagiousness in which one remains concentrated on oneself.

However, this definition of empathy fails to specify whether this comprehension involves a kind of simulation or imitation of the minds of others. In many of the following contributions, we shall see what important role simulation plays in the debates on a theoretical model of empathy.

Outline of the Contributions

The following six articles are written by distinguished scholars on empathy who come from five different disciplines. Each contribution presents recent research findings and theoretical reflections about the phenomenon of empathy within the respective discipline and simultaneously gives an insight into some currently ongoing debates on the subject within as well as across disciplinary boundaries. The following outline might already give a first impression about this.

Social Cognitive Neuroscience: Cognitive and Affective Empathy

The neuropsychologist Henrik Walter (2012) places his accent on understanding the emotional or affective states of another human being. Furthermore, he views understanding as a purely cognitive concept in this context that suggests making deductions and reasoning. Because Walter concentrates on this approach to understanding the affective states of others, conceptions such as affective theory of mind or cognitive empathy are also highly relevant for his ideas on the capacities for understanding other human beings. Whether this empathy is due to a cognitive faculty or an affective one is not the focus of this distinction. Empathy is, in this case, defined only by the understanding of the emotional state of the other and not by whether the process of understanding is either an affective one or a cognitive one. If it is a cognitive one, it is called cognitive empathy or affective theory of mind; if it is an affective one, it is called affective empathy.

Walter presents this conceptual analysis before linking it both to findings in empirical research investigating the neural basis of empathy and to data on the possible neurogenetic basis of empathy. The tradition followed by Walter when differentiating between TOM, cognitive empathy, and affective empathy is one developed in psychology since the late 1950s. It defined empathy as an emotional or affective phenomenon, and introduced the notion of cognitive empathy as a cognitive faculty or “intellectual or imaginative apprehension of another‘s condition or state of mind” (Hogan, 1969, 308). The main topic within this research tradition is the accuracy of our ability to conceive the other’s condition. Cognitive empathy is not defined in terms of shared emotions but in terms of knowing another’s state of mind by inferential processing (Ickes, 1997).

Social Cognitive Neuroscience Again: Neural Overlap and Self-Other Overlap

Stephanie Preston and Alicia Hofelich’s contribution (2012) comes from one of the most rapidly growing research fields on empathy, namely, the social neuroscience of empathy. Preston and Frans de Waal (2002) are well known in this field for having developed the perception-action model of empathy. This proposes that observing an emotion in someone else generates that emotion in the observer. Preston and Hofelich use this model to argue in favor of a neural overlap in the early stages of processing in all cases of social understanding, such as cognitive empathy, empathic accuracy, emotion contagion, sympathy, and helping behavior. The self-other overlap in empathy occurs only at a later stage of processing. They offer some criteria for differentiating between neural overlap, subjective resonance, and personal distress. Because the self-other overlap is crucial for the definition of empathy, this represents an important attempt to seek empirical support for a theoretical differentiation. In addition, it offers a taxonomy of the different cases of social understanding that are supposed to be highlighted by a biological view of empathy.

The academic challenge of this undertaking lies not least in the attempt to show that there is some such thing as a self-other overlap on the neural level, and that it is not just to be found on the subjective level, on which the conceptual capacities of a human being are already “at work.”

In order to engage in an empathic process, the empathic subject has to be able to differentiate between his or her own affective states and those of the being he or she is being empathic with, be this a conscious process, as is quite often the case on the subjective level, or a subconscious process on the neural level. This is also a necessary precondition for cognitive empathy and sympathy, but not for emotional contagion. Scientific research on the subjective overlap, that is, the sharing of an emotion, is the task of psychology. But in order to grasp this point on a biological level one has to avoid the subjective perspective. This is done by defining the self-other overlap via the notion of the activation of a personal representation in order to experience an observed state or action, and not via the notion of the activation of a personal representation when acting oneself or being in the state oneself. The overlap in representation on the neural level has to be reflected by a spatial overlap of brain activation between imitation and observation of facial emotional expression (on the subjective level, one is speaking about “sharing another’s emotional or intentional state”).

The process of observing or imagining someone else in a situation might therefore be crucial for determining whether a neural representation of an emotion is the representation of the emotion in somebody else, and therefore an empathic reaction, or whether it is the neural representation of one‘s own emotional process.

Developmental Psychology: The Self-Other Distinction

The developmental psychologist Doris Bischof-Köhler (2012) concentrates on the subjective level of empathy. She defines empathy as understanding and sharing the emotional state of another person. This definition implies not only that an empathic capacity is linked strongly to cognitive capacities, but also that the self-other distinction is crucial for the notion of empathy.

Bischof-Köhler’s investigations on empathy are therefore related to her research on the symbolic representation of the self in imagination (self-recognition). Her findings reveal that only children who are able to recognize themselves exhibit empathic behavior. This does not imply that self-recognition leads to empathic behavior, but that it is a necessary precondition for empathy. And as her data show, this mode of self-recognition does not have to be a kind of metarepresentation or conscious self-reflection that the theory of mind predicts to first emerge only in 4-year-olds. This can explain not only why empathy is already observable in 2-year-old children but also why the mere recognition of a mark on one’s cheek while looking in a mirror is a transitional state to self-recognition that is not linked to empathy. Her conclusion from these results is that “the capacity to empathize is an effect of maturation rather than socialization.”

Philosophy: Empathy and Simulation Theory

The philosopher Karsten Stueber (2012) presents a model of the cognitive and affective understanding and knowledge of another human being’s mind, and demonstrates the importance of empathy for social cognition. He is well known as a representative of simulation theory, an approach that fits quite well with empirically based theories of empathy. In this article, he extends this basic approach by replying to some narrativist criticism. His main focus is on the cognitive mechanisms that allow us to gain knowledge of other minds and therefore on social cognition and on our understanding of individual agency. One challenge for such an approach is to give a theoretical account of resonance phenomena and projection mechanisms that does not presuppose some kind of Cartesian subject who remains in a solitary state of skepticism about the existence of other minds. While insisting on the importance of our sensitivity to differences between ourselves and other human beings, he introduces the importance of the other on the two levels distinguished in the simulation approach. The first level is the basic level of neuronal resonance phenomena. It is activated automatically by observation of the bodily activities and the accompanying bodily and facial expressions of other beings (basic empathy). The second level is the more developed stage, namely, the re-enactment of the thoughts and reasonings of another human being as a rational agent (re-enactive empathy). On this level, Stueber admits that in order to understand the actions of another person, we do not necessarily have to appeal to his or her beliefs and desires; knowledge of the other’s character traits or the other’s role in various social contexts could be equally important. By accepting this possibility, he opens up his model not only to some narrativist proposals for understanding the actions of others but also to the social, historical, or cultural contexts that one might have to consider in order to understand the actions of another human being. He insists, however, that this information would make neither the re-enactment nor the simulation superfluous, because pretend beliefs and pretend desires are at the core of the imaginative perspective taking that is necessary for empathy.

Anthropology: The Cultural Embeddedness of Empathy

The opening up of simulation theory toward an integration of personal, historical, and cultural information makes a philosophical approach like Stueber’s attractive for a cultural and social anthropologist such as Douglas Hollan (2012). He takes up the distinction between basic empathy and re-enactive empathy, although calling the latter complex empathy instead. This allows him not only to accept embodied forms of imitation and attunement as biologically evolved capacities, but also to concentrate on the more language-bound evaluations and adjustments that have evolved culturally and historically. Hollan emphasizes that one has to be acquainted with the latter, and with the personal background of a person, in order to understand why he or she is in a certain emotional state. And, as he points out, this is necessary in order to be able to be empathic, because one has to understand not only that a person is in a certain emotional state but also why. In other words, one needs to have a certain amount of knowledge about the normative and moral standards of a culture or society before one can evaluate the meaning of social situations and forms of behavior and comprehend another’s feeling state within the context of social circumstances. In short, empathic processes cannot be detached from the social and cultural contexts in which they are embedded. One way to narrow down the definition of empathy is to drop the requirement of understanding why the person is in that state, leaving only the understanding that a person is in a certain emotional state.

The heuristic differentiation between basic empathy and complex empathy is in line with the distinction between the ability to determine that another person is in a certain emotional state and the ability to understand the experience of the other. By reporting important research results on empathy in social anthropology, Douglas Hollan demonstrates not only how far some of the main features of empathy seem to be, by some means, universal, but also how far the studies on empathy need to be refined in light of some findings from anthropological research.

Intercultural findings on empathy reveal that the blending of emotional and cognitive perspective taking is one of the constant features of empathy, whereas the differentiation between “me” and “the other” seems to be less distinct in empathy-like responses in many non-Western societies. Another finding of Hollan’s research is that in the Pacific region, empathy is not a neutral engagement in the understanding of the emotional state of the other, but more like a sympathy that is linked very frequently to a positive attunement with that other person. And this positive attunement is expressed as an active doing rather than a passive experience.

Alongside these research results, he has noticed another, rather opposite tendency: a widespread fear that empathy-like knowledge could be used to harm others. This is why in many parts of the world, from the Indo-Pacific to Latin America or Northern Canada, people try to mask their faces, that is, to not express their inner feelings and thoughts but always show a “bright” face and not disclose their vulnerabilities. This phenomenon points to the fact discussed above that empathy is not linked automatically to compassion and helping attitudes, but might also be used by enemies or individual psychopaths as a way to find out how to harm the other.

Among the most challenging research desiderata that result from anthropological findings is the call for more studies on the complex interrelationship between the culture-specific moral and situational contexts mediating the expression of empathy on the one side, and the dispositions (or traits) that individuals develop to experience and display empathy on the other. Put succinctly, all cultures have some people who are likely to empathize more and others who are likely to empathize less. Hollan considers that one of the most demanding tasks facing future research is to investigate how far personality traits interact with the culturally different modes of conceptualizing empathy.

Literary Studies: A Three-Step Model of Human Empathy

The findings on empathy filters introduced by the ethologist and primatologist Frans de Waal might well have been one of the starting points for the theory of empathy proposed by Fritz Breithaupt (2012), a scholar of German studies. As already mentioned, de Waal (2009, 213) has argued that “empathy needs both a filter that makes us select what we react to, and a turn off switch.” Breithaupt shares the hidden agenda of this approach, namely, that human beings are hyperempathic, without equating pity and compassion with empathy. He has developed a three-step model of human empathy that should account for the individual and cultural variety in empathy that also interests Douglas Hollan. According to Breithaupt’s theory, individual and cultural differences are due to the control functions of blocking and channeling empathy.

These blocking mechanisms are important for a hyperempathic being (Step 1) because of the costs accompanying such social hyperactivity. As well as requiring energy, the danger of self-loss might be another cost of empathy in this approach. This possibly ongoing activity therefore needs to be blocked (Step 2). Neurobiologists such as Marco Iacoboni (2008) have therefore proposed some kind of “super mirror neurons” that control the mirror neurons. But, because Breithaupt is dealing with more conscious processes, he is hinting at cultural techniques and learning, without excluding the possible existence of evolutionarily evolved mechanisms as well. Once the blocking mechanisms are in action, a third step is needed in order to be able to experience empathy at all (Step 3). This step consists in the techniques to circumvent the blocking mechanisms.

The technique for unblocking the empathy inhibition on which Breithaupt concentrates is side taking in a three-person setting of empathy. The reason why he turns to a three-person instead of a two-person model is linked to the observation that hyperempathy in human beings goes hand in hand with hypersociability, and a two-person model might be too narrow to encompass this. The side-taking process is deliberate: a person decides whose side to take. After making this decision, empathy emerges (or returns), and it maintains and strengthens the initial choice, because empathy allows emotions to be released that confirm the decision. Breithaupt points out explicitly that the side taking is not involved in empathy itself (as it is in sympathy), but that it is rather “external” to it. The advantage of this model lies in its ability to combine cognitive elements in perspective taking with a caring attitude that might evolve when the side-taking decision is followed by empathy.

The ambition of this special issue with its six articles from several disciplines is to give an overview on recent research on empathy. The twelve commentaries not only contribute greatly to achieving this aim but also help significantly to identify the hotspots in ongoing disciplinary and interdisciplinary debates.

Neurophysiological Effects of Trait Empathy in Music Listening

Zachary Wallmark, Choi Deblieck and Marco Iacoboni.

The social cognitive basis of music processing has long been noted, and recent research has shown that trait empathy is linked to musical preferences and listening style.

Does empathy modulate neural responses to musical sounds?

We designed two functional magnetic resonance imaging (fMRI) experiments to address this question. In Experiment 1, subjects listened to brief isolated musical timbres while being scanned. In Experiment 2, subjects listened to excerpts of music in four conditions (familiar liked (FL)/disliked (FD) and unfamiliar liked (UL)/disliked (UD)).

For both types of musical stimuli, emotional and cognitive forms of trait empathy modulated activity in sensorimotor and cognitive areas. In Experiment 1, empathy was primarily correlated with activity in the supplementary motor area (SMA), inferior frontal gyrus (IFG) and insula; in Experiment 2, empathy was mainly correlated with activity in prefrontal, temporo-parietal and reward areas. Taken together, these findings reveal the interactions between bottom-up and top-down mechanisms of empathy in response to musical sounds, in line with recent findings from other cognitive domains.

INTRODUCTION

Music is a portal into the interior lives of others. By disclosing the affective and cognitive states of actual or imagined human actors, musical engagement can function as a mediated form of social encounter, even when listening by ourselves. It is commonplace for us to imagine music as a kind of virtual “persona,” with intentions and emotions of its own: we resonate with certain songs just as we would with other people, while we struggle to identify with other music.

Arguing from an evolutionary perspective, it has been proposed that the efficacy of music as a technology of social affiliation and bonding may have contributed to its adaptive value. As Leman indicates: “Music can be conceived as a Virtual social agent… listening to music can be seen as a socializing activity in the sense that it may train the listener’s self in social attuning and empathic relationships.” In short, musical experience and empathy are psychological neighbors.

The concept of empathy has generated sustained interest in recent years among researchers seeking to better account for the social and affective valence of musical experience; it is also a popular topic of research in social neuroscience. However, the precise neurophysiological relationship between music processing and empathy remains unexplored. Individual differences in trait empathy modulate how we process social stimuli; does empathy modulate music processing as well?

(Valence, as used in psychology, especially in discussing emotions, means the intrinsic attractiveness/”good”-ness (positive valence) or averseness/”bad”-ness (negative valence) of an event, object, or situation. The term is also used to characterize and categorize specific emotions. For example, emotions popularly referred to as “negative”, such as anger and fear, have negative valence. Joy has positive valence. Positively valenced emotions are evoked by positively valenced events, objects, or situations. The term is also used to describe the hedonic tone of feelings, affect, certain behaviors (for example, approach and avoidance), goal attainment or nonattainment, and conformity with or violation of norms. Ambivalence can be viewed as conflict between positive and negative valence carriers.)

If we consider music through a social psychological lens, it is plausible that individuals with a greater dispositional capacity to empathize with others might also respond to music’s social stimulus differently on a neurophysiological level by preferentially engaging brain networks previously found to be involved in trait empathy.

In this article, we test this hypothesis in two experiments using functional magnetic resonance imaging (fMRI). In Experiment 1, we explore the neural correlates of trait empathy (as measured using the Interpersonal Reactivity Index) as participants listened to isolated instrument and vocal tones. In Experiment 2, excerpts of music in four conditions (familiar liked/disliked, unfamiliar liked/disliked) were used as stimuli, allowing us to examine correlations of neural activity with trait empathy in naturalistic listening contexts.
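Neither experiment's analysis pipeline is detailed at this point in the text. Purely as a sketch of what "examining correlations of neural activity with trait empathy" can look like in practice, the following group-level model regresses per-subject contrast images on IRI scores; the file names, scores and smoothing kernel are placeholders, and this is not the authors' pipeline.

```python
# Group-level sketch: where does activity covary with trait empathy across subjects?
import pandas as pd
from nilearn.glm.second_level import SecondLevelModel

contrast_imgs = ["sub-01_con.nii.gz", "sub-02_con.nii.gz"]   # hypothetical per-subject maps
iri_scores = [3.1, 2.4]                                      # hypothetical IRI means
# (a real analysis would of course include every participant)

design = pd.DataFrame({"intercept": 1, "iri": iri_scores})
model = SecondLevelModel(smoothing_fwhm=8.0).fit(contrast_imgs, design_matrix=design)

# Voxel-wise z-map of the empathy covariate.
iri_effect = model.compute_contrast("iri", output_type="z_score")
```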

Measuring Trait Empathy

Trait empathy refers to the capacity for empathic reactions as a stable feature of personality. Individual differences in trait empathy have been shown to correlate with prosocial behavior and situational “state” empathic reactions to others.

Trait empathy is commonly divided into two components: emotional empathy is the often unconscious tendency to share the emotions of others, while cognitive empathy is the ability to consciously detect and understand the internal states of others.

There are a number of scales for measuring individual differences in trait empathy currently in use, including the Toronto Empathy Questionnaire (TEQ), Balanced Emotional Empathy Scale (BEES), Empathy Quotient (EQ), Questionnaire of Cognitive and Affective Empathy (QCAE) and Interpersonal Reactivity Index (IRI). Here we use the IRI, which is the oldest and most widely validated of these scales and is frequently used in neurophysiological studies of empathy.

The IRI consists of 28 statements evaluated on a 5-point Likert scale (from “does not describe me well” to “describes me very well”). It is subdivided into four subscales meant to tap different dimensions of self-reported emotional and cognitive empathy. Emotional empathy is represented by two subscales: the empathic concern scale (hereafter EC) assesses trait-level “other-oriented” sympathy towards misfortunate others, and the personal distress scale (PD) measures “self-oriented” anxiety and distress towards misfortunate others. The two cognitive empathy subscales consist of perspective taking (PT), or the tendency to see oneself from another’s perspective, and fantasy (FS), the tendency to imaginatively project oneself into the situations of fictional characters.
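To make the scale's structure concrete, here is a minimal scoring sketch. The item-to-subscale assignment and the reverse-keyed items below are placeholders rather than the published key (which is given in Davis's original papers); responses are assumed to be coded 0-4, and the "global IRI score" mentioned in the empirical study above is taken here to be the sum of the four subscales.

```python
# Illustrative IRI scoring: four 7-item subscales (EC, PD, PT, FS), items rated 0-4.
SUBSCALES = {                       # hypothetical item numbering, 28 items in total
    "EC": [1, 5, 9, 13, 17, 21, 25],
    "PD": [2, 6, 10, 14, 18, 22, 26],
    "PT": [3, 7, 11, 15, 19, 23, 27],
    "FS": [4, 8, 12, 16, 20, 24, 28],
}
REVERSE_KEYED = {5, 14, 19}         # placeholder set of reverse-scored items

def score_iri(responses):
    """responses: dict mapping item number (1-28) to a 0-4 Likert rating."""
    scores = {}
    for scale, items in SUBSCALES.items():
        scores[scale] = sum((4 - responses[i]) if i in REVERSE_KEYED else responses[i]
                            for i in items)           # each subscale ranges 0-28
    scores["global"] = sum(scores[s] for s in SUBSCALES)
    return scores
```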

Music and Empathy

Theories of empathy have long resonated with the arts. The father of the modern concept of empathy, the philosopher Theodor Lipps, originally devised the notion of Einfühlung (“feeling into”) in order to explain aesthetic experience. Contemporary psychological accounts have invoked mirror neurons as a possible substrate supporting Lipps’s “inner imitation” theory of the visual and performing arts. However, the incorporation of psychological models of empathy into empirical music research is still in its early stages. Empathy remains an ambiguous concept in general, but applications to music can appear doubly vexed.

In an influential formulation, Eisenberg et al. (1991) define empathy as “an emotional response that stems from another’s emotional state or condition and is congruent with the other’s emotional state or condition.” Aspects of this definition, though, might seem incongruous when applied to music, which is inanimate and not capable of possessing an emotional “state”. To connect music processing to trait empathy, therefore, it is first necessary to determine the extent to which music comprises a social stimulus: who or what do we empathize with when listening to music?

Scherer and Zentner proposed that empathy toward music is often achieved via identification and sympathy with the lived experiences and expressive intentions of composers and performers. Corroborating this view, in a large web-based experiment Egermann and McAdams found that “empathy for the musician” moderated the relationship between recognized and induced emotions in music: the greater the empathy, the more likely an individual was to exhibit a strong affective response when listening.

In a related study, Wöllner presented participants with video of a string quartet performance in three conditions (audio-visual, visual only, and audio only) and reported a significant correlation between trait empathy measures and perceived expressiveness in both visual conditions (the music-only condition was non-significant), leading him to conclude: “since music is the audible outcome of actions, empathic responses to the performer’s movements may enhance the enjoyment of music.” Similarly, Taruffi et al. found correlations between the EC and FS scales of the IRI and accuracy in emotion recognition relative to musicians’ self-reported expressive encodings in an audio-only task.

A music-specific manifestation of trait empathy was proposed by Kreutz et al., who defined “music empathizing” as a cognitive style of processing music that privileges emotional recognition and experience over the tendency to analyze and predict the rules of musical structure (or “music systematizing”). Garrido and Schubert compared this “music empathy” scale with the IRI EC subscale in a study exploring individual differences in preference for sad music. They found that people who tend towards music empathizing are more likely to enjoy sad music; however, high trait empathy was not significantly correlated with enjoyment of sad music. This would seem to suggest that the music empathizing cognitive style differs from general trait empathy.

A number of other studies have investigated the relationship between trait empathy and enjoyment of sad music using the IRI. In a series of experiments, Vuoskoski and Eerola reported statistically significant correlations between the EC and FS subscales and self-reported liking for sad and tender music. Similarly, Kawakami and Katahira found that FS and PT were associated with preference for, and intensity of emotional reactions to, sad music among children.

There is evidence that musical affect is often achieved through mechanisms of emotional empathy. According to this theory, composers and performers encode affective gestures into the musical signal, and listeners decode that signal by way of mimetic, mirroring processes; musical expression is conveyed transparently as affective bodily motions are internally reenacted in the listening process. Schubert, in his Common Coding Model of Prosocial Behavior Processing, suggests that musical and social processing draw upon shared neural resources: music, in this account, is a social stimulus capable of recruiting empathy systems, including the core cingulate-paracingulate-supplementary motor area (SMA)-insula network, along with possible sensorimotor, paralimbic and limbic representations. The cognitive empathy component, which can be minimal, is involved primarily in detecting the aesthetic context of listening, enabling the listener to consciously bracket the experience apart from the purely social. This model may help account for the perceived “virtuality” of musical experience, whereby music is commonly heard as manifesting the presence of an imagined other.

In sum, trait empathy appears to modulate self reported affective reactions to music. There is also peripheral psychophysiological evidence that primed situational empathy may increase emotional reactivity to music. It is plausible that such a relationship is supported by shared social cognitive mechanisms that enable us to process music as a social stimulus; however, this hypothesis has not yet been explicitly tested at the neurophysiological level.

Neural Correlates of Trait Empathy

Corroborating the bipartite structure of trait empathy that appears in many behavioral models of empathy, two interrelated but distinct neural “routes” to empathy have been proposed, one associated with emotional contagion and the other with cognitive perspective taking. Emotional empathy is conceived as a bottom-up process that enables “feeling with someone else” through perception-action coupling of affective cues. Such simulation or “mirroring” models maintain that empathy is subserved by the activation of similar sensorimotor, paralimbic and limbic representations both when one observes another and when one experiences the same action and emotional state oneself. This proposed mechanism is generally considered to be pre-reflective and phylogenetically ancient; it has also been linked behaviorally to emotional contagion, or the propensity to “catch” others’ feeling states and unconsciously co-experience them. For example, several imaging studies have found evidence for shared representation of observed/experienced pain in anterior cingulate and anterior insula, as well as somatosensory cortex. Similarly, disgust for smells and tastes has been shown to recruit the insula during both perception and action, and insula has been proposed as a relay between a sensorimotor fronto-parietal circuit with mirror properties and the amygdala in observation and imitation of emotional facial expressions. There is also evidence that insula functions similarly in music-induced emotions, particularly involving negative valence.

In contrast to emotional empathy, trait cognitive empathy has been conceived as a deliberative tendency to engage in top-down, imaginative transpositions of the self into the “other’s shoes,” with concomitant reliance upon areas of the brain associated with theory-of-mind, executive control, and contextual appraisal, including medial, ventral and orbital parts of the prefrontal cortex; somatomotor areas; temporoparietal junction; and precuneus/posterior cingulate. As implied in the functional overlap between certain emotional and cognitive empathy circuits, some have argued that the two routes are neither hierarchical nor mutually exclusive: cognitive perspective taking is premised upon emotional empathy, though it may, in turn, exert top-down control over contagion circuits, modifying emotional reactivity in light of contextual cues and more complex social appraisals.

Brain studies have converged upon the importance of the human mirror neuron system in action understanding, imitation and empathy, and its involvement has been demonstrated in multiple sensorimotor domains, including the perception of action sounds. Mirror properties were initially reported in the inferior frontal gyrus (IFG) and the inferior parietal lobule (IPL), consistent with simulation theories of trait empathy. Moreover, activity in these and other sensorimotor mirror circuits has been found to correlate with IRI scales in a variety of experimental tasks, including viewing emotional facial expressions and video of hands injected with a needle. That is, high empathy people tend to exhibit greater activation in mirror regions during the observation of others. Simulation mechanisms also appear to underpin prosocial decision making.

Implication of inferior frontal and inferior parietal mirror neuron areas is not a universal finding in the empathy literature, and some have suggested that it may reflect specific socially relevant tasks or stimulus types, not empathy in and of itself. However, evidence for mirror properties in single cells of the primate brain now exists in medial frontal and medial temporal cortex, dorsal premotor and primary motor cortex, lateral intraparietal area, and ventral intraparietal area. This means that in brain imaging data the activity of multiple brain areas may potentially be driven by cells with mirror properties.

In addition to studies using visual tasks, auditory studies have revealed correlations between mirror neuron activity and trait empathy. Gazzola et al, for instance, reported increased premotor and somatosensory activity associated with PT during a manual action sound listening task. A similar link was observed between IFG and PD scores while participants listened to emotional speech prosody. To date, however, no studies have investigated whether individual differences in empathy modulate processing of more socially complex auditory stimuli, such as music.

Study Aim

To investigate the neural substrates underlying the relationship between trait empathy and music, we carried out two experiments using fMRI.

In Experiment 1, we focused on a single low-level attribute of musical sound, timbre, or “tone color,” to investigate the effects of empathy on how listeners process isolated vocal and instrumental sounds outside of musical context.

We tested two main hypotheses:

First, we anticipated that trait empathy (measured with the IRI) would be correlated with increased recruitment of empathy circuits even when listening to brief isolated sounds out of musical context (Gazzola et al).

Second, following an embodied cognitive view of timbre perception (Wallmark et al), we hypothesized that subjectively and acoustically “noisy” timbral qualities would preferentially engage the emotional empathy system among higher empathy listeners. Abrasive, noisy acoustic features in human and many non-human mammal vocalizations are often signs of distress, pain, or aggression (Tsai et al): such state cues may elicit heightened responses among people with higher levels of trait EC.
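To give a concrete feel for what an “acoustically noisy” timbre means, here is a minimal sketch using spectral flatness (the ratio of the geometric to the arithmetic mean of the power spectrum) as one generic proxy for noisiness. This measure is chosen purely for illustration and is not necessarily the acoustic feature set used in the study; the tones below are synthesized, not the study’s stimuli.

```python
# Toy illustration of one possible acoustic proxy for "noisiness": spectral flatness.
# A pure tone concentrates energy in one frequency bin (flatness near 0); adding
# broadband noise spreads energy across bins (flatness closer to 1).
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12   # small epsilon avoids log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

sr = 22050                                             # sample rate (Hz), arbitrary choice
t = np.linspace(0, 1.0, sr, endpoint=False)
pure_tone = np.sin(2 * np.pi * 440 * t)                # smooth, "clean" timbre
noisy_tone = pure_tone + 0.5 * np.random.default_rng(0).normal(size=sr)  # same tone plus noise

print(f"flatness, pure tone : {spectral_flatness(pure_tone):.4f}")
print(f"flatness, noisy tone: {spectral_flatness(noisy_tone):.4f}")
```

Running this prints a much larger flatness value for the noise-mixed tone, which is the kind of acoustic contrast the “noisy timbre” hypothesis refers to.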

To explore the relationship between trait empathy and music processing, in Experiment 2 participants passively listened to excerpts of self-selected and experimenter-selected “liked” and “disliked” music in familiar and unfamiliar conditions while being scanned. Musical preference and familiarity have been shown to modulate neural response. Extending previous research on the neural mechanisms of empathy, we predicted that music processing would involve circuitry shared with empathic response in non-musical contexts (Schubert).

Unlike Experiment 1, we had no a priori hypotheses regarding modulatory effects of empathy specific to each of the four music conditions. However, we predicted in both experiments that the emotional empathy scales (EC and PD) would be associated with regions of the emotional empathy system in music listening, including sensorimotor, paralimbic and limbic areas, while the cognitive empathy scales (PT and FS) would primarily be correlated with activity in prefrontal areas implicated in previous cognitive empathy studies (Singer and Lamm).
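The analytic logic behind these predictions, relating dispositional empathy scores to brain response across listeners, can be illustrated with a toy sketch. The snippet below is not the study’s actual pipeline (which used whole-brain fMRI models with IRI subscales as covariates); it simply simulates hypothetical per-participant Empathic Concern scores and a per-participant response value for one imaginary region, then tests their across-subject correlation.

```python
# Toy illustration (not the study's pipeline): across-subject correlation between
# a trait-empathy score and a regional response value. All numbers are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 20                                    # hypothetical sample size
ec_scores = rng.integers(0, 29, size=n_subjects)   # IRI EC: 7 items scored 0-4, so 0-28

# Hypothetical per-subject response values (e.g., a contrast estimate averaged over
# one region of interest), constructed so that higher EC tends to co-occur with a
# larger response, plus noise.
roi_response = 0.05 * ec_scores + rng.normal(0, 0.4, size=n_subjects)

r, p = stats.pearsonr(ec_scores, roi_response)
print(f"EC vs. ROI response: r = {r:.2f}, p = {p:.3f}")
```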

Results

Experiment 1 demonstrated that trait empathy is correlated with increased activation of circuitry often associated with emotional contagion, including sensorimotor areas and insula, in the perception of isolated musical timbres. FS and EC also appear to be sensitive to the affective connotations of the stimuli. Timbre is arguably the most basic and quickly processed building block of music. Though sufficient to recruit empathy areas, these brief stimuli do not, however, constitute “music” per se.

In Experiment 2, we turned our focus to more naturalistic stimuli including excerpts of music selected in advance by participants in order to explore the effect of trait empathy on the processing of music.

DISCUSSION

The present study demonstrates that trait empathy is correlated with neurophysiological differences in music processing. Music has long been conceived as a social stimulus. Supporting this view, our study offers novel evidence that neural circuitry involved in trait empathy is active to a greater degree in empathic individuals during perception of both simple musical tones and full musical excerpts. Individual differences in empathy are reflected in differential recruitment of core empathy networks during music listening; specifically, IRI subscales were found to correlate with activity in regions associated with both emotional (e.g., sensorimotor regions, insular and cingulate cortex) and cognitive empathy (e.g., PFC, TPJ) during passive listening tasks.

Our main hypotheses were confirmed, though with an unexpected twist regarding the two putative empathy types (at least as structured by the IRI). Both experiments seem to suggest interactions between bottom-up and top-down processes (indexed in our study by both IRI scores and activity in neural systems) in empathy-modulated music listening. This is in line with recent findings in prosocial decision-making studies. Stimulus type, however, seems associated with different patterns of neural systems engagement.

In Experiment 1, sensorimotor areas were more frequently modulated by trait empathy in the processing of musical timbre; conversely, in Experiment 2, cognitive areas were more frequently modulated by trait empathy in the processing of (familiar) music. Together this suggests that, contrary to our initial hypothesis for Experiment 2, modulation of neural activity by empathy was driven more by stimulus type than by empathy type; that is, the emotional empathy subscale (EC) was no more selective to emotional contagion circuitry than the cognitive empathy scales (PT and FS), and vice versa (the PD scale did not reveal any significant correlations with brain activity). In what follows, we interpret these results and discuss their implications.

Empathy-Modulated Sensorimotor Engagement in Timbre Processing

Using isolated 2-s instrument and vocal tones as stimuli, Experiment 1 found that the four IRI subscales modulated response to timbre. First, we found that cognitive perspective taking (PT) was correlated with activity in motor areas (SMA), as well as SI and anterior cingulate (ACC).

This finding is in line with numerous studies suggesting a role for ACC and SI in emotional empathy; it also replicates a result of Gazzola et al. (2006), who reported a correlation of somatomotor activity and PT scores in an action-sound listening task. Activity in these regions may suggest a sensorimotor simulation process whereby high PT individuals internally imitate some aspect of the production of these sounds. This result could be explained in light of Cox’s (2016) “mimetic hypothesis,” according to which music is understood by way of covert or overt motor reenactments of sound-producing physical gestures. It is quite conceivable that people who are inclined to imagine themselves from others’ perspectives also tend to take up the physical actions implied by others’ musical sounds, whether a smooth and gentle voice, a growled saxophone, or any other musical sound reflecting human actions.

It is intriguing, however, that PT was not implicated in the processing of positive or negative valence. One might assume that perspective takers possess a neural preference for “good” sounds: for example, one study reported activation of larynx control areas in the Rolandic operculum while subjects listened to pleasant music (but not unpleasant), suggesting subvocalization only to positively valenced music (Koelsch et al). Our results, however, indicate that PT is not selective to valence in these sensorimotor areas.

FS also revealed motor involvement (SMA) in the task > baseline contrast. Unlike PT, FS appeared to be sensitive to both positive and negative valence of timbres: we found activity in left TPJ and Broca’s area of the IFG associated with positively valenced timbres, and temporal, parietal and prefrontal activations associated with disliked timbres. TPJ is an important structure for theory of mind, together with Broca’s area, a well-studied language- and voice-specific motor region that has been implicated in emotional empathy. It is plausible to suggest that individuals who are prone to fantasizing may exhibit a greater tendency to attribute mental states to the virtual human agents responsible for making musical sounds, and that this attribution would be more pronounced for positively valenced stimuli.

As hypothesized, EC was correlated with activation in a range of areas previously implicated in empathy studies, including IPL, IFG and SMA, along with SI, STG, cerebellum and AIC. It was also sensitive to negative valence: noisy timbres were processed with greater involvement from SMA in individuals with higher EC. EC is an “other-oriented” emotional scale measuring sympathy or compassion towards the misfortune of others. Since noisy, distorted qualities of vocal timbre are an index of generally high-arousal, negatively valenced affective states, we theorize that individuals with higher trait EC exhibited greater motor attunement owing to the ecological urgency typically signaled by such sound events.

In short, we usually deploy harsh vocal timbres when distressed or endangered (e.g., screaming or shouting), not during affectively positive or neutral low arousal states, and high empathy people are more likely to pick up on and simulate the affective motor implications of others in distress. Though our sensitivity to the human voice is especially acute, researchers have hypothesized that instrumental timbre can similarly function as a “superexpressive voice” via acoustic similarities to emotional vocal expression. Our result would seem to support this theory, as motor response appears to encode the combined effects of noisy tones, both vocal and instrumental.

It is also worth noting, as might be expected given the above, that noisy voice produced a unique signature of activation among high FS and EC participants relative to the normal vocal stimuli: FS modulated processing of the noisy voice in SII and IPL, while EC was selective to noisy vocal sounds in the SMA and primary motor cortex. This result appears to be at odds with other studies of vocal affect sensitivity that report motor mimetic selectivity for pleasant vocalizations. It is likely that individual variances in empathy (plus other mediating factors) predispose listeners to differing orientations towards others’ affective vocalizations, with empathic listeners more likely to “catch” the motor affective implications of aversive sounds than low empathy people, who might only respond to sounds they find pleasant while tuning out negatively valenced vocalizations.

Cox (2016) theorizes that music can afford listeners an “invitation” for motor engagement, which they may choose to accept or decline. Seen from this perspective, it is likely that individual differences in empathy play an important role in determining how we choose to respond to music’s motor invitations.

Regarding motor engagement across IRI subscales, it is apparent that SMA is the most prominent sensorimotor area involved in empathy-modulated processing of timbre. SMA is a frequently reported yet undertheorized part of the core empathy network; it has also been implicated in internally generated movement and coordination of action sequences, and has been shown in a single-neuron study to possess mirror properties. Most relevant to the present study, moreover, SMA contributes to the vividness of auditory imagery, including imagery for timbre. Halpern et al. and Lima et al. attributed SMA activity in an auditory imagery task in part to subvocalization of timbral attributes, and the present study would seem to partially corroborate this explanation. We interpret this result as a possible instance of sensorimotor integration: SMA activity could reflect a basic propensity to link sounds with their associated actions, which are internally mirrored while listening. In accordance with this view, we would argue that people do not just passively listen to different qualities of musical timbre; they enact some of the underlying physical determinants of sound production, whether through subvocalization or biography-specific action-sound associations.

To summarize, sensorimotor areas have been implicated in many previous studies of emotional empathy, including IFG and IPL; “pain circuit” areas in AIC and ACC; and somatomotor regions. Interestingly, these precise regions dominated results of the Experiment 1 timbre listening task. This is true, moreover, for both emotional and cognitive scales: PT and FS, though often implicated in cognitive tasks, were found in this experiment to modulate SMA, SI, primary motor cortex, IPL, MC and IFG, well-documented motor-affective areas. We theorize that the contextual impoverishment and short duration of the timbre listening task (2-s isolated tones) may have largely precluded any genuine perspective taking or fantasizing from occurring; it is much harder to put oneself in the “shoes” of a single isolated voice or instrument, of course, than it is an affectively rich piece of actual music. However, even in the absence of conscious cognitive empathizing, which presumably would have been reflected in engagement of the cognitive empathy system, individuals with high trait PT and FS still showed selective activations of sensorimotor and affective relay circuits typically associated with emotional empathy. This could be interpreted to suggest that the two “routes” to empathy are not dissociated in music listening: although conscious PT in response to abbreviated auditory cues is unlikely, people who frequently imagine themselves in the positions of others also exhibit a tendency toward motor resonance in this basic listening task, even when musical context is missing.

Prefrontal and Reward Activation During Music Listening

Experiment 2 used 16-s excerpts of self- and experimenter-selected music to explore the effect of dispositional empathy on the processing of music in four conditions: familiar liked (FL), familiar disliked (FD), unfamiliar liked (UL), and unfamiliar disliked (UD). Participants consisted of individuals who reported regularly experiencing intense emotional reactions while listening to music. Musical liking is associated at the group level (i.e., no IRI covariates) with left basal ganglia reward areas, and disliking with activity in right AIC, primary auditory cortex and prefrontal areas (OFC and VLPFC). Musical familiarity is associated with activation across a broad region of the cortex, subcortical areas, and cerebellum, including IPL, premotor cortex and the core empathy network, while unfamiliarity recruits only the SFG.
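The contrast labels used in this section (Familiar > Unfamiliar, Liked > Disliked, FL > FD, and so on) can be read as weightings over the four conditions. The sketch below only illustrates that notation with made-up response values; the specific weights follow a standard textbook convention and are not taken from the paper’s analysis.

```python
# Illustration of contrast notation over the four Experiment 2 conditions.
import numpy as np

conditions = ["FL", "FD", "UL", "UD"]    # familiar/unfamiliar x liked/disliked
betas = np.array([1.2, 0.9, 0.4, 0.3])   # hypothetical per-condition responses (made up)

# A contrast is a weight vector over conditions; positive weights pick out one side,
# negative weights the other.
contrasts = {
    "Familiar > Unfamiliar":             np.array([+0.5, +0.5, -0.5, -0.5]),
    "Liked > Disliked":                  np.array([+0.5, -0.5, +0.5, -0.5]),
    "Familiar liked > Unfamiliar liked": np.array([+1.0,  0.0, -1.0,  0.0]),
}

for name, weights in contrasts.items():
    print(f"{name}: {weights @ betas:+.2f}")
```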

This robust familiarity effect is even more acute among high empathy listeners: after adding empathy covariates to our analysis, there were no regions that demonstrated an affect specific response after controlling for familiarity. This result is consistent with the literature in showing a large neurophysiological effect of familiarity on musical liking; it appears that trait empathy, as well, modulates responses to familiar music to a greater degree than unfamiliar music.

Contrary to expectations, activation in regions primarily associated with emotional empathy (e.g., sensorimotor areas, ACC, AIC) was not a major component in empathy-modulated music processing. Instead, the most prominent activation sites for PT and EC scales were prefrontal, including medial, lateral, and orbital portions of the cortex, as well as TPJ. These regions are involved in executive control, regulation of emotions, mentalizing, contextual appraisal, and “enactment imagination,” and have figured prominently in many studies on the neurophysiology of cognitive empathy. Additionally, FS and EC results were characterized by dorsal striatum activity when participants listened to familiar music. This basal ganglia structure has been frequently reported in empathy studies but not often discussed; it has also long been associated with musical pleasure.


Replicating this association, our results suggest that empathic people experience a higher degree of reward and motivation when listening to familiar music compared to lower empathy people.

PT was associated with left TPJ in the task > baseline contrast. Activation of this region among perspective takers is consistent with studies implicating TPJ in theory of mind and the merging of self and other (Lawrence et al., 2006). The TPJ was joined by posterior cingulate, cerebellum and superior prefrontal areas when listening to familiar liked music (FL > FD), the former two of which were also identified in a study on the neural bases of perspective taking. Interestingly, these results differ substantially from the PT correlations in Experiment 1, which were entirely sensorimotor. In the context of isolated musical sounds, PT results were interpreted as a reflection of covert imitation (or enactive perspective taking); in contrast, however, it appears here that PT may reflect a more cognitively mediated, mental form of perspective taking, which conceivably extends beyond action-perception coupling of musicians’ affective motor cues to encompass contextual appraisal, assessments of the affective intent embodied in the music, and other executive functions.

In contrast to the prominent TPJ and prefrontal activation associated with PT, FS results revealed activation of dorsal striatum (caudate and putamen) and limbic areas (thalamus, hippocampus and amygdala). Activation of reward and emotion centers may suggest that fantasizers also tend to exhibit heightened positive emotional reactions to familiar music. Indeed, we found a moderate correlation between FS and preference ratings for familiar liked music, which may tentatively corroborate this claim. Moreover, structural brain studies have found that FS is associated with increased gray matter volume in hippocampus, an important memory area, perhaps also indicating enhanced encoding of familiar liked music among fantasizers.

The contrast in activation between the two IRI cognitive empathy scales (PT and FS) is notable, and may be attributed to the different aspects of empathy they were designed to assess. PT taps the tendency to imagine oneself in other people’s shoes, whereas FS captures the tendency to imagine oneself from the perspective of fictional characters. With this distinction in mind, one could surmise that the two scales also tap different views regarding the ontology of the musical agent: in this reading, people with high trait PT are more likely to take music as a social stimulus, i.e., as if it was a real or virtual human presence (with theory of mind, goals, beliefs), while high FS listeners are more likely to hear it as “fictional” from a social perspective, i.e., as a rewarding sensory stimulus with an attenuated grip on actual social cognition. Further research is called for to explore possible explanations for the differences in cognitive scales as reflected in music listening.

Turning finally to emotional empathy, we found that EC recruits prefrontal, reward and sensorimotor affective areas in music listening, and is likewise quite sensitive to familiarity. In the Familiar > Unfamiliar contrast, we found activation of cerebellum, IPL, DLPFC, IFG, DMPFC, amygdala, anterior paracingulate, dorsal striatum, OFC and lingual gyrus, and a variation on this general pattern for the Familiar liked > Unfamiliar liked and interaction contrasts. Activation of bilateral IPL and IFG is consistent with mirror accounts of empathy. Furthermore, the ACC, paracingulate, and areas that extend dorsally (SMA, DMPFC) have been proposed as the core of the empathy network: our result would seem to extend support for the primacy of this region using an experimental task that is not explicitly social in the manner of most empathy studies. Lastly, DLPFC is an important executive control area in cognitive empathy, and has been implicated in emotional regulation. Activation of this region may reflect top-down control over affective responses to familiar music, both in terms of up-regulation to liked music and down-regulation to disliked (or possibly up-regulation to negative stimuli, as open-minded empathic listeners try to “see something positive” in the disliked music). In further research, connectivity analysis between DLPFC and limbic/reward areas may help to specify the neurophysiological mechanisms underlying empathy-modulated emotional regulation during music listening.

In addition to motor, cingulate and prefrontal activity, we found the recruitment of emotion and reward processing areas as a function of EC and musical familiarity: dorsal striatum (the whole extent of the caudate nucleus, plus thalamus) may reflect increased pleasure in response to familiar music among empathic listeners. It is not surprising that the reward system would show preferential activation to familiar music, as confirmed in the basic group Liked > Disliked contrast.

Prevalence of basal ganglia for both EC and FS suggests that trait empathy may effectively sensitize people to the music they already know. This even appears to be the case for disliked music, which showed dorsal striatum activation (along with OFC) in the Familiar disliked > Unfamiliar disliked contrast. This could be interpreted to indicate that empathic people may experience heightened musical pleasure even when listening to the music they self-select as “hating,” provided it is familiar. By way of contrast, no striatum activation was found for any of the unfamiliar music conditions.

In concert with limbic circuitry, then, it is apparent that musical familiarity recruits a broad region of the affect reward system in high EC listeners.

Activation of inferior parts of the lingual gyrus and occipital lobe was another novel finding, and may also be linked to musical affect. These areas are associated with visual processing, including perception and recognition of familiar sights and emotional facial expressions, as well as visual imagery. It is reasonable to think that empathic listeners may be more prone to visual imagery while listening to familiar music. Visual responses are an important mechanism of musical affect more generally, and are a fairly reliable index of musical engagement and attention.

If high EC people are more susceptible to musical affect, as suggested by our results, they may also show a greater tendency towards visual imagery in music listening.

To be clear, we did not explicitly operationalize visual imagery in this study: in the future, it would be interesting to follow up on this result by comparing visual imagery and music listening tasks using the EC scale as a covariate.

The behavioral data resonate in interesting and sometimes contradictory ways with these imaging findings. We found that EC is strongly associated with preference for liked music and unfamiliar music, and negative responses to familiar disliked music. Results suggest that high EC people are more responsive to the affective components of music, as reflected in polarity of preference responses. EC was also associated with open-mindedness to new music (i.e., higher ratings for unfamiliar music), though imaging results for this contrast did not reach significance, and might appear to be contradicted by the clear familiarity effect discussed previously.

We must be cautious in the interpretation of these findings owing to the small sample size, but this resonance between behavioral and imaging evidence is nonetheless suggestive in demonstrating a role for EC in affective responsiveness to familiar music. This conclusion is broadly consistent with previous behavioral studies, especially regarding pleasurable responses to sad music.

In sum, the present results provide complementary neural evidence that involvement of prefrontal areas and limbic/basal ganglia in music listening covaries with individual trait differences in empathy, with sensorimotor engagement playing a smaller role.

How do we account for the prominence of cognitive, prefrontal areas in music listening but not musical timbre in isolation? It must be noted that a broad swath of the emotional empathy system was involved in the basic task > baseline contrast (used to mask all IRI covariates): in other words, it is clear that music in aggregate is processed with some level of sensorimotor, paralimbic, and limbic involvement, regardless of the empathy level of the listeners or the valence/familiarity of the music. However, our results seem to suggest that empathic people tend to be more attuned to the attribution of human agency and affective intention in the musical signal, as indicated by preferential engagement of cognitive empathy networks including PFC (MPFC and DLPFC) and TPJ, as well as reward areas.

In other words, what seems to best characterize the high empathy response to musical stimuli is the tendency to take an extra cognitive step towards identification with some agentive quality of the music, over and above the work of emotional contagion mechanisms alone.

Thus while patterns of neural resonance consistent with emotional contagion appear to be common to most experiences of music and were also found among high empathy participants in Experiment 1, activation of prefrontal cognitive empathy systems for the PT and EC scales may indicate the tendency of empathic listeners to try to “get into the heads” of composers, performers, and/or the virtual persona of the music. This top-down process is effortful, imaginative, and self-aware, in contrast to the automatic and pre-reflective mechanisms undergirding emotional contagion. Accordingly, as suggested by Schubert, the involvement of cognitive systems may not, strictly speaking, be required for affective musical response, which can largely be accounted for by emotional contagion circuitry alone.

A number of studies have shown that mental imagery may be supported by sensorimotor and affective components without the contribution of prefrontal areas. Nevertheless, the prefrontal activations observed here could betoken a more social cognitive mode of listening, a deliberative attempt on the part of listeners to project themselves into the lived experience of the musical agent. This imaginative projection is more intense, understandably, for music that empathic people already know, and also appears to interact with musical preference.

General Implications

The present study has a number of implications for social and affective neuroscience, music psychology, and musicology. For neuroscientific empathy research, we demonstrate the involvement of the core empathy network and mirror neuron system outside of tasks that are explicitly social cognitive. Most studies use transparently social experimental tasks and stimuli to assess neural correlates of state and trait empathy; for example, viewing pictures or videos of other people.

This study demonstrates that musical sound, which is perhaps not an obvious social stimulus, can elicit neural responses consistent with theories of empathy. By doing so, this study highlights the potential value of operationalizing artistic and aesthetic experience as a window into social cognitive and affective processing, a perspective that is arguably the historical progenitor of contemporary empathy research.

For music psychology, this research has at least three main implications.

First, this study demonstrates that trait empathy may modulate the neurophysiology of music listening. Although there is mounting behavioral and psychophysiological evidence pointing to this conclusion, this is the first study to investigate the effects of empathy on the musical brain.

Second, this study confirms and extends empirical claims that music cognition is inextricably linked to social cognition. Our results suggest that aspects of affective music processing can be viewed as a specialized subprocess of general social affective perception and cognition. This may begin to explain the neural bases for how music can function as a “virtual social agent”.

Third, in demonstrating neural differences in music processing as a function of empathy, we highlight the possible significance of looking at other trait features when assessing the functional neural correlates of musical tasks and stimuli. Many neurophysiological music studies take only a few trait features into account in sampling procedures and analysis, most notably sex, age, and musical training: the latter has been well explored, but other factors such as personality and mood are not frequently addressed. Individual differences in music processing may relate to dispositional characteristics that can be captured by psychosocial questionnaires, indirect observational techniques, or other methods. Exploring the role of such trait variables in musical behaviors and brain processing could provide a more detailed and granular account of music cognition.

Finally, these results enrich the humanistic study of music in providing a plausible psychobiological account for the social valence of musical experience observed in diverse cultural and historical settings. As music theorist Clifton claims, “the ‘other’ need not be a person: it can be music.”

In a very rough sense, this study provides empirical support for this statement: areas implicated in trait empathy and social cognition also appear to be involved in music processing, and to a significantly greater degree for individuals with high trait empathy.

If music can function something like a virtual ”other,” then it might be capable of altering listeners‘ views of real others, thus enabling it to play an ethically complex mediating role in the social discourse of music. Indeed, musicologists have historically documented moments of tense cultural encounter wherein music played an instrumental role in helping one group to realize the other’s shared humanity.

Recent research would seem to provide behavioral ballast for this view: using an implicit association task, Vuoskoski et al. showed that listening to the music of another culture could positively modulate attitudes towards members of that culture among empathic listeners. Though we do not in this study explicitly address whether music can alter empathic brain circuits, it is suggestive that certain attitudes toward musical sound may have behavioral and neural bases in individual differences in trait empathy.

Limitations

A few important limitations must be considered in interpreting these results. First, this study was correlational: no causative links can thus be determined in the relationship between music and trait empathy. In the future, it would be interesting to use an empathy priming paradigm in an MRI context to compare neurophysiological correlates of trait empathy with primed “state” empathy in music listening; this could provide a powerful method for disentangling possible differences in processing between dispositional attributes of empathy and contextual factors (e.g., socially conditioned attitudes about a performer, mood when listening).

As a corollary to the above, moreover, this study does not address whether our results are specific to music listening: perhaps high empathy people utilize more of these areas when performing other non-musical yet not explicitly social tasks as well (e.g., viewing abstract art). Additionally, we do not explore whether there could be other mediating trait factors in music processing besides empathy and sex: personality and temperament, for instance, have been shown to modulate responses to music.

Finally, this study will need to be replicated with a larger sample size, and with participants who do not self select based on strong emotional reactions to music, in order to strengthen the statistical power and generalizability of the results.

CONCLUSION

In two experiments using fMRI, this article demonstrates that trait empathy modulates music processing. Replicating previous findings in the social neuroscience literature, isolated musical timbres are related to sensorimotor and paralimbic activation; in actual music listening, however, empathy is primarily associated with activity in prefrontal and reward areas. Empathic participants were found to be particularly sensitive to abrasive, “noisy” qualities of musical timbre, showing preferential activation of the SMA, possibly reflecting heightened motor mimetic susceptibility to sounds signaling high-arousal, low-valence affective states.

In the music listening task, empathic subjects demonstrated enhanced responsiveness to familiar music, with musical preference playing a mediating role. Taken together, these results confirm and extend recent research on the link between music and empathy, and may help bring us closer to understanding the social cognitive basis for music perception and cognition.

INTERPERSONAL REACTIVITY INDEX (IRI)

Reference:

Davis, M. H. (1980). A multidimensional approach to individual differences in empathy. JSAS Catalog of Selected Documents in Psychology, 10, 85.

Description of Measure:

Defines empathy as the “reactions of one individual to the observed experiences of another (Davis, 1983).”

28 items answered on a 5 point Likert scale ranging from “Does not describe me well” to “Describes me very well”. The measure has 4 subscales, each made up of 7 different items. These subscales are (taken directly from Davis, 1983):

Perspective Taking, the tendency to spontaneously adopt the psychological point of view of others.

Fantasy taps respondents’ tendencies to transpose themselves imaginatively into the feelings and actions of fictitious characters in books, movies, and plays.

Empathic Concern assesses “other-oriented” feelings of sympathy and concern for unfortunate others.

Personal Distress measures “self-oriented” feelings of personal anxiety and unease in tense interpersonal settings.

Abstracts of Selected Related Articles:

Davis, M. H. (1983). Measuring individual differences in empathy: Evidence for a multidimensional approach. Journal of Personality and Social Psychology, 44, 113–126.

The past decade has seen growing movement toward a view of empathy as a multidimensional construct. The Interpersonal Reactivity Index (IRI; Davis, 1980), which taps four separate aspects of empathy, is described, and its relationships with measures of social functioning, self esteem, emotionality, and sensitivity to others is assessed. As expected, each of the four subscales displays a distinctive and predictable pattern of relationships with these measures, as well as with previous unidimensional empathy measures. These findings, coupled with the theoretically important relationships existing among the four subscales themselves, provide considerable evidence for a multidimensional approach to empathy in general and for the use of the IRI in particular.

Pulos, S., Elison, J., & Lennon, R. (2004). Hierarchical structure of the Interpersonal Reactivity Index. Social Behavior and Personality, 32, 355–360.

The hierarchical factor structure of the Interpersonal Reactivity Index (IRI; Davis, 1980) inventory was investigated with the Schmid-Leiman orthogonalization procedure (Schmid & Leiman, 1957). The sample consisted of 409 college students. The analysis found that the IRI could be factored into four first-order factors, corresponding to the four scales of the IRI, and two second-order orthogonal factors, a general empathy factor and an emotional control factor.

INTERPERSONAL REACTIVITY INDEX

The following statements inquire about your thoughts and feelings in a variety of situations. For each item, indicate how well it describes you by choosing the appropriate letter on the scale at the top of the page: A, B, C, D, or E.

When you have decided on your answer, fill in the letter next to the item number.

READ EACH ITEM CAREFULLY BEFORE RESPONDING. Answer as honestly as you can. Thank you.

ANSWER SCALE:

A DOES NOT DESCRIBE ME WELL

B

C

D

E DESCRIBES ME VERY WELL

1. I daydream and fantasize, with some regularity, about things that might happen to me. (FS)

2. I often have tender, concerned feelings for people less fortunate than me. (EC)

3. I sometimes find it difficult to see things from the “other guy’s” point of view. (PT) (-)

4. Sometimes I don’t feel very sorry for other people when they are having problems. (EC) (-)

5. I really get involved with the feelings of the characters in a novel. (FS)

6. In emergency situations, I feel apprehensive and ill at ease. (PD)

7. I am usually objective when I watch a movie or play, and I don’t often get completely caught up in it. (FS) (-)

8. I try to look at everybody’s side of a disagreement before I make a decision. (PT)

9. When I see someone being taken advantage of, I feel kind of protective towards them. (EC)

10. I sometimes feel helpless when I am in the middle of a very emotional situation. (PD)

11. I sometimes try to understand my friends better by imagining how things look from their perspective. (PT)

12. Becoming extremely involved in a good book or movie is somewhat rare for me. (FS) (-)

13. When I see someone get hurt, I tend to remain calm. (PD) (-)

14. Other people’s misfortunes do not usually disturb me a great deal. (EC) (-)

15. If I‘m sure I’m right about something, I don’t waste much time listening to other people’s arguments. (PT) (-)

16. After seeing a play or movie, I have felt as though I were one of the characters. (FS)

17. Being in a tense emotional situation scares me. (PD)

18. When I see someone being treated unfairly, I sometimes don’t feel very much pity for them. (EC) (-)

19. I am usually pretty effective in dealing with emergencies. (PD) (-)

20. I am often quite touched by things that I see happen. (EC)

21. I believe that there are two sides to every question and try to look at them both. (PT)

22. I would describe myself as a pretty soft hearted person. (EC)

23. When I watch a good movie, I can very easily put myself in the place of a leading character. (FS)

24. I tend to lose control during emergencies. (PD)

25. When I’m upset at someone, I usually try to “put myself in his shoes” for a while. (PT)

26. When I am reading an interesting story or novel, I imagine how I would feel if the events in the story were happening to me. (FS)

27. When I see someone who badly needs help in an emergency, I go to pieces. (PD)

28. Before criticizing somebody, I try to imagine how I would feel if I were in their place. (PT)
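For readers who want to tally the four subscales from this item list, here is a minimal scoring sketch. The item-to-subscale assignments and the reverse-keyed items marked “(-)” come directly from the list above; the numeric mapping (A = 0 through E = 4, with reverse-keyed items flipped) follows the conventional Davis scoring and is assumed here rather than stated in this post.

```python
# Sketch of scoring the IRI subscales from the 28 items listed above.
# Subscale membership and reverse-keyed items are taken from the item list;
# the A=0 ... E=4 numeric mapping is assumed (conventional scoring).
SUBSCALES = {
    "PT": [3, 8, 11, 15, 21, 25, 28],   # Perspective Taking
    "FS": [1, 5, 7, 12, 16, 23, 26],    # Fantasy
    "EC": [2, 4, 9, 14, 18, 20, 22],    # Empathic Concern
    "PD": [6, 10, 13, 17, 19, 24, 27],  # Personal Distress
}
REVERSED = {3, 4, 7, 12, 13, 14, 15, 18, 19}   # items marked (-) above
LETTER_TO_SCORE = {"A": 0, "B": 1, "C": 2, "D": 3, "E": 4}

def score_iri(answers: dict[int, str]) -> dict[str, int]:
    """answers maps item number (1-28) to a letter A-E; returns subscale totals."""
    totals = {}
    for name, items in SUBSCALES.items():
        total = 0
        for item in items:
            value = LETTER_TO_SCORE[answers[item].upper()]
            if item in REVERSED:
                value = 4 - value      # flip reverse-keyed items
            total += value
        totals[name] = total           # each subscale ranges 0-28
    return totals

# Example: a respondent who answers "C" to every item scores 14 on each subscale.
print(score_iri({i: "C" for i in range(1, 29)}))
```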

See also:

Music and the Mind

by Anthony Storr


Inequality breeds stress and anxiety. No wonder so many Britons are suffering – Richard Wilkinson and Kate Pickett.

Studies of people who are most into our consumerist culture have found that they are the least happy, the most insecure and often suffer poor mental health.

Understanding inequality means recognising that it increases school shootings, bullying, anxiety levels, mental illness and consumerism because it threatens feelings of self-worth.

In equal societies, citizens trust each other and contribute to their community. This goes into reverse in countries like ours.

The gap between image and reality yawns ever wider. Our rich society is full of people presenting happy smiling faces both in person and online, but when the Mental Health Foundation commissioned a large survey last year, it found that 74% of adults were so stressed they felt overwhelmed or unable to cope. Almost a third had had suicidal thoughts and 16% had self-harmed at some time in their lives. The figures were higher for women than men, and substantially higher for young adults than for older age groups. And rather than getting better, the long-term trends in anxiety and mental illness are upwards.

For a society that believes happiness is a product of high incomes and consumption, these figures are baffling. However, studies of people who are most into our consumerist culture have found that they are the least happy, the most insecure and often suffer poor mental health.

An important part of the explanation involves the psychological effects of inequality. The greater the material differences between us, the more important status and money become. They are increasingly seen as if they were a measure of a person’s inner worth. And, as research shows, the result is that the more unequal the society, the more people feel anxiety about status and how they are seen and judged. These effects are seen across all income groups from the poorest to the richest tenth of the population.

Inequality increases our insecurities about self-worth because it emphasises status and strengthens the idea that some people are worth much more than others. People at the top appear supremely important, almost as superior beings, while others are made to feel as if they are of little or no value. A study of how people experience low social status in different countries found, predictably, that people felt they were failures. They felt a strong sense of shame and despised themselves for failing. Whether they lived in countries as rich as the UK and Norway, or as poor as Uganda and Pakistan, made very little difference to what it felt like to be near the bottom of the social ladder.

Studies have shown that conspicuous consumption is intensified by inequality. If you live in a more unequal area, you are more likely to spend money on a flashy car and shop for status goods. The strength of this effect on consumption can be seen in the tendency for inequality to drive up levels of personal debt as people try to enhance their status.

But it is not just that inequality increases status anxiety. For many, it would be nearer to the truth to say that it is an assault on their feeling of self-worth. It increases what psychologists have called the “social evaluative threat”, where social contact becomes increasingly stressful. The result for some is low self-esteem and a collapse of self-confidence. For them, social gatherings become an ordeal to be avoided. As they withdraw from social life they suffer higher levels of anxiety and depression.

Others react quite differently to the greater ego threat of invidious social comparisons. They react by trying to boost the impression they give to others. Instead of being modest about achievements and abilities, they flaunt them.

Rising narcissism is part of the increased concern with impression management. A study of what has been called “self-enhancement” asked people in different countries how they rated themselves relative to others. Rather like the tell-tale finding that 90% of the population think they are better drivers than average, more people in more unequal countries rated themselves above average on a number of different dimensions. They claimed, for example, that they were cleverer and more attractive than most people.

Nor does the damage stop there. Psychological research has shown that a number of mental illnesses and personality disorders are linked to issues of dominance and subordination exacerbated by inequality. Some, like depression, are related to an acceptance of inferiority, others relate to an endless attempt to defend yourself from being looked down on and disrespected. Still others are borne of the assumption of superiority or to an endless struggle for it. Confirming the picture, the international data shows not only that mental illness as a whole is more common in more unequal societies, but specifically that depression, schizophrenia and psychoses are all more common in those societies.

What is perhaps saddest about this picture is that good social relationships and involvement in community life have been shown repeatedly to be powerful determinants of health and happiness. But it is exactly here that great inequality throws another spanner in the works. By making class and status divisions more powerful, it leads to a decline in community life, a reduction in social mobility, an increase in residential segregation and fewer inter-class marriages.

More equal societies are marked by strong community life, high levels of trust, a greater willingness to help others, and low levels of violence. As inequality rises, all this goes into reverse. Community life atrophies, people cease to trust each other, and homicide rates are higher.

In the most unequal societies, like Mexico and South Africa, the damage has gone further: citizens have become afraid of each other. Houses are barricaded with bars on windows and doors, razor wire atop walls and fences.

And as inequality increases, a higher proportion of a country’s labour force is employed in what has been called “guard labour”: the security staff, prison officers and police we use to protect ourselves from each other.

Understanding inequality means recognising that it increases school shootings, bullying, anxiety levels, mental illness and consumerism because it threatens feelings of self-worth.

Richard Wilkinson and Kate Pickett are the authors of The Inner Level: How More Equal Societies Reduce Stress, Restore Sanity and Improve Everyone’s Wellbeing

Life after Severe Childhood Trauma. I Think I’ll Make It. A True Story of Lost and Found – Kat Hurley.

Had I known I should have been squirreling away memories as precious keepsakes, I would have scavenged for more smiles, clung to each note of contagious laughter and lingered steadfast in every embrace.

Memory is funny like that: futile facts and infinitesimal details are fixed in time, yet things you miss, things you wish you paid fuller attention to, you may never see again.

“I learned this, at least, by my experiment: that if one advances confidently in the direction of his dreams, and endeavors to live the life which he has imagined, he will meet with a success unexpected in common hours.”

Henry David Thoreau, Walden: Or, Life in the Woods

To write this book, I relied heavily on archived emails and journals, researched facts when I thought necessary, consulted with some of the people who appear in the book, and called upon my own memory, which has a habitual tendency to embellish, but as it turns out, there wasn’t much need for that here. Events in this book may be out of sequence, a handful of locations were changed to protect privacy, many conversations and emails were re-created, and a few names and identifying characteristics have been changed.

It was hardly a secret growing up that psychologists predicted I would never lead a truly happy and normal life. Whether those words were intended for my ears or not seemed of little concern, given the lack of disclaimer to follow. There was no telling what exceedingly honest bits of information would slip through the cracks of our family’s filtration system of poor Roman Catholic communication. I mean, we spoke all the time but rarely talked. On the issues at least, silence seemed to suit us best, yet surprising morsels of un-sugarcoated facts would either fly straight out of the horse’s mouth or trickle their way down through the boys until they hit me, the baby.

I was five when I went to therapy. Twice. On the second visit, the dumb lady asked me to draw what I felt on a piece of plain construction paper. I stared at the few crayons next to the page when I told her politely that I’d rather not. We made small talk instead, until the end of the hour when she finally stood up, walked to the door and invited my grandma in. They whispered some before she smiled at me and waved. I smiled back, even if she was still dumb. I’m sure it had been suggested that I go see her anyway, because truth be known, psychologists were a “bunch of quacks,” according to my grandma. When I said I didn’t want to go back, nobody so much as batted an eye.

And that was the end of that.

When I draw up some of my earliest most vivid memories, what I see reminds me of an old slide projector, screening crooked, fuzzy images at random. In the earliest scenes, I am lopsidedly pigtailed, grass stained, clothes painfully clashing. In one frame I am ready for my first day of preschool in my bright red, pill-bottomed bathing suit, standing at the bottom of the stairs where my mom has met me to explain, through her contained laughter, that a carpool isn’t anything near as fun as it sounds. In another, I am in the living room, turning down the volume on my mom’s Richard Simmons tape so I can show her that, all on my own yet only with a side-puckered face, I’d learned how to snap. In one scene, I’m crouched down in the closet playing hide-and-seek, recycling my own hot Cheerio breath, patiently waiting to be found, picking my toes. Soon Mom would come home and together we’d realize that the boys weren’t seeking (babysitting) me at all, they’d simply gone down the street to play with friends.

I replay footage of the boys, Ben and Jack, pushing me in the driveway, albeit unintentionally, toward the busy road on my first day with no training wheels, and (don’t worry, I tattled) intentionally using me as the crash-test dummy when they sent me flying down the stairs in a laundry basket. I have the scene of us playing ice hockey in the driveway after a big ice storm hit, me proudly dropping the puck while my brothers, Stanley Cup serious, faced off.

I call up the image of me cross-legged on my parents’ bed, and my mom’s horrified face when she found me, scissors in hand, thrilled with what she referred to as my new “hacked” do. That same bed, in another scene, gets hauled into my room when it was no longer my parents’, and my mom, I presume, couldn’t stand to look at it any longer. I can still see the worry on her face in those days and the disgust on his. I see the aftermaths of the few fights they couldn’t help but have us witness.

Most of the scenes are of our house at the top of the hill on McClintock Drive, but a few are of Dad’s townhouse in Rockville, near the roller rink. I remember his girlfriend, Amy, and how stupid I thought she was. I remember our Atari set and all our cool new stuff over there. And, of course, I remember Dad’s really annoying crack-of-dawn routine of “Rise and Shine!”

I was my daddy’s darling, and my mommy’s little angel.

Then without warning I wasn’t.

Had I known I should have been squirreling away memories as precious keepsakes, I would have scavenged for more smiles, clung to each note of contagious laughter and lingered steadfast in every embrace. Memory is funny like that: futile facts and infinitesimal details are fixed in time, yet things you miss, things you wish you paid fuller attention to, you may never see again.

I was just a regular kid before I was ever really asked to “remember.” Up until then, I’d been safe in my own little world: every boo-boo kissed, every bogeyman chased away. And for a small voice that had never been cool enough, clever enough, or captivating enough, it was finally my turn. There was no other choice; I was the only witness.

“Tell us everything you know, Katie. It is very important that you try to remember everything you saw.”

August 11, 1983

I am five. I’ll be in kindergarten this year, Ben is going to third grade, Jack will be in seventh. I’m not sure where the boys are today; all I know is that I’m glad it’s just me and Mom. We’re in the car, driving in our Ford wagon, me bouncing unbuckled in the way back. We sing over the radio like we always do. We’re on our way to my dad’s office, for the five-hundredth time. Not sure why, again, except that “they have to talk.” They always have to talk. Ever since Dad left and got his new townhouse with his new girlfriend, all they do is talk.

Mom pulls into a space in front of the office. The parking lot for some reason is practically empty. His cleaning business is all the way in the back of this long, lonely stretch of warehouse offices, all boring beige and ugly brown, with big garage doors and small window fronts.

“You can stay here, sweetie pie. I won’t be long.”

I have some of my favorite coloring books and a giant box of crayons; I’ll be fine.

Time passes in terms of works of art. Goofy, Mickey, and Donald are all colored to perfection before I even think to look up. I am very fond of my artistic abilities; my paint-by-numbers are exquisite, and my papier-mâché, as far as I’m concerned, has real promise for five. All of my works are fridge-worthy; even my mom thinks so. My special notes and handmade cards litter her nightstand, dresser, and bathroom counter.

I hear a scream. Like one I’d never heard before, except on TV. Was that her? I sit still for a second, wait for another clue. That wasn’t her. But something tells me to check anyway just in case.

I scramble out from the way back, over the seat, and try to open the door, but I’m locked in. Why would she lock me in? I tug at the lock and let myself out. With the car door still open, I scurry to the front window of my dad’s shop, and on my tiptoes, ten fingers to the ledge, I can see inside. The cage with the snakes is there, the desk and chairs are there, the cabinets and files are there, everything looks normal like the last time I was inside. Where are they?

Then through the window, I see my mom. At the end of the hall, I can see her through the doorway. But just her feet. Well, her feet and part of her legs. They are there, on the floor, her sandals still on. I can make out the tip of his shoe too, at her thigh, like he’s sitting on top of her. She is still. I don’t get it. Why are they on the floor? I try to open the door, but it’s locked. I don’t recall knocking; maybe I did. I do know that I didn’t yell to be let in, call for help, or demand to know what was going on.

It wasn’t her. It sounded like it came from down the street, I tell myself. Maybe it wasn’t a scream scream, anyway. Someone was probably just playing, I convince myself. I get back in the car. I close the door behind me and color some more.

Only two pages are colored in this time. Not Mickey and friends, Snow White now. Fairy tales. My dad knocks on the window, startling me, smiling. “Hey, princess. Your mom is on the phone with Aunt Jeannie, so you’ll just see her Monday. You’re coming with me, kiddo. We have to go get your brother.”

Everything I’ve seen is forgotten. My dad’s convincing smile, tender voice, and earnest eyes make all my fright disappear. He told me she was on the phone, and I believed him. How was I supposed to know that dads could lie?

Two days later, my brothers and I were at the beach on a job with Dad when our grandparents surprised us with the news. “Your mother is missing.” And it was only then, when I sensed the fear they tried so intently to wash from their faces, that the realization struck me as stark panic, that I was brought back to the scene for the first time and heard the scream I understood was really her.

My testimony would later become the turning point in the case, reason enough to convict my father, who in his cowardice had covered all his traces. Even after his conviction, it would be three more years until he fully confessed to the crime. I was eight when I stood, uncomfortable, in a stiff dress at her grave for the second time more flowers, same priest, same prayers.

To say I grew up quickly, though, as people have always suspected, would be a stretch. Certainly, I was more aware, but the shades of darkness were graced with laughter and lullabies and being a kid and building forts, and later, learning about my period from my crazy grandma.

I honestly don’t remember being treated any differently, from Grandma Kate at least. If I got any special attention, I didn’t know it. Life went on. Time was supposed to heal all wounds. My few memories of mom, despite my every attempt, faded with each passing holiday.

I was in Mrs. Dunne’s third grade class when my dad finally confessed. We faced a whole ’nother wave of reporters, news crews, and commotion. They replayed the footage on every channel: me, five years old again, clad in overalls, with my Care Bear, walking into the courtroom. And just like before, my grandpa taped all the news reels. “So we never forget,” he said.

For our final TV interview, my grandparents, the boys, and I sat in our church clothes in the front room to answer the reporter’s questions. I shifted around on Grandma Kate’s lap in my neatly pressed striped Easter dress. Everybody had a turn to talk. I was last. “Katie, now that the case is closed, do you think you will be able to move on?”

I’m not sure how I knew it then, especially when so many years of uncertainty were still to come, but I was confident: “Yeah.” I grinned. “I think I’ll make it.”

Chapter One

TEACHING MOMENT

“Well, I just called to tell you I’ve made up my mind.” Silence. “I will not be returning to school next year.”

Silence. “I don’t know where I’m going or what I’m going to do. I just know I cannot come back.”

Barbara, my faculty chair, on the other end of the line, fumed. I could hear it in each syllable of Catholic guilt she spat back at me. We’d ended a face-to-face meeting the day before with, “I’ll call you tomorrow with my decision,” as we agreed to disagree on the fact that the students were more important than my mental health and well-being.

“What will they do without you? You know how much they love you. We created this new position for you, and now you’re just going to leave? Who will teach the class? It’s August!” she agonized.

God, she was good. She had this guilt thing down pat. An ex-nun, obviously an expert, and this was the first time I’d been on her bad side; a whole year’s worth of smiles, waves, and high-fives in the hallways seemed to get clapped out with the erasers.

It was true; I loved the kids and didn’t want to leave so abruptly, like this, in August. This was not my idea of a resume builder. Nevertheless, as each bit of honesty rose from my lips, I felt freer and freer and more true to myself than I’d felt in, well, a long frickin’ time. A sense of relief washed through me in a kind of cathartic baptism, cleansing me of the guilt. I stopped pacing. A warm breeze swept over the grass on the hill in front of our condo, then over me. I stood on the sidewalk still nervous, sweating, smiling, teary-eyed. I can’t believe I just did that.

St. Anne’s was a very liberal Catholic school, which, ironically, had given me a new faith in the close-minded. The building housed a great energy of love and family. I felt right at home walking through its doors even at new-teacher orientation, despite it having been a while since I needed to be shown the ropes. I’d already been teaching for six years in a position where I’d been mentoring, writing curriculum, and leading administrative teams. I normally didn’t do very well on the bottom rung of the totem pole, but more pay with less responsibility had its merit.

It was definitely different, but a good different. I felt newly challenged in a bigger school, looked forward to the many programs already in place and the diversity of the staff and student body. The ceremonies performed in the religion-based setting seemed foreign at first, yet witnessing the conviction of our resident nuns and tenured faculty restored a respect I had lost over the years. They were the hymns that I recognized, the verses I used to recite, the prayers I was surprised I still remembered, the responses I thought I’d never say again.

The first time we had Mass together as an entire school, I was nearly brought to tears. I got goose bumps when the notes from the piano reverberated off the backboards on the court; the gym-turned-place-of-worship hardly seemed the place to recommit. Yet, hearing the harmony of our award-winning gospel choir and witnessing the level of participation from the students, faculty, and administration, I was taken aback. The maturity of devotion in the room was something I had never experienced in any of my churches growing up. Students, lip-synching their words, distracted and bored, still displayed more enthusiasm than the lumps hunched at my old parishes.

It was during that first Mass that I realized there was only one person who could have gotten me there, to a place she would have been so proud to tell her bridge club I was working. She would have been thrilled for me to find God here. The God she knew, her Catholic God, the one who had listened to her rosary, day after day, her pleas for her family’s health and well-being, her pleas for her own peace and forgiveness. Gma had orchestrated it all. I was certain.

As that realization unfolded, I saw a glimpse of her endearing eyes, her tender smile before me, and with that my body got hot, my lashes heavy, soaked with a teary mist. Although it would be months till I stumbled upon a glimpse of what some might call God, it was here, at St. Anne’s, where I gained a tradition I had lost, a perspective I had thought impossible, a familiarity that let me feel a part of something, and a trust that may have ultimately led me straight out the door.

Our kitchen, growing up, reeked of canned beans and burnt edges. Grandma Kate knew of only one way to cook meat: crispy. On most nights, the fire alarm let us know that dinner was ready. The table was always set before I’d come running in, at the sound of her call, breathless from playing, to scrub the dirt from my fingernails. She was a diligent housewife, though at times she played the part of something far more independent. The matriarch, we called her, the gel to the whole damn bunch of us: her six, or five rather, and us three.

She responded to Grandma Kate, or just Kate, or Kitty, as her friends from St. Cecelia’s called her, or Catherine, as she generally introduced herself, or Gma, as I later deemed her: all names necessary to do and be everything that she was to all of us.

She and I had our moments through my adolescence where the chasm of generations between us was more evident than we’d bother to address. They’d sold their five-bedroom home in Manor Club when it was just she and Grandpa left alone inside the walls bearing all their memories. The house had character worn into its beams by years of raising six children and consequently taking the abuse of the (then) eleven grandchildren like a docile Golden Retriever.

It wasn’t long after my grandpa died that I moved back in with Gma. At fifteen, it was just she and I in their new two-bedroom condo like college roommates, bickering at each other’s annoying habits, ridiculing each other’s guests, and sharing intimate details about each other’s lives when all guards were off and each other was all we had.

Despite our differences, her narratives always fascinated me. I had grown up on Gma’s tales and adventures of her youth. In most of her stories, she depicted the trials of the Depression and conversely, the joys of simplicity. She encouraged any craft that didn’t involve sitting in front of the television. She believed in hard work, and despite her dyslexia, was the first woman to graduate from Catholic University’s Architectural School in the mid-1940s. “Of course,” she said. “There was no such thing as dyslexia in my day. Those nuns damn near had me convinced I was just plain dumb.”

She was a trained painter and teacher, a fine quilter, gardener, and proud lefty. She had more sides to her than a rainbow-scattering prism. When we were young and curious, flooding her with questions, we’d “look it up” together. When we had ideas, no matter how silly, she’d figure out a plan to somehow help us make it happen. All of us grandkids had ongoing special projects at any given time: whether it was building in the garage, sewing in the living room, painting in the basement, or taking long, often lost, “adventures” that brought us closer to her past.

She was from Washington DC, so subway rides from Silver Spring into the city were a regular episode. We spent so many hours in the Museum of Natural History I might attribute one of my cavities to its famous astronaut ice cream. We also went to see the cherry blossoms when they were in bloom each year, visited the National Zoo and toured the Washington Monument as well as several of the surviving parks and canal trails from her childhood.

It was on these journeys that she and I would discuss life, politics, war, religion, and whatever else came to mind. She was a woman of many words, so silences were few and far between. I got to know her opinion on just about everything because nothing was typically left unsaid, nothing.

By the time I was in high school and college, the only music we could agree on in the car was the Sister Act soundtrack. On our longer jaunts when conversation dripped to a minimum, I would toss in the tape before the banter went sour, which was a given with our opposite views on nearly everything. I’d slide back the sunroof, and we’d sing till our hearts were content.

“Hail mother of mercy and of love. Oh, Maria!”

She played the grouchy old nun, while I was Whoopi, trying to change her stubborn ways.

Gma and I both loved musicals, but while I was off scalping tickets to see Rent on Broadway, which she would have found too loud and too crude (God knows she would have had a thing or two to say about the “fairy” drag queen), she was content with her video of Fiddler on the Roof.

As I sat in the theater recently for the Broadway performance of Lion King, I couldn’t help but picture her sitting there beside me, her big, brown eyes shifted right with her good ear turned to the stage; it was a show we would have both agreed on.

For the theater, she would have wetted down her short gray wispy hair and parted it to the side and then patted it down just so with both hands. A blouse and a skirt would have already been picked out, lying on the bed. The blouse would get tucked in and the belt fastened not too far below her bra line. Then she’d unroll her knee highs from the toes and slip on some open toe sandals, depending on the season; she didn’t mind if the hose showed. Some clip-on earrings might have made their way to her virgin lobes, if she remembered, and she would have puckered up in the hallway mirror with a tube of Clairol’s light pink lipstick from her pocketbook before announcing that she was ready.

Gma would have loved the costumes, the music, the precision in each detail. And in the car ride home, I can hear her now, yelling over the drone of the car’s engine because her hearing aid had remained in the dresser drawer since the day she brought it home. “There wasn’t but one white fella’ in the whole gosh dern show. Every last one of ’em was black as the day is long, but boy could they sing. God, what beautiful voices they had, and even as deaf as I am I could understand what they were saying. They were all so well spoken.”

Rarely does a day go by that I don’t smile at one of her idioms or imagine one of her crazy shenanigans, her backward lessons, or silly songs. I used to feel guilty about how much time I spent missing her compared to how much I missed my own mother. I guess it makes sense, though, to miss what I knew for far longer, and I suppose I had been swimming laps in the gaping void I housed for my mom.

Over the years, I often thought if I truly searched for my mom she would give me a sign, but where would I even look? Or would I even dare? Gma believed in those kinds of things, and despite having long lost my religion, she made me believe.

She told me a story once, without even looking up from the quilt she mended, about a dark angel who sat in a chair by the window in the corner of the room, accompanying her in the hospital as her mother lay on her deathbed gripped by cancer. She said the angel’s presence alone had been enough to give her peace. I had watched her get misty-eyed while she brought herself back to the scene, still pushing the thimble to the fabric. Another time, she continued, she sat on the front step of their first house on Pine Hill in hysterics, as she’d just gotten word of her three-year-old daughter’s cancer diagnosis; she’d felt a hand on her shoulder, enough to calm her. She knew then she wasn’t alone.

These conversations became typical when it was just us. When she cried, so did I. We wore each other’s pain like thick costume makeup, nothing a good cry and some heavy cold cream couldn’t take off. She shared with me her brinks of meltdowns after losing my mother, and I grew up knowing that she had far more depth than her overt simplicity echoed.

It wasn’t until my later college years, though, that we became so close we were able to overlook most of our differences. By then, I wanted all the time I had spent running away back; I wanted my high school bad attitude and disrespect erased; I wanted the smell of my cigarette smoke in her station wagon to finally go away. She was my history. She was my companion. She was home to me.

In the last few years, we shared our haunts, our fears, our regrets. Yet, we laughed a lot. She never minded being the butt of any good joke. She got crazier and goofier in her old age, shedding more of her crossbred New England proper and Southern Belle style. One of my favorite memories was of the time my college roommate, Kathleen, and I taught her how to play “Asshole” at our Bethany, Delaware, beach house.

Gma had said, “The kids were all down here whoopin’ it up the other night playing a game, havin’ a good ol’ time, hootin’ and hollerin’. I would like to learn that game. They kept shouting some curse word. What’s it called again?”

“Asshole?” I had said.

“Yup, that must be it. Asshole sounds right. Think you can teach this old bird?”

Kathleen and I nearly fell over at the request but were obliged to widen Gma’s eyes to the awesome college beer-drinking game full of presidents, assholes, and beer bitches. And she loved it, quite possibly a little tipsy after a few rounds. We didn’t typically play Asshole with Jacob’s Creek chardonnay.

Throughout the course of several conversations, Gma assured me that she’d had a good life and when the time came, she’d be ready. In those last few years, if I stood in her condo and so much as mentioned the slightest gesture of admiration toward anything she owned, she’d say, “Write your name on the back.” She’d have the Scotch tape and a Sharpie out before I could even reconsider.

It was 2003, a year into my teaching career, when Gma finally expressed how proud she was of me. She said that my mom had always wanted to be a teacher, that she was surely proud of me, too. I’ll never forget waking up to my brother’s phone call, his voice solemn. I was devastated.

It was my mom and Gma who helped Brooke and me get our house, I always said. I had signed a contract to start at St. Anne’s in the fall, so we needed a home outside the city that would make my new commute toward DC more bearable. Three years after Gma died, since I wasn’t speaking to God much in those days, I asked Mom and Gma to help us out if they could. Brooke, who only knew Gma through my incessant stories, was just as kooky as I was when it came to talking to the dead, so she never batted an eye at the references I made to the china cabinet.

Gma’s old antique china cabinet, green until she stripped, sanded, and painted it maroon the year she moved to her condo, sat in the dining room of our rented row house in Baltimore. (The smell of turpentine will always remind me of her leathered hands.) Sometimes, for no good reason, the door would fall slightly ajar, and each time it did, I swore she was trying to tell me something. While dating a girl I imagine Gma was not particularly fond of, I eventually had to put a matchbook in the door just to keep the damn thing closed; it creeped me out in the mornings when I’d wake up to the glass door gaping.

The exact night Brooke and I put the contract in on our house, we mentioned something to Gma before going to bed, kissing our hands and casually patting the side of the paint-chipped cabinet. The next morning: wide open. Two days later: contract accepted. I was elated; I’d never had such a good feeling about anything.

I felt so close to my team of guardian angels then. Everything seemed to be in its delightfully divine order, and I thanked them immensely from the moment we began the purchasing process until the time we moved in, displaying my gratitude thereafter with each stroke of my paintbrush and each rock pulled from the garden. I adored the home we were blessed with, our cute little cobblestone-accented condo, our very first house. Even though we knew it wasn’t a forever home, it was ours to make our own for now. And we did, or we started to.

So when the fairy tale began to fall apart, just a little over a year later, I couldn’t help but question everything: intentions, meaning. There was no sign from the china cabinet. None of it made sense, the reason behind it all, I mean. Sure, I had always known growing up that everything has its reason. I have lived by that motto, but I could make no sense of this. It’s one thing for a relationship to fall apart, but to have gone all this way, with the house to tie us even further? I was beside myself.

Needless to say, my bits of gratitude tapered off as I felt like I had less and less to be thankful for. I still talked to Mom and Gma, but not without first asking, “Why?” And something, quite possibly the silence that made the question seem rhetorical, told me I was going to have to get through this on my own. Perhaps it was a test of independence or a sudden stroke of bad karma for all the years spent being an obnoxious teenager, ungrateful, untrustworthy. Either way, I was screwed; of that much I was certain.

I had always wanted to leave. To go away, I mean. Study abroad or go live in another state and explore. I had traveled a little in college but nowhere extensively. So, as all the boxes moved into our brand new house were unpacked and making their way into storage, the reality of being bound started creeping into my dreams through suffocation. I was faintly torn. Not enough to dampen the mood, because I imagined that somehow all that other stuff, my writing, my passions, would come later. It would all fall into place somehow. I guess I trusted even in the slightest possibility, although I knew that with each year of teaching, the job that was supposed to give me time off to be creative, I felt more and more comfortable and lackadaisical about pursuing my dreams.

I took a writing course online that drove in some discipline, only to drop it midway when things got complicated. Brooke often entertained the idea of moving to California, which kept me content, although I knew with the look of things that was only getting further and further from practical. But since being honest with myself wasn’t my strong suit, I ignored my intuition, and looking back, ignored a lot of signs that might have politely escorted me out the door rather than having it slammed in my face.

Chapter Two

SEX IN PLURALISTIC SOCIETY

I took a course, Sex in a Pluralistic Society, in my last semester of college. Somehow I thought it was going to be a lecture on the sociology of gender. Keep in mind this was the same semester I tried to cram in all my last requirements, registering for other such gems as Plagues and People; Death, Dying and Bereavement; and History of Theology.

Yes, the sex class was the lighter side to my schedule, but my prude Catholic upbringing made a sex journal, “Or, if you don’t have a partner, make it a self-love journal,” a really difficult assignment. Plus, the guy who taught the class just creeped me out. The videos he made us watch, I’m still traumatized. A classmate and I thought to complain, on several occasions, but it was both of our last semesters so it’s fair to say that, like me, she left that sort of tenacity to the underclassmen.

Despite the dildos, the pornography, and the daylong discussion on G-spots, I did take away one valuable lesson from that loony old perv. It was toward the end of the semester when the concept of love was finally introduced. By then, I had done my fair share of heart breaking and had tasted the bitter side of breakup a few times myself. I was sure I knew everything he had to say.

Instead, I was surprised to find myself taking notes when he broke down the Greeks’ take on the four different kinds of love: agape, eros, philia, storge. We discussed unconditional love versus conditional love. Yeah, yeah; I knew all that. He went on to describe eros as manic love, obsessive love, desperate love.

“This is the kind of love movies are based on. It’s high energy, high drama, requires no sleep, is built on attraction, jealousy runs rampant; it comes in like a storm and subsides often as quickly as it came in.” I cringed when he said, “It’s immature love.”

And here, I thought, this is what it was all about. All lesbian love, at least all those wonderful, electrifying things! Eros: it even sounds erroneous.

It was when I was dating the most confident and beautiful, twinkly-eyed woman I’d ever laid my hands on, some four years later, that I was brought back to that lecture. Despite our good intentions and valiant attempts at maturity, Brooke and I had a relationship built on many of those very erroneous virtues. It was movie-worthy high passion infused with depths that felt like coming down from a rock star kind of party.

Perhaps it’s because it began all wrong. She was fresh out of college. I was already teaching, working weekends at a chick bar in Baltimore at the time, Coconuts, our very own Coyote Ugly. One night, a friend of hers (she admitted later) noticed me, in my finest wicker cowboy hat and cut-off shirt, slinging beers and lining up shots between stolen, flirtatious moments on the dance floor. A week later, Brooke and I were fixed up at a party. We were both in other relationships that we needed excuses to get out of, so why not? She was beautiful (did I say that already?), tall, caramel skin and hazel eyes, tomboy cute when she was feeling sporty, simply stunning when dressed to the nines. She even fell into her dad’s Brazilian accent after a few cocktails, which sealed it for me; I was enamored. Plus, she was a bona fide lesbian (a first for me), and we wore the same size shoe. What more do you need?

We did everything together: tennis, basketball, squash. She’d patiently sit on the beach while I surfed. I always said yes to her shopping trips. We even peed with the door open so as not to interrupt conversation. And I’m almost certain I slept right on top of her for at least a solid year; I’d never been considered a “peanut” before. In fact, I don’t think we separated at all for the first couple of years we dated, now that I think about it. Maybe for an odd trip, but it didn’t go without feeling like we’d lost a limb, I swear. We’d always say, “No more than five days,” as if we wouldn’t have been able to breathe on day six.

When we first started dating, I went to Japan for nine days to visit my brother Jack who had been stationed at Atsugi. I was pretty pathetic. It was my first time traveling alone, so when I stepped off the plane on foreign soil and my family wasn’t at the gate ready to collect me, I quickly reverted to my inner child, the sweeping panic stretched from my tippy toes to my fingertips.

It was the same feeling I used to get in Kmart when I’d look up from the shelf to tug at the skirt of the lady standing beside me, only to be both mortified and petrified when I realized that face and body didn’t belong to my Gma. I’m not sure who was supposed to be keeping track of whom, but whoever it was did so poorly. Hence the reason why I developed a system: I’d go sit in the back of our station wagon where I knew it would be impossible for me to be forgotten among the dusty racks of stiff clothes. The first time I put this system into place, unbeknownst to anyone, I resurfaced from the car when the two police cars arrived, to see what all the hubbub was about. Boy, were they glad to see me when I strolled back through the automatic sliding doors, unaware of all the excitement I had started.

Thank God my sister-in-law found me in Tokyo after I’d already figured out how to work the phones and had dialed home. Brooke had calmed me down by talking me through the basics: I wasn’t lost; I was just on the brink of being found, she assured me. I’d hung up and collected myself by the time Jill and the kids arrived.

Every evening in Japan, I slid away from the family and hid in my room where I clumsily punched hundreds of calling card numbers into the phone just so I could hear Brooke’s voice before bed. And like me, she was dying inside at the distance between us.

Sure, there were some caution signs, some red flags being waved, but all the good seemed to outweigh the bad, and who’s perfect, really? I thought some of my ideologies about love were too lofty and maybe, just maybe, I had to accept that I would never have all that I desired from a relationship, like say, trust. Plus, people grow, they mature, relationships mature; surely we’d be the growing kind. We liked self-help books. We had a shelf where they sat, most of them at least half-read.

Her family loved me, and I adored them. Yes, it took a while for them to get used to the idea of me being more than just Brooke’s “roommate.” Thankfully, the week Brooke came out to her family, a close family friend, battling breast cancer, took a turn for the worse. Brooke came back at her parents’ retorts with, “Well, at least I don’t have cancer.” And to that, well they had to agree.

Brooke and I traveled together. We loved the beach. We loved food and cute, quaint little restaurants. We loved playing house and raising a puppy. We loved talking about our future and a big fat gay wedding, and most of all we loved being loved. We bought each other flowers and little presents and surprised each other with dinner and trips and concert tickets. I’ll never forget the anniversary when she had me get all dressed up just to trick me into a beautiful candlelit dinner at home. I could have sat at that table forever, staring into her shimmering, smiling eyes, or let her hold me for just as long as we danced among the rose petals she’d scattered at our feet.

It was for all those reasons that the darkness, the screaming matches, the silent treatments, the distrust, the jealousy, never outweighed the light. All those things seemed part of our short past when we began shopping for our first home. It was a blank slate, a new beginning: signing the paperwork, picking out furniture, remodeling our kitchen.

God, we danced so much in that kitchen.

We laughed at our goofy dog, Porter. We cried on our couch, watching movies. We supported each other in our few separate endeavors. We shared chores and “mom” duty and bills and credit cards. And I think it was under the weight of all the things of which we were once so proud that it all began to crumble. “Do you have to slam the cabinets like that?” as if I were picking up new habits to purposefully push her away. “I hate fighting like this in front of him!” she’d say, pointing at Porter. “Look, we’re making him nervous.”

She sobbed and sobbed, and her big beautiful eyes remained bloodshot for at least six months as I watched her slip away from me. I begged her to tell me what she needed, and even that she couldn’t do.

Brooke finally had a social life that I supported wholeheartedly, but that social life seemed to echo more and more of what was wrong with us. During the day things appeared fine and good and normal, but at night her cold shoulder sent me shivering further and further to the opposite side of the bed until I eventually moved into the spare bedroom.

I didn’t get it. I said that I did, that I understood, but I didn’t.

She spent an awful lot of time with a “friend.” Julie, a mutual friend, or so I thought. We all hung out together, thus I didn’t think to question anything until it became more and more blatant. I would beg, “Just tell me what’s going on with the two of you. I’m a big girl. I’ll just walk away. But I can’t just sit around here feeling batty while you deny what I can see with my own two eyes!”

She wouldn’t admit to it. “Nothing is going on.” She said she just needed time to figure herself out.

In the meantime, I was still her home. I was still her best friend and even at the furthest distance she’d pushed me to, I was the one who calmed her when the weight of it all made her come unhinged.

I was the one who rubbed her back and kissed her forehead.

She wanted me to be an asshole, so she’d have an excuse. She wanted me to get pissed to lessen her compounding guilt. I’m not sure if it was that I couldn’t or that I wouldn’t do either of those things. I still hung onto what I’d promised with that sparkly little ring I’d given her, not the real thing, but a big promise. I had taken it all very seriously. “In sickness and in health.” And here she was before me, as far as I could see it, sick.

Well, sick was the only diagnosis that wouldn’t allow me to hate her as she inhabited our home with me, a platonic roommate, sometimes cold and aloof and other times recognizable and warm. I felt like we had somehow been dragged into the drama of a bad after-school special without the happy commercials of sugary cereals and toys that will never break or end up like the Velveteen Rabbit, whom, ironically, I was really starting to resemble in the confines of our condo with its walls caving in.

While the final days of summer strode past in their lengthy hour, the honest words, “I want to take a break,” were inescapably spoken. I felt sick, stunned by the syllables as they fell from her lips. We’d been at the beach for the weekend where I naively thought we might be able to spend some time all to ourselves, mending the stacks of broken things between us. I knew this had to do with Julie, but still nobody had the guts to admit it. I was infuriated. So much so, that I reduced myself to checking cell phone logs and sleuthing around my own home. I hated myself for the lengths I allowed her to push me.

There was no way I could return to school as my signed contract promised. I couldn’t imagine focusing on my students while I was so busy focusing on my failing relationship. Although the last thing I wanted to do was uproot myself, I had finally begun to gather the pebbles of self-respect that would eventually become its new foundation. I had to go.

And with the phone call to my faculty chair, I did exactly what I never imagined I would do. I resigned. I had never been so excited to throw in the towel; well, except for that one awful restaurant where I was too much of a coward to quit, so I faxed in my resignation an hour before my shift. That time felt good too. But this was different. I didn’t chicken out. I stood up to Barbara’s crucifix-firing cannon and prevailed.

When Brooke and I weren’t fighting or walking on eggshells around each other, she dove into my arms expressing her undying love for me, and I held the stranger I no longer connected with, consoling her. I didn’t know what to make of all the mixed emotions. I had taken my accusations to Julie herself to try to get some answers, but she laughed at my arguments, claiming Brooke was “too confused” to be dating anyone right now. Julie was older, with graying wisps, loafers and pleated pants. To look at her anymore made me sick. And, after all, Brooke still wore the ring I’d given her. Still, after nine months, none of it made any sense.

The night she woke me, cross-legged on the floor at my bed because she couldn’t sleep and it was driving her crazy, she looked desperate. I held her and stroked her hair, calming her with my patient voice, exuding every ounce of love that could look past my own pain to reduce hers. Healthy? Probably not. But that was the only way I knew how to love her. To put everything of me aside. Everything.

I have always wanted a family. From the time I was little I knew I would be a mom. At eight, I thought marrying a rich man and becoming a housewife was the golden ticket to true happiness, along with becoming the president, a monkey trainer, and a marine biologist. My pending future changed with the weather, but rich was almost always a constant. A valid measure of success at eight, I suppose. A family, and its entire construct, was very important to me: the house, the dog, the husband, or now the wife, all of it.

And that’s what Brooke and I had, or we talked like we did. Raising our puppy from ten weeks to his “man”hood and buying household goods on joint credit cards. We were all grown up like a real family. With our names linked on more than just the dog’s birth certificate, “taking a break” was really a separation and anything beyond that was really a divorce. I hadn’t reached that logic in my head, perhaps because I still refused to believe that all I imagined was disintegrating before me, where I stood, clenching fistfuls of hopeless dust.

Chapter Three

ON THE GOOD FOOT

I toyed with the idea of California, as I had always talked about. No reason to stay here. Seriously, with no excuses holding me back, I searched tirelessly for jobs on craigslist day in and day out. And there was an edge of excitement in taking control, or that’s what I convinced myself was going on. I applied for a few teaching jobs in California, Colorado, British Columbia, and even New York. I was intrigued by the schools that touted their outdoor education programs and offered classes like rock climbing and snowboarding. I reasoned with myself: teaching can’t be all that bad with a mountain backdrop and class cancellations for white-water rafting.

*

from

I Think I’ll Make It: A True Story of Lost and Found

by Kat Hurley

get it at Amazon.com

The Handbook of Solitude: Psychological Perspectives on Social Isolation, Social Withdrawal, and Being Alone.

Some might believe that it is not fear that guides the behaviors of some of these solitary individuals. Instead, it might be proposed that some of these noninteracting individuals have a biological orientation that leads them to prefer a solitary existence.

I am quite certain that what the reader will come away with after having completed the chapters included herein is that solitude has many faces.

On Solitude, Withdrawal, and Social Isolation

Kenneth H. Rubin

As I sit in my office pondering what it is that I should be writing in the Foreword to this extraordinary compendium, I am alone. With the door closed, I am protected against possible interruptions and am reminded of the positive features of solitude: there is no one around, it is quiet, and I can concentrate on the duties at hand. Indeed, several contributors to this volume have written about the pleasantries associated with solitude; frankly, I must agree with this perspective, but do so with a number of significant provisos. I will offer a listing of these provisos in the following text. However, before so doing, I would like to suggest a thought experiment or two.

A Science Fiction Thought Experiment

Why must one understand the significance of solitude, withdrawal, and social isolation? Let’s begin with a little thought experiment. Imagine, for at least one millisecond, that we have arrived on a planet populated by billions of people. Never mind how these people came into existence. Let’s just assume that they happen to be on the planet and that we know not how they came to be. Imagine too that there is no interpersonal magnetism, that these people never come together, there are no interactions, there is no crashing together or colliding of these individuals. All we can see are solitary entities walking aimlessly, perhaps occasionally observing each other. In short, we are left with many individuals who produce, collectively, an enormous social void. From an Earthly perspective, we might find the entire enterprise to be rather intriguing or boring or frightening and would likely predict that prospects for the future of this planet are dim.

Given that this is a supposed “thought exercise,” please allow me to humor myself and replace the aforementioned noun “people” with “atoms” or their intrinsic properties of electrons, protons, and neutrons. By so doing, one might have to contemplate such topics as magnetism and collision and the products of these actions. This would immediately give rise to thoughts of mass, electricity, and excitement. Without magnetism (attraction), electricity, and excitement, whatever would we be left with? As I move more forcefully into this exercise, I find myself in increasingly unfamiliar territory; I may study pretense, but I am not a pretender, at least insofar as suggesting to anyone willing to listen (or read) that I have “real” knowledge about anything pertaining to physics. In fact, I am ever so happy to leave the study of the Higgs boson to that group of scholars engaged in research at CERN’s Large Hadron Collider.

For the time being, I will escape from any contemplation of physics and swiftly return to thinking about a planet on which people appear to exist without laws of attraction. If the “people” who inhabit the planet do not collide, we are left with the inevitability of what solitude would eventually predict: a nothingness, an emptiness, a void. If “people” did not collide, did not interact, there would be no “us.” Relationships would not exist; there would be no human groups, no communities, no cultures. There would be no sense of values, norms, rules, laws. Social hierarchies would not exist; there would be no need to think about mind reading, perspective taking, interpersonal problem solving. Liking, loving, accepting, rejecting, excluding, victimizing: none of these significant constructs would be relevant. Social comparison, self-appraisal, felt security, loneliness, rejection sensitivity, topics that tend to appear regularly in the Developmental, Social, Personality, Cognitive, and Clinical Psychology literatures, would be irrelevant. From my admittedly limited perspective, as a Developmental Scientist (and thankfully not as a Physicist), there would be nothing to write, think, feel, or be about.

Thank goodness for those nuclear researchers at CERN. They have taught us that magnetism matters, that interactions matter, that clusters matter (and may collide to produce new entities). These folks are not pondering what happens with people; they are thinking at the subatomic level. I, on the other hand, have spent the past 40-some years thinking about people, their individual characteristics, their interactions and collisions with one another, the relationships that are formed on the basis of their interactions, and the groups, communities, and cultures within which these individuals and relationships can be found. Indeed, I have collected more than a fair share of data on these topics. In so doing, I am left with the conclusion that solitude, isolation, and social withdrawal can be ruinous. It ain’t science fiction.

A Second Thought Experience

Let’s move to a rather different thought experience. Imagine that the community within which we live teaches its inhabitants, from early childhood, that normative sociocultural expectations involve helping, sharing, and caring with and for each other; teaching each other about that which defines the “good, bad, and ugly”; communicating with each other about norms and what may happen when one conforms to or violates them. Imagine too, that in such a community within which interaction, cooperation, and relationships matter, there are some individuals who, for whatever reason, do not interact with their confreres. One might suppose that the remaining members of the community could ponder why it is that these solitary individuals behave as they do. And several suggestions may be offered for their solitude.

For example, it may be suggested that some of these noninteracting individuals have some biological or perhaps some genetic orientation that leads them to feel uncomfortable in the presence of others. Perhaps members of the community may have read something about a gene that is associated with diminished 5-HTT transcription and reduced serotonin uptake. Some in the community may have read somewhere that without the regulating effects of serotonin, the amygdala and hypothalamic-pituitary-adrenal (HPA) system can become overactive, leading to the physiological profile of a fearful or anxious individual.

Fear may be a guiding force for these solitary individuals: fear of what may happen if they approach others in the community; fear of what may happen if they attempt to develop a nonfamilial relationship with another in the community; fear of leaving a negative impression on those who may judge their actions, thoughts, emotions, and behaviors.

Or perhaps some might believe that it is not fear that guides the behaviors of some of these solitary individuals. Instead, it might be proposed that some of these noninteracting individuals have a biological orientation that leads them to prefer a solitary existence. These individuals may feel more positively inclined when in the company of inanimate objects, of things.

At this point, our second thought experience leaves us with the identification of two “types” of solitary individuals:

1. Those who are motivated by fear, the prospects of social appraisal, and heightened sensitivity to the possibility of rejection;

2. Those who have a distinct preference for solitude.

Regardless of the epidemiological “causes” of solitary behavior, in a society that has strong beliefs in the importance of cooperation, collaboration, and caregiving, it is likely that the majority of individuals who adhere to the cultural ethos would begin to think unpleasant thoughts about the noninteracting minority. They may think of solitary individuals as displaying unacceptable, discomfiting behavior; they may begin to feel negatively about them; they may discuss among themselves the need to exclude these noninteractors or to alter the behavior of these nonconforming individuals. Indeed, from the extant research, it is known that those who display behaviors considered to be inappropriate or abhorrent to the majority may be isolated by the group-at-large.

And so now we have a third group of solitary individuals: those who have been isolated by the social group.

But how would these hypothetical community responses affect the nonsocial, nonconforming individual? What kinds of interactive/noninteractive cycles would be generated? And what would the solitary individuals think and feel about the larger community responses to them?

The Point

The preceding verbiage brings me to the singular message that I am attempting to convey. From “all of the above,” I am willing to step out on a limb to suggest, straight-out, that solitude can be punishing, humbling, debilitating, and destructive.

I do admit that it would be foolish to ignore the perspectives of those who have sung the praises of solitude. This would include several authors of chapters in this compendium. It would also include the many beloved and respected authors, poets, painters, philosophers, spiritualists, and scientists who have suggested that their best work or their deepest thoughts derive from those moments when they are able to escape the madding crowd. Here are a few examples:

1 “You do not need to leave your room. Remain sitting at your table and listen. Do not even listen, simply wait, be quiet still and solitary. The world will freely offer itself to you to be unmasked, it has no choice, it will roll in ecstasy at your feet.” Franz Kafka

2 “How much better is silence; the coffee cup, the table. How much better to sit by myself like the solitary seabird that opens its wings on the stake. Let me sit here forever with bare things, this coffee cup, this knife, this fork, things in themselves, myself being myself.” Virginia Woolf

I could offer hundreds of quotations about the glories of solitude from rather well known people. Nevertheless, from my perhaps distorted, limited, and ego-centered perspective, I find it difficult to believe that one can lead a productive and happy life locked in a closet, a cave, a tent, a room. Virginia Woolf committed suicide; Kafka had documented psychological difficulties vis-a-vis his inability to develop and maintain positive and supportive relationships with others.

One may prefer solitude and many of us require solitude for contemplation, exploration, problem solving, introspection, and the escape of pressures elicited by the social/academic/employment/political communities. As I noted in the opening paragraph, solitude may be an entirely acceptable pursuit. But this statement comes with several provisos.

The “ifs”

If one spends time alone voluntarily, and if one can join a social group when one wants to, and if one can regulate one’s emotions (e.g., social fears and anger) effectively, and if one can initiate and maintain positive, supportive relationships with significant others, then the solitary experience can be productive.

But the provisos that I have appended to the solitary experience are rather significant. I am quite certain that what the reader will come away with after having completed the chapters included herein is that solitude has many faces. These faces have varied developmental beginnings, concomitants, and courses. And these faces may be interpreted in different ways in different contexts, communities, and cultures. And perhaps most importantly, the provisos offered previously must be kept in mind regardless of context, community, and culture. Frankly, if one fails to be mindful of these provisos, one can return to the introductory thought experiment and be assured that the failure of individuals to “collide” with one another will result in unpleasant consequences.

People do need to collide, or, better put, interact with others. Of course, these interactions must be viewed by both partners as acceptable, positive, and productive. These interactions must be need-fulfilling. Drawing from the wisdom of others who have written of the significance of such interactions (e.g., John Bowlby and Robert Hinde), one might expect that a product of these interactive experiences is the expectation of the nature of future interactions with the same partners. Furthermore, from this perspective, one might expect that each partner is likely to develop a set of expectations about the nature of future interactions with unknown others. If the interactions experienced are pleasant and productive, then positive dyadic relationships may result. If, however, the interactions experienced are unpleasant or agonistic, the partners may avoid each other. And in some cases, if a particular individual comes to expect that all interactions will eventually prove negative, withdrawal from the social community may result.

A Final Comment: Annus horribilis

During the first six months of 2012, I “lived” in a hospital after having endured a heart transplant and numerous health complications. Although I was surrounded by medical staff and had many regular visitors, I was literally isolated from the “outside world.” For the first two months of my hospitalization, my mind and body were at the river’s edge. But when the neurons began firing somewhat normally (beginning March 2012), and when I was able to converse with hospital staff and visitors, I nevertheless felt totally alone. It did not help that when visitors (and medical staff) met with me, they were required to wear masks, gloves, and medical gowns of one sort or another.

Eventually, it struck me that I was living at the extreme edge of what I had been studying for most of my professional career. And just as I had found through the use of questionnaires, interviews, rating scales, and observations (with samples of children and adolescents, and their parents, peers, and friends), solitude brought with it intrapersonal feelings of loneliness, sadness, anxiety, helplessness, and hopelessness. I felt disconnected from my personal and professional communities. Despite visitors’ generosity and kindness, I was miserable. Of course, when I was able to read and use my laptop, I could have taken the opportunity to play with ideas and data; my solitude could have been productive. But negative affect (emotion dysregulation) got in the way.

Upon returning home, I rehabilitated and received visitors: family, friends, colleagues, students, former golf and hockey “buddies.” I welcomed news about family (I was especially grateful to be reunited with my grandchildren!), friends, academe, and the world at large. I began to catch up on the various projects that my lab was involved in. Within a matter of weeks, I was coauthoring manuscripts and preparing abstracts for submission to various conferences. Although physically weak and incapable of taking lengthy walks or lifting anything heavier than a few pounds, my spirits were greatly improving; I was no longer alone! And finally, by August, when I returned to campus for the first time, I felt reconnected and valued!

The bottom line is that my personal solitude, especially given that it was experienced for a lengthy period of time and “enforced” externally and involuntarily, resulted in unpleasant consequences. The good news is that I have come to believe that the data my colleagues and I have collected over the years are actually meaningful beyond the halls of academe! Spending an inordinate amount of time alone; feeling disconnected, rejected, and lonely; being excluded and perhaps victimized by others; being unable to competently converse with and relate to others (which may well result from solitude) can create a life of misery and malcontent; in some cases, this combination of factors may result in attempts at self-harm; in other cases it may result in attempts to harm others. Think for a moment about how often perpetrators of violence (e.g., Columbine, Virginia Tech, Newtown, and the Boston Marathon bombings) have been described as loners, withdrawn, victimized, isolated, and friendless. Indeed, think about how some of the perpetrators have described themselves.

As I write this last sentence, my mind drifts to the lyricist/songwriting team of Eddie Vedder and Jeff Ament. Their evocative song “Jeremy” is based, in part, on the description of the death of Jeremy Wade Delle, a 15-year-old high school student in Richardson, Texas. Jeremy is portrayed as a quiet, sad adolescent who “spoke in class today” by committing suicide (by gunshot) in the presence of his classmates. The lyrics also suggest that the Jeremy in the song suffered parental abuse and/or neglect. In the music video, Jeremy appears to be rejected, excluded, and isolated by his peers. The words “harmless,” “peers,” and “problem” appear throughout the video. And in interviews about the “meanings” of the lyrics, Vedder has suggested that he was attempting to draw attention to one possible consequence of difficulties that can be produced by familial and peer disruptions. More importantly, he argued that one must gather one’s strength to fight against the seeming inevitability of the negative consequences of isolation, solitude, and rejection. I would suggest that the central message is that family members, peers, school personnel, and community leaders should be aware of the signs that presage intra- and interpersonal desolation.

Of course, not all people described as “solitary” or “isolated” have intra- or interpersonal problems. As noted previously, solitude and social withdrawal are not “necessarily evil.” We all need time alone to energize and re-energize, to mull, to produce this-and-that without interruption. But our species is a social species. So much is gained when people interact, collaborate, help, and care for others, develop relationships, and become active members of groups and communities. However, when combined with dysregulated emotions, social incompetence, and a lack of supportive relationships, solitude, much like many other behavioral constructs studied by psychologists, can induce miserable consequences. The “trick” is to know if, when, and how to intervene within the family, peer group, and community.

In closing, it is with pleasure and pride that I note that two of my former students (and current colleagues and close friends) have done such a wonderful job in putting together this compendium on solitude. After all, I do believe that once upon a time, I may have introduced the constructs of social withdrawal and solitude to Rob Coplan and Julie Bowker! Somehow, I doubt that I instructed or commandeered Rob and Julie to study solitude, isolation, and aloneness. If memory serves me correctly, they were each interested in things social. All I happened to do was provide them with a personal, historical (perhaps hysterical) note about how and why I became interested in the research I was doing. Of course, I could never claim to have played a role in the thoughts and research of those who have examined solitude from the perspectives of anthropology, biology, computer science, divinity, neuroscience, political science, primatology, psychoanalysis, sociology, and those tracks of psychology that focus primarily on personality, the environment, autism, and adult relationships. Therein lies the beauty of this compendium.

Editors Coplan and Bowker have cleverly taken a twisty turn that curves beyond their own comfort zones of Developmental Science. By so doing, they have left me absolutely delighted. Coplan and Bowker have clearly attempted to move the reader into multiple zones of cognitive disequilibration and to appreciate that if we are to truly understand any given phenomenon, we must look well beyond the silos within which we are typically reinforced to reside.

You now hold in your hands a selection of readings that describe a variety of perspectives on solitude. You will read what solitude looks like; why it is that people spend time alone; why it is that solitude can be a necessary experience; how it feels and what one thinks about when one spends a good deal of time avoiding others or being rejected and excluded by one’s social community. There is no compendium quite like the one that you are handling. I applaud the editors’ efforts, and I do hope that the reader does herself/himself justice by closely examining chapters that move well beyond their own self-defined areas of expertise and intrapersonal comfort tunnels.

1 All Alone

Multiple Perspectives on the Study of Solitude

Robert J. Coplan, Department of Psychology, Carleton University, Ottawa, Canada

Julie C. Bowker, Department of Psychology, University at Buffalo, USA


Seems I’m not alone in being alone. Gordon Matthew Sumner (1979)

The experience of solitude is a ubiquitous phenomenon. Historically, solitude has been considered both a boon and a curse, with artists, poets, musicians, and philosophers both lauding and lamenting being alone. Over the course of the lifespan, humans experience solitude for many different reasons and subjectively respond to solitude with a wide range of reactions and consequences. Some people may retreat to solitude as a respite from the stresses of life, for quiet contemplation, to foster creative impulses, or to commune with nature. Others may suffer the pain and loneliness of social isolation, withdrawing or being forcefully excluded from social interactions. Indeed, we all have and will experience different types of solitude in our lives.

The complex relationship we have with solitude and its multifaceted nature is reflected in our everyday language and culture. We can be alone in a crowd, alone with nature, or alone with our thoughts. Solitude can be differentially characterized along the full range of a continuum from a form of punishment (e.g., timeouts for children, solitary confinement for prisoners) to a less than ideal context (e.g., no man is an island, one is the loneliest number, misery loves company), all the way to a desirable state (e.g., taking time for oneself, needing your space or alone time).

In this Handbook, we explore the many different faces of solitude, from perspectives inside and outside of psychology. In this introductory chapter, we consider some emergent themes in the historical study of solitude (see Figure 1.1) and provide an overview of the contents of this volume.

Figure 1.1 Emergent themes in the psychological study of solitude.

Emergent Themes

The study of solitude cuts across virtually all psychology subdisciplines and has been explored from multiple and diverse theoretical perspectives across the lifespan. Accordingly, it is not surprising that there remain competing hypotheses regarding the nature of solitude and its implications for well-being. Indeed, from our view, these fundamentally opposed differential characterizations of solitude represent the most pervasive theme in the historical study of solitude as a psychological construct.

In essence, this ongoing debate about the nature of solitude can be distilled down to an analysis of its costs versus benefits.

Solitude is bad

Social affiliations and relationships have long been considered adaptive to the survival of the human species. Indeed, social groups offer several well-documented evolutionary advantages, e.g., protection against predators, cooperative hunting, and food sharing. The notion that solitude may have negative consequences has a long history and can literally be traced back to biblical times. Genesis 2:18: And the LORD God said, “It is not good for the man to be alone.”

Within the field of psychology, Triplett (1898) demonstrated in one of the earliest psychology experiments that children performed a simple task (winding a fishing reel) more slowly when alone than when paired with other children performing the same task. Thus, at the turn of the century, it was already clear that certain types of performance were hindered by solitude.

Developmental psychologists have also long suggested that excessive solitude during childhood can cause psychological pain and suffering, damage critically important family relationships, impede the development of the self-system, and prevent children from learning from their peers. The profound psychological impairments caused by extreme cases of social isolation in childhood, in cases such as Victor (Lane, 1976) or Genie (Curtiss, 1977), have emphasized that human contact is a basic necessity of development.

Social psychologists have also long considered the need for affiliation to be a basic human need. Early social psychology studies on small group dynamics, such as the Robbers Cave experiments, further highlighted the ways in which intergroup conflict can emerge and how out-group members can quickly come to be perceived negatively and stereotypically, and be mistreated. More recently, need-to-belong theory has suggested that we all have a fundamental need to belong or be accepted and to maintain positive relationships with others, and that the failure to fulfill such needs can lead to significant physical and psychological distress. Relatedly, social neuroscientists now suggest that loneliness and social isolation can be bad not only for our psychological functioning and well-being but also for our physical health.

Finally, from the perspective of clinical psychology, social isolation has been traditionally viewed as a target criterion for intervention. In the first edition of the Diagnostic and Statistical Manual of Mental Disorders, people who failed to relate effectively to others could be classified as suffering from either a psychotic disorder, such as schizophrenia; a psychoneurotic disorder, such as anxiety; or a personality disorder, such as an inadequate personality, characterized by inadaptability, ineptness, poor judgment, lack of physical and emotional stamina, and social incompatibility.

Schizoid personality disorder was described as another personality disorder characterized by social difficulties, specifically social avoidance. Interestingly, children with schizoid personalities were described as quiet, shy, and sensitive; adolescents as withdrawn, introverted, unsociable, and as "shut-ins."

Solitude can be good

In stark contrast, and from a very different historical tradition, many theorists and researchers have long called attention to the benefits of being alone.

For example, a central question for ancient Greek and Roman philosophers was the role of the group in society and the extent to which the individual should be a part of and separate from the group in order to achieve wisdom, excellence, and happiness. Later, Montaigne acknowledged the difficulties of attaining solitude but argued that individuals should strive for experiences of solitude to escape pressures, dogma, conventional ways of thinking and being, vices, and the power of the group. For Montaigne, the fullest experiences of solitude could not be guaranteed by physical separation from others; instead, solitude involved a state of natural personal experience that could be accomplished both alone and in the company of others.

Related ideas can be found in religious writings and theology. For example, Thomas Merton, a Trappist monk who spent many years in solitude, passionately argued in several books and essays that solitude offered unique experiences for contemplation and prayer and that solitary retreats are necessary to achieve authentic connections with others.

Ideas about the benefits of solitude can also be found in the writings of Winnicott (1958). For Winnicott, solitude was an experience of aloneness afforded by a good enough facilitating environment and was a necessary precondition during infancy and childhood for later psychological maturity and self-discovery and self-realization.

In adulthood, spending time alone and away from others has also long been argued by philosophers, authors, and poets to be necessary for imaginative, creative, and artistic enterprises (e.g., Thoreau, 1854). In these perspectives, solitary experiences provide benefits when the individual chooses to be alone. However, personal stories of several accomplished authors, such as Beatrix Potter and Emily Dickinson, suggest that creativity and artistic talents may also develop in response to long periods of painful social isolation and rejection (Middleton, 1935; Storr, 1988).

Underlying mechanisms of solitude

Although the costs-versus-benefits debate regarding solitude is somewhat all-encompassing, nested within this broader distinction is a theme pertaining to the different mechanisms that may underlie our experiences of solitude. To begin with, it is important to distinguish between instances when solitude is other-imposed versus sought after. Rubin (1982) was one of the first psychologists to distinguish between these different processes: social isolation, in which the individual is excluded, rejected, or ostracized by the peer group, and social withdrawal, in which the individual removes themselves from opportunities for social interaction.

As we have previously discussed, there are long-studied negative consequences that accompany being socially isolated from one’s group of peers. Thus, we turn now to a consideration of varying views regarding why individuals might choose to withdraw into solitude.

Within the psychological literature, researchers have highlighted several different reasons why individuals may seek out solitude, including a desire for privacy (Pedersen, 1979), the pursuance of religious experiences (Hay & Morisey, 1978), the simple enjoyment of leisure activities (Purcell & Keller, 1989), and seeking solace from or avoiding upsetting situations (Larson, 1990).

Biological and neurophysiological processes have also been considered as putative sources of solitary behaviors. For example, the ancient Greeks and Romans argued that biologically based individual differences in character help to determine mood (such as fear and anxiety) and social behavioral patterns (such as the tendency to be sociable or not), ideas which were precursors to the contemporary study of child temperament (Kagan & Fox, 2006). As well, recent interest in the specific neural systems that may be involved in social behaviors can be traced to the late 1800s with the case of Phineas Gage, who injured his orbitofrontal cortex in a railroad construction accident and afterwards was reported to no longer adhere to social norms or to be able to sustain positive relationships (Macmillan, 2000).

Finally, there is also a notable history of research pertaining to motivations for social contact (e.g., Murphy, 1954; Murray, 1938), which has been construed as a primary substrate of human personality (Eysenck, 1947). An important distinction was made between social approach and social avoidance motivations (Lewinsky, 1941; Mehrabian & Ksionzky, 1970). It has since been argued that individual differences in these social motivations further discriminate different reasons why individuals might withdraw from social interactions. For example, a low social approach motivation, or solitropic orientation, is construed as a non-fearful preference for solitude in adults (Burger, 1995; Cheek & Buss, 1981; Leary, Herbst, & McCrary, 2001) and children (Asendorpf, 1990; Coplan, Rubin, Fox, Calkins, & Stewart, 1994). In contrast, the conflict between competing social approach and social avoidance motivations (i.e., approach-avoidance conflict) is thought to lead to shyness and social anxiety (Cheek & Melchior, 1990; Jones, Briggs, & Smith, 1986).

Developmental timing effects of solitude

Our final theme has to do with developmental timing or when (or at what age/developmental period) experiences of solitude occur. The costs of solitude are often assumed to be greater during childhood than in adolescence or adulthood given the now widely held notion that the young developing child requires a significant amount of positive peer interaction for healthy social, emotional, and social-cognitive development and well-being (Rubin, Bukowski, & Parker, 2006). This pervasive belief may explain, in part, why considerably more developmental research on the concomitants of social withdrawal has focused on children as compared to adolescents. In addition, it is during adolescence that increasing needs for and enjoyment of privacy and solitude are thought to emerge (Larson, 1990). For this reason, it has been posited that some of the negative peer consequences often associated with social withdrawal during childhood, such as peer rejection and peer victimization, may diminish during the adolescent developmental period (Bowker, Rubin, & Coplan, 2012). However, it has also long been argued that solitude at any age can foster loneliness and psychological angst, particularly if it is other-imposed.

As mentioned previously, social needs are thought to exist in individuals of all ages, with several social and developmental theories suggesting that psychological well-being is determined by whether social needs are satisfied. For example, Sullivan (1953) posited that all individuals have social needs but that with development, the nature of those needs changes (e.g., with puberty, needs for sexual relations emerge), as does the type of relationship required to fulfill them (e.g., relationships with parents might satisfy early needs for tenderness; same-sex chumships or best friendships might satisfy needs for intimacy that emerge in early adolescence). Regardless of these developmental changes, however, Sullivan argued that if social needs were not fulfilled, significant negative self-system and psychological consequences would ensue. Consistent with these latter ideas are research findings that have identified loneliness, at any age, as one of the strongest risk factors for psychological ill-being (Heinrich & Gullone, 2006).

The debate as to when in development solitude might carry the greatest costs is yet to be resolved. However, it must also be acknowledged that the very nature of solitary experiences likely changes with age. For example, young children may retreat to their rooms, engage in solitary play in the company of peers, or find themselves forced to the periphery of social groups. Although other-imposed solitude might be manifested similarly at older ages (e.g., adolescents being forced to eat alone at lunchtime, adults being left out of after-work gatherings), adolescents and adults have greater control over and increased opportunities for self-selected solitary experiences relative to children. For example, adolescents are sometimes left alone without parental supervision in their homes or able to take themselves to places of their choosing. Adults can also choose to travel alone, can engage in meditative and religious retreats, and can select relatively solitary occupations and ways to spend their free time. In contrast, there may come a time in the life of an older adult when they are significantly impeded in their ability to actively seek out social contacts. It remains to be seen how these potential differences in agency pertaining to solitude across the lifespan speak to the relation between solitude and well-being.

Final Comments: Solitude…Together?

It is somewhat ironic that the future study of solitude will likely be pursued within the context of an ever-expanding and increasingly connected global social community. The chapter authors in this Handbook span 13 countries and represent only the very tip of the iceberg in terms of cross-cultural research in this area. There is growing evidence to suggest that both the meaning and impact of (different types of) solitude differ substantively across cultures (e.g., Chen & French, 2008). Accordingly, it is critically important to embed this psychological research within a larger cultural context.

Moreover, as evidenced by the chapters in the final section of this volume, psychologists have much to learn about the study of solitude from our colleagues in other disciplines. Indeed, we should expect interdisciplinary collaboration to eventually become the norm in these (and other) research areas. Such collaborations will allow us to further explore both the depth and breadth of our experiences of solitude and perhaps help to resolve some of the great debates in theory and research on solitude, such as when and why solitude causes harm or brings benefits.

Finally, rapidly evolving technological advances promise to connect all of us, all of the time, to social and informational networks. This inevitably raises the question of whether any of us will ever truly be alone in the future. It is certain that our relationship with solitude will necessarily evolve in the digital age. In this regard, it remains to be seen whether the experience of solitude is itself doomed to become an archaic remnant of a past era.

*

from

The Handbook of Solitude: Psychological Perspectives on Social Isolation, Social Withdrawal, and Being Alone.

get it at Amazon.com

Mental Illness, Why Some and not Others? Gene-Environment Interaction and Differential Susceptibility – Scott Barry Kaufman * Gene-Environment Interaction in Psychological Traits and Disorders – Danielle M. Dick * Differential Susceptibility to Environmental Influences – Jay Belsky.

“Whether your story is about having met with emotional pain or physical pain, the important thing is to take the lid off of those feelings. When you keep your emotions repressed, that’s when the body starts to try to get your attention. Because you aren’t paying attention. Our childhood is stored up in our bodies, and one day, the body will present its bill.”

Bernie Siegel MD


In recent years, numerous studies have shown the importance of gene-environment interactions in psychological development, but here’s the thing: we’re not just finding that the environment matters in determining whether mental illness exists. What we’re discovering is far more interesting and nuanced: some of the very same genes that, under certain environmental conditions, are associated with some of the lowest lows of humanity are, under supportive conditions, associated with the highest highs of human flourishing.

Evidence that adverse rearing environments exert negative effects particularly on children and adults presumed “vulnerable” for temperamental or genetic reasons may actually reflect something else: heightened susceptibility to the negative effects of risky environments and to the beneficial effects of supportive environments. Putatively vulnerable children and adults are especially susceptible to both positive and negative environmental effects.

Children rated highest on externalizing behavior problems by teachers across the primary school years were those who had experienced the harshest discipline prior to kindergarten entry and who, at age 5, were characterized by their mothers as having been negatively reactive infants.

Susceptibility factors are the moderators of the relation between the environment and developmental outcome. Is it that negativity actually reflects a highly sensitive nervous system on which experience registers powerfully: negatively when not regulated by the caregiver, but positively when coregulation occurs?
Referred to by some scientists as the “differential susceptibility hypothesis”, these findings shouldn’t be underestimated. They are revolutionary, and suggest a serious rethinking of the role of genes in the manifestation of our psychological traits and mental “illness”. Instead of all of our genes coding for particular psychological traits, it appears we have a variety of genetic mutations that are associated with sensitivity to the environment, for better and worse.

The classically known epigenetic modifications (cell specialization, X inactivation, genomic imprinting) all occur early in development and are stable. The discovery that epigenetic modifications continue to occur across development, and can be reversible and more dynamic, has represented a major paradigm shift in our understanding of environmental regulation of gene expression.

Glossary
Gene: Unit of heredity; a stretch of DNA that codes for a protein.
GxE: Gene-environment Interaction.
Epigenetics: Modifications to the genome that do not involve a change in nucleotide sequence.
Heritability: The proportion of total phenotypic variance that can be accounted for by genetic factors.
Logistic Regression: A statistical method for analyzing a dataset in which one or more independent variables predict an outcome measured with a dichotomous variable (one with only two possible values). In logistic regression the dependent variable is binary, i.e. it contains only data coded as 1 (TRUE, success, pregnant, etc.) or 0 (FALSE, failure, non-pregnant, etc.); a brief illustration follows this glossary.
Transcription Factor: In molecular biology, a transcription factor (TF) (or sequence-specific DNA-binding factor) is a protein that controls the rate of transcription of genetic information from DNA to messenger RNA, by binding to a specific DNA sequence. The function of TFs is to regulate – turn on and off – genes in order to make sure that they are expressed in the right cell at the right time and in the right amount throughout the life of the cell and the organism.
Nucleotide: Organic molecules that are the building blocks of DNA and RNA. They also have functions related to cell signaling, metabolism, and enzyme reactions.
MZ: Monozygotic. Of twins derived from a single ovum (egg), and so identical.
DZ: Dizygotic. Of twins derived from two separate ova (eggs). Fraternal twin or nonidentical twin.
DNA: Deoxyribonucleic Acid.
RNA: Ribonucleic acid is a polymeric molecule essential in various biological roles in coding, decoding, regulation, and expression of genes. RNA and DNA are nucleic acids, and, along with lipids, proteins and carbohydrates, constitute the four major macromolecules essential for all known forms of life.
Polymorphism: A location in a gene that comes in multiple forms.
Allele: Natural variation in the genetic sequence; can be a change in a single nucleotide or longer stretches of DNA.
GWAS: Genome-wide Association Study.
ORs: Odds Ratios.
Phenotype: The observed outcome under study; can be the manifestation of both genetic and/or environmental factors.
Dichotomy: A division or contrast between two things that are or are represented as being opposed or entirely different.
Chromosome: A single piece of coiled DNA containing many genes, regulatory elements, and other nucleotide sequences.
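
To make the logistic regression entry above concrete, here is a minimal, self-contained Python sketch. The data are simulated and the variable names (genotype, adversity, outcome) are made up purely for illustration; this is not code from any study discussed here, and the coefficients carry no empirical meaning. The model includes a gene-by-environment product term, so the fit shows how a GxE effect on a dichotomous outcome can be tested.

# A minimal sketch of logistic regression with a binary (0/1) outcome,
# including a gene-by-environment interaction term. Illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
genotype = rng.integers(0, 2, n)      # hypothetical coding: 0 = one genotype group, 1 = the other
adversity = rng.normal(0.0, 1.0, n)   # hypothetical standardized environmental adversity score

# Simulate a dichotomous outcome whose log-odds depend on genotype,
# environment, and their interaction (coefficients are arbitrary).
log_odds = -1.0 + 0.2 * genotype + 0.5 * adversity + 0.6 * genotype * adversity
outcome = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

X = sm.add_constant(np.column_stack([genotype, adversity, genotype * adversity]))
fit = sm.Logit(outcome, X).fit(disp=False)
print(fit.params)          # intercept, genotype, adversity, and GxE coefficients
print(np.exp(fit.params))  # exponentiated coefficients are odds ratios (ORs)

Exponentiating the fitted coefficients yields the odds ratios (ORs) defined above; a product-term coefficient reliably different from zero is the statistical signature of interaction on the log-odds scale.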

Gene-Environment Interaction and Differential Susceptibility

Scott Barry Kaufman

Only a few genetic mutations have been discovered so far that demonstrate differential susceptibility effects. Most of the genes that have been discovered contribute to the production of the neurotransmitters dopamine and serotonin. Both of these biological systems contribute heavily to many aspects of engagement with the world, positive emotions, anxiety, depression, and mood fluctuations. So far, the evidence suggests (but is still tentative) that certain genetic variants under harsh and abusive conditions are associated with anxiety and depression, but that the very same genetic variants are associated with the lowest levels of anxiety, depression, and fear under supportive, nurturing conditions. There hasn’t been too much research looking at differential susceptibility effects on other systems that involve learning and exploration, however.

Enter a brand new study

Rising superstar Rachael Grazioplene and colleagues focused on the cholinergic system, a biological system crucially involved in neural plasticity and learning. Situations that activate the cholinergic system involve “expected uncertainty”, such as visiting a country you’ve never been to before and knowing that you’re going to face things you’ve never faced before. This stands in contrast to “unexpected uncertainty”, which occurs when your expectations are violated, such as thinking you’re going to a family-friendly Cirque du Soleil show in Las Vegas only to realize you’ve actually gotten a ticket to an all-male dance revue called “Thunder from Down Under” (I have no idea where that example came from). Those sorts of experiences are more strongly related to the neurotransmitter norepinephrine.

Since the cholinergic system is most active in situations when a person can predict that learning is possible, this makes the system a prime candidate for the differential susceptibility effect. As the researchers note, unpredictable and novel environments could function as either threats or incentive rewards. When the significance of the environment is uncertain, both caution and exploration are adaptive. Therefore, traits relating to anxiety or curiosity should be influenced by cholinergic genetic variants, with developmental experiences determining whether individuals find expected uncertainty either more threatening or more promising.

To test their hypothesis, they focused on a polymorphism in the CHRNA4 gene, which builds a certain kind of neural receptor that the neurotransmitter acetylcholine binds to. These acetylcholine receptors are distributed throughout the brain, and are especially involved in the functioning of dopamine in the striatum. Genetic differences in the CHRNA4 gene seem to change the sensitivity of the brain’s acetylcholine system because small structural changes in these receptors make acetylcholine binding more or less likely. Previous studies have shown associations between variation in the CHRNA4 gene and neuroticism as well as laboratory tests of attention and working memory.

The researchers looked at the functioning of this gene among a group of 614 children aged 8-13 enrolled in a week-long day camp. Half of the children in the day camp were selected because they had been maltreated (sexual maltreatment), whereas the other half were carefully selected to come from the same socioeconomic status but not to have experienced any maltreatment. This study provides an ideal experimental design and environmental conditions for testing the differential susceptibility effect. Not only were the backgrounds of the children clearly defined, but they were also dramatically different from each other. Additionally, all children engaged in the same novel learning environment, an environment well suited for cholinergic functioning. What did they find?

Individuals with the T/T variant of the CHRNA4 gene who had been maltreated showed higher levels of anxiety (Neuroticism) compared to those with the C allele of this gene, suggesting that they were more likely to learn fearful responses under conditions of uncertainty. In contrast, those with the T/T genotype who were not maltreated were low in anxiety (Neuroticism) and high in curiosity (Openness to Experience). What’s more, this effect was independent of age, race, and sex.

In supportive environments, then, the T/T allele (which is much rarer in the general population than the C allele) may be beneficial, bringing out lower levels of anxiety and increased curiosity in response to situations containing expected uncertainty.

These results are certainly exciting, but a few important caveats are in order. For one thing, the T/T genotype is very rare in the general population, which makes it all the more important for future studies to attempt to replicate these findings. Also, we’re talking vanishingly small effects here. The CHRNA4 variant only explained at most 1% of the variation in neuroticism and openness to experience. So we shouldn’t go around trying to predict individual people’s futures based on knowledge of a single gene and a single environment.

Scientifically speaking though, this level of prediction is expected based on the fact that all of our psychological dispositions are massively polygenic (consisting of many interacting genes). Both gene-gene and gene-environment interactions must be taken into account.

Indeed, recent research found that the more sensitivity (“plasticity”) genes relating to the dopamine and serotonin systems adolescent males carried, the less self-regulation they displayed under unsupportive parenting conditions. In line with the differential susceptibility effect, the reverse was also found: higher levels of self-regulation were displayed by the adolescent males carrying more sensitivity genes when they were reared under supportive parenting conditions.

The findings by Grazioplene and colleagues add to a growing literature on acetylcholine’s role in the emergence of schizophrenia and mood disorders. As the researchers note, these findings, while small in effect, may have important implications, since maltreatment is a known risk factor for many psychiatric disorders. Children with the T/T genotype of CHRNA4 rs1044396 may be more likely to learn fearful responses in harsh and abusive environments, but children with the very same genotype may be more likely to display curiosity and engagement in response to uncertainty under normal or supportive conditions.

While it’s profoundly difficult to predict the developmental trajectory of any single individual, this research suggests we can influence the odds that people will retreat within themselves or unleash the fundamentally human drive to explore and create.

Gene-Environment Interaction in Psychological Traits and Disorders

Danielle M. Dick

There has been an explosion of interest in studying gene-environment interactions (GxE) as they relate to the development of psychopathology. In this article, I review different methodologies to study gene-environment interaction, providing an overview of methods from animal and human studies and illustrations of gene-environment interactions detected using these various methodologies. Gene-environment interaction studies that examine genetic influences as modeled latently (e.g., from family, twin, and adoption studies) are covered, as well as studies of measured genotypes.

Importantly, the explosion of interest in gene-environment interactions has raised a number of challenges, including difficulties with differentiating various types of interactions, power, and the scaling of environmental measures, which have profound implications for detecting gene-environment interactions. Taking research on gene-environment interactions to the next level will necessitate close collaborations between psychologists and geneticists so that each field can take advantage of the knowledge base of the other.

INTRODUCTION

Gene-environment interaction (GxE) has become a hot topic of research, with an exponential increase in interest in this area in the past decade. Consider that PubMed lists only 24 citations for “gene environment interaction” prior to the year 2000, but nearly four times that many in the first half of the year 2010 alone! The projected publications on gene-environment interaction for 2008–2010 are on track to constitute more than 40% of the total number of publications on gene-environment interaction indexed in PubMed.

Where does all this interest stem from? It may, in part, reflect a merging of interests from fields that were traditionally at odds with one another. Historically, there was a perception that behavior geneticists focused on genetic influences on behavior at the expense of studying environmental influences and that developmental psychologists focused on environmental influences and largely ignored genetic factors. Although this criticism is not entirely founded on the part of either field, methodological and ideological differences between these respective fields meant that genetic and environmental influences were traditionally studied in isolation.

More recently, there has been recognition on the part of both of these fields that both genetic and environmental influences are critical components to developmental outcome and that it is far more fruitful to attempt to understand how these factors come together to impact psychological outcomes than to argue about which one is more important. As Kendler and Eaves argued in their article on the joint effect of genes and environments, published more than two decades ago:

It is our conviction that a complete understanding of the etiology of most psychiatric disorders will require an understanding of the relevant genetic risk factors, the relevant environmental risk factors, and the ways in which these two risk factors interact. Such understanding will only arise from research in which the important environmental variables are measured in a genetically informative design. Such research will require a synthesis of research traditions within psychiatry that have often been at odds with one another in the past. This interaction between the research tradition that has focused on the genetic etiology of psychiatric illness and that which has emphasized environmental causation will undoubtedly be to the benefit of both. (Kendler & Eaves 1986, p. 288)

The PubMed data showing an exponential increase in published papers on gene-environment interaction suggest that that day has arrived. This has been facilitated by the rapid advances that have taken place in the field of genetics, making the incorporation of genetic components into traditional psychological studies a relatively easy and inexpensive endeavor. But with this surge of interest in gene-environment interaction, a number of new complications have emerged, and the study of gene-environment interaction faces new challenges, including a recent backlash against studying gene-environment interaction (Risch et al. 2009). Addressing these challenges will be critical to moving research on gene-environment interaction forward in a productive way.

In this article, I first review different study designs for detecting gene-environment interaction, providing an overview of methods from animal and human studies. I cover gene-environment interaction studies that examine genetic influences as modeled latently as well as studies of measured genotypes. In the study of latent gene-environment interaction, specific genotypes are not measured, but rather genetic influence is inferred based on observed correlations between people who have different degrees of genetic and environmental sharing. Thus, latent gene-environment interaction studies examine the aggregate effects of genes rather than any one specific gene.

Molecular genetic studies, in contrast, have generally focused on one specific gene of interest at a time. Relevant examples of gene-environment interaction across these different methodologies are provided, though these are meant to be more illustrative than exhaustive, intended to introduce the reader to relevant studies and findings generated across these various designs.

Subsequently I review more conceptual issues surrounding the study of gene-environment interaction, covering the nature of gene-environment interaction effects as well as the challenges facing the study of gene-environment interaction, such as difficulties with differentiating various types of interactions, and how issues such as the scaling of environmental measures can have profound implications for studying gene-environment interaction.

I include an overview of epigenetics, a relatively new area of study that provides a potential biological mechanism by which the environment can moderate gene expression and affect behavior.

Finally, I conclude with recommendations for future directions and how we can take research on gene-environment interaction to the next level.

DEFINING GENE-ENVIRONMENT INTERACTION AND DIFFERENTIATING GENE-ENVIRONMENT CORRELATION

It is important to first address some aspects of terminology surrounding the study of gene-environment interaction. In lay terms, the phrase gene-environment interaction is often used to mean that both genes and environments are important. In statistical terms, this does not necessarily indicate an interaction but could be consistent with an additive model, in which there are main effects of the environment and main effects of genes.

But in a statistical sense an interaction is a very specific thing, referring to a situation in which the effect of one variable cannot be understood without taking into account the other variable. Their effects are not independent. When we refer to gene-environment interaction in a statistical sense, we are referring to a situation in which the effect of genes depends on the environment and/or the effect of the environment depends on genotype. We note that these two alternative conceptualizations of gene-environment interaction are indistinguishable statistically. It is this statistical definition of gene-environment interaction that is the primary focus of this review (except where otherwise noted).
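
A small amount of notation may help fix this distinction (this is a generic regression sketch, not the specification used in any particular study reviewed here). In an additive model,

$$Y = \beta_0 + \beta_1 G + \beta_2 E + \varepsilon,$$

the effect of genotype G on the outcome Y is the same at every level of the environment E. A statistical gene-environment interaction adds a product term,

$$Y = \beta_0 + \beta_1 G + \beta_2 E + \beta_3 (G \times E) + \varepsilon,$$

and a nonzero $\beta_3$ means the effect of G depends on E (equivalently, the effect of E depends on G), which is exactly the statistical sense of interaction intended here.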

It is also important to note that genetic and environmental influences are not necessarily independent factors. That is to say that although some environmental influences may be largely random, such as experiencing a natural disaster, many environmental influences are not entirely random (Kendler et al. 1993).

This phenomenon is called gene-environment correlation.

Three specific ways by which genes may exert an effect on the environment have been delineated (Plomin et al. 1977, Scarr & McCartney 1983):

(a) Passive gene-environment correlation refers to the fact that among biologically related relatives (i.e., nonadoptive families), parents provide not only their children’s genotypes but also their rearing environment. Therefore, the child’s genotype and home environment are correlated.

(b) Evocative gene-environment correlation refers to the idea that individuals’ genotypes influence the responses they receive from others. For example, a child who is predisposed to having an outgoing, cheerful disposition might be more likely to receive positive attention from others than a child who is predisposed to timidity and tears. A person with a grumpy, abrasive temperament is more likely to evoke unpleasant responses from coworkers and others with whom he/she interacts than is a cheerful, friendly person. Thus, evocative gene-environment correlation can influence the way an individual experiences the world.

(c) Active gene-environment correlation refers to the fact that an individual actively selects certain environments and takes away different things from his/her environment, and these processes are influenced by an individual’s genotype. Therefore, an individual predisposed to high sensation seeking may be more prone to attend parties and meet new people, thereby actively influencing the environments he/she experiences.

Evidence exists in the literature for each of these processes. The important point is that many sources of behavioral influence that we might consider “environmental” are actually under a degree of genetic influence (Kendler & Baker 2007), so often genetic and environmental influences do not represent independent sources of influence. This also makes it difficult to determine whether the genes or the environment is the causal agent. If, for example, individuals are genetically predisposed toward sensation seeking, and this makes them more likely to spend time in bars (a gene-environment correlation), and this increases their risk for alcohol problems, are the predisposing sensation-seeking genes or the bar environment the causal agent?

In actuality, the question is moot: they both played a role. It is much more informative to try to understand the pathways of risk than to ask whether the genes or the environment was the critical factor. Though this review focuses on gene-environment interaction, it is important for the reader to be aware that this is but one process by which genetic and environmental influences are intertwined. Additionally, gene-environment correlation must be taken into account when studying gene-environment interaction, a point that is mentioned again later in this review. Excellent reviews covering the nature and importance of gene-environment correlation also exist (Kendler 2011).

METHODS FOR STUDYING GENE-ENVIRONMENT INTERACTION

Animal Research

Perhaps the most straightforward method for detecting gene-environment interaction is found in animal experimentation: Different genetic strains of animals can be subjected to different environments to directly test for gene-environment interaction. The key advantage of animal studies is that environmental exposure can be made random to genotype, eliminating gene-environment correlation and associated problems with interpretation.

The most widely cited example of this line of research is Cooper and Zubek’s (1958) experiment, in which rats were selectively bred to perform differently in a maze-running task. Under standard environmental conditions, one group of rats consistently performed with few errors (“maze bright”), while a second group committed many errors (“maze dull”). These selectively bred rats were then exposed to various environmental conditions: an enriched condition, in which rats were reared in brightly colored cages with many moveable objects, or a restricted condition, in which there were no colors or toys. The enriched condition had no effect on the maze bright rats, but it substantially improved the performance of the maze dull rats, such that there was no difference between the groups.

Conversely, the restrictive environment did not affect the performance of the maze dull rats, but it substantially diminished the performance of the maze bright rats, again yielding no difference between the groups and demonstrating a powerful gene-environment interaction.

A series of experiments conducted by Henderson on inbred strains of mice, in which environmental enrichment was manipulated, also provides evidence for gene-environment interaction on several behavioral tasks (Henderson 1970, 1972). These studies laid the foundation for many future studies, which collectively demonstrate that environmental variation can have considerable differential impact on outcome depending on the genetic make-up of the animal (Wahlsten et al. 2003).

However, animal studies are not without their limitations. Gene-environment interaction effects detected in animal studies are still subject to the problem of scale (Mather & Jinks 1982), as discussed in greater detail later in this review.

Human Research

Traditional behavior genetic designs

Demonstrating gene-environment interaction in humans has been considerably more difficult, because ethical constraints require researchers to rely on natural experiments in which environmental exposures are not random. Three traditional study designs have been used to demonstrate genetic influence on behavior: family studies, adoption studies, and twin studies. These designs have also been used to detect gene-environment interaction, and each is discussed in turn.

Family studies

Demonstration that a behavior aggregates in families is the first step in establishing a genetic basis for a disorder (Hewitt & Turner 1995). Decreasing similarity with decreasing degrees of relatedness lends support to genetic influence on a behavior (Gottesman 1991). This is a necessary, but not sufficient, condition for heritability. Similarity among family members is due both to shared genes and shared environment; family studies cannot tease apart these two sources of variance to determine whether familiality is due to genetic or common environmental causes (Sherman et al. 1997).

However, family studies provide a powerful method for identifying gene-environment interaction. By comparing high-risk children, identified as such by the presence of psychopathology in their parents, with a control group of low-risk individuals, it is possible to test the effects of environmental characteristics on individuals varying in genetic risk (Cannon et al. 1990).

In a high-risk study of Danish children with schizophrenic mothers and matched controls, institutional rearing was associated with an elevated risk of schizophrenia only among those children with a genetic predisposition (Cannon et al. 1990). When these subjects were further classified on genetic risk as having one or two affected parents, a significant interaction emerged between degree of genetic risk and birth complications in predicting ventricle enlargement: The relationship between obstetric complications and ventricular enlargement was greater in the group of individuals with one affected parent as compared to controls, and greater still in the group of individuals with two affected parents (Cannon et al. 1993). Another study also found that among individuals at high risk for schizophrenia, experiencing obstetric complications was related to an earlier hospitalization (Malaspina et al. 1999).

Another creative method has made use of the natural experiment of family migration to demonstrate gene-environment interaction: The high rate of schizophrenia among African-Caribbean individuals who emigrated to the United Kingdom is presumed to result from gene-environment interaction. Parents and siblings of first-generation African-Caribbean probands have risks of schizophrenia similar to those for white individuals in the area. However, the siblings of second-generation African-Caribbean probands have markedly elevated rates of schizophrenia, suggesting that the increase in schizophrenia rates is due to an interaction between genetic predispositions and stressful environmental factors encountered by this population (Malaspina et al. 1999, Moldin & Gottesman 1997).

Although family studies provide a powerful design for demonstrating gene-environment interaction, there are limitations to their utility. High-risk studies are very expensive to conduct because they require the examination of individuals over a long period of time. Additionally, a large number of high-risk individuals must be studied in order to obtain a sufficient number of individuals who eventually become affected, due to the low base rate of most mental disorders. Because of these limitations, few examples of high-risk studies exist.

Adoption studies

Adoption and twin studies are able to clarify the extent to which similarity among family members is due to shared genes versus shared environment. In their simplest form, adoption studies involve comparing the extent to which adoptees resemble their biological relatives, with whom they share genes but not family environment, with the extent to which adoptees resemble their adoptive relatives, with whom they share family environment but not genes.

Adoption studies have been pivotal in advancing our understanding of the etiology of many disorders and drawing attention to the importance of genetic factors. For example, Heston’s historic adoption study was critical in dispelling the myth of schizophrenogenic mothers in favor of a genetic transmission explaining the familiality of schizophrenia (Heston & Denney 1967).

Furthermore, adoption studies provide a powerful method of detecting gene-environment interactions and have been called the human analogue of strain-by-treatment animal studies (Plomin & Hershberger 1991). The genotype of adopted children is inferred from their biological parents, and the environment is measured in the adoptive home. Individuals thought to be at genetic risk for a disorder, but reared in adoptive homes with different environments, are compared to each other and to control adoptees.

This methodology has been employed by a number of research groups to document gene-environment interactions in a variety of clinical disorders: In a series of Iowa adoption studies, Cadoret and colleagues demonstrated that a genetic predisposition to alcohol abuse predicted major depression in females only among adoptees who also experienced a disturbed environment, as defined by psychopathology, divorce, or legal problems among the adoptive parents (Cadoret et al. 1996).

In another study, depression scores and manic symptoms were found to be higher among individuals with a genetic predisposition and a later age of adoption (suggesting a more transient and stressful childhood) than among those with only a genetic predisposition (Cadoret et al. 1990).

In an adoption study of Swedish men, mild and severe alcohol abuse were more prevalent only among men who had both a genetic predisposition and more disadvantaged adoptive environments (Cloninger et al. 1981).

The Finnish Adoptive Family Study of Schizophrenia found that high genetic risk was associated with increased risk of schizophrenic thought disorder only when combined with communication deviance in the adoptive family (Wahlberg et al. 1997).

Additionally, the adoptees had a greater risk of psychological disturbance, defined as neuroticism, personality disorders, and psychoticism, when the adoptive family environment was disturbed (Tienari et al. 1990).

These studies have demonstrated that genetic predispositions for a number of psychiatric disorders interact with environmental influences to manifest disorder.

However, adoption studies suffer from a number of methodological limitations. Adoptive parents and biological parents of adoptees are often not representative of the general population. Adoptive parents tend to be socioeconomically advantaged and have lower rates of mental health problems, due to the extensive screening procedures conducted by adoption agencies (Kendler 1993). Biological parents of adoptees tend to be atypical as well, but in the opposite direction. Additionally, selective placement by adoption agencies confounds the clear-cut separation between genetic and environmental effects by matching adoptees and adoptive parents on demographics, such as race and religion. An increasing number of adoptions also allow contact between the biological parents and adopted children, further eroding the traditional separation of genetic and environmental influences that made adoption studies useful for genetically informative research.

Finally, greater contraceptive use is making adoption increasingly rare (Martin et al. 1997). Accordingly, this research strategy has become increasingly challenging, though a number of current adoption studies continue to make important contributions to the field (Leve et al. 2010; McGue et al. 1995, 1996).

Twin studies

Twins provide a number of ways to study gene-environment interaction. One such method is to study monozygotic twins reared apart (MZA). MZAs provide a unique opportunity to study the influence of different environments on identical genotypes. In the Swedish Adoption/Twin Study of Aging, data from 99 pairs of MZAs were tested for interactions between childhood rearing and adult personality (Bergeman et al. 1988).

Several significant interactions emerged. In some cases, the environment had a stronger impact on individuals genetically predisposed to be low on a given trait (based on the cotwin’s score). For example, individuals high in extraversion expressed the trait regardless of the environment; however, individuals predisposed to low extraversion had even lower scores in the presence of a controlling family.

In other traits, the environment had a greater impact on individuals genetically predisposed to be high on the trait: Individuals predisposed to impulsivity were even more impulsive in a conflictual family environment; individuals low on impulsivity were not affected.

Finally, some environments influenced both individuals who were high and low on a given trait, but in opposite directions: Families that were more involved masked genetic differences between individuals predisposed toward high or low neuroticism, but greater genetic variation emerged in less controlling families.

The implementation of population-based twin studies, inclusion of measured environments into twin studies, and advances in biometrical modeling techniques for twin data made it possible to study gene-environment interaction within the framework of the classic twin study. Traditional twin studies involve comparisons of monozygotic (MZ) and dizygotic (DZ) twins reared together. MZ twins share all of their genetic variation, whereas DZ twins share on average 50% of their genetic make-up; however, both types of twins are age-matched siblings sharing their family environments. This allows heritability, or the proportion of variance attributed to additive genetic effects, to be estimated by (a) doubling the difference between the correlation found between MZ twins and the correlation found between DZ twins, for quantitative traits, or (b) comparing concordance rates between MZs and DZs, for qualitative disorders (McGue & Bouchard 1998).
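
As a worked illustration of approach (a), using hypothetical correlations rather than data from any study cited here: if MZ twins correlate $r_{MZ} = .60$ on a trait and DZ twins correlate $r_{DZ} = .35$, heritability is estimated as

$$h^2 = 2(r_{MZ} - r_{DZ}) = 2(.60 - .35) = .50,$$

that is, roughly half of the phenotypic variance (see the Heritability entry in the glossary above) would be attributed to additive genetic effects.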

Biometrical model-fitting made it possible for researchers to address increasingly sophisticated research questions by allowing one to statistically specify predictions made by various hypotheses and to compare models testing competing hypotheses. By modeling data from subjects who vary on exposure to a specified environment, one could test whether there is differential expression of genetic influences in different environments.

Early examples of gene-environment interaction in twin models necessitated “grouping” environments to fit multiple group models. The basic idea was simple: Fit models to data for people in environment 1 and environment 2 separately and then test whether there were significant differences in the importance of genetic and environmental factors across the groups using basic structural equation modeling techniques. In an early example of gene-environment interaction, data from the Australian twin register were used to test whether the relative importance of genetic effects on alcohol consumption varied as a function of marital status, and in fact they did (Heath et al. 1989).

Having a marriage-like relationship reduced the impact of genetic influences on drinking: Among the younger sample of twins, genetic liability accounted for but half as much variance in drinking among married women (31%) as among unmarried women (60%). A parallel effect was found among the adult twins: Genetic effects accounted for less than 60% of the variance in married respondents but more than 76% in unmarried respondents (Heath et al. 1989).

In an independent sample of Dutch twins, religiosity was also shown to moderate genetic and environmental influences on alcohol use initiation in females (with nonsignificant trends in the same direction for males): In females without a religious upbringing, genetic influences accounted for 40% of the variance in alcohol use initiation compared to 0% in religiously raised females. Shared environmental influences were far more important in the religious females (Koopmans et al. 1999).

In data from our population-based Finnish twin sample, we also found that regional residency moderates the impact of genetic and environmental influences on alcohol use. Genetic effects played a larger role in longitudinal drinking patterns from late adolescence to early adulthood among individuals residing in urban settings, whereas common environmental effects exerted a greater influence across this age range among individuals in rural settings (Rose et al. 2001).

When one has pairs discordant for exposure, it is also possible to ask about genetic correlation between traits displayed in different environments.

One obvious limitation of modeling gene-environment interaction in this way was that it constrained investigation to environments that fell into natural groupings (e.g., married/unmarried; urban/rural) or forced investigators to create groups based on environments that may actually be more continuous in nature (e.g., religiosity). In the first extension of this work to quasi-continuous environmental moderation, we developed a model that allowed genetic and environmental influences to vary as a function of a continuous environmental moderator and used this model to follow up on the urban/rural interaction reported previously (Dick et al. 2001).
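
Schematically, such continuous-moderation twin models allow each variance component to change as a linear function of the measured moderator M (this is a sketch of the general form of these models, not the exact parameterization used by Dick et al. 2001):

$$\mathrm{Var}(P \mid M) = (a_0 + a_X M)^2 + (c_0 + c_X M)^2 + (e_0 + e_X M)^2,$$

where $a_0$, $c_0$, and $e_0$ are the additive genetic, shared environmental, and nonshared environmental paths when M = 0, and $a_X$, $c_X$, and $e_X$ capture how each component grows or shrinks as the moderator (e.g., an urban-rural index) varies. A reliably nonzero $a_X$ indicates that the magnitude of genetic influence depends on the environment.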

We believed it likely that the urban/rural moderation effect reflected a composite of different processes at work. Accordingly, we expanded the analyses to incorporate more specific information about neighborhood environments, using government-collected information about the specific municipalities in which the twins resided (Dick et al. 2001). We found that genetic influences were stronger in environments characterized by higher rates of migration in and out of the municipality; conversely, shared environmental influences predominated in local communities characterized by little migration.

We also found that genetic predispositions were stronger in communities composed of a higher percentage of young adults slightly older than our age-18 Finnish twins and in regions where there were higher alcohol sales.

Further, the magnitude of genetic moderation observed in these models that allowed for variation as a function of a quasi-continuous environmental moderator was striking, with nearly a fivefold difference in the magnitude of genetic effects between environmental extremes in some cases.

The publication of a paper the following year (Purcell 2002) that provided straightforward scripts for continuous gene-environment interaction models using the most widely used program for twin analyses, Mx (Neale 2000), led to a surge of papers studying gene-environment interaction in the twin literature. These scripts also offered the advantage of being able to take into account gene-environment correlation in the context of gene-environment interaction. This was an important advance because previous examples of gene-environment interaction in twin models had been limited to environments that showed no evidence of genetic effects so as to avoid the confounding of gene-environment interaction with gene-environment correlation.

Using these models, we have demonstrated that genetic influences on adolescent substance use are enhanced in environments with lower parental monitoring (Dick et al. 2007c) and in the presence of substance-using friends (Dick et al. 2007b). Similar effects have been demonstrated for more general externalizing behavior: Genetic influences on antisocial behavior were higher in the presence of delinquent peers (Button et al. 2007) and in environments characterized by high parental negativity (Feinberg et al. 2007), low parental warmth (Feinberg et al. 2007), and high paternal punitive discipline (Button et al. 2008).

Further, in an extension of the socioregional-moderating effects observed on age-18 alcohol use, we found a parallel moderating role of these socioregional variables on age-14 behavior problems in girls in a younger Finnish twin sample. Genetic influences assumed greater importance in urban settings, communities with greater migration, and communities with a higher percentage of slightly older adolescents.

Other psychological outcomes have also yielded significant evidence of gene-environment interaction effects in the twin literature. For example, a moderating effect, parallel to that reported for alcohol consumption above, has been reported for depression symptoms (Heath et al. 1998) in females. A marriage-like relationship reduced the influence of genetic liability to depression symptoms, paralleling the effect found for alcohol consumption: Genetic factors accounted for 29% of the variance in depression scores among married women, but for 42% of the variance in young unmarried females and 51% of the variance in older unmarried females (Heath et al. 1998).

Life events were also found to moderate the impact of factors influencing depression in females (Kendler et al. 1991). Genetic and/or shared environmental influences were significantly more important in influencing depression in high-stress than in low-stress environments, as defined by a median split on a life-event inventory, although there was insufficient power to determine whether the moderating influence was on genetic or environmental effects.

More than simply accumulating examples of moderation of genetic influence by environmental factors, efforts have been made to integrate this work into theoretical frameworks surrounding the etiology of different clinical conditions. This is critical if science is to advance beyond individual observations to testable broad theories.

A 2005 review paper by Shanahan and Hofer suggested four processes by which social context may moderate the relative importance of genetic effects (Shanahan & Hofer 2005).

The environment may (a) trigger or (b) compensate for a genetic predisposition, (c) control the expression of a genetic predisposition, or (d) enhance a genetic predisposition (referring to the accentuation of “positive” genetic predispositions).

These processes are not mutually exclusive and can represent different ends of a continuum. For example, the interaction between genetic susceptibility and life events may represent a situation whereby the experience of life events triggers a genetic susceptibility to depression. Conversely, “protective” environments, such as marriage-like relationships and low stress levels, can buffer against or reduce the impact of genetic predispositions to depressive problems.

Many different processes are likely involved in the gene-environment interactions observed for substance use and antisocial behavior. For example, family environment and peer substance use/delinquency likely constitute a spectrum of risk or protection, and family/friend environments that are at the “poor” extreme may trigger genetic predispositions toward substance use and antisocial behavior, whereas positive family and friend relationships may compensate for genetic predispositions toward substance use and antisocial behavior.

Social control also appears to be a particularly relevant process in substance use, as it is likely that being in a marriage-like relationship and/or being raised with a religious upbringing imposes social norms that constrain behavior and thereby reduce the expression of genetic predispositions toward substance use.

Further, the availability of the substance also serves as a form of control over the ability to express genetic predispositions and, accordingly, over the degree to which genetic influences will be apparent on an outcome at the population level. In a compelling illustration of this effect, Boardman and colleagues used twin data from the National Survey of Midlife Development in the United States and found a significant reduction in the importance of genetic influences on regular smoking following legislation prohibiting smoking in public places (Boardman et al. 2010).

Molecular analyses

All of the analyses discussed thus far use latent, unmeasured indices of genetic influence to detect the possible presence of gene-environment interaction. This is largely because it was possible to test for the presence of latent genetic influence in humans (via comparisons of correlations between relatives with different degrees of genetic sharing) long before molecular genetics yielded the techniques necessary to identify specific genes influencing complex psychological disorders.

However, recent advances have made the collection of deoxyribonucleic acid (DNA) and the resulting genotyping relatively cheap and straightforward. Additionally, the publication of high-profile papers brought gene-environment interaction to the forefront of mainstream psychology. In a pair of papers published in Science in 2002 and 2003, Caspi and colleagues analyzed data from a prospective, longitudinal New Zealand birth cohort followed from birth through adulthood.

In the 2002 paper, they reported that a functional polymorphism in the gene encoding the neurotransmitter-metabolizing enzyme monoamine oxidase A (MAOA) moderated the effect of maltreatment: Males who carried the genotype conferring high levels of MAOA expression were less likely to develop antisocial problems when exposed to maltreatment (Caspi et al. 2002). In the 2003 paper, they reported that a functional polymorphism in the promoter region of the serotonin transporter gene (5-HTT) was found to moderate the influence of stressful life events on depression. Individuals carrying the short allele of the 5-HTT promoter polymorphism exhibited more depressive symptoms, diagnosable depression, and suicidality in relation to stressful life events than did individuals homozygous for the long allele (Caspi et al. 2003).

Both studies were significant in demonstrating that genetic variation can moderate individuals’ sensitivity to environmental events.
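
For readers less familiar with how such measured-gene analyses are set up, the sketch below shows the core step in generic form: a regression that includes genotype, environment, and their product term, whose coefficient carries the gene-environment interaction. The file name, variable names, and coding are hypothetical illustrations, not the variables used in the Caspi papers.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data set: one row per person.
# genotype: count of risk alleles (0, 1, 2); stress: number of stressful life events;
# depressed: 0/1 diagnosis.
df = pd.read_csv("cohort.csv")

# Logistic regression with a gene-by-environment product term; the coefficient on
# genotype:stress is the interaction effect on the log-odds scale.
fit = smf.logit("depressed ~ genotype * stress", data=df).fit()
print(fit.summary())

# A continuous symptom count would instead be modeled with ordinary least squares:
# smf.ols("symptoms ~ genotype * stress", data=df).fit()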

These studies sparked a multitude of reports that aimed to replicate, or to further extend and explore, the findings of the original papers, resulting in huge literatures surrounding each reported gene-environment interaction in the years since the original publications (e.g., Edwards et al. 2009, Enoch et al. 2010, Frazzetto et al. 2007, Kim-Cohen et al. 2006, McDermott et al. 2009, Prom-Wormley et al. 2009, Vanyukov et al. 2007, Weder et al. 2009). It is beyond the scope of this review to detail these studies; however, of note was the publication in 2009 of a highly publicized meta-analysis of the interaction between 5-HTT, stressful life events, and risk of depression, which concluded there was “no evidence that the serotonin transporter genotype alone or in interaction with stressful life events is associated with an elevated risk of depression in men alone, women alone, or in both sexes combined” (Risch et al. 2009). Further, the authors were critical of the rapid embracing of gene-environment interaction and the substantial resources that have been devoted to this research.

The paper stimulated considerable backlash against the study of gene-environment interactions, and the pendulum appeared to be swinging back in the other direction. However, a recent review by Caspi and colleagues entitled “Genetic Sensitivity to the Environment: The Case of the Serotonin Transporter Gene and Its Implications for Studying Complex Diseases and Traits” highlighted the fact that evidence for the involvement of 5-HTT in stress sensitivity comes from at least four different types of studies: observational studies in humans, experimental neuroscience studies, studies in nonhuman primates, and studies of 5-HTT mutations in rodents (Caspi et al. 2010).

Further, the authors made a distinction between different cultures of evaluating gene-environment interactions: a purely statistical (theory-free) approach that relies wholly on meta-analysis (such as that taken by Risch et al. 2009) versus a construct-validity (theory-guided) approach that looks for a nomological network of convergent evidence, such as the approach that they took.

It is likely that this distinction also reflects differences in training and emphasis across fields. The most cutting-edge genetic strategies at any given point, though they have changed drastically and rapidly over the past several decades, have generally involved atheoretical methods for gene identification (Neale et al. 2008). This was true of early linkage analyses, in which ~400 to 1,000 markers were scanned across the genome to search for chromosomal regions shared by affected family members, suggesting that a gene in such a region harbored risk for the particular outcome under study. This allowed geneticists to search for genes without needing to know anything about the underlying biology, an advantage given that our understanding of the biology of most psychiatric conditions is limited, and with the expectation that identifying risk genes would itself be informative about etiological processes.

Although it is now recognized that linkage studies were underpowered to detect genes of small effect, such as those now thought to be operating in psychiatric conditions, this atheoretical approach was retained in the next generation of gene-finding methods that replaced linkage: genome-wide association studies (GWAS) (Cardon 2006). GWAS have the same general framework of scanning markers located across the entire genome in an effort to detect association between genetic markers and disease status; however, in GWAS over a million markers (or more, on the newest genetic platforms) are analyzed.

The next technique on the horizon is sequencing, in which entire stretches of DNA are sequenced to determine the exact base-pair sequence for a given region (McKenna et al. 2010).

From linkage to sequencing, what is common across all these techniques is an atheoretical framework for finding genes that necessarily involves conducting very large numbers of tests. Accordingly, there has been great emphasis in the field of genetics on correction for multiple testing (van den Oord 2007). In addition, the estimated magnitude of the effect sizes of genetic variants thought to influence complex behavioral outcomes has been continually shifted downward, as studies that were sufficiently powered to detect effect sizes previously thought to be reasonable have failed to generate positive findings (Manolio et al. 2009). GWAS have led the field to believe that genes influencing complex behavioral outcomes likely have odds ratios (ORs) on the order of 1.1. This has led to a need for extremely large sample sizes, requiring meta-analytic GWAS efforts with several tens of thousands of subjects (Landi et al. 2009, Lindgren et al. 2009).
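
To see why odds ratios near 1.1 force samples of that size, a rough back-of-the-envelope power calculation is sketched below. The allele frequency, case-control design, 80% power target, and genome-wide alpha of 5e-8 are illustrative assumptions of mine, not figures taken from the studies cited.

from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# Illustrative assumptions: risk-allele frequency of 0.30 in controls, a true odds
# ratio of 1.1, 80% power, and genome-wide significance of 5e-8 (roughly 0.05
# Bonferroni-corrected for about a million tests).
p_control = 0.30
odds_case = 1.1 * p_control / (1 - p_control)
p_case = odds_case / (1 + odds_case)

effect = proportion_effectsize(p_case, p_control)   # Cohen's h for two proportions
n_per_group = NormalIndPower().solve_power(effect_size=effect, alpha=5e-8,
                                            power=0.80, ratio=1.0)
print(round(n_per_group))   # on the order of 20,000 cases (and as many controls)

Under these assumptions the required sample runs to tens of thousands of subjects in total, which is exactly why meta-analytic consortia are needed.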

It is important to note that there has been increasing attention to the topic of gene-environment interaction from geneticists (Engelman et al. 2009). This likely reflects, in part, frustration and difficulty with identifying genes that impact complex psychiatric outcomes. Several hypotheses have been put forth as possible explanations for the failure to robustly detect genes involved in psychiatric outcomes, including a genetic model involving far more genes, each of very small effect, than was previously recognized, as well as inadequate attention to rare variants, copy number variants, and gene-environment interaction (Manolio et al. 2009).

Accordingly, gene-environment interaction is being discussed far more in the area of gene finding than in years past; however, these discussions often involve atheoretical approaches and center on methods to adequately detect gene-environment interaction in the presence of extensive multiple testing (Gauderman 2002, Gauderman et al. 2010). The papers by Risch et al. (2009) and Caspi et al. (2010) on the interaction between 5-HTT, life stress, and depression highlight the conceptual, theoretical, and practical differences that continue to exist between the fields of genetics and psychology surrounding the identification of gene-environment interaction effects.

THE NATURE OF GENE-ENVIRONMENT INTERACTION

An important consideration in the study of gene-environment interaction is the nature, or shape, of the interaction that one hypothesizes. Two primary types of interaction are discussed in the literature: fan-shaped interactions and crossover interactions.

One type of interaction is the fan-shaped interaction. In this type of interaction, the influence of genotype is greater in one environmental context than in another. This is the kind of interaction that is hypothesized by a diathesis-stress framework, whereby genetic influences become more apparent, i.e., are more strongly related to outcome, in the presence of negative environmental conditions. There is a reduced (or no) association of genotype with outcome in the absence of exposure to particular environmental conditions.

The literature surrounding depression and life events would be an example of a hypothesized fan-shaped interaction: When life stressors are encountered, genetically vulnerable individuals are more prone to developing depression, whereas in the absence of life stressors, these individuals may be no more likely to develop depression. In essence, it is only when adverse environmental conditions are experienced that the genes “come on-line.”

Gene-environment interactions in the area of adolescent substance use are also hypothesized to be fan-shaped, where some environmental conditions will allow greater opportunity to express genetic predispositions, allowing for more variation by genotype, and other environments will exert social control in such a way as to curb genetic expression (Shanahan & Hofer 2005), leading to reduced genetic variance.

Twin analyses yielding evidence of genetic influences being more or less important in different environmental contexts are generally suggestive of fan-shaped interactions. Changes in the overall heritability do not necessarily dictate that any one specific susceptibility gene will operate in a parallel manner; however, a change in heritability suggests that at least a good portion of the involved genes (assuming many genes of approximately equal and small effect) must be operating in that manner for a difference in heritability by environment to be detectable.

The diathesis-stress model has largely been the dominant model in psychiatry. Gene-finding efforts have focused on the search for vulnerability genes, and gene-environment interaction has been discussed in the context of these genetic effects becoming more or less important under particular environmental conditions.

Figure: Different types of gene-environment interactions.

More recently, an alternative framework has been proposed by Belsky and colleagues, the differential susceptibility hypothesis, in which the same individuals who are most adversely affected by negative environments may also be those who are most likely to benefit from positive environments. Rather than searching for “vulnerability genes” influencing psychiatric and behavioral outcomes, they propose the idea of “plasticity genes,” or genes involved in responsivity to environmental conditions (Belsky et al. 2009).

Belsky and colleagues reviewed the literatures surrounding gene-environment interactions associated with three widely studied candidate genes, MAOA, 5-HTT, and DRD4, and suggested that the results provide evidence for differential susceptibility associated with these genes (Belsky et al. 2009).

Their hypothesis is closely related to the concept of biological sensitivity to context (Ellis & Boyce 2008). The idea of biological sensitivity to context has its roots in evolutionary developmental biology, whereby selection pressures should favor genotypes that support a range of phenotypes in response to environmental conditions because this flexibility would be beneficial from the perspective of survival of the species. However, biological sensitivity to context has the potential for both positive effects under more highly supportive environmental conditions and negative effects in the presence of more negative environmental conditions. This theory has been most fully developed and discussed in the context of stress reactivity (Boyce & Ellis 2005), where it has been demonstrated that highly reactive children show disproportionate rates of morbidity when raised in adverse environments, but particularly low rates when raised in low-stress, highly supportive environments (Ellis et al. 2005). In these studies, high reactivity was defined by response to different laboratory challenges, and the authors noted that the underlying cellular mechanisms that would produce such responses are currently unknown, though genetic factors are likely to play a role (Ellis & Boyce 2008).

Although fan-shaped and crossover interactions are theoretically different, in practice they can be quite difficult to differentiate. One can imagine several “variations on the theme” for both fan-shaped and crossover interactions. In general, for a fan-shaped interaction, a main effect of genotype will be present as well as a main effect of the environment. There is a main effect of genotype at both environmental extremes; it is simply far stronger in environment 5 (the far right side of the graph) than in environment 1 (the far left side). But one could imagine a fan-shaped interaction in which there was no genotypic effect at one extreme (e.g., the lines converge to the same phenotypic mean at environment 1).

Further, fan-shaped interactions can differ in the slope of the lines for each genotype, which indicate how much the environment is modifying genetic effects. In the crossover interaction shown above, the lines cross at environment 3 (i.e., in the middle). But crossover interactions can vary in the location of the crossover. It is possible that crossing over only occurs at the environmental extreme.
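
One way to make the location of the crossover concrete is to write the interaction as a simple regression. In this generic sketch (my notation, not taken from any of the cited studies), G is coded 0/1 for two genotype groups and E is the environmental score:

Y = \beta_0 + \beta_G G + \beta_E E + \beta_{GE}\, G \times E

The two genotype lines have equal expected values where \beta_G + \beta_{GE} E = 0, that is, at E^{*} = -\beta_G / \beta_{GE}. If E^{*} falls outside the observed range of the environment, the pattern looks purely fan-shaped within the data; if it falls inside the range, the interaction is a crossover, and the closer E^{*} lies to the middle of the range, the more clearly the “for better and for worse” pattern can be seen.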

As previously noted, the crossing over of the genotypic groups in the Caspi et al. publications of the interactions between the 5-HTT gene, life events, and depression (Caspi et al. 2003) and between MAOA, maltreatment, and antisocial behavior (Caspi et al. 2002) occurred at the extreme low ends of the environmental measures, and the degree of crossing over was quite modest. Rather, the shape of the interactions (and the way the interactions were conceptualized in the papers) was largely fan-shaped, whereby certain genotypic groups showed stronger associations with outcome as a function of the environmental stressor.

Also, in both cases, the genetic variance was far greater under one environmental extreme than the other, rather than being approximately equivalent at both ends of the distribution, but with genotypic effects in opposite directions. In general, it is assumed that main effects of genotype will not be detected in crossover interactions, but this will actually depend on the frequency of the different levels of the environment. This is also true of fan-shaped interactions, but to a lesser degree.

Evaluating the relative importance, or frequency of existence, of each type of interaction is complicated by the fact that there is far more power to detect crossover interactions than fan-shaped interactions. Knowing that most of our genetic studies are likely underpowered, we would expect a preponderance of crossover effects to be detected as compared to fan-shaped effects purely as a statistical artifact. Further, even when a crossover effect is observed, power considerations can make it difficult to determine if it is “real.” For example, an interaction observed in our data between the gene CHRM2, parental monitoring, and adolescent externalizing behavior yielded consistent evidence for a gene-environment interaction, with a crossing of the observed regression lines. However, the mean differences by genotype were not significant at either end of the environmental continuum, so it is unclear whether the crossover reflected true differential susceptibility or simply overfitting of the data across the environmental levels containing the majority of the observations, which contributed to a crossing over of the regression lines at one environmental extreme (Dick et al. 2011).
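
A common way to probe such a crossover is to test the simple effect of genotype at each end of the environmental continuum by re-centering the moderator, since the genotype coefficient in an interaction model is the genotype effect at the zero point of the environment. The sketch below uses hypothetical file and variable names and is not the analysis from the CHRM2 paper.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("adolescents.csv")   # hypothetical columns: genotype, monitoring, externalizing

def genotype_effect_at(data, monitoring_value):
    # Re-center the moderator so that the genotype coefficient equals the simple
    # effect of genotype at the chosen level of parental monitoring.
    d = data.assign(mon_c=data["monitoring"] - monitoring_value)
    fit = smf.ols("externalizing ~ genotype * mon_c", data=d).fit()
    return fit.params["genotype"], fit.pvalues["genotype"]

low, high = df["monitoring"].quantile([0.05, 0.95])
print("genotype effect at low monitoring: ", genotype_effect_at(df, low))
print("genotype effect at high monitoring:", genotype_effect_at(df, high))

If neither simple effect differs reliably from zero, an observed crossing of the regression lines may reflect overfitting rather than true differential susceptibility.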

Larger studies would have greater power to make these differentiations; however, there is the unfortunate paradox that the samples with the greatest depth of phenotypic information, allowing for more complex tests about risk associated with particular genes, usually have much smaller sample sizes due to the trade-off necessary to collect the rich phenotypic information. This is an important issue for gene-environment interaction studies in general: Most have been underpowered, and this raises concerns about the likelihood that detected effects are true positives. There are several freely available programs to estimate power (Gauderman 2002, Purcell et al. 2003), and it is critical that papers reporting gene-environment interaction effects (or a lack thereof) include information about the power of their sample in order to interpret the results.

Another widely contested issue is whether gene-environment interactions should be examined only when main effects of genotype are detected. Perhaps not surprisingly, this is the approach most commonly advocated by statistical geneticists (Risch et al. 2009) and that was recommended by the Psychiatric GWAS Consortium (Psychiatr. GWAS Consort. Steer. Comm. 2008). However, this strategy could preclude the detection of crossover interaction effects as well as gene-environment interactions that occur in the presence of relatively low-frequency environments. In addition, if genetic effects are conditional on environmental exposure, main effects of genotype could vary across samples, that is to say, a genetic effect could be detected in one sample and fail to replicate in another if the samples differ on environmental exposure.

Another issue with the detection and interpretation of gene-environment interaction effects involves the range of environments being studied. For example, if we assume that the five levels of the environment shown above represent the true full range of environments that exist, a study that included only individuals from environments 3–5 would conclude that there is a fan-shaped gene-environment interaction. Belsky and colleagues (2009) have suggested this may be particularly problematic in the psychiatric literature because only in rare exceptions (Bakermans-Kranenburg & van Ijzendoorn 2006, Taylor et al. 2006) has the measured environment included both positive and negative ends of the spectrum. Rather, the absence of environmental stressors has usually constituted the “low” end of the environment, e.g., the absence of life stressors (Caspi et al. 2003) or the absence of maltreatment (Caspi et al. 2002). This could lead investigators to conclude that there is a fan-shaped interaction because they are essentially failing to measure, with reference to the figure above, environments 1–3, which represent the positive end of the environmental continuum.

One can imagine a number of other incorrect conclusions that could be drawn about the nature of gene-environment interaction effects as a result of a restricted range of environmental measures. For example, in panel B, measurement of individuals from environments 1–3 would lead one to conclude that genetic effects play a stronger role at lower levels of environmental exposure, whereas measurement of individuals from environments 3–5 would lead one to conclude that genetic effects play a stronger role at higher levels of exposure to the same environmental variable. In panel A, if measurement of individuals were limited to environments 1–3, then, depending on sample size, there might be inadequate power to detect deviation from a purely additive genetic model, e.g., the slopes of the genotypic lines may not differ significantly.

It is also important to note that not only are there several scenarios that would lead one to make incorrect conclusions about the nature of a gene-environment interaction effect, there are also scenarios that would lead one to conclude that a gene-environment interaction exists when it actually does not. Several of these are detailed in a sobering paper by my colleague Lindon Eaves, in which significant evidence for gene-environment interaction was detected quite frequently using standard regression methods, when the simulated data reflected strictly additive models (Eaves 2006). This was particularly problematic when using logistic regression where a dichotomous diagnosis was the outcome. The problem was further exaggerated when selected samples were analyzed.
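
The flavor of the Eaves (2006) demonstration can be conveyed with a small simulation of my own; it sketches the general procedure (generate strictly additive liability, dichotomize it into a diagnosis, then test for interaction with logistic regression) rather than reproducing his analyses, and the parameter values are arbitrary.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, reps, hits = 2000, 200, 0

for _ in range(reps):
    g = rng.binomial(2, 0.3, n)                 # genotype acting purely additively
    e = rng.normal(size=n)                      # environmental score; no interaction simulated
    liability = 0.3 * g + 0.5 * e + rng.normal(size=n)
    y = (liability > np.quantile(liability, 0.85)).astype(int)   # dichotomous "diagnosis"
    data = pd.DataFrame({"y": y, "g": g, "e": e})
    p = smf.logit("y ~ g * e", data=data).fit(disp=0).pvalues["g:e"]
    hits += p < 0.05

print(hits, "of", reps, "strictly additive data sets yield a 'significant' G x E term")

How often the interaction term comes up significant will depend on the generating model and on sample selection, which is precisely the point: apparent interactions can be artifacts of the outcome scale and the analysis method.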

An additional complication with evaluating gene-environment interactions in psychology is that often our environmental measures don’t have absolute scales of measurement. For example, what is the “real” metric for measuring a construct like parent-child bonding, or maltreatment, or stress? This becomes critical because fan-shaped interactions are very sensitive to scaling. Often a transformation of the scale scores will make the interaction disappear. What does it mean if the raw variable shows an interaction but the log transformation of the scale scores does not? Is the interaction real? Is one metric for measuring the environment a better reflection of the “real” nature of the environment than another?
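
The scaling point can be illustrated directly: if genotype and environment combine additively on a log scale, the raw scores will tend to show an apparent interaction that disappears after log transformation. The simulation below is a generic illustration with made-up parameters, not a reanalysis of any study discussed here.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 4000
g = rng.binomial(2, 0.4, n)                                      # genotype coded 0/1/2
e = rng.uniform(0, 1, n)                                         # environment with no natural metric
raw = np.exp(0.4 * g + 1.0 * e + rng.normal(scale=0.3, size=n))  # additive on the log scale only

df = pd.DataFrame({"g": g, "e": e, "raw": raw, "logged": np.log(raw)})
for outcome in ("raw", "logged"):
    p = smf.ols(outcome + " ~ g * e", data=df).fit().pvalues["g:e"]
    print(outcome, "scale: p-value for the G x E term =", p)

The same data can thus support an “interaction” on one measurement scale and a purely additive model on another, which is why the choice of metric matters so much.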

Many of the environments of interest to psychologists do not have true metrics, such as those that exist for measures such as height, weight, or other physiological variables. This is an issue for the study of gene-environment interaction. It becomes even more problematic when you consider that logistic regression is the method commonly used to test for gene-environment interactions with dichotomous disease status outcomes. Logistic regression involves a logarithmic transformation of the probability of being affected. By definition, this changes the nature of the relationship between the variables being modeled. This compounds problems associated with gene-environment interactions being scale dependent.

EPIGENETICS: A POTENTIAL BIOLOGICAL MECHANISM FOR GENE-ENVIRONMENT INTERACTION

An enduring question remains in the study of gene-environment interaction: how does the environment “get under the skin”? Stated in another way:

What are the biological processes by which exposure to environmental events could affect outcome?

Epigenetics is one candidate mechanism. Excellent recent reviews on this topic exist (Meaney 2010, Zhang & Meaney 2010), and I provide a brief overview here.

It is important to note, however, that although epigenetics is increasingly discussed in the context of gene-environment interaction, it does not relate directly to gene-environment interaction in the statistical sense, as differentiated previously in this review. That is to say that epigenetic processes likely tell us something about the biological mechanisms by which the environment can affect gene expression and impact behavior, but they are not informative in terms of distinguishing between additive versus interactive environmental effects.

Although variability exists in defining the term, epigenetics generally refers to modifications to the genome that do not involve a change in nucleotide sequence. To understand this concept, let us review a bit about basic genetics.

The expression of a gene is influenced by transcription factors (proteins), which bind to specific sequences of DNA. It is through the binding of transcription factors that genes can be turned on or off. Epigenetic mechanisms involve changes in how readily transcription factors can access the DNA. Several types of epigenetic change are known to exist, involving different kinds of chemical modification that can regulate DNA transcription.

One epigenetic process that affects transcription factor binding is DNA methylation. DNA methylation involves the addition of a methyl group (CH3) onto a cytosine (one of the four bases that make up DNA). This leads to gene silencing because methylated DNA hinders the binding of transcription factors.

A second major regulatory mechanism is related to the configuration of DNA. DNA is wrapped around clusters of histone proteins to form nucleosomes, and these nucleosomes are in turn organized into chromatin. When the chromatin is tightly condensed, it is difficult for transcription factors to reach the DNA, and the gene is silenced. In contrast, when the chromatin is open, the gene can be activated and expressed. Accordingly, modifications to the histone proteins that form the core of the nucleosome can affect the initiation of transcription by affecting how readily transcription factors can access the DNA and bind to their appropriate sequence.

Epigenetic modifications of the genome have long been known to exist. For example, all cells in the body share the same DNA; accordingly, there must be a mechanism whereby different genes are active in liver cells than, for example, brain cells. The process of cell specialization involves silencing certain portions of the genome in a manner specific to each cell. DNA methylation is a mechanism known to be involved in cell specialization.

Another well known example of DNA methylation involves X-inactivation in females. Because females carry two copies of the X chromosome, one must be inactivated. The silencing of one copy of the X chromosome involves DNA methylation.

Genomic imprinting is another long-established principle known to involve DNA methylation. In genomic imprinting, the expression of specific genes is determined by the parent of origin. For example, the copy of the gene inherited from the mother is silenced, while the copy inherited from the father is active (or vice versa). The silenced copy is kept inactive through processes involving DNA methylation. These changes all involve epigenetic processes parallel to those currently attracting so much attention.

However, the difference is that these known epigenetic modifications (cell specialization, X inactivation, genomic imprinting) all occur early in development and are stable.

The discovery that epigenetic modifications continue to occur across development, and can be reversible and more dynamic, has represented a major paradigm shift in our understanding of environmental regulation of gene expression.

Animal studies have yielded compelling evidence that early environmental manipulations can be associated with long-term effects that persist into adulthood. For example, maternal licking and grooming in rats is known to have long-term influences on stress response and cognitive performance in their offspring (Champagne et al. 2008, Meaney 2010). Further, a series of studies conducted in macaque monkeys demonstrates that early rearing conditions can result in long-term increased aggression, more reactive stress response, altered neurotransmitter functioning, and structural brain changes (Stevens et al. 2009). These findings parallel research in humans that suggests that early life experiences can have long-term effects on child development (Loman & Gunnar 2010). Elegant work in animal models suggests that epigenetic changes may be involved in these associations (Meaney 2010, Zhang & Meaney 2010).

Evaluating epigenetic changes in humans is more difficult because epigenetic marks can be tissue specific. Access to human brain tissue is limited to postmortem studies of donated brains, which are generally unique and unrepresentative samples and must be interpreted in the context of those limitations. Nonetheless, a recent study of human brain samples from the Quebec Suicide Brain Bank found evidence of increased DNA methylation of the exon 1F promoter of the glucocorticoid receptor gene in hippocampal samples from suicide victims compared with controls, but only if the suicide was accompanied by a history of childhood maltreatment (McGowan et al. 2009). Importantly, this paralleled epigenetic changes originally observed in rat brain at the ortholog of this locus.

Another line of evidence suggesting epigenetic changes that may be relevant in humans is the observation of increasing discordance in epigenetic marks in MZ twins across time. This is significant because MZ twins have identical genotypes, and therefore, differences between them are attributed to environmental influences. In a study by Fraga and colleagues (2005), MZ twins were found to be epigenetically indistinguishable during the early years of life, but older MZ twins exhibited remarkable differences in their epigenetic profiles. These findings suggest that epigenetic changes may be a mechanism by which environmental influences contribute to the differences in outcome observed for a variety of psychological traits of interest between genetically identical individuals.

The above studies complement a growing literature demonstrating differences in gene expression in humans as a function of environmental experience. One of the first studies to analyze the relationship between social factors and human gene expression compared healthy older adults who differed in the extent to which they felt socially connected to others (Cole et al. 2007). Using expression profiles obtained from blood cells, a number of genes were identified that showed systematically different levels of expression in people who reported feeling lonely and distant from others.

Interestingly, these effects were concentrated among genes that are involved in immune response.

The results provide a biological mechanism that could explain why socially isolated individuals show heightened vulnerability to diseases and illnesses related to immune function.

Importantly, they demonstrate that our social worlds can exert biologically significant effects on gene expression in humans (for a more extensive review, see Cole 2009).

CONCLUSIONS

This review has attempted to provide an overview of the study of gene-environment interaction, moving from early animal studies documenting gene-environment interaction to demonstrations of similar effects in family, adoption, and twin studies.

Advances in twin modeling and the relative ease with which gene-environment interaction can now be modeled have led to a significant increase in the number of twin studies documenting the changing importance of genetic influences across environmental contexts. There is now widespread documentation of gene-environment interaction effects across many clinical disorders (Thapar et al. 2007).

These findings have led to more integrated etiological models of the development of clinical outcomes. Further, since it is now relatively straightforward and inexpensive to collect DNA and conduct genotyping, there has been a surge of studies testing for gene-environment interaction with specific candidate genes.

Psychologists have embraced the incorporation of genetic components into their studies, and geneticists who focus on gene finding are now paying attention to the environment in an unprecedented way. However, now that the initial excitement surrounding gene-environment interaction has begun to wear off, a number of challenges involved in the study of gene-environment interaction are being recognized.

These include difficulties with interpreting interaction effects (or the lack thereof), due to issues surrounding the measurement and scaling of the environment, and statistical concerns surrounding modeling gene-environment interactions and the nature of their effects.

So where do we go from here? Individuals who jumped on the gene-environment interaction bandwagon are now discovering that studying this process is harder than it first appeared. But there is good reason to believe that gene-environment interaction is a very important process in the development of clinical disorders. So rather than abandon ship, I would suggest that as a field, we just need to proceed with more caution.

SUMMARY POINTS

– Gene-environment interaction refers to the phenomenon whereby the effect of genes depends on the environment, or the effect of the environment depends on genotype. There is now widespread documentation of gene-environment interaction effects across many clinical disorders, leading to more integrated etiological models of the development of clinical outcomes.

– Twin, family, and adoption studies provide methods to study gene-environment interaction with genetic effects modeled latently, meaning that genes are not directly measured, but rather genetic influence is inferred based on correlations across relatives. Advances in genotyping technology have contributed to a proliferation of studies testing for gene-environment interaction with specific measured genes. Each of these designs has its own strengths and limitations.

– Two types of gene-environment interaction have been discussed in greatest detail in the literature: fan-shaped interactions, in which the influence of genotype is greater in one environmental context than in another; and crossover interactions, in which the same individuals who are most adversely affected by negative environments may also be those who are most likely to benefit from positive environments. Distinguishing between these types of interactions poses a number of challenges.

– The range of environments studied and the lack of a true metric for many environmental measures of interest create difficulties for studying gene-environment interactions. Issues surrounding power, and the use of logistic regression and selected samples, further compound the difficulty of studying gene-environment interactions. These issues have not received adequate attention by many researchers in this field.

– Epigenetic processes may tell us something about the biological mechanisms by which the environment can affect gene expression and impact behavior. The growing literature demonstrating differences in gene expression in humans as a function of environmental experience demonstrates that our social worlds can exert biologically significant effects on gene expression in humans.

– Much of the current work on gene-environment interactions does not take advantage of the state of the science in genetics or psychology; advancing this area of study will require close collaborations between psychologists and geneticists.

Differential Susceptibility to Environmental Influences

Jay Belsky

Evidence that adverse rearing environments exert negative effects particularly on children and adults presumed “vulnerable” for temperamental or genetic reasons may actually reflect something else: heightened susceptibility to the negative effects of risky environments and to the beneficial effects of supportive environments.

Building on Belsky’s (Belsky & Pluess) evolutionarily inspired differential susceptibility hypothesis, which stipulates that some individuals, including children, are more affected, both for better and for worse, by their environmental exposures and developmental experiences, recent research consistent with this claim is reviewed. It reveals that in many cases, including both observational field studies and experimental intervention studies, putatively vulnerable children and adults are especially susceptible to both positive and negative environmental effects. In addition to reviewing relevant evidence, unknowns in the differential susceptibility equation are highlighted.

Introduction

Most students of child development probably do not presume that all children are equally susceptible to rearing (or other environmental) effects; a long history of research on interactions between parenting and temperament, or parenting-by-temperament interactions, clearly suggests otherwise. Nevertheless, most work still focuses on effects of environmental exposures and developmental experiences that apply equally to all children (so-called main effects of parenting, of poverty, or of being reared by a depressed mother), thus failing to consider interaction effects, which reflect the fact that whether, how, and how much these contextual conditions influence the child may depend on the child’s temperament or some other characteristic of individuality.

Research on parenting-by-temperament interactions is based on the premise that what proves effective for some individuals in fostering the development of some valued outcome, or in preventing some problematic one, may simply not do so for others. Commonly tested are diathesis-stress hypotheses derived from multiple-risk/transactional frameworks in which individual characteristics that make children “vulnerable” to adverse experiences, placing them “at risk” of developing poorly, are mainly influential when there is at the same time some contributing risk from the environmental context (Zuckerman, 1999).

Diathesis refers to the latent weakness or vulnerability that a child or adult may carry (e.g., difficult temperament, particular gene), but which does not manifest itself, thereby undermining well-being, unless the individual is exposed to conditions of risk or stress.

After highlighting some research consistent with a diathesis-stress or dual-risk perspective, I raise questions, on the basis of other findings, about how the first set of data has been interpreted, advancing the evolutionarily inspired proposition that some children, for temperamental or genetic reasons, are actually more susceptible to both (a) the adverse effects of unsupportive parenting and (b) the beneficial effects of supportive rearing.

Finally, I draw conclusions and highlight some “unknowns in the differential-susceptibility equation.”

Diathesis-Stress, Dual-Risk and Vulnerability

The view that infants and toddlers manifesting high levels of negative emotion are at special risk of problematic development when they experience poor quality rearing is widespread.

Evidence consistent with this view can be found in the work of Morrell and Murray, who showed that it was only highly distressed and irritable 4-month-old boys who experienced coercive and rejecting mothering at this age who continued to show evidence, 5 months later, of emotional and behavioural dysregulation. Relatedly, Belsky, Hsieh, and Crnic observed that infants who scored high in negative emotionality at 12 months of age and who experienced the least supportive mothering and fathering across their second and third years of life scored highest on externalizing problems at 36 months of age. And Deater-Deckard and Dodge reported that:

Children rated highest on externalizing behavior problems by teachers across the primary school years were those who experienced the most harsh discipline prior to kindergarten entry and who were characterized by mothers at age 5 as being negatively reactive infants.

The adverse consequences of the co-occurrence of a child risk factor (i.e., a diathesis; e.g., negative emotionality) and problematic parenting are also evident in Caspi and Moffitt’s groundbreaking research on gene-by-environment (GXE) interaction. Young men followed from early childhood were most likely to manifest high levels of antisocial behavior when they had both (a) a history of child maltreatment and (b) a particular variant of the MAO-A gene, a gene previously linked to aggressive behaviour. Such results led Rutter, like others, to speak of “vulnerable individuals,” a concept that also applies to children putatively at risk for compromised development due to their behavioral attributes. But is “vulnerability” the best way to conceptualize the kind of person-environment interactions under consideration?

Beyond Diathesis-Stress, Dual-Risk and Vulnerability

Working from an evolutionary perspective, Belsky (Belsky & Pluess) theorized that children, especially within a family, should vary in their susceptibility to both adverse and beneficial effects of rearing influence. Because the future is uncertain, in ancestral times, just like today, parents could not know for certain (consciously or unconsciously) what rearing strategies would maximise reproductive fitness, that is, the dispersion of genes in future generations, the ultimate goal of Darwinian evolution.

To protect against all children being steered, inadvertently, in a parental direction that proved disastrous at some later point in time, developmental processes were selected to vary children’s susceptibility to rearing (and other environmental influences).

In what follows, I review evidence consistent with this claim, highlighting early negative emotionality and particular candidate genes as “plasticity factors” that make individuals more susceptible to both supportive and unsupportive environments, that is, “for better and for worse”.

Negative Emotionality as Plasticity Factor

The first evidence which Belsky could point to consistent with his differential susceptibility hypothesis concerned early negative emotionality. Children scoring high on this supposed “risk factor”, particularly in the early years, appeared to benefit disproportionately from supportive rearing environments.

Feldman, Greenbaum, and Yirmiya found, for example, that 9-month-olds scoring high on negativity who experienced low levels of synchrony in mother-infant interaction manifested more noncompliance during clean-up at age two than other children did. When such infants experienced mutually synchronous mother-infant interaction, however, they displayed greater self-control than did children manifesting much less negativity as infants. Subsequently, Kochanska, Aksan, and Joy observed that highly fearful 15-month-olds experiencing high levels of power-assertive paternal discipline were most likely to cheat in a game at 38 months, yet when cared for in a supportive manner such negatively emotional, fearful toddlers manifested the most rule-compatible conduct.

In the time since Belsky and Pluess reviewed evidence like that just cited, highlighting the role of negative emotionality as a “plasticity factor”, even more evidence to this effect has emerged in the case of children. Consider in this regard work linking (1) maternal empathy and anger with externalizing problems; (2) mutual responsiveness observed in the mother-child dyad with effortful control; (3) intrusive maternal behavior and poverty with executive functioning; and (4) sensitive parenting with social, emotional and cognitive-academic development.

Experimental studies designed to test Belsky’s differential susceptibility hypothesis are even more suggestive than the longitudinal correlational evidence just cited. Blair discovered that it was highly negative infants who benefited most, in terms of both reduced levels of externalizing behavior problems and enhanced cognitive functioning, from a multi-faceted infant-toddler intervention program whose data he reanalyzed. Thereafter, Klein Velderman, Bakermans-Kranenburg, Juffer, and van Ijzendoorn found that experimentally induced changes in maternal sensitivity exerted a greater impact on the attachment security of highly negatively reactive infants than on other infants. In both experiments, environmental influences on “vulnerable” children were for better instead of for worse.

As it turns out, there is ever-growing experimental evidence that early negative emotionality is a plasticity factor. Consider findings showing that it is infants who score relatively low on irritability as newborns who fail to benefit from an otherwise security-promoting intervention, and infants who show few, if any, mild perinatal adversities (known to be related to limited negative emotionality) who fail to benefit from computer-based instruction otherwise found to promote preschoolers’ phonemic awareness and early literacy.

In other words, only the putatively “vulnerable”, those manifesting or likely to manifest high levels of negativity, experienced developmental enhancement as a function of the interventions cited. Similar results emerge among older children, as Scott and O’Connor’s parenting intervention resulted in the most positive change in conduct among emotionally dysregulated children (i.e., those who lose their temper and are angry and touchy).

Genes as Plasticity Factors

Perhaps nowhere has the diathesis-stress framework informed person-by-environment interaction research more than in the study of GXE interaction. Recent studies involving measured genes and measured environments also document environmental effects that run both for better and for worse, in the case of susceptible individuals as it turns out. Here I consider evidence pertaining to two specific candidate genes before turning attention to research examining multiple genes at the same time.

DRD4

One of the most widely studied genetic polymorphisms in research involving measured genes and measured environments pertains to a particular allele (or variant) of the dopamine receptor gene DRD4. Because the dopaminergic system is engaged in attentional, motivational, and reward mechanisms, and because one variant of this polymorphism, the 7-repeat allele, has been linked to lower dopamine reception efficiency, van IJzendoorn and Bakermans-Kranenburg predicted that this allele would moderate the association between maternal unresolved loss or trauma and infant attachment disorganization. Having the 7-repeat DRD4 allele substantially increased risk for disorganization in children exposed to maternal unresolved loss/trauma, as expected, consistent with the diathesis-stress framework; yet when children with this supposed “vulnerability gene” were raised by mothers who had no unresolved loss, they displayed significantly less disorganization than agemates without the allele, regardless of mothers’ unresolved loss status.

Similar results emerged when the interplay between DRD4 and observed parental insensitivity in predicting externalizing problems was studied in a group of 47 twins. Children carrying the 7-repeat DRD4 allele raised by insensitive mothers displayed more externalizing behaviors than children without the DRD4 7-repeat (irrespective of maternal sensitivity), whereas children with the 7-repeat allele raised by sensitive mothers showed the lowest levels of externalizing problem behavior.

Such results suggest that conceptualizing the 7-repeat DRD4 allele exclusively in risk-factor terms is misguided, as this variant of the gene seems to heighten susceptibility to a wide variety of environments, with supportive and risky contexts promoting, respectively, positive and negative functioning.

In the time since I last reviewed such differential-susceptibility-related evidence, ever more GXE findings pertaining to DRD4 (and other polymorphisms) have appeared consistent with the notion that there are individual differences in developmental plasticity. Consider in this regard recent differential-susceptibility-related evidence showing heightened or exclusive susceptibility of individuals carrying the 7-repeat allele when the environmental predictor and developmental outcome were, respectively, (a) maternal positivity and prosocial behavior; (b) early nonfamilial childcare and social competence; (c) contextual stress and support and adolescent negative arousal; (d) childhood adversity and young adult persistent alcohol dependence; and (e) newborn risk status (i.e., gestational age, birth weight for gestational age, length of stay in NICU) and observed maternal sensitivity.

Especially noteworthy, perhaps, are the results of a meta-analysis of GXE research involving dopamine-related genes showing that children eight and younger respond to positive and negative developmental experiences and environmental exposures in a manner consistent with differential susceptibility.

As in the case of negative emotionality, intervention research also underscores the propensity of 7-repeat carriers of the DRD4 gene to benefit disproportionately from supportive environments. Kegel, Bus and van IJzendoorn tested and found support for the hypothesis that it would be DRD4-7R carriers who would benefit from specially designed computer games promoting phonemic awareness and, thereby, early literacy in their randomized controlled trial (RCT). Other such RCT results point in the same direction with regard to DRD4-7R, including research on African American teenagers in which substance use was the outcome examined.

5-HTTLPR

Perhaps the most studied polymorphism in research on GXE interactions is 5-HTTLPR, a polymorphism in the promoter region of the serotonin transporter gene. Most research distinguishes those who carry one or two short alleles (s/s, s/l) from those homozygous for the long allele (l/l). The short allele has generally been associated with reduced expression of the serotonin transporter molecule, which is involved in the reuptake of serotonin from the synaptic cleft and thus considered to be related to depression, either directly or in the face of adversity. Indeed, the short allele has often been conceptualized as a “depression gene”.

Caspi and associates were the first to show that 5-HTTLPR moderates the effects of stressful life events during early adulthood on depressive symptoms, as well as on the probability of suicide ideation/attempts and of a major depression episode at age 26 years. Individuals with two short alleles (s/s) proved most adversely affected, whereas effects on l/l genotypes were weaker or entirely absent. Of special significance, however, is that carriers of the s/s genotype scored best on the outcomes just mentioned when stressful life events were absent, though not by very much.

Multiple research groups have attempted to replicate Caspi et al.’s findings of increased vulnerability to depression in response to stressful life events for individuals with one or more copies of the short allele, with many succeeding, but certainly not all. The data presented in quite a number of studies indicate, however, that individuals carrying short alleles (s/s, s/l) did not just function most poorly when exposed to many stressors, but best, showing the fewest problems, when encountering few or none. Calling explicit attention to such a pattern of results, Taylor and associates reported that young adults homozygous for short alleles (s/s) manifested greater depressive symptomatology than individuals with other allelic variants when exposed to early adversity (i.e., a problematic child-rearing history), as well as to many recent negative life events, yet the fewest symptoms when they experienced a supportive early environment or recent positive experiences. The same for-better-and-for-worse pattern of results concerning depression is evident in Eley et al.’s research on adolescent girls who were and were not exposed to risky family environments.

The effect of 5-HTTLPR in moderating environmental influences in a manner consistent with differential susceptibility is not restricted to depression and its symptoms. It also emerges in studies of anxiety and of ADHD, particularly ADHD that persists into adulthood. In all these cases, whether the environmental measure was emotional abuse in childhood or a generally adverse childrearing environment, it proved to be those individuals carrying short alleles who responded to developmental or concurrent experiences in a for-better-and-for-worse manner, depending on the nature of the experience in question.

Since last reviewing such 5-HTTLPR-related GXE research consistent with differential susceptibility, ever more evidence in line with the just cited work has emerged. Consider in this regard evidence showing for-better-and-for-worse results in the case of those carrying one or more short alleles of 5-HTTLPR when the rearing predictor and child outcome were, respectively, (a) maternal responsiveness and child moral internalization, (b) child maltreatment and children’s antisocial behavior, and (c) supportive parenting and children’s positive affect.

Differential-susceptibility-related findings also emerged when (d) perceived racial discrimination was used to predict conduct problems (among male African-American adolescents); (e) life events were used to predict neuroticism and (f) the life satisfaction of young adults; and (g) retrospectively reported childhood adversity was used to explain aspects of impulsivity among college students (e.g., pervasive influence of feelings, feelings trigger action). Especially noteworthy are the results of a recent meta-analysis of GXE findings pertaining to children under 18 years of age, showing that short-allele carriers are more susceptible to the effects of both positive and negative developmental experiences and environmental exposures, at least in the case of Caucasians.

As was the case with DRD4, there is also evidence from intervention studies documenting differential susceptibility. Consider in this regard Drury and associates’ data showing that it was only children growing up in Romanian orphanages who carried 5-HTTLPR short alleles who benefited from being randomly assigned to high-quality foster care, in terms of reductions in the display of indiscriminate friendliness. Eley and associates also documented intervention benefits restricted to short-allele carriers in their study of cognitive behavior therapy for children suffering from severe anxiety, but their design included only treated children (i.e., it did not involve a randomly assigned control group).

Polygenetic Plasticity

Most GxE research, like that just considered, has focused on one or another polymorphism, like DRD4 or 5-HTTLPR. In recent years, however, work has emerged focusing on multiple polymorphisms and thus reflecting the operation of epistatic (i.e., GXG) interactions, as well as GxGxE ones.

One can distinguish polygenetic GxE research in terms of the basis used for creating multigene composites. One strategy involves identifying genes that show main effects and compositing only these before testing an interaction with some environmental parameter. Another approach is to composite, for a secondary follow-up analysis, genes that have been found in a first round of inquiry to generate significant GxE interactions.

When Cicchetti and Rogosch applied this approach using four different polymorphisms, they found that as the number of sensitivity-to-the-environment alleles increased, so did the degree to which maltreated and non-maltreated low-income children differed on a composite measure of resilient functioning in a for-better-and-for-worse manner.

A third approach, which has now been used successfully a number of times to chronicle differential susceptibility, involves compositing a set of genes selected on an a priori basis before evaluating GxE. Consider in this regard evidence indicating that 2-gene composites moderate links (a) between sexual abuse and adolescent depression/anxiety and somatic symptoms; (b) between perceived racial discrimination and risk-related cognitions reflecting a fast versus slow life-history strategy; (c) between contextual stress/support and aggression in young adulthood; and (d) between social class and post-partum depression.

Of note, too, is evidence that a 3-gene composite moderates the relation between a hostile, demoralizing community and family environment and aggression in early adulthood and that a 5-gene composite moderates the relation between parenting and adolescent self-control.
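To make the compositing-and-interaction strategy described above more concrete, here is a minimal sketch in Python using simulated data. The gene names, allele counts, and effect size below are purely illustrative assumptions, not the procedure or results of any of the studies cited; the point is simply that putative “plasticity” alleles are counted across several candidate polymorphisms into a single composite, which is then tested as a moderator of an environment-to-outcome association via an interaction term.

```python
# Illustrative sketch only: simulated genotypes and outcomes, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

df = pd.DataFrame({
    "httlpr_short": rng.integers(0, 3, n),  # count of 5-HTTLPR short alleles (0-2), simulated
    "drd4_7r": rng.integers(0, 3, n),       # count of DRD4 7-repeat alleles (0-2), simulated
    "bdnf_met": rng.integers(0, 3, n),      # count of BDNF Met alleles (0-2), simulated
    "parenting": rng.normal(0, 1, n),       # environmental predictor, e.g., supportive parenting
})

# Composite plasticity index: a simple count of putative plasticity alleles across loci.
df["plasticity"] = df[["httlpr_short", "drd4_7r", "bdnf_met"]].sum(axis=1)

# Simulate a "for better and for worse" pattern: the environment matters more,
# in both directions, as the number of plasticity alleles increases.
df["outcome"] = 0.15 * df["plasticity"] * df["parenting"] + rng.normal(0, 1, n)

# The GxE question is carried by the parenting-by-plasticity interaction term.
model = smf.ols("outcome ~ parenting * plasticity", data=df).fit()
print(model.summary().tables[1])  # inspect the interaction coefficient
```

In an actual study the composite would be built from genotyped alleles chosen a priori (or from a first-round screen), and any significant interaction would be probed further, for example by plotting predicted outcomes at low and high levels of the environmental measure for each level of the composite to confirm the crossover form that differential susceptibility predicts.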

Given research already reviewed, it is probably not surprising that there is also work examining genetically moderated intervention effects focusing on multi-gene composites rather than singular candidate genes. Consider in this regard Drury et al.’s findings showing that even though the brain-derived neurotrophic factor (BDNF) polymorphism did not by itself operate as a plasticity factor when it came to distinguishing those who did and did not benefit from the aforementioned foster-care intervention implemented with institutionalized children in Romania, the already-noted moderating effect of 5-HTTLPR was amplified if a child carried Met rather than Val alleles of BDNF along with short 5-HTTLPR alleles. In other words, the more plasticity alleles children carried, the more their indiscriminate friendliness declined over time when assigned to foster care and the more it increased if they remained institutionalized.

Consider next Brody, Chen and Beach’s confirmed prediction that the more GABAergic and dopaminergic genes African American teens carried, the more protected they were from increasing their alcohol use over time when enrolled in a whole-family prevention program. Such results once again call attention to the benefits of moving beyond single polymorphisms when it comes to operationalizing the plasticity phenotype. They also indicate that even if a single gene may not by itself moderate an intervention (or other environmental) effect, it could still play a role in determining the degree to which an individual benefits. These are insights future investigators and interventionists should keep in mind when seeking to illuminate “what works for whom?”

Unknowns in the Differential Susceptibility Equation

The notion of differential susceptibility, derived as it is from evolutionary theorizing, has gained great attention in recent years, including a special section in the journal Development and Psychopathology.

Although research summarized here suggests that the concept has utility, there are many “unknowns,” several of which are highlighted in this concluding section.

Domain General or Domain Specific?

Is it the case that some children, perhaps those who begin life as highly negatively emotional, are more susceptible both to a wide variety of rearing influences and with respect to a wide variety of developmental outcomes, as is presumed in the use of concepts like “fixed” and “plastic” strategists, with the latter being highly malleable and the former hardly at all? Boyce and Ellis contend that a general psychobiological reactivity makes some children especially vulnerable to stress and thus to general health problems. Or is it the case, as Belsky wonders and Kochanska, Aksan, and Joy argue, that different children are susceptible to different environmental influences (e.g., nurturance, hostility) and with respect to different outcomes? Pertinent to this idea are findings of Caspi and Moffitt indicating that different genes differentially moderated the effect of child maltreatment on antisocial behavior (MAO-A) and on depression (5-HTT).

Continuous Versus Discrete Plasticity?

The central argument that children vary in their susceptibility to rearing influences raises the question of how to conceptualize differential susceptibility: categorically (some children highly plastic and others not so at all) or continuously (some children simply more malleable than others)? It may even be that plasticity is discrete for some environment-outcome relations, with some individuals affected and others not at all (e.g., gender-specific effects), but that plasticity is more continuous for other susceptibility factors (e.g., in the case of the increasing vulnerability to stress of parents with decreasing dopaminergic efficiency). Certainly the work which composites multiple genotypes implies that there is a “plasticity gradient”, with some children higher and some lower in plasticity.

Mechanisms

Susceptibility factors are the moderators of the relation between the environment and developmental outcome, but they do not elucidate the mechanism of differential influence.

Several (non-mutually exclusive) explanations have been advanced for the heightened susceptibility of negatively emotional infants. Suomi posits that the timidity of “uptight” infants affords them extensive opportunity to learn by watching, a view perhaps consistent with Bakermans-Kranenburg and van IJzendoorn’s aforementioned findings pertaining to DRD4, given the link between the dopamine system and attention. Kochanska et al. contend that the ease with which anxiety is induced in fearful children makes them highly responsive to parental demands.

And Belsky speculates that negativity actually reflects a highly sensitive nervous system on which experience registers powerfully negatively when not regulated by the caregiver but positively when coregulation occurs, a point of view somewhat related to Boyce and Ellis’ proposal that susceptibility may reflect prenatally programmed hyper-reactivity to stress.

*

Chronic Childhood Stress and a Dysfunctional Family – Kylie Matthews * Different Adversities Lead to Similar Health Problems – Donna Jackson Nakazawa.

Children from unhappy, dysfunctional families who experience chronic adversity undergo changes in brain architecture that create lasting physical scars that look pretty similar no matter who you are, where you lived, or what happened to make you unhappy when you were growing up.

Happy families may succeed not because of what they do right, but because of everything they don’t do wrong.

The stress of Adverse Childhood Experiences causes toxicity to the neurons and neural pathways that integrate different areas of the brain. These brain changes have a profound effect on our decision-making abilities, self-regulatory processes, attention, emotional regulation, thoughts, and behavior.

Cutting her mother out of her life was the only conceivable way she could survive.
Janet Camilleri reveals why she cut ties with her mother.

Kylie Matthews

What do you do when a close family relationship, such as with a parent or sibling, becomes so dysfunctional it’s toxic?

For some, completely cutting off from that person can be the only solution for them to heal and move forward, but it’s by no means ever an easy one.

Blogger and mother of two Janet Camilleri, 51, is a survivor of a childhood overshadowed by violence and psychological abuse so profound that cutting her mother out of her life was the only conceivable way she could survive.

“My mother was violent and irrational; her mood could change at the drop of a hat,” she recalls. “I call it the Dr Jekyll and Mr Hyde personality because she was a very outgoing, extroverted, life of the party type person in the company of others but at home she was like the devil and you just never knew what would trigger her.

“I would have ended up a basket case if I’d kept her in my life, as just a phone call with her would reduce me to a quivering lump of jelly; that was the effect she had on me. For the sake of my own marriage and children, I had to cut her off to look after my own mental health.”

My Mum, the narcissist

Janet describes her mother regularly sabotaging her school work, throwing things around the room in a rage, embarrassing her at school and in front of friends, playing favourites with her siblings, using her as a go-between to pitch venom at her father and regularly inflicting physical violence in the home.

In hindsight, Janet says she recognises that her mother had many severe narcissistic traits exacerbated by other personality disorders, and underlined by a clinical diagnosis of bipolar disorder.

Helen Gibbons, Director and Principal Psychologist of Australia’s Autogenic Therapy and Training Institute, says that toxic behaviour in families can be identified by dysfunctional dynamics.

“Most commonly you see in any dysfunctional set-up narcissistic traits in one or more family members that, in their most severe form, can result in premeditated abusive, manipulative and controlling behaviours,” she says.

Narcissism is a condition that presents a set of personality traits such as arrogance, self-centredness, manipulation, a lack of empathy and remorse, dishonesty, dominance, a strong sense of entitlement, an inability to handle criticism and a grandiose sense of self.

When children grow up in the shadow of a severe narcissist, Ms Gibbons says their emotional needs are seldom met.

“These children are having their brains shaped based on a lack of positive stimulation, love and validation, which does seem to impact heavily on the formation of their limbic system and, in particular, the amygdala, the centre of emotional control in the brain,” she says.

Janet describes her mother’s behaviour becoming increasingly worse after her parents separated. As her mother’s mental health continued to deteriorate, Janet, at just 10 years old, was thrust into the role of ‘carer’ for her younger siblings.

“For a lot of it I protected the younger ones; I was like a mother figure to them because Mum just wasn’t capable of it,” she explains. “It was a lot of responsibility; Mum dumped a lot of stuff on me that a kid that age should never be exposed to.”

Leaving home

One Christmas, things came to a head and Janet says she stood up to her mother for the first time and told her she was leaving, to which her mother replied, ‘If you leave, you will never be able to come back’.

“I was nearly 20 years old and I was like, ‘I just can’t do this anymore’. I cut off from her then and we didn’t talk after that for about eight months,” she says. “I had no money, no job and very little support; I just had to survive.”

Janet tried to reinstate contact with her mum at least three times after that. “I tried really hard but it was always awkward and strained,” she says. “She always upset me whenever we spoke on the phone.”

Janet continued to walk on eggshells, as she had always done, and accommodated poor behaviour to keep the peace, even forgiving her mother for not attending her wedding. But just prior to the birth of her first child, after yet another argument, Janet decided enough was enough and cut off from her mother for the last time.

Cutting off communication

Ms Gibbons says that for people like Janet, any attempt to communicate and rectify problems in a rational way with the narcissistic family member would most likely result in even more abusive behaviour.

“Malignant narcissists are experts at blaming others and the family scapegoat is always the easiest target, so cutting off contact may be the only option available to them for a peaceful life,” she says. “No contact literally means no contact. You don’t explain yourself, you disappear, block them on Facebook and don’t return phone calls.”

Once you’ve gone ‘no contact’, however, you can go into shock and potentially suffer from acute stress symptoms.

“It can be a very lonely and confusing time going no contact because you may find that you are not getting the understanding, support and validation you so desperately need from those around you,” Ms Gibbons says. “When you tell your friends, a lot of people, even though they’re well meaning, believe that all mothers love their children.

“A very common experience is that the friends that you want to believe and validate you will immediately try and support the mother in some way, by saying, ‘Oh yes, but she loves you’ or, ‘Being a parent is difficult’, so it can be very lonely and confusing.”

Janet says the pressure she felt from her family, friends and colleagues to reconcile with her mother was significant. “I was part of a church and the pressure I felt was huge,” she says. “In church, nobody could understand because it’s like, ‘Honour thy father and thy mother’ and all that and I felt like the lowest of the low for not being able to do that.

“I remember talking to somebody at work once, an older fellow, I was pretty bitter and upset at the time, and I mentioned something about my mum that was probably not very nice and he turned to me and said, ‘I think it’s disgusting the way you talk about your mother’. I was just gobsmacked.”

The toxic devastation

Janet went on to survive her childhood but says by the time she managed to escape it, the damage had already been done. “I just tried to be a good kid and to stay out of trouble … I didn’t want to attract her attention because she’d thump me if I did,” she says. “Growing up in a household like mine leads to a few issues so I’ve been and seen a psychologist to help me out at different times.”

Ms Gibbons can’t stress enough the importance of therapy for people who have experienced this kind of trauma.

“Therapy is really important for someone who has suffered abuse from narcissistic family members,” she says. “It’s so important to speak with a psychologist who is experienced in psychological abuse and to work through the impact that those relationships have had on you so that you can start to make better, healthier choices.”

Janet suffered from Post Traumatic Stress Disorder (PTSD) after leaving home, which later developed into postnatal depression after the birth of both her children.

“I remember going shopping with my kids and seeing other young women with their babies and their mums by their side and just bursting into tears,” she says. “I was like, ‘Why don’t I have a mum like that?’ I have a wonderful, supportive husband but it very much felt like I was on my own at that time.”

The final blow

It was five years after her mother’s death that Janet first learnt, by accident, of her passing. And despite having been estranged from her mother for almost 20 years, her grief sent her into a spiral of total despair.

“I was just devastated, I always thought I’d done my mourning when the kids were little because that was a really tough time and I’d grieved the loss of the relationship then, but I think deep down I always hoped that one day a miracle would happen and we’d work it out,” she says. “Even though we were estranged, I never wished ill for her and I sincerely wanted Mum to be happy, despite everything.”

To add insult to injury, Janet learned that she hadn’t been told of her mother’s death at the time because her mum had specifically requested she and her siblings not be. “To learn that she had been so bitter, so twisted and angry, right up until the moment of her death makes me very sad,” she says.

Hope and healing

Despite the devastation she felt over her mother’s passing, Janet says she has no regrets about cutting off communication.

“I didn’t feel guilty when I found out because I knew I’d done everything I humanly could to try and have a relationship with my mother and that no matter what I did, I would never, ever have pleased her so it was never going to work,” she says.

“It helped that I had my wonderful husband beside me saying, ‘You need to get rid of this influence in your life’ and he helped me to be strong and to realise that other people might not approve but that it was something I had to do.”

Some years ago, while driving her young children to school, Janet recalls pulling up beside a bus that had an advertisement about child abuse. “A little voice from the back seat asked, ‘That’s what happened to you, didn’t it, Mummy?’ and I said, ‘Yes, it was,’” she says.

“And they’d replied, ‘It’s OK, Mummy, we love you now’.”

Different Adversities Lead to Similar Health Problems

Donna Jackson Nakazawa

The opening line of Tolstoy’s Anna Karenina, “Happy families are all alike; every unhappy family is unhappy in its own way,” has inspired a philosophical dictum called “The Anna Karenina Principle.” The idea is this: it’s possible to fail at something in many ways; it’s far harder to succeed at something, because success requires not failing in any of those ways.

Happy families may succeed not because of what they do right, but because of everything they don’t do wrong.

And according to ACE research, 64 percent of us grew up in families in which at least one thing went wrong: we’ve had at least one Adverse Childhood Experience. Every one of these unhappy families may be unhappy in its own, unique way. But there is one way in which unhappy families are alike, according to neurobiologists who study childhood adversity:

Children from unhappy, dysfunctional families who experience chronic adversity undergo changes in brain architecture that create lasting physical scars that look pretty similar no matter who you are, where you lived, or what happened to make you unhappy when you were growing up.

How Your Biography Becomes Your Biology

To better understand how toxic childhood stress changes our brain, let’s first review how our stress response is supposed to work when it’s functioning optimally.

Let’s say you’re lying in bed and everyone else in the house is asleep. It’s one am. You hear a creak on the steps. Then another creak. Now it sounds as if someone is in the hallway. You feel a sudden rush of alertness, even before your conscious mind weighs the possibilities of what might be going on. A small region in your brain known as the hypothalamus releases hormones that stimulate two little glands, the pituitary and adrenal glands, to pump chemicals throughout your body. Adrenaline and cortisol trigger immune cells to secrete powerful messenger molecules that whip up your body’s immune response.

Your pulse drums under your skin as you lie there, listening. The hair on the surface of your arms stands up. Muscles tighten. Your body gets charged up to do battle in order to protect life and limb.

Then you recognize those footsteps as those of your teenager coming up the steps after finishing his midnight bowl of cereal. Your body relaxes. Your muscles loosen. The hair on your arms flattens back down. Your hypothalamus, as well as your pituitary and adrenal glands, the “HPA stress axis”, calm down. And, whew, so do you.

When you have a healthy stress response, you respond quickly and appropriately to stress. After the stressful event, your body dampens down the fight-or-flight response. Your system recovers and returns to a baseline state of rest and recovery. In other words, you pass through both the first and the second half of the human stress cycle, coming full circle.

Even so, emotions affect our body in real and significant ways. Emotions are physical. We feel a “knot in our stomach,” or get “all choked up,” or see a relative or coworker as a “big pain in the neck.”

There is a powerful relationship between mental stress and physical inflammation. When we experience stressful emotions (anger, fear, worry, anxiety, rumination, grief, loss), the HPA axis releases stress hormones, including cortisol and inflammatory cytokines, that promote inflammation.

Let’s say your immune system has to fight a viral or bacterial infection. Lots of white blood cells charge to the site of the infection. Those white blood cells secrete inflammatory cytokines to help destroy the infiltrating pathogens and repair damaged tissues. However, when those cytokines aren’t well regulated, or become too great in number, rather than repair tissue, they cause tissue damage. Toxic shock syndrome is an extreme example of how this can happen in the body very quickly.

More subtle types of tissue damage can happen slowly, over time, in response to chronic stress. When your system is repeatedly overstimulated, it begins to downshift its response to stress. On the face of it, that might sound like it’s a good thing, as if a downshifted stress response should translate into less inflammation, right?

But remember, this stress response is supposed to react to a big stressor, pump into defensive action, then quickly recover and return to a state of quiet homeostasis, relaxing into rest and recovery.

The problem is, when you are facing a lot of chronic stress, the stress response never shuts off. You’re caught, perpetually, in the first half of the stress cycle. There is no state of recovery. Instead, the stress response is always mildly on, pumping out a chronic low dose of inflammatory chemicals.

The stress glands (the hypothalamus, the HPA axis) secrete low levels of stress hormones all the time, leading to chronic cytokine activity and inflammation.

In simplest terms: chronic stress leads to a dysregulation of our stress hormones, which leads to unregulated inflammation. And inflammation translates into symptoms and disease.

This is the basic science on how stress hormones play a part in orchestrating our immune function and the inflammatory process. And it explains why we see such a significant link between individuals who experience chronic stress and significantly higher levels of inflammation and disease.

As Stanford professor Robert Sapolsky, PhD, a MacArthur Fellowship recipient for his research on the neurobiological impact of emotional stress on the immune system, has said:

“The stress response does more damage than the stressor itself as we wallow in stress hormones.”

Research bears out the relationship between stress and physical inflammation. For example, adults under the stress of taking care of spouses with dementia display increased levels of a cytokine that increases inflammation. Likewise, if an adult sibling dies, your risk of having a heart attack rises greatly. If you’re pregnant and face a big, stressful event, your chance of miscarrying doubles. Encountering serious financial problems raises a man’s risk of falling down and being injured in the months that follow. A child’s death triples a parent’s chance of developing multiple sclerosis. States of intense emotional fear or loss can precipitate a type of cardiomyopathy known as “broken heart syndrome,” a severe physical weakening of the heart muscle that presents almost exactly like, and is often misdiagnosed as, a full-blown heart attack.

Why Stress Is More Damaging to a Child

Emotional stress in adult life affects us on a physical level in quantifiable, life-altering ways.

But when children or teens meet up with emotional stressors and adversity, they leave even deeper scars.

These potential stressors include chronic put-downs, emotional neglect, parental divorce, a parent’s death, the mood shifts of a depressed or addicted parent, sexual abuse, medical trauma, the loss of a sibling, and physical or community violence. In each case, the HPA (hypothalamus-pituitary-adrenal) stress response can become reprogrammed so that it revs up one’s inflammatory stress hormone response for the rest of one’s life.

In young and growing children, the HPA stress axis is developing, and healthy maturation is heavily influenced by the safety or lack of safety we encounter in the day-to-day environment. When a young brain is repeatedly thrust into a state of hyperarousal or anxiety because of what’s happening at a child’s home, community, or school, the stress axis gets tipped into reaction over and over again, and the body becomes routinely flooded with inflammatory stress neurochemicals. This can lead to deep physiological changes that lead to long lasting inflammation and disease.

More than half of women suffering from irritable bowel syndrome report childhood trauma. Children whose parents divorce are far more likely to have strokes as adults. ACE Scores are linked to a far greater likelihood of diseases including cancer, lung disease, diabetes, asthma, headaches, ulcers, multiple sclerosis, lupus, irritable bowel syndrome, and chronic fatigue.

The more categories of Adverse Childhood Experiences a child has faced, the greater the chances of developing heart disease as an adult. Again, a child who has 7 or more ACEs grows up with a 360 percent higher chance of developing heart disease.

Medical Adverse Experience

Not all Adverse Childhood Experiences are about poor parenting.

Michele had the kind of lovely parents who created a home life that fit the happy family mold; they gave their son and daughter all the parental love and support that every child deserves. “Life was good,” Michele says. Then, when she was thirteen years old, she had a bladder infection and was placed on a routine course of antibiotics. “Within twenty-four hours I had a headache and rash.”

Michele’s doctor told Michele’s mom, “It’s a virus.”

But the rash didn’t go away. Michele started wincing at bright lights. The eye doctor couldn’t figure out what was wrong. Blisters began developing along her upper lip. The pediatrician didn’t know what to do, so Michele’s parents took her to the hospital. They saw a dermatologist who had read about Michele’s symptoms in an article. He thought she might have Stevens-Johnson syndrome, or SJS, a rare illness caused by a severe allergic reaction to a medication.

Michele was admitted to Columbia Presbyterian hospital in New York City. Within twenty-four hours blisters the size of large hands broke out across her body. At first, they covered “30 percent of my body, then 100 percent,” Michele says. She was diagnosed with an advanced form of SJS, known as toxic epidermal necrolysis syndrome, or TENS. The bowl-sized blisters began to “connect and combine until my entire torso was one enormous blister. Even my corneas were blistered.”

Today, when patients develop TENS, they’re put in an induced coma, because the physical pain is simply too unbearable. But in 1981, when Michele was diagnosed with the illness, doctors “just watched the progression.” Her physicians converted Michele’s hospital room into a burn unit; she looked like a burn victim, so she was treated as if she’d been rescued from a fire. “I felt as if I were being scalped over every inch of my skin.” Michele says she started dissociating from her body. “My body and I parted ways in that hospital; we stopped talking to each other. I couldn’t bear to feel that pain.”

Miraculously, Michele survived. She missed two months of school, while her mom helped to nurse her back to health. Little by little, life began to regain a rhythm of normality, except for the fear Michele still carried within. Every year on the anniversary of the day she was first admitted to the hospital, “my hair would fall out,” she says. “Then it would grow slowly back in.” Michele attended the University of Pennsylvania and held it together, but the whole time, she says, “I was having insomnia and recurring nightmares.” In her late twenties she was diagnosed with chronic fatigue, Epstein-Barr virus, and irritable bowel syndrome. “I had terrible muscle pains all over my body and chronic sinus infections. I had trouble sitting still for even five minutes because of the pain.” Her liver enzymes went “sky high.” It was, she says, “one mysterious illness after another.”

And then, at the age of thirty-five, Michele’s doctor sat her down and told her that she had “severe, advanced osteoporosis.” Her bones were going to start disintegrating, he said. “If we don’t get this under control, sometime in the next ten years your bones are going to spontaneously crumble.”

Michele’s early adversity had nothing to do with bad parenting. But her early life stress was extreme, and the damage that stress did to her developing immune system and cells was just as corrosive.

Life is complex and messy, and suffering comes in many forms. Bad things happen. Parents get sick or pass away. Accidents come out of nowhere, as do medical crises.

How do the biophysical changes and inflammation triggered by very different types of early childhood adversity translate years later into autoimmune diseases, heart disease, and cancer?

Flipping Crucial Genetic Switches

On an unusually brisk December morning, Margaret McCarthy, PhD, professor of neuroscience at the University of Maryland School of Medicine, meets me at a downtown Baltimore coffee shop. Her schedule is tight, so we pick up two cups of soup to go and head to her office. As we enter the hall that serves as the main artery for McCarthy’s four room lab, we pass a sign that says, “Research saves lives,” on a large photo of a young, smiling girl holding a stuffed bunny. McCarthy has taught science for years to med students, grad students, and even high schoolers, whom she takes on as lab assistants to help “turn them on to science.”

She offers a primer on what research has unveiled about childhood adversity and altered brain development.

“Early stress causes changes in the brain that reset the immune system so that either you no longer respond to stress or you respond in an exacerbated way and can’t shut off that stress response,” she says.

This change to our lifelong stress response happens through a process known as epigenetics. Epigenetic changes occur when early environmental influences, both good (nurturing caregivers, a healthy diet, clean air and water) and bad (stressful conditions, poor diet, infections, or harmful chemicals), permanently alter which genes become active in the body.

These epigenetic shifts take place due to a process called gene methylation. McCarthy explains, “Our DNA is not just sitting there. It’s wrapped up very tightly and coated in protective proteins, which together make up the chromosome. It doesn’t matter what your genome is; what matters is how your genome is expressed. And for genes to be expressed properly, the chromosome has to be unwound and opened up, like a flower, right at that particular gene.”

McCarthy unfurls the fingers of both hands. “Imagine this,” she says. “You’re watching a flower bloom, and as it opens up, it’s covered with blemishes.” She folds several of her fingers back in, as if they’re suddenly unable to budge. “Those blemishes keep it from flourishing as it otherwise would. If, when our DNA opens up, it’s covered with these methylation marks, that gene can’t express itself properly in the way that it should.”

When such “epigenetic silencing” occurs, McCarthy continues, these small chemical markers, also known as methyl groups, adhere to specific genes that are supposed to govern the activity of stress hormone receptors in our brain. These chemical markers silence important genes in the segment of our genome that oversees our hippocampus’s regulation of stress hormones in adulthood. When the brain can’t moderate our biological stress response, it goes into a state of constant hyperarousal and reactivity. Inflammatory hormones and chemicals keep coursing through the body at the slightest provocation.

In other words, when a child is young and his brain is still developing, if he is repeatedly thrust into a state of fight or flight, this chronic stress state causes these small, chemical markers to disable the genes that regulate the stress response, preventing the brain from properly regulating its response for the rest of his life.

Researcher Joan Kaufman, PhD, director of the Child and Adolescent Research and Education (CARE) program at Yale School of Medicine, analyzed the DNA in the saliva of ninety-six children who’d been taken away from their parents due to abuse or neglect as well as that of ninety-six other children who were living in what we might think of as seemingly happy family settings. Kaufman found significant differences in epigenetic markers in the DNA of the children who’d faced hardship, in almost three thousand sites on their DNA, and on all twenty-three chromosomes.

The children who’d been maltreated and separated from their parents showed epigenetic changes in specific sites on the human genome that determine how appropriately and effectively they will later respond to life’s stressors.

Seth Pollak, PhD, professor of psychology and director of the Child Emotion Laboratory at the University of Wisconsin, found that fifty children with a history of adversity and trauma showed changes in a gene that helps to manage stress by signaling the cortisol response to quiet down so that the body can return to a calm state after a stressor. But because this gene was damaged, the body couldn’t rein in its heightened stress response. Says Pollak:

“A crucial set of brakes are off.”

This is only one of hundreds of genes that are altered when a child faces adversity.

When the HPA stress axis is overloaded in childhood or the teenage years, it leads to long lasting side effects, not just because of the impact stress has on us at that time in our lives, but also because early chronic stress biologically reprograms how we will react to stressful events for our entire lives. That long term change creates a new physiological set point for how actively our endocrine and immune function will churn out a damaging cocktail of stress neurochemicals that barrage our bodies and cells when we’re thirty, forty, fifty, and beyond. Once the stress system is damaged, we overrespond to stress and our ability to recover naturally from that reactive response mode is impaired. We’re always responding.

Imagine for a moment that your body receives its stress hormones and chemicals through an IV drip that’s turned on high when needed, and when the crisis passes, it’s switched off again. Now think of it this way: kids whose brains have undergone epigenetic changes because of early adversity have an inflammation promoting drip of fight-or-flight hormones turned on high every day, and there is no off switch.

When the HPA stress system is turned on and revved to go all the time, we are always caught in that first half of the stress cycle. We unwittingly marinate in those inflammatory chemicals for decades, which sets the stage for symptoms to be at full throttle years down the road, in the form of irritable bowel syndrome, autoimmune disease, fibromyalgia, chronic fatigue, fibroid tumors, ulcers, heart disease, migraines, asthma, and cancer.

These changes that make us vulnerable to specific diseases are already evident in childhood. Joan Kaufman and her colleagues discovered, in the first study to find such direct correlations, that:

Children who had been neglected showed significant epigenetic differences “across the entire genome” including in genes implicated in cardiovascular disease, diabetes, obesity, and cancer.

Yet by the time signs of an autoimmune condition creep up at forty or a heart condition rears its head at fifty, we often can’t link what happened when we were children to our adult illness. We become used to that old sense of emotional stress, of not being okay. It just seems normal. We have a long daily commute, a thirty year mortgage, and our particular mix of family dynamics. We generally deal with it and we’re usually okay. Then something minuscule happens: we have an argument with our sister over something said at a family dinner; we get a notice in the mail that our insurance isn’t going to cover a whopping medical bill; the refrigerator tanks the day before a big dinner party; our boss approves a colleague’s ideas in a meeting and ignores ours; a car honks long and hard as it swerves from behind to cut in front of us on the freeway. We react to these events as if they are a matter of life or death. We trigger easily. We begin to realize that we’re not so fine.

An adult who came of age without experiencing traumatic childhood stress might meet the same stressor and experience that same spike in cortisol, but once that stressor has passed, he or she quickly returns to a state of rest and relaxation. But if we had early trauma, our adult HPA stress axis can’t distinguish between real danger and perceived stress. Each time we get sidetracked by a stressful event, it sends split-second signals that cause our immune system to rev into high gear. We get that adrenaline rush but the genes that should tell our stress system to return to a state of rest and relaxation don’t do their job.

Over days and years, the disparity between a long “cortisol recovery” period and a short one makes a significant, life changing difference in the number of hours we spend marinating in our own inflammatory stress hormones. And over time, that can deeply distort your life.

The Ever Alert Child

Adults with Adverse Childhood Experiences are on alert. It’s a habit they learned in childhood, when they couldn’t be sure when they’d face the next high tension situation.

After her terrifying childhood illness, Michele never felt at peace, or whole, as an adult: “I was afraid I could be blindsided by any small medical crisis that could morph and change my entire life.”

Laura, as an adult, holds a high profile DC job that requires lightning decisions and heightened awareness. She’s good at it since, as a child, Laura and her brain learned to always be on high alert for the next snipe from her mother, as if being prepared could make it hurt less. “I became an expert at gauging my mom’s moods,” she says. “Whenever I was in the same room with her, I was thinking about how to slink away.”

By the time she was nine, Laura had learned to be “unconsciously on the lookout for a very subtle narrowing of my mom’s eyes,” which would tell her that she was about to be blamed for “something I didn’t even know that I’d done, like eating half of a sandwich in the fridge or taking too long to tie my shoe.” Laura grew up “learning to toe my way forward, as if blindfolded, to figure out what was coming next, where the next emotional ledge might be, so I wouldn’t get too near to my mother’s sharp edges.”

From Laura’s perspective, her mom was dangerous. “I knew she would never physically hurt me,” she explains. “But I was terrified, even when she was in a good mood. At night, when I would hear her lightly snoring, I would feel this overwhelming sense of freedom, relief.”

Laura’s life was never at risk, of course; she lived in a safe suburban neighborhood and had food to eat and clothes to wear. But she felt as if her life was at stake. Like all children whose parents display terrifying behavior, Laura carried the overwhelming biological fear that if her primary caregiver turned against her, she would not survive. After all, if the person upon whom you depend for food, shelter, and life itself turns on you, how are you going to stay alive in the world? You feel as if your life depends on the adult’s goodwill, because when you were very small, how your caregiver treated you really was a matter of life or death.

As an adult, Laura “schooled” herself to believe that the early adversity she faced wasn’t that bad compared with that of other people who also grew up with alcoholic, angry, divorced, or depressed parents. She keeps telling herself that she’s over her childhood troubles.

But her body is far from over them. Laura lives with heart disease and a defibrillator in her chest. Like Michele, her anxiety sensors are set on high alert, and she doesn’t know how to turn them off.

The Rattled Cage

We might reasonably intuit that some types of childhood adversity are more damaging to us than others. For instance, we’d expect that the trauma that Kat experienced in knowing that her father murdered her mother would have a dramatically worse biological impact on her than Laura’s having been chronically put down by her depressive mom.

We’d certainly assume that Kat’s story would be more biologically damaging than that of Ellie, who was the second youngest of five children and grew up in a quiet suburban neighborhood outside Philadelphia. Ellie remembers having a very close relationship with her parents, but, as she got older, she says, “I knew something wasn’t right. My two oldest brothers were time bombs of violent emotion, just waiting to go off. Sitting at the dinner table with my parents, talking about politics, they’d start fighting each other over nothing at all, and the fights got ugly.”

Soon the boys were getting into trouble with alcohol and drugs, “and the police started showing up.” Ellie recalls, “I’d often hear my parents and my brothers screaming at each other at two in the morning. My mom and dad would come in my room and tell my little sister and me not to be scared, that everything was okay, but it was terrifying,” especially when her older brother ended up in jail.

Ellie got good grades, despite the stressors at home, and went to college in California on an athletic scholarship. But, after college, she began having suicidal thoughts and, at age twenty-four, was diagnosed with severe autoimmune psoriasis. “My body was attacking itself,” she says.

According to ACE research, growing up with a family member who is in jail is related to a much higher risk of poor health related outcomes as an adult.

Chronic Unpredictable Stress

Laura, John, Georgia, Kat, Michele, and Ellie tell six unique stories of childhood adversity. And yet their brains reacted to these different levels of trauma in a similar biological way. The developing brain reacts to different types and degrees of trauma so similarly because all the categories of Adverse Childhood Experience stressors have a very simple common denominator: they are all unpredictable. The child can’t predict exactly when, why, or from where the next emotional or physical hit is coming.

Researchers refer to stress that happens in unpredictable ways and at unpredictable times as “chronic unpredictable stress,” and they have been studying its effects on animal development for decades, long before Felitti and Anda’s investigation into ACEs first began.

In classic studies, investigators expose animals to different types of stressors for several weeks, to see how those stressful stimuli affect their behavior. In one experiment, McCarthy and her postdocs exposed male and female rats to three weeks of chronic unpredictable mild stress. Every day, rats were exposed to a few low-grade stressors: their cage was rotated; they were given a five-minute swim; their bedding was dampened; they went for a day without food; they were physically restrained for thirty minutes; or they were exposed to thirty minutes of strobe lights.

At the end of the three weeks, McCarthy’s team examined the rats to evaluate brain differences. In the group exposed to chronic unpredictable mild stress, she and her team found significant changes in the receptors in the brain’s hippocampus, an area of the brain associated with emotion, which would normally help modulate stress hormone production and put the brakes on feelings of stress and anxiety after a stressor has passed.

The rats who’d been exposed to chronic unpredictable stress weren’t able to turn off the stress response, but the control group that experienced no stress showed no brain changes.

However, when stress is completely predictable, even if it is more traumatic, such as giving a rat a regularly scheduled foot shock accompanied by a sharp, loud sound, the stress does not create these exact same brain changes. “Rats exposed to a much more traumatic stressor get used to it if it happens at the same time and in the same way every day,” says McCarthy. “They manage. They know it’s coming, then it’s over.” Moreover, she says, “They don’t show signs of these same brain changes, or inflammation, or illness.”

On the other hand, she adds, “if you introduce more moderate but unpredictable stressful experiences at a different time each day, with different levels of intensity, adding in different noises, such as loud clapping at unpredictable intervals, those rats show significant changes to the brain. And they get physically sick; they get ulcers.”

This is why researchers believe that it is the unpredictability of stress that is particularly damaging.

On a walking tour of her lab, McCarthy points out the metal stand on which rodents’ cages can be gently shaken for a short time. “Even the most mild unpredictable stressors, something as simple as gently shaking the cage, playing rock music, putting a new object in the cage that they aren’t used to, all these cause very specific changes in the brain when we do them without warning.”

The bottom line, McCarthy says, is that the brain can “tolerate severely stressful events if they are predictable, but you cannot tolerate even mild stressful events if they are very unpredictable.”

Yet even though researchers have known for years about the effects of chronic unpredictable stress on the adult brain, only recently have they examined what happens to the brains of children exposed to chronic unpredictable stressors.

The Difficulty of Not Knowing

Mary, now in her midfifties, grew up as the oldest of four kids in a small town in Oregon. Life with her artist parents was a lot like living with unpredictable cage rattling, shaking, and odd, loud noises. Mary’s dad had his own damaged childhood. He’d grown up never knowing who his own dad was, and his mom died when he was seven. His maternal grandparents adopted his brother but not him, because his parents hadn’t married and he was seen as damaged goods. (His ACE Score was very high.)

Years later, when he was a father of four, he drank heavily, partied, and played cards. “I remember hearing my dad and friends, up all night drinking and swearing really loud in the living room outside my bedroom, even on school nights. I didn’t feel safe.” Mary has large, sympathetic eyes and shoulder length brown hair that she neatly tucks behind her ears with long, graceful fingers. “I can remember my mother yelling at him, ‘You need to make them leave. Your children can’t sleep!’ And he’d yell back, ‘I can’t make them leave, these are my friends!’ My sleep, and sense of being safe, weren’t important to him.”

Mary’s mother was managing her own anxiety that came with having four children and being stuck in a marriage with an alcoholic, so she, too, was emotionally absent. “I got bullied a lot in grade school,” Mary tells me. “I was scrawny and short and kids would terrorize me.” Her mom was preoccupied with an affair her dad was having and didn’t listen to Mary’s problems; eventually she took Mary and her younger siblings to the East Coast to live with her own mother.

“A part of me quite enjoyed that time away from all the partying and the tension in their marriage, the fighting,” says Mary.

After her parents got back together, at first Mary was happy and hopeful. But her dad was still drinking heavily. He’d get so drunk that “my mom would literally kick him out of bed and he’d come sleep with me.” Nothing happened, Mary says, “nothing like that.” But still, it was disconcerting to sometimes wake in the night at age ten to find her dad in bed with her, sleeping off another drunken stupor.

At school things hadn’t improved much. “We were still wearing dresses back then,” says Mary. “All the boys called me ‘Gladiator’ because when they’d tease me I’d go at them, I’d fight back.” That made the bullying worse. “They’d chase me and hold me down on the ground and forcibly pull off my underwear.”

Mary didn’t even think of telling her father about the bullying. “When he was drunk, he would spank us really hard. Once when my sister was in second grade, he pulled down her pants and spanked her in front of all his drunken friends.”

Mary’s sensors were always on high alert, getting ready for the next unpredictable, incoming emotional bomb. Her stress axis was constantly kicking into high gear; her immune system, in overdrive. By then, Mary had started to show signs of an autoimmune disorder called vitiligo, in which the body’s immune cells attack the pigmentation in the skin. Areas of her skin turned white, as if the skin had been bleached or burned in the past and new skin was trying to form over it.

“Our skin is our first line of defense against the world, the thing that’s supposed to keep us safe, secure our physical boundaries,” Mary says. “And yet my parents hadn’t set any boundaries to keep me or my siblings safe.” It was as if her skin were pleading for her parents to set those boundaries, the kind of safe zone parents are supposed to set for kids.

Even worse than the skin disorder, however, “were my constant stomachaches,” she recalls. “I’d have chronic constipation and cramping, and then terrible diarrhea, all the symptoms of irritable bowel, though we didn’t know what to call it back then.” Sometimes, she’d find herself getting physically jittery and nervous “seemingly for no reason. I’d just be standing there and I’d get these rushes of fear, ripping and prickling through my body.”

Over time, her father’s boundaryless behavior grew more bizarre. When Mary was fourteen, he cut out hundreds of naked bodies from a stack of old Playboys, took off their heads, and pasted their disembodied boobs, legs, butts, and crotches on the walls of the kitchen. Andrea, one of Mary’s few friends, told her parents about the “wallpaper.” “After that, Andrea wasn’t allowed at my house,” Mary says. “I started to realize that other kids weren’t comfortable around me because of my dad.”

When she was fifteen, her parents moved to a house in the country. “I think they were trying to salvage their marriage.” One crisp winter night, when Mary was coming out of the garage, one of her dad’s drunken friends was standing by his car in the driveway. “As I walked by to go in the house he stared at me hard and said, ‘You are so beautiful!’ Then he threw me into the backseat of his car and got on top of me. He stuck his tongue down my throat and was groping me.”

Mary forced him off and ran in to tell her dad, who was also drunk. “He told me to stop making such a big deal about it.”

And yet, at other times, Mary’s dad did show concern for her. Once, when Mary was in a car accident, “he got in the ambulance with me and cried the whole way to the hospital.” He was completely unpredictable.

By the time she was eighteen, Mary had developed “unwavering depression,” which would progress over the next thirty years, getting worse after she married and had her four children. She developed a severe lower back problem that worsened every year. And her autoimmune vitiligo started to cover her arms and neck.

“I fell into a postpartum depression after each birth, and after my fourth son, I was suicidal. My physical and emotional pain had snowballed. If I was driving without any of my kids in the car, I’d find myself thinking, ‘How can I crash this car into a tree in such a way that no one will know it’s suicide, and so that I’m not just impaired and a burden to my family afterward?’”

And that was when, says Mary, “I realized something potent was haunting me; something was terribly wrong with how unsafe I felt in the world. I had these beautiful sons and I just didn’t feel okay inside in any way, shape, or form.”

To the developing brain, knowing what’s coming next matters most. This makes sense if you think back to how the stress response works optimally. You meet a bear in the woods and your body floods with adrenaline and cortisol so that you can decide quickly: do you run away or try to frighten away the bear? After you deal with the crisis, you recover, your stress hormones abate, and you go home with a great story.

McCarthy presents another situation. “What if that bear is circling the house and you can’t get away from it and you never know if it’s going to strike, or when, or what it will do next? There it is, threatening you every single day. You can’t fight or flee.” Then, she says, “Your emergency response system is set into overdrive over and over again. Your anxiety sensors are always going full blast.”

Even subtle, common forms of childhood stress, such as a hypercritical, narcissistic, or manic-depressive parent, can cause just as much damage as a parent who deals out angry, physical beatings or just disappears.

And in that sense, Kat’s story and Mary’s story are very similar to Laura’s, John’s, Georgia’s, Michele’s, and Ellie’s. All of them, even in adult life, felt that the bear was still out there, somewhere, circling in the woods, stalking, and might strike again any day, anytime.

According to Vincent Felitti, the one area in which a “yes” answer on the Adverse Childhood Experiences questionnaire has been correlated to a slightly higher level of adult negative health outcomes is in response to ACE question number 1, which addresses the issue of “chronic humiliation.” Would adults in the home often swear at you, insult you, put you down, or humiliate you?

This strong correlation between adverse health problems and unpredictable, chronic humiliation by a parent suggests that it is not knowing if you are safe from the “bear” that matters most.

There are a lot of bears out there. Depression, bipolar disorder, alcohol, and other addictions are remarkably prevalent adult afflictions. According to the National Institute of Mental Health, over 18 percent of adults, or nearly forty-four million Americans, suffer from a diagnosable mental health disorder in any given year. Twenty-three million adult Americans suffer from an alcohol or drug addiction. Indeed, according to the original ACE Study, one in four people with Adverse Childhood Experiences had a parent who was addicted to alcohol.

Often, alcoholism and depression go hand in hand; addiction can be an unconscious effort to self-medicate a mood disorder. But even when they are not working in tandem, mood disorders and alcoholism share one thing: both make adults behave in emotionally undependable ways. The parent who hugs you one day when picking you up from school might humiliate you in front of your friends the next afternoon. The sense of not knowing what’s coming next never goes away.

The Sadness Seed

Adversity in childhood can be the precursor to deep depression and anxiety later in life. A growing body of research shows that there is a close correlation between Adverse Childhood Experiences and emotional health disorders in adulthood. In Felitti and Anda’s Adverse Childhood Experiences Study, 18 percent of individuals with an ACE Score of 1 had suffered from clinical depression, and the likelihood rose sharply with each ACE Score. Thirty percent of those with an ACE Score of 3 and nearly 50 percent of those with an ACE Score of 4 or more had suffered from chronic depression.

Twelve and a half percent of respondents to the Adverse Childhood Experiences Study cite having an ACE Score of 4 or more.

For women, the correlation is even more disturbing. While 19 percent of men with an ACE Score of 1 suffered from clinical depression, 24 percent of women with that score did. Likewise, while 24 percent of men with a score of 2 developed adult clinical depression, 35 percent of women did. Thirty percent of men with a score of 3 developed clinical depression, compared to 42 percent of women who had three categories of Adverse Childhood Experiences. And 35 percent of men, versus nearly 60 percent of women, with a score of 4 or more suffered from chronic depression.

The strongest precursor of adult depression turned out to be Adverse Childhood Experiences that fell into the category of “childhood emotional abuse.”

Whether you are male or female, the loss of a parent in childhood triples your chances of depression in adulthood. Being raised by a mother who suffers from depression puts you at a higher risk of living with chronic pain as an adult. Children who experienced severe trauma before the age of sixteen are three times more likely to develop schizophrenia later in life.

Most disturbing are the statistics on suicide: while only 1 percent of those with an ACE Score of 0 have ever attempted suicide, almost one in five individuals with an ACE Score of 4 or more has tried to end his or her life. Indeed, a person with an ACE Score of 4 or more is, statistically, 1,220 percent more likely to attempt suicide than someone with an ACE Score of 0.

It certainly makes sense that childhood emotional trauma will spill out in our adulthood. Psychology and psychotherapy help us understand the link between our childhood wounds and adult emotional problems, and making this connection can help free us from the pain of our past.

But research tells us that often, childhood adversity leads to more deep-seated changes within the brain, and that depression and mood dysregulation are also set in motion on a cellular and neurobiological level.

So what is causing neurobiological changes inside the brain itself?

How Early Adversity Changes the Shape and Size of the Brain

When a young child faces emotional adversity or stressors, cells in the brain release a hormone that actually shrinks the size of the brain’s developing hippocampus, altering his or her ability to process emotion and manage stress. Magnetic resonance imaging (MRI) studies show that the higher an individual’s childhood trauma score, the smaller the cerebral gray matter, or brain volume, is in key processing areas of the brain, including the prefrontal cortex, an area related to decision making and self regulatory skills; the amygdala, or fear processing center of the brain; and the sensory association cortices and cerebellum, both of which affect how we process and regulate emotions and moods.

MRIs also show that kids raised in orphanages have much smaller brains than those of other children. That smaller brain volume may be due to a reduction in the brain’s gray matter, which is made up of brain cells, or neurons, as well as in white matter, which includes nerves (with coated, or myelinated, axons) that allow for the fast transmission of messages in the brain. Other studies show that the smaller amygdala in adults who’ve experienced childhood maltreatment shows marked “hyperactivity.” Frontal regions of the brain display “atypical activation” throughout daily life, making individuals hyperreactive to even very small stressors.

The Inflamed Brain

“Early stress impacts the developing brain in a way that, until very recently, we just didn’t think was possible,” McCarthy says. “It turns out that chronic, early unpredictable stress can trigger a process of low grade inflammation within the brain itself.”

That is pretty revolutionary news. Until recently, most scientists thought that inflammation could not be generated by the brain. “We thought that the brain was what we call ‘immune-privileged,’ ” explains McCarthy. “Inflammation in the brain occurred only when there was an external event, such as a brain injury or head trauma, or an infection such as meningitis.”

But, “That has turned out not to be the case. When we are chronically stressed, the brain responds by creating a state of neuro inflammation. And that neuro inflammation can be present at levels that, until very recently, we could not even detect.”

This type of inflammation develops due to a type of non-neuronal brain cell known as microglia. Our microglial cells make up about one tenth of our brain cells. For years, researchers thought that these microglial cells were “just there to get rid of stuff we didn’t need,” explains McCarthy. “They were taking out the trash, so to speak.”

Microglia play an integral role in pruning our brain’s neurons and in brain development. They are crucial to the brain’s normal functioning, continuously scanning their environment and determining: Are we good here? Or not so good? Are we safe? Or not safe?

Shake the cage. Flash the lights. The microglia in the brain take note, fast. They don’t like chronic, unpredictable stress. They don’t like it at all.

“Microglia go off kilter in the face of chronic unpredictable stress,” says McCarthy. “They get really worked up, they crank out neurochemicals that lead to neuro inflammation. And this below-the-radar state of chronic neuro inflammation can lead to changes that reset the tone of the brain for life.”

“It is very possible that when microglia go off kilter, they are actually pruning away neurons,” McCarthy says. That is, they are killing off brain cells that we need.

In a healthy brain, microglia control the number of neurons that the cerebral cortex needs, but unhappy microglia can excessively prune away cells in areas that would normally play a key role in basic executive functions, like reasoning and impulse control. They are essential in a healthy brain, but in the face of chronic unpredictable stress, they can start eating away at the brain’s synapses.

“In some cases, microglia are engulfing and destroying dying neurons, and they are taking out the trash, just as we always thought,” says McCarthy. “But in other cases, microglia are destroying healthy neurons and in that case, it’s more like murder.” This excessive pruning can lead to what McCarthy refers to as a “reset tone” in the brain. You might think of that stressed brain as a muscle that’s lost its tone and is atrophying. And that loss of gray and white matter can trigger depression, anxiety disorders, and even more extreme psychopathology such as schizophrenia and Alzheimer’s disease.

Microglia may also prune a special group of neurons in the hippocampus that are capable of regenerating. “We used to think that you could never make new neurons, but one of the most revolutionary new findings in the last decade is the discovery that there are new neurons being born in the hippocampus all the time,” says McCarthy. The growth of new neurons is very important to adult mental health. “If something interferes with their growth, depression can set in.” Indeed, research suggests, says McCarthy, that “microglia, when they are overly exuberant, may kill these new neurons as soon as they are born.”

Scientists have introduced healthy microglia back into the mouse brain. The results have been stunning: once mouse brains are repopulated with healthy microglia, all signs of depression completely disappear.

So much depends on the microglia in our brain being happy, unrattled. So much depends on our microglia not pruning away too many neurons.

We might hypothesize that “angry, worked-up microglia could impair the growth of healthy new neurons in the brain’s hippocampus,” says McCarthy. “When healthy neurons in the hippocampus die, our emotional well-being would be impaired over the long term.”

Facing situation after situation of sudden and unpredictable stress in childhood can trigger microglia to prune away important neurons and initiate a state of neuroinflammation that resets the tone of the brain, creating the conditions for long lasting anxiety and depression.

A Perfect Storm: Childhood Stress, Brain Pruning, and Adolescence

When children come into adolescence, they naturally undergo a period of developmental pruning of neurons. When we are very young, we have an overproduction of neurons and synaptic connections. Some of them die off naturally to allow us to “turn down the noise in the brain,” says McCarthy, and to increase our mastery in skills that interest us. The brain prepares for becoming more specialized at the things we’re good at and interested in, while we lose what we don’t need.

But if, due to childhood stress, many neurons and synapses have already been pruned away, then when the natural pruning of adolescence begins, and the brain starts clearing out neurons it doesn’t need so that a teenager can focus on building particular skills (baseball, singing, poetry), there may suddenly be too much pruning going on.

Dan Siegel, MD, child neuropsychiatrist and clinical professor at the University of California, Los Angeles (UCLA), is the pioneer of a growing field known as “interpersonal neurobiology,” which integrates the fields of neuroscience and psychology. According to Siegel, “The stress of Adverse Childhood Experiences causes toxicity to the neurons and neural pathways that integrate different areas of the brain.” When adolescent pruning occurs in the integrated circuitry between the hippocampus, which is important in storing memories; the corpus callosum, which links the left and right hemispheres of the brain; and the prefrontal cortex, these brain changes, says Siegel, have a profound effect on our decision-making abilities, self-regulatory processes, attention, emotional regulation, thoughts, and behavior.

When these integrated circuits are affected by adversity, or genetic vulnerability, or both, during preadolescence, says Siegel, and then puberty hits, “adolescent pruning pares down the existing but insufficient number of integrated fibers, which makes a child vulnerable to mood dysregulation. It is when this brain integration is impaired that a dysfunction in mood regulation may emerge.”

Imagine, hypothetically speaking, that all kids start with 4,000 neurons (that’s a made-up number, for illustration purposes). Now, let’s say that we have two five year old boys, Sam and Joe. Sam faces early adversity and Joe doesn’t. As Sam meets up with chronic unpredictable stress in his childhood, his neurons are slowly pruned away. By the time Sam is twelve, after a lot of stress-related neuronal pruning, he has 1,800 neurons left. He is still okay, functioning well; 1,800 neurons are enough (using our hypothetical numbers) to get by on, since kids start out with so many more than they need in the first place.

But then Sam and Joe both go through the adolescent period of neuronal pruning. Let’s say that Sam and Joe, like all kids, each lose a hypothetical 1,000 more neurons during adolescence. Sam, who grew up with early chronic unpredictable stress, begins to emerge with a notably different brain from Joe.

Suddenly, the difference between Sam’s brain and Joe’s trauma free brain becomes extreme. Joe, who’s grown up fairly adversity free, still has his 3,000 neurons, plenty to go forward and live a healthy and happy life.

Meanwhile, Sam is left with only 800 neurons.

And that makes all the difference. It is not enough for the brain to function in a healthy manner.
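
To keep the arithmetic of this hypothetical straight, here is a minimal sketch using only the made-up numbers from the example above; the starting count, the stress-related pruning, and the adolescent pruning are all illustrative figures, not real measurements or a validated model.

```python
# Purely illustrative arithmetic for the hypothetical Sam-and-Joe example above.
# None of these numbers are real neuron counts; they are the made-up figures
# used in the text to show how early pruning compounds with adolescent pruning.

STARTING_NEURONS = 4000      # hypothetical count every child starts with
ADOLESCENT_PRUNING = 1000    # hypothetical neurons lost to normal adolescent pruning


def neurons_after_adolescence(stress_related_pruning):
    """Neurons remaining after childhood-stress pruning plus normal adolescent pruning."""
    return STARTING_NEURONS - stress_related_pruning - ADOLESCENT_PRUNING


joe = neurons_after_adolescence(stress_related_pruning=0)     # adversity-free childhood
sam = neurons_after_adolescence(stress_related_pruning=2200)  # chronic stress: 4,000 down to 1,800 by age twelve

print(f"Joe: {joe} neurons remaining")  # 3000
print(f"Sam: {sam} neurons remaining")  # 800
```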

For kids who have already had pruning due to early stress, Siegel explains, “when average adolescent pruning occurs, what remains may be insufficient for mood to be kept in balance. If stressors are high, this pruning process may be even more intense, and more of the at risk circuits may be diminished in number and effectiveness.”

The child who faced Adverse Childhood Experiences will be more likely to develop depression, bipolar disorder, eating disorders, anxiety disorders, or poor executive function and decision making, many of which can lead to substance abuse. This may be why, statistically, so many young people first show signs of depression or bipolar disorder in high school and in college, even kids who just a year or two earlier seemed absolutely fine.

Stephen’s parents, both investment bankers, were hardly around when he was growing up in New York City. Stephen ate dinner at night with his older sister and their nanny. When his parents came home around nine o’clock, a time when most kids were getting tucked into bed and kissed good night, they’d all sit down together at the kitchen table, and the nanny would give her daily report. She was an older woman who loved to “give a laundry list of what we’d done wrong.” Stephen “lived in fear of that moment. Especially for my sister.”

His sister, who was five years older, was “already expected to be a genius like our parents, by the time she was in fourth grade. If she brought home an eighty-five on a math test, my parents would drill her on math problems until eleven o’clock.” Then, they’d tell their friends at the next weekend party at our country house how “Alexis is already doing algebra!”

Stephen, as the baby, often got off lightly when he was young, and recalls feeling “that my parents loved me and wanted everything for me. But they were also terrifying.”

As Stephen got older, his parents stopped “treating me like the cute baby.” He did well academically and his standardized test scores were sometimes off the charts. “My parents decided that I must be the genius they’d been waiting for. I got their laser focus.”

But he soon started to feel that “I wasn’t as smart as my parents hoped I’d be.” When he was nine, Stephen started having acute asthma attacks. He was also “perpetually forgetful. I’d lose everything. I’d forget to bring my sweater or my Spanish book home. I’d leave my clarinet in the band room. It made my parents furious. They’d tell me, ‘Get it together! We don’t have time for your nonsense, Stephen!’ ”

Once, while staying at a plush lakeside resort, he walked into water with his flip-flops on to look for tadpoles. As he walked out, one flip-flop got stuck in the mud. “I tried to find it. I was digging in the muck. My dad just lost it. He stood on the edge of the lake yelling, ‘You lost your flip-flop? Really, Stephen? You can’t take a walk without losing your shoes? You think we’re going to just buy you another pair? We’re not buying you anything!’ ” On the ride home, Stephen had a full-blown asthma attack.

Stephen was also a “nonjock.” He liked to read more than he liked to play ball. “My dad started calling me ‘pretty boy.’ I’d come in the door from being at a concert with my friends and he’d say, ‘Hey, pretty boy, good time?’ He was pissed that I hadn’t spent the weekend on an athletic field the way he had when he was seventeen, the way his colleagues’ and friends’ kids were.”

As many adult children recall, “It wasn’t all bad. My dad taught me how to fish, how to sail, and how to analyze the financial pages of the newspaper. My mom left work to come to every single concert I was in when I played in the state youth orchestra. Sometimes when my dad was out of town, she’d let my sister and me snuggle in her bed and we’d watch movies and eat sandwiches from the deli downstairs. She’d tell me, ‘Your dad loves you so much, he’s just very stressed with work, it’s not about you, Stevie.’ She was not affectionate. But she tried.”

In high school Stephen, despite high test scores, couldn’t seem to manage his workload or get papers in on time, and was diagnosed with attention deficit disorder, high-stakes performance anxiety, and depression. “I just stopped wanting to go out with my friends, or do anything. I wanted the world to just let me be.” Then he developed a condition known as alopecia areata, in which the immune system attacks the hair follicles and segments of hair fall out, leaving bald patches. “My hair started falling out in huge chunks.”

Stephen went on to grad school, getting his PhD in psychology. Today, Stephen is forty-two, a high school counselor. He shaves his head so that he doesn’t have to deal with the recurring bald spots from alopecia. “For me, knowing what not to do with the kids I teach, I like to think that’s the gift my parents gave me. I can see when a kid is showing signs of anxiety or depression. I see how at this age, some kids who have been struggling to hold it together for so long just can’t anymore. Things start to fall apart, and they just can’t understand what’s happening to them. I was that kid.”

The research on neuro inflammation, pruning, and the brain helps to explain why adverse experiences in childhood are so highly correlated to depression and anxiety disorders in adulthood. It also sheds light on why, according to the National Institute of Mental Health (NIMH), depression affects eighteen million Americans. The World Health Organization recently cited depression as “the leading cause of disability worldwide,” responsible for more years of disability than cancer, HIV/AIDS, and cardiovascular and respiratory diseases combined.

This also may explain other brain-based health disorders. For instance, a recent study of brain scans of people suffering from chronic fatigue syndrome (CFS), also known as myalgic encephalomyelitis (ME), shows higher levels of inflammation in specific parts of the brain, including the hippocampus and amygdala. The greater a patient’s level of self-reported CFS symptoms, the greater the degree of visible brain inflammation.

This may also help to account for why it is that those who faced Adverse Childhood Experiences are six times more likely to develop chronic fatigue in the first place.

The Walking Wounded

It’s impossible to estimate how many adults who experienced Adverse Childhood Experiences are getting by, day by day, unwittingly navigating a state of low grade neuro inflammation, functioning despite their “reset tone” in the brain, dealing with general low mood, depression, and anxiety.

This lowered “set point of well-being,” this generalized emotional misery, predicts with startling accuracy how likely we are to find ourselves as adults navigating mood fluctuations, anxiety, sadness, and fear, reacting to life without resilience rather than really living life fully.

It’s kind of the proverbial cat chasing its tail. Epigenetic changes in life cause inflammatory chemicals to increase. Chronic unpredictable stress sends microglia off kilter. Microglia murder neurons. Neurons die, and synapses are less able to connect. Microglia proliferate and create a state of neuro inflammation. Essential gray matter areas of the brain lose volume and tone. White matter, the myelin in the brain that allows synapses to connect between neurons, is lost. This lack of brain tone impairs thought processes, making negative thoughts, fears, reactivity, and worries more likely over time. An uber-alert, fearful brain leads to increased negative reactions and thoughts, creating more inflammatory hormones and chemicals that lead to more microglial dysfunction and pruning and chronic inflammation in the brain. The cycle continues.

As McCarthy puts it, “Neuro inflammation becomes a runaway process.”

This, she says, “contributes to a chronic overreactivity. Things that most people would get over quickly would send someone with a low level of inflammation into a tailspin. They may not be able to sort out rational thought about what’s happening around them: is what’s happening right now good, or is it bad? They may be far more prone to see everything as bad.”

This is the new psychosocial theory of everything: our early emotional stories determine the body and brain’s operating system and how well they will be able to guard our optimal physical and emotional health all of our adult lives.

We take whatever reactive brain and increased sensitivity to stress we develop in childhood with us wherever we go, at any age. We’re likely to feel bad mentally and physically a lot of the time. That state of neuro inflammation means we are more likely to walk around in an irritable mood, be easily ticked off and annoyed.

Our relationships will suffer. We see hurt where none is intended. We’ll likely find the world more aggravating than gratifying. Our chances for a healthy, stable, and satisfying life narrow, and continue to narrow as the years go by. But we can take action to remove the early “fingerprints” that childhood adversity leaves on our neurobiology so that imprint does not stay with us.

The Really Good News

As scientists have learned more about how childhood adversity becomes biologically embedded, they have also learned how we can intervene in this process to reverse the damage of early stress, no matter whether we grew up in a happy, functional family or an often unhappy, dysfunctional one. And no matter what happened to us when we were young.

“The beauty of epigenetics is that it’s reversible, and the beauty of the brain is that it’s plastic,” says McCarthy:

“There are many ways that we can immuno-rehabilitate the brain to overcome early negative epigenetic changes so that we can respond normally to both pleasure and pain. The brain can restore itself.”

We can heal those early scars to get back to who it is we really are, who we might have been had we not faced so much adversity in the first place. But to do that, we first have to understand why some of us may be more prone to these epigenetic changes than others, even though we are no less capable of epigenetic reversal and change.


from

Childhood Disrupted. How Your Biography Becomes Your Biology, and How You Can Heal

by Donna Jackson Nakazawa

get it at Amazon.com

If You Like Being Alone, You Have These 5 Amazing Traits.

“I have to be alone very often. I’d be quite happy if I spent from Saturday night until Monday morning alone in my apartment. That’s how I refuel.” Audrey Hepburn

Let’s clear something up: being alone is not the same as being lonely. In fact, many people prefer being alone because that’s their way to recharge and refuel their energy.

Being a loner and enjoying solitude can be a great thing. And people who enjoy being alone are some of the most interesting and fun people to be with. They have many, many amazing qualities that make them extraordinary human beings.

Here are 5 of them:

1. They Are Open-Minded

Many would perceive someone who is reserved and quiet as being judgmental and unsocial. However, this is not true. People who are comfortable being alone are actually more open-minded than one would think, because they can discuss almost any topic thanks to the knowledge they have gained during their alone time by reading books, watching documentaries, or just focusing on themselves and their thoughts.

2. They Are Exquisite Listeners

All introverts are amazing listeners. This is because when people spend time alone they process things in their heads instead of saying them out loud. So, in turn, their listening ratio is higher than their talking ratio.

They would listen to anyone as long as the conversation doesn’t involve small talk. They hate small talk more than anything.

3. They Are Emotionally Stable

No, they are not neurotic as many people would believe. The word neurotic typically encompasses feelings of anger, fear, worry, anxiety, loneliness, and depressive mood. However, people who enjoy solitude are not by default experiencing those feelings. In fact, they are more in touch with themselves and their emotions.

4. They Are Easily Over-Stimulated

Studies have shown that people who enjoy spending time alone have a different brain structure than those who are highly social. Namely, people who are socially active show more dopamine-driven reward activity in their brains.

Introverts, on the other hand, rely more on acetylcholine, a brain chemical that, like dopamine, is connected with the reward system. The main difference is that acetylcholine is activated when people are by themselves and turn inward.

This is why extroverts enjoy loud music and noise, thinking it is part of the fun, while introverts prefer quiet dinners and the comfort of their own home.

5. They DO Like People

They have small circles of friends but this doesn’t mean that they don’t like people. They just despise small talk. That’s it.

Curious Mind Magazine

Childhood Disrupted. How Your Biography Becomes Your Biology, and How You Can Heal – Donna Jackson Nakazawa * The Origins of Addiction. Evidence from the Adverse Childhood Experiences Study – Vincent J. Felitti, MD.

Chronic adversities change the architecture of a child’s brain, altering the expression of genes that control stress hormone output, triggering an overactive inflammatory stress response for life, and predisposing the child to adult disease.

“I felt myself a stranger at life’s party.”

New findings in neuroscience, psychology, and medicine have recently unveiled the exact ways in which childhood adversity biologically alters us for life. The past can tick away inside us for decades like a silent time bomb, until it sets off a cellular message that lets us know the body does not forget the past. Something that happened to you when you were five or fifteen can land you in the hospital thirty years later, whether that something was headline news, or happened quietly, without anyone else knowing it, in the living room of your childhood home.

No matter how old you are, or how old your children may be, there are scientifically supported and relatively simple steps that you can take to reboot the brain, create new pathways that promote healing, and come back to who it is you were meant to be.

Our findings are disturbing to some because they imply that the basic causes of addiction lie within us and the way we treat each other, not in drug dealers or dangerous chemicals. They suggest that billions of dollars have been spent everywhere except where the answer is to be found. Our findings indicate that the major factor underlying addiction is adverse childhood experiences that have not healed with time and that are overwhelmingly concealed from awareness by shame, secrecy, and social taboo.

“I wept, I saw how much people had suffered and I wept.” Robert Anda

“Our findings exceeded anything we had conceived. The correlation between having a difficult childhood and facing illness as an adult offered a whole new lens through which we could view human health and disease. Here was the missing piece as to what was causing so much of our unspoken suffering as human beings. Time does not heal all wounds. One does not ‘just get over’ something, not even fifty years later. Instead time conceals. And human beings convert traumatic emotional experiences in childhood into organic disease later in life.” Vincent Felitti

Adverse childhood experiences are the main determinant of the health and social well being of a nation.

This book explores how the experiences of childhood shape us into the adults we become. Cutting-edge research tells us that what doesn’t kill you doesn’t necessarily make you stronger. Far more often, the opposite is true: the early chronic unpredictable stressors, losses, and adversities we face as children shape our biology in ways that predetermine our adult health. This early biological blueprint depicts our proclivity to develop life altering adult illnesses such as heart disease, cancer, autoimmune disease, fibromyalgia, and depression. It also lays the groundwork for how we relate to others, how successful our love relationships will be, and how well we will nurture and raise our own children.

My own investigation into the relationship between childhood adversity and adult physical health began after I’d spent more than a dozen years struggling to manage several life limiting autoimmune illnesses while raising young children and working as a journalist. In my forties, I was paralyzed twice with an autoimmune disease known as Guillain-Barré syndrome, similar to multiple sclerosis, but with a more sudden onset. I had muscle weakness; pervasive numbness; a pacemaker for vasovagal syncope, a fainting and seizing disorder; white and red blood cell counts so low my doctor suspected a problem was brewing in my bone marrow; and thyroid disease.

Still I knew: I was fortunate to be alive, and I was determined to live the fullest life possible. If the muscles in my hands didn’t cooperate, I clasped an oversized pencil in my fist to write. If I couldn’t get up the stairs because my legs resisted, I sat down halfway up and rested. I gutted through days battling flulike fatigue, pushing away fears about what might happen to my body next; faking it through work phone calls while lying prone on the floor; reserving what energy I had for moments with my children, husband, and family life; pretending that our “normal” was really okay by me. It had to be, there was no alternative in sight.

Increasingly, I devoted my skills as a science journalist to helping women with chronic illness, writing about the intersection between neuroscience, our immune systems, and the innermost workings of our human hearts. I investigated the many triggers of disease, reporting on chemicals in our environment and foods, genetics, and how inflammatory stress undermines our health. I reported on how going green, eating clean, and practices like mind-body meditation can help us to recuperate and recover. At health conferences I lectured to patients, doctors, and scientists. My mission became to do all I could to help readers who were caught in a chronic cycle of suffering, inflammation, or pain to live healthier, better lives.

In the midst of that quest, three years ago, in 2012, I came across a growing body of science based on a groundbreaking public health research study, the Adverse Childhood Experiences Study, or ACE Study. The ACE Study shows a clear scientific link between many types of childhood adversity and the adult onset of physical disease and mental health disorders. These traumas include being verbally put down and humiliated; being emotionally or physically neglected; being physically or sexually abused; living with a depressed parent, a parent with a mental illness, or a parent who is addicted to alcohol or other substances; witnessing one’s mother being abused; and losing a parent to separation or divorce. The ACE Study measured ten types of adversity, but new research tells us that other types of childhood trauma, such as losing a parent to death, witnessing a sibling being abused, violence in one’s community, growing up in poverty, witnessing a father being abused by a mother, being bullied by a classmate or teacher, also have a longterm impact.

These types of chronic adversities change the architecture of a child’s brain, altering the expression of genes that control stress hormone output, triggering an overactive inflammatory stress response for life, and predisposing the child to adult disease. ACE research shows that 64 percent of adults faced one ACE in their childhood, and 40 percent faced two or more.

My own doctor at Johns Hopkins medical institutions confessed to me that she suspected that, given the chronic stress I’d faced in my childhood, my body and brain had been marinating in toxic inflammatory chemicals my whole life, predisposing me to the diseases I now faced.

My own story was a simple one of loss. When I was a girl, my father died suddenly. My family struggled and became estranged from our previously tight knit, extended family. I had been exceptionally close to my father and I had looked to him for my sense of being safe, okay, and valued in the world. In every photo of our family, I’m smiling, clasped in his arms. When he died, childhood suddenly ended, overnight. If I am honest with myself, looking back, I cannot recall a single “happy memory” from there on out in my childhood. It was no one’s fault. It just was. And I didn’t dwell on any of that. In my mind, people who dwelled on their past, and especially on their childhood, were emotionally suspect.

I soldiered on. Life catapulted forward. I created a good life, worked hard as a science journalist to help meaningful causes, married a really good husband, and brought up children I adored, children I worked hard to stay alive for. But other than enjoying the lovely highlights of a hard won family life, or being with close friends, I was pushing away pain.

I felt myself a stranger at life’s party. My body never let me forget that inside, pretend as I might, I had been masking a great deal of loss for a very long time. I felt myself to be “not like other people.”

Seen through the lens of the new field of research into Adverse Childhood Experiences, it suddenly seemed almost predictable that, by the time I was in my early forties, my health would deteriorate and I would be brought, in my case, quite literally, to my knees.

Like many people, I was surprised, even dubious, when I first learned about ACEs and heard that so much of what we experience as adults is so inextricably linked to our childhood experiences. I did not consider myself to be someone who had had Adverse Childhood Experiences. But when I took the ACE questionnaire and discovered my own ACE Score, my story also began to make so much more sense to me. This science was entirely new, but it also supported old ideas that we have long known to be true: “the child is father of the man.” This research also told me that none of us is alone in our suffering.

One hundred thirty three million Americans suffer from chronic illness and 116 million suffer from chronic pain. This revelation of the link between childhood adversity and adult illness can inform all of our efforts to heal. With this knowledge, physicians, health practitioners, psychologists, and psychiatrists can better understand their patients and find new insights to help them. And this knowledge will help us ensure that the children in our lives, whether we are parents, mentors, teachers, or coaches, don’t suffer from the long term consequences of these sorts of adversity.

To learn everything I could, I spent two years interviewing the leading scientists who research and study the effects of Adverse Childhood Experiences and toxic childhood stress. I combed through seventy research papers that comprise the ACE Study and hundreds of other studies from our nation’s best research institutions that support and complement these findings. And I followed thirteen individuals who suffered early adversity and later faced adult health struggles, who were able to forge their own life-changing paths to physical and emotional healing.

In these pages, I explore the damage that Adverse Childhood Experiences can do to the brain and body; how these invisible changes contribute to the development of disease including autoimmune diseases, long into adulthood; why some individuals are more likely to be affected by early adversity than others; why girls and women are more affected than men; and how early adversity affects our ability to love and parent.

Just as important, I explore how we can reverse the effects of early toxic stress on our biology, and come back to being who we really are. I hope to help readers to avoid spending so much of their lives locked in pain.

Some points to bear in mind as you read these pages:

– Adverse Childhood Experiences should not be confused with the inevitable small challenges of childhood that create resilience. There are many normal moments in a happy childhood, when things don’t go a child’s way, when parents lose it and apologize, when children fail and learn to try again. Adverse Childhood Experiences are very different sorts of experiences; they are scary, chronic, unpredictable stressors, and often a child does not have the adult support needed to help navigate safely through them.

– Adverse Childhood Experiences are linked to a far greater likelihood of illness in adulthood, but they are not the only factor. All disease is multifactorial. Genetics, exposures to toxins, and infection all play a role. But for those who have experienced ACEs and toxic stress, other disease promoting factors become more damaging.

To use a simple metaphor, imagine the immune system as being something like a barrel. If you encounter too many environmental toxins from chemicals, a poor processed food diet, viruses, infections, and chronic or acute stressors in adulthood, your barrel will slowly fill. At some point, there may be one certain exposure, that last drop that causes the barrel to spill over and disease to develop.

Having faced the chronic unpredictable stressors of Adverse Childhood Experiences is a lot like starting life with your barrel half full. ACEs are not the only factor in determining who will develop disease later in life. But they may make it more likely that one will.

– The research into Adverse Childhood Experiences has some factors in common with the research on post-traumatic stress disorder, or PTSD. But childhood adversity can lead to a far wider range of physical and emotional health consequences than the overt symptoms of posttraumatic stress. They are not the same.

– The Adverse Childhood Experiences of extreme poverty and neighborhood violence are not addressed specifically in the original research. Yet clearly, growing up in unsafe neighborhoods where there is poverty and gang violence or in a war-torn area anywhere around the world creates toxic childhood stress, and that relationship is now being more deeply studied. It is an important field of inquiry and one I do not attempt to address here; that is a different book, but one that is no less important.

– Adverse Childhood Experiences are not an excuse for egregious behavior. They should not be considered a “blame the childhood” moral pass. The research allows us to finally tackle real and lasting physical and emotional change from an entirely new vantage point, but it is not about making excuses.

– This research is not an invitation to blame parents. Adverse Childhood Experiences are often an intergenerational legacy, and patterns of neglect, maltreatment, and adversity almost always originate many generations prior to one’s own.

The new science on Adverse Childhood Experiences and toxic stress has given us a new lens through which to understand the human story; why we suffer; how we parent, raise, and mentor our children; how we might better prevent, treat, and manage illness in our medical care system; and how we can recover and heal on a deeper level than we thought possible.

And that last bit is the best news of all. The brain, which is so changeable in childhood, remains malleable throughout life. Today researchers around the world have discovered a range of powerful ways to reverse the damage that Adverse Childhood Experiences do to both brain and body. No matter how old you are, or how old your children may be, there are scientifically supported and relatively simple steps that you can take to reboot the brain, create new pathways that promote healing, and come back to who it is you were meant to be.

To find out about how many categories of ACEs you might have faced when you were a child or teenager, and your own ACE Score, turn the page and take the Adverse Childhood Experiences Survey for yourself.

TAKE THE ADVERSE CHILDHOOD EXPERIENCES (ACE) SURVEY

You may have picked up this book because you had a painful or traumatic childhood. You may suspect that your past has something to do with your current health problems, your depression, or your anxiety. Or perhaps you are reading this book because you are worried about the health of a spouse, partner, friend, parent, or even your own child, who has survived a trauma or suffered adverse experiences. In order to assess the likelihood that an Adverse Childhood Experience is affecting your health or the health of your loved one, please take a moment to fill out the following survey before you read this book.

ADVERSE CHILDHOOD EXPERIENCES SURVEY

Prior to your eighteenth birthday:

1. Did a parent or another adult in the household

often or very often . . . swear at you, insult you, put you down, or humiliate you? Or act in a way that made you afraid that you might be physically hurt?

Yes / No. If yes, enter 1.

2. Did a parent or another adult in the household

often or very often . . . push, grab, slap, or throw something at you? Or ever hit you so hard that you had marks or were injured?

Yes / No. If yes, enter 1.

3. Did an adult or person at least five years older than you

ever touch or fondle you or have you touch their body in a sexual way? Or attempt to touch you or touch you inappropriately or sexually abuse you?

Yes / No. If yes, enter 1.

4. Did you often or very often feel that

no one in your family loved you or thought you were important or special? Or feel that your family members didn’t look out for one another, feel close to one another, or support one another?

Yes / No. If yes, enter 1.

5. Did you often or very often

feel that you didn’t have enough to eat, had to wear dirty clothes, and had no one to protect you? Or that your parents were too drunk or high to take care of you or take you to the doctor if you needed it?

Yes / No. If yes, enter 1.

6. Was a biological parent ever lost to you

through divorce, abandonment, or another reason?

Yes / No. If yes, enter 1.

7. Was your mother or stepmother often or very often

pushed, grabbed, slapped, or had something thrown at her? Or was she sometimes, often, or very often kicked, bitten, hit with a fist, or hit with something hard? Or ever repeatedly hit over the course of at least a few minutes or threatened with a gun or knife?

Yes / No. If yes, enter 1.

8. Did you live with anyone who was

a problem drinker or alcoholic, or who used street drugs?

Yes / No. If yes, enter 1.

9. Was a household member

depressed or mentally ill, or did a household member attempt suicide?

Yes / No. If yes, enter 1.

10. Did a household member go to prison?

Yes / No. If yes, enter 1.

Add up your “Yes” answers: (this is your ACE Score)
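
For readers who prefer to tally the survey digitally, here is a minimal sketch of the scoring. The function name and example answers are illustrative only; the score is nothing more than the count of “Yes” answers to the ten questions above, and this sketch is not a diagnostic instrument.

```python
# Minimal, illustrative ACE Score tally: one value per question above,
# True for "Yes" and False for "No"; the score is the number of "Yes" answers.

def ace_score(answers):
    """Return the ACE Score (0-10) given the ten yes/no answers, in order."""
    if len(answers) != 10:
        raise ValueError("The survey has exactly ten questions.")
    return sum(bool(a) for a in answers)

# Example: "Yes" to questions 1, 4, and 6 only gives an ACE Score of 3.
example_answers = [True, False, False, True, False, True, False, False, False, False]
print(ace_score(example_answers))  # prints 3
```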

Now take a moment and ask yourself how your experiences might be affecting your physical, emotional, and mental well-being. Is it possible that someone you love has been affected by Adverse Childhood Experiences of their own? Are any children or young people you care for in adverse situations now?

Keep your Adverse Childhood Experiences Score in mind as you read the stories and science that follow, and keep your own experiences in mind, as well as those of the people you love. You may find this science to be the missing link in understanding why you or your loved one is having health problems. And this missing link will also lead to the information you will need in order to heal.

PART 1

How It Is We Become Who We Are

CHAPTER ONE

Every Adult Was Once a Child

If you saw Laura walking down the New York City street where she lives today, you’d see a well dressed forty six year old woman with auburn hair and green eyes who exudes a sense of “I matter here.” She looks entirely in charge of her life, as long as you don’t see the small ghosts trailing after her.

When Laura was growing up, her mom was bipolar. Laura’s mom had her good moments: she helped Laura with school projects, braided her hair, and taught her the name of every bird at the bird feeder. But when Laura’s mom suffered from depressive bouts, she’d lock herself in her room for hours. At other times she was manic and hypercritical, which took its toll on everyone around her. Laura’s dad, a vascular surgeon, was kind to Laura, but rarely around. He was, she says, “home late, out the door early, and then just plain gone.”

Laura recalls a family trip to the Grand Canyon when she was ten. In a photo taken that day, Laura and her parents sit on a bench, sporting tourist whites. The sky is blue and cloudless, and behind them the dark, ribboned shadows of the canyon stretch deep and wide. It is a perfect summer day.

“That afternoon my mom was teaching me to identify the ponderosa pines,” Laura recalls. “Anyone looking at us would have assumed we were a normal, loving family.” Then, something seemed to shift, as it sometimes would. Laura’s parents began arguing about where to set up the tripod for their family photo. By the time the three of them sat down, her parents weren’t speaking. As they put on fake smiles for the camera, Laura’s mom suddenly pinched her daughter’s midriff around the back rim of her shorts, and told her to stop “staring off into space.” Then, a second pinch: “No wonder you’re turning into a butterball, you ate so much cheesecake last night you’re hanging over your shorts!”

If you look hard at Laura’s face in the photograph, you can see that she’s not squinting at the Arizona sun, but holding back tears.

When Laura was fifteen, her dad moved three states away with a new wife to be. He sent cards and money, but called less and less often. Her mother’s untreated bipolar disorder worsened. Laura’s days were punctuated with put downs that caught her off guard as she walked across the living room. “My mom would spit out something like, ‘You look like a semiwide from behind. If you’re ever wondering why no boy asks you out, that’s why!”’ One of Laura’s mother’s recurring lines was, “You were such a pretty baby, I don’t know what happened.” Sometimes Laura recalls, “My mom would go on a vitriolic diatribe about my dad until spittle foamed on her chin. I’d stand there, trying not to hear her as she went on and on, my whole body shaking inside.”

Laura never invited friends over, for fear they’d find out her secret: her mom “wasn’t like other moms.”

Some thirty years later, Laura says, “In many ways, no matter where I go or what I do, I’m still in my mother’s house.” Today, “If a car swerves into my lane, a grocery store clerk is rude, my husband and I argue, or my boss calls me in to talk over a problem, I feel something flip over inside. It’s like there’s a match standing inside too near a flame, and with the smallest breeze, it ignites.” Something, she says, “just doesn’t feel right. Things feel bigger than they should be. Some days, I feel as if I’m living my life in an emotional boom box where the volume is turned up too high.”

To see Laura, you would never know that she is “always shaking a little, only invisibly, deep down in my cells.”

Laura’s sense that something is wrong inside is mirrored by her physical health. In her mid thirties, she began suffering from migraines that landed her in bed for days at a time. At forty, Laura developed an autoimmune thyroid disease. At forty four, during a routine exam, Laura’s doctor didn’t like the sound of her heart. An EKG revealed an arrhythmia. An echocardiogram showed that Laura had a condition known as dilated cardiomyopathy. The left ventricle of her heart was weak; the muscle had trouble pumping blood out to the rest of her body. Next thing Laura knew, she was a heart disease patient, undergoing surgery. Today, Laura has a cardioverter defibrillator implanted in the left side of her chest to prevent heart failure. The two-inch scar from the implant is deceptively small.

John’s parents met in Asia when his father was deployed there as an army officer. After a whirlwind romance, his parents married and moved to the United States. For as long as John can remember, he says, “my parents’ marriage was deeply troubled, as was my relationship with my dad. I consider myself to have been raised by my mom and her mom. I longed to feel a deeper connection with my dad, but it just wasn’t there. He couldn’t extend himself in that way.”

John occasionally runs his hands through his short blond hair, as he carefully chooses his words. “My dad would get so worked up and pissed off about trivial things. He’d throw out opinions that we all knew were factually incorrect, and just keep arguing.” If John’s dad said the capital of New York was New York City, it didn’t matter if John showed him it was Albany. “He’d ask me to help in the garage and I’d be doing everything right, and then a half hour into it I’d put the screwdriver down in the wrong spot and he’d start yelling and not let up. There was never any praise. Even when he was the one who’d made a mistake, it somehow became my fault. He could not be wrong about anything.”

As John got older, it seemed wrong to him that “my dad was constantly pointing out all the mistakes that my brother and I made, without acknowledging any of his own.” His dad chronically criticized his mother, who was, John says, “kinder and more confident.”

When John was twelve, he interjected himself into the fights between his parents. One Christmas Eve, when he was fifteen, John awoke to the sound of “a scream and a commotion. I realized it was my mother screaming. I jumped out of bed and ran into my parents’ room, shouting, ‘What the hell is going on here?’ My mother sputtered, ‘He’s choking me!’ My father had his hands around my mother’s neck. I yelled at him: ‘You stay right here! Don’t you dare move! Mom is coming with me!’ I took my mother downstairs. She was sobbing. I was trying to understand what was happening, trying to be the adult between them.”

Later that Christmas morning, John’s father came down the steps to the living room where John and his mom were sleeping. “No one explained,” he says. “My little brother came downstairs and we had Christmas morning as if nothing had happened.”

Not long after, John’s grandmother, “who’d been an enormous source of love for my mom and me,” died suddenly. John says, “It was a terrible shock and loss for both of us. My father couldn’t support my mom or me in our grieving. He told my mom, ‘You just need to get over it!’ He was the quintessential narcissist. If it wasn’t about him, it wasn’t important, it wasn’t happening.”

Today, John is a boyish forty. He has warm hazel eyes and a wide, affable grin that would be hard not to warm up to. But beneath his easy, open demeanor, John struggles with an array of chronic illnesses.

By the time John was thirty three, his blood pressure was shockingly high for a young man. He began to experience bouts of stabbing stomach pain and diarrhea and often had blood in his stool. These episodes grew more frequent. He had a headache every day of his life. By thirty four, he’d developed chronic fatigue, and was so wiped out that sometimes he struggled to make it through an entire day at work.

For years, John had loved to go hiking to relieve stress, but by the time he was thirty-five, he couldn’t muster the physical stamina. “One day it hit me: I’m still a young man and I’ll never go hiking again.”

John’s relationships, like his physical body, were never quite healthy. John remembers falling deeply in love in his early thirties. After dating his girlfriend for a year, she invited him to meet her family. During his stay with them, John says, “I became acutely aware of how different I was from kids who grew up without the kind of shame and blame I endured.” One night, his girlfriend, her sisters, and their boyfriends all decided to go out dancing. “Everyone was sitting around the dinner table planning this great night out and I remember looking around at her family and the only thing going through my mind were these words: ‘I do not belong here.’ Everyone seemed so normal and happy. I was horrified suddenly at the idea of trying to play along and pretend that I knew how to be part of a happy family.”

So John faked “being really tired. My girlfriend was sweet and stayed with me and we didn’t go. She kept asking what was wrong and at some point I just started crying and I couldn’t stop. She wanted to help, but instead of telling her how insecure I was, or asking for her reassurance, I told her I was crying because I wasn’t in love with her.”

John’s girlfriend was, he says, “completely devastated.” She drove John to a hotel that night. “She and her family were shocked. No one could understand what had happened.” Even though John had been deeply in love, his fear won out. “I couldn’t let her find out how crippled I was by the shame and grief I carried inside.”

Bleeding from his inflamed intestines, exhausted by chronic fatigue, debilitated and distracted by pounding headaches, often struggling with work, and unable to feel comfortable in a relationship, John was stuck in a universe of pain and solitude, and he couldn’t get out.

Georgia’s childhood seems far better than the norm: she had two living parents who stayed married through thick and thin, and they lived in a stunning home with walls displaying Ivy League diplomas; Georgia’s father was a well-respected, Yale-educated investment banker. Her mom stayed at home with Georgia and two younger sisters. The five of them appear, in photos, to be the perfect family.

All seemed fine, growing up, practically perfect.

“But I felt, very early on, that something wasn’t quite right in our home, and that no one was talking about it,” Georgia says. “Our house was saturated by a kind of unease all the time. You could never put your finger on what it was, but it was there.”

Georgia’s mom was “emotionally distant and controlling,” Georgia recalls. “If you said or did something she didn’t like, she had a way of going stone cold right in front of you; she’d become what I used to think of as a moving statue that looked like my mother, only she wouldn’t look at you or speak to you.” The hardest part was that Georgia never knew what she’d done wrong. “I just knew that I was shut out of her world until whenever she decided I was worth speaking to again.”

For instance, her mother would “give my sisters and me a tiny little tablespoon of ice cream and then say, ‘You three will just have to share that.’ We knew better than to complain. If we did, she’d tell us how ungrateful we were, and suddenly she wouldn’t speak to us.”

Georgia’s father was a borderline alcoholic and “would occasionally just blow up over nothing,” she says. “One time he was changing a light bulb and he just started cursing and screaming because it broke. He had these unpredictable eruptions of rage. They were rare but unforgettable.” Georgia was so frightened at times that “I’d run like a dog with my tail between my legs to hide until it was safe to come out again.”

Georgia was “so sensitive to the shifting vibe in our house that I could tell when my father was about to erupt before even he knew. The air would get so tight and I’d know, it’s going to happen again.” The worst part was that “We had to pretend my father’s outbursts weren’t happening. He’d scream about something minor, and then he’d go take a nap. Or you’d hear him strumming his guitar in his den.”

Between her mother’s silent treatments and her dad’s tirades, Georgia spent much of her childhood trying to anticipate and move out of the way of her parents’ anger. She had the sense, even when she was nine or ten, “that their anger was directed at each other. They didn’t fight, but there was a constant low hum of animosity between them. At times it seemed they vehemently hated each other.” Once, fearing that her inebriated father would crash his car after an argument with her mother, Georgia stole his car keys and refused to give them back.

Today, at age forty nine, Georgia is reflective about her childhood. “I internalized all the emotions that were storming around me in my house, and in some ways it’s as if I’ve carried all that external angst inside me all my life.” Over the decades, carrying that pain has exacted a high toll. At first, Georgia says, “My physical pain began as a low whisper in my body.” But by the time she entered Columbia graduate school to pursue a PhD in classics, “I’d started having severe back problems. I was in so much physical pain, I could not sit in a chair. I had to study lying down.” At twenty six, Georgia was diagnosed with degenerative disc disease. “My body just started screaming with its pain.”

Over the next few years, in addition to degenerative disc disease, Georgia was diagnosed with severe depression, adrenal fatigue, and finally, fibromyalgia. “I’ve spent my adult life in doctors’ clinics and trying various medications to relieve my pain,” she says. “But there is no relief in sight.”

Laura’s, John’s, and Georgia’s life stories illustrate the physical price we pay, as adults, for childhood adversity. New findings in neuroscience, psychology, and medicine have recently unveiled the exact ways in which childhood adversity biologically alters us for life.

This groundbreaking research tells us that the emotional trauma we face when we are young has farther reaching consequences than we might have imagined.

Adverse Childhood Experiences change the architecture of our brains and the health of our immune systems, they trigger and sustain inflammation in both body and brain, and they influence our overall physical health and longevity long into adulthood.

These physical changes, in turn, prewrite the story of how we will react to the world around us, and how well we will work, and parent, befriend, and love other people throughout the course of our adult lives.

This is true whether our childhood wounds are deeply traumatic, such as witnessing violence in our family, as John did; or more chronic living room variety humiliations, such as those Laura endured; or more private but pervasive familial dysfunctions, such as Georgia’s.

All of these Adverse Childhood Experiences can lead to deep biophysical changes in a child that profoundly alter the developing brain and immunology in ways that also change the health of the adult he or she will become.

Scientists have come to this startling understanding of the link between Adverse Childhood Experiences and later physical illness in adulthood thanks, in large part, to the work of two individuals: a dedicated physician in San Diego, and a determined medical epidemiologist from the Centers for Disease Control (CDC). Together, during the 1980s and 1990s, the same years when Laura, John, and Georgia were growing up, these two researchers slowly uncovered the stunning scientific link between Adverse Childhood Experiences and later physical and neurological inflammation and life changing adult health outcomes.

The Philosophical Physicians

In 1985 physician and researcher Vincent J. Felitti, MD, chief of a revolutionary preventive care initiative at the Kaiser Permanente Medical Program in San Diego, noticed a startling pattern: adult patients who were obese often alluded to traumatic incidents in their childhood. Felitti came to this realization almost by accident.

In the mid 1980s, a significant number of patients in Kaiser Permanente’s obesity program were, with the help and support of Felitti and his nurses, successfully losing hundreds of pounds a year nonsurgically, a remarkable feat. The program seemed a resounding success, up until a large number of patients who were losing substantial amounts of weight began to drop out.

The attrition rate didn’t make sense, and Felitti was determined to find out what was going on. He conducted face-to-face interviews with 286 patients. In the course of Felitti’s one-on-one conversations, a striking number of patients confided that they had faced trauma in their childhood; many had been sexually abused. To these patients, eating was a solution: it soothed the anxiety, fear, and depression that they had secreted away inside for decades. Their weight served, too, as a shield against unwanted physical attention, and they didn’t want to let it go.

Felitti’s conversations with this large group of patients allowed him to perceive a pattern, and a new way of looking at human health and well-being, that other physicians just were not seeing. It became clear to him that, for his patients, obesity, “though an obvious physical sign,” was not the core problem to be treated, “any more than smoke is the core problem to be treated in house fires.”

In 1990, Felitti presented his findings at a national obesity conference. He told the group of physicians gathered that he believed “certain of our intractable public health problems” had root causes hidden “by shame, by secrecy, and by social taboos against exploring certain areas of life experience.”….

*

from

Childhood Disrupted: How Your Biography Becomes Your Biology, and How You Can Heal

by Donna Jackson Nakazawa

get it at Amazon.com


The Origins of Addiction: Evidence from the Adverse Childhood Experiences Study

Vincent J. Felitti, MD

Department of Preventive Medicine Kaiser Permanente Medical Care Program

“In my beginning is my end.” T.S. Eliot, “Four Quartets”

ABSTRACT:

A population based analysis of over 17,000 middle class American adults undergoing comprehensive, biopsychosocial medical evaluation indicates that three common categories of addiction are strongly related in a proportionate manner to several specific categories of adverse experiences during childhood. This, coupled with related information, suggests that the basic cause of addiction is predominantly experience dependent during childhood and not substance dependent. This challenge to the usual concept of the cause of addictions has significant implications for medical practice and for treatment programs.

Purpose: My intent is to challenge the usual concept of addiction with new evidence from a population based clinical study of over 17,000 adult, middle class Americans.

The usual concept of addiction essentially states that the compulsive use of ‘addictive’ substances is in some way caused by properties intrinsic to their molecular structure. This view confuses mechanism with cause. Because any accepted explanation of addiction has social, medical, therapeutic, and legal implications, the way one understands addiction is important. Confusing mechanism with basic cause quickly leads one down a path that is misleading. Here, new data is presented to stimulate rethinking the basis of addiction.

Background: The information I present comes from the Adverse Childhood Experiences (ACE) Study. The ACE Study deals with the basic causes underlying the 10 most common causes of death in America; addiction is only one of several outcomes studied.

In the mid 1980s, physicians in Kaiser Permanente’s Department of Preventive Medicine in San Diego discovered that patients successfully losing weight in the Weight Program were the most likely to drop out. This unexpected observation led to our discovery that overeating and obesity were often being used unconsciously as protective solutions to unrecognized problems dating back to childhood. Counterintuitively, obesity provided hidden benefits: it often was sexually, physically, or emotionally protective.

Our discovery that public health problems like obesity could also be personal solutions, and our finding an unexpectedly high prevalence of adverse childhood experiences in our middle class adult population, led to collaboration with the Centers for Disease Control (CDC) to document their prevalence and to study the implications of these unexpected clinical observations. I am deeply indebted to my colleague, Robert F. Anda MD, who skillfully designed the Adverse Childhood Experiences (ACE) Study in an epidemiologically sound manner, and whose group at CDC analyzed several hundred thousand pages of patient data to produce the data we have published.

Many of our obese patients had previously been heavy drinkers, heavy smokers, or users of illicit drugs. Of what relevance are these observations; do they imply some unspecified innate tendency to addiction? Is addiction genetic, as some have proposed for alcoholism? Is addiction a biomedical disease, a personality disorder, or something different? Are diseases and personality disorders separable, or are they ultimately related? What does one make of the dramatic recent findings in neurobiology that seem to promise a neurochemical explanation for addiction? Why do only a small percentage of persons exposed to addictive substances become compulsive users?

Although the problem of narcotic addiction has led to extensive legislative attempts at eradication, its prevalence has not abated over the past century. However, the distribution pattern of narcotic use within the population has radically changed, attracting significant political attention and governmental actions. The inability to control addiction by these major, well intended governmental efforts has drawn thoughtful and challenging commentary from a number of different viewpoints.

In our detailed study of over 17,000 middle class American adults of diverse ethnicity, we found that the compulsive use of nicotine, alcohol, and injected street drugs increases proportionally in a strong, graded dose response manner that closely parallels the intensity of adverse life experiences during childhood. This of course supports old psychoanalytic views and is at odds with current concepts, including those of biological psychiatry, drug treatment programs, and drug eradication programs.

Our findings are disturbing to some because they imply that the basic causes of addiction lie within us and the way we treat each other, not in drug dealers or dangerous chemicals. They suggest that billions of dollars have been spent everywhere except where the answer is to be found.

Study design: Kaiser Permanente (KP) is the largest prepaid, non profit healthcare delivery system in the United States; there are 500,000 KP members in San Diego, approximately 30% of the greater metropolitan population. We invited 26,000 consecutive adults voluntarily seeking comprehensive medical evaluation in the Department of Preventive Medicine to help us understand how events in childhood might later affect health status in adult life. Seventy percent agreed, understanding the information obtained was anonymous and would not become part of their medical records.

Our cohort population was 80% white including Hispanic, 10% black, and 10% Asian. Their average age was 57 years; 74% had been to college, 44% had graduated college; 49.5% were men.

In any four year period, 81% of all adult Kaiser Health Plan members seek such medical evaluation; there is no reason to believe that selection bias is a significant factor in the Study. The Study was carried out in two waves, to allow mid point correction if necessary. Further details of Study design are described in our initial publication.

The ACE Study compares adverse childhood experiences against adult health status, on average a half century later. The experiences studied were eight categories of adverse childhood experience commonly observed in the Weight Program. The prevalence of each category is stated in parentheses. The categories are:

1. recurrent and severe physical abuse (11%)

2. recurrent and severe emotional abuse (11%)

3. contact sexual abuse (22%)

growing up in a household with:

4. an alcoholic or drug user (25%)

5. a member being imprisoned (3%)

6. a mentally ill, chronically depressed, or institutionalized member (19%)

7. the mother being treated violently (12%)

8. both biological parents not being present (22%)

The scoring system is simple: exposure during childhood or adolescence to any category of ACE was scored as one point. Multiple exposures within a category were not scored: one alcoholic within a household counted the same as an alcoholic and a drug user; if anything, this tends to understate our findings. The ACE Score therefore can range from 0 to 8. Less than half of this middle class population had an ACE Score of 0; one in fourteen had an ACE Score of 4 or more.
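For readers who want to see the arithmetic, the scoring rule above reduces to counting distinct categories of exposure. Below is a minimal sketch in Python; the category labels and the example answers are illustrative shorthand, not the Study’s actual questionnaire items.

ACE_CATEGORIES = {
    "recurrent physical abuse",
    "recurrent emotional abuse",
    "contact sexual abuse",
    "household substance abuser",
    "imprisoned household member",
    "mentally ill or institutionalized household member",
    "mother treated violently",
    "both biological parents not present",
}

def ace_score(reported_exposures):
    # One point per category ever experienced; repeats within a category
    # add nothing, so the score ranges from 0 to 8.
    return len(set(reported_exposures) & ACE_CATEGORIES)

# An alcoholic and a drug user in the same household still count as one point.
print(ace_score([
    "household substance abuser",
    "household substance abuser",
    "contact sexual abuse",
]))  # -> 2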

In retrospect, an initial design flaw was not scoring subtle issues like low level neglect and lack of interest in a child who is otherwise the recipient of adequate physical care. This omission will not affect the interpretation of our First Wave findings, and may explain the presence of some unexpected outcomes in persons having ACE Score zero. Emotional neglect was studied in the Second Wave.

The ACE Study contains a prospective arm: the starting cohort is being followed forward in time to match adverse childhood experiences against current doctor office visits, emergency department visits, pharmacy costs, hospitalizations, and death. Publication of these analyses will soon begin.

Findings: Our overall findings, presented extensively in the American literature, demonstrate that:

– Adverse childhood experiences are surprisingly common, although typically concealed and unrecognized.

– ACEs still have a profound effect 50 years later, although now transformed from psychosocial experience into organic disease, social malfunction, and mental illness.

– Adverse childhood experiences are the main determinant of the health and social well being of the nation.

Our overall findings challenge conventional views, some of which are clearly defensive. They also provide opportunities for new approaches to some of our most difficult public health problems. Findings from the ACE Study provide insights into changes that are needed in pediatrics and adult medicine, which expectedly will have a significant impact on the cost and effectiveness of medical care.

Our intent here is to present our findings only as they relate to the problem of addiction, using nicotine, alcohol, and injected illicit drugs as examples of substances that are commonly viewed as ‘addicting‘. If we know why things happen and how, then we may have a new basis for prevention.

Smoking

Smoking tobacco has come under heavy opposition in the United States, particularly in southern California where the ACE Study was carried out. Whereas at one time most men and many women smoked, only a minority does so now; it is illegal to smoke in office buildings, public transportation, restaurants, bars, and in most areas of hotels.

When we studied current smokers, we found that smoking had a strong, graded relationship to adverse childhood experiences. Figure 1 illustrates this clearly. The p value for this and all other data displays is .001 or better.

This stepwise 250% increase in the likelihood of an ACE Score 6 child being a current smoker, compared to an ACE Score 0 child, is generally not known. This simple observation has profound implications that illustrate the psychoactive benefits of nicotine; this information has largely been lost in the public health onslaught against smoking but is important in understanding the intractable nature of smoking in many people.

When we match the prevalence of adult chronic bronchitis and emphysema against ACEs, we again see a strong dose response relationship. We thereby proceed from the relationship of adverse childhood experiences to a health risk behavior to their relationship with an organic disease. In other words, Figure 2 illustrates the conversion of emotional stressors into an organic disease, through the intermediary mechanism of an emotionally beneficial (although medically unsafe) behavior.

Alcoholism

One’s own alcoholism is not easily or comfortably acknowledged; therefore, when we asked our Study cohort if they had ever considered themselves to be alcoholic, we felt that Yes answers probably understated the truth, making the effect even stronger than is shown. The relationship of self acknowledged alcoholism to adverse childhood experiences is depicted in Figure 3. Here we see that more than a 500% increase in adult alcoholism is related in a strong, graded manner to adverse childhood experiences.

Injection of illegal drugs

In the United States the most commonly injected street drugs are heroin and methamphetamine. Methamphetamine has the interesting property of being closely related to amphetamine, the first antidepressant, introduced by Ciba Pharmaceuticals in 1932.

When we studied the relation of injecting illicit drugs to adverse childhood experiences, we again found a similar dose response pattern; the likelihood of injection of street drugs increases strongly and in a graded fashion as the ACE Score increases (Figure 4). At the extremes of ACE Score, the figures for injected drug use are even more powerful. For instance, a male child with an ACE Score of 6, when compared to a male child with an ACE Score of 0, has a 46-fold (4,600%) increase in the likelihood of becoming an injection drug user sometime later in life.

Discussion

Although awareness of the hazards of smoking is now near universal, and has caused a significant reduction in smoking, in recent years the prevalence of smoking has remained largely unchanged. In fact, the association between ACE score and smoking is stronger in age cohorts born after the Surgeon General’s Report on Smoking.

Do current smokers now represent a core of individuals who have a more profound need for the psychoactive benefits of nicotine than those who have given up smoking? Our clinical experience and data from the ACE Study suggest this as a likely possibility. Certainly, there is good evidence of the psychoactive benefits of nicotine for moderating anger, anxiety, and hunger.

Alcohol is well accepted as a psychoactive agent. This obvious explanation of alcoholism is now sometimes rejected in favor of a proposed genetic causality. Certainly, alcoholism may be familial, as is the language one speaks. Our findings support an experiential and psychodynamic explanation for alcoholism, although this may well be moderated by genetic and metabolic differences between races and individuals.

Analysis of our Study data for injected drug use shows a powerful relation to ACEs. Population Attributable Risk (PAR) analysis shows that 78% of drug injection by women can be attributed to adverse childhood experiences. For men and women combined, the PAR is 67%. Moreover, this PAR has been constant in four age cohorts whose birth dates span a century; this indicates that the relation of adverse childhood experiences to illicit drug use has been constant in spite of major changes in drug availability and social customs, and the introduction of drug eradication programs.
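The paper reports these Population Attributable Risk figures without showing the underlying calculation. As context only, one standard way to compute a population attributable fraction is Levin’s formula, which combines the prevalence of an exposure with its relative risk; the sketch below uses made-up inputs, not the ACE Study’s data.

def population_attributable_risk(prevalence, relative_risk):
    # Levin's formula: the fraction of cases in the whole population that
    # would not occur in the absence of the exposure, assuming the relative
    # risk estimate is causal and unconfounded.
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Hypothetical inputs chosen only to show the shape of the calculation.
print(round(population_attributable_risk(prevalence=0.5, relative_risk=8.0), 2))  # 0.78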

American soldiers in Vietnam provided an important although overlooked observation. Many enlisted men in Vietnam regularly used heroin. However, only 5% of those considered addicted were still using it 10 months after their return to the US. Treatment did not account for this high recovery rate.

Why does not everyone become addicted when they repeatedly inject a substance reputedly as addicting as heroin? If a substance like heroin is not inherently addicting to everyone, but only to a small minority of human users, what determines this selectivity? Is it the substance that is intrinsically addicting, or do life experiences actually determine its compulsive use? Surely its chemical structure remains constant.

Our findings indicate that the major factor underlying addiction is adverse childhood experiences that have not healed with time and that are overwhelmingly concealed from awareness by shame, secrecy, and social taboo.

The compulsive user appears to be one who, not having other resolutions available, unconsciously seeks relief by using materials with known psychoactive benefit, accepting the known long term risk of injecting illicit, impure chemicals. The ACE Study provides population based clinical evidence that unrecognized adverse childhood experiences are a major, if not the major, determinant of who turns to psychoactive materials and becomes ‘addicted’.

Given that the conventional concept of addiction is seriously flawed, and that we have presented strong evidence for an alternative explanation, we propose giving up our old mechanistic explanation of addiction in favor of one that explains it in terms of its psychodynamics: unconscious although understandable decisions being made to seek chemical relief from the ongoing effects of old trauma, often at the cost of accepting future health risk.

Expressions like ‘self destructive behavior’ are misleading and should be dropped because, while describing the acceptance of long term risk, they overlook the importance of the obvious short term benefits that drive the use of these substances.

This revised concept of addiction suggests new approaches to primary prevention and treatment. The current public health approach of repeated cautionary warnings has demonstrated its limitations, perhaps because the cautions do not respect the individual when they exhort change without understanding.

Adverse childhood experiences are widespread and typically unrecognized. These experiences produce neurodevelopmental and emotional damage, and impair social and school performance. By adolescence, children have sufficient skill and independence to seek relief through a small number of mechanisms, many of which have been in use since biblical times: drinking alcohol, sexual promiscuity, smoking tobacco, using psychoactive materials, and overeating. These coping devices are manifestly effective for their users, presumably through their ability to modulate the activity of various neurotransmitters. Nicotine, for instance, is a powerful substitute for the neurotransmitter acetylcholine. Not surprisingly, the level of some neurotransmitters varies genetically between individuals.

It is these coping devices, with their short term emotional benefits, that often pose long term risks leading to chronic disease; many lead to premature death. This sequence is depicted in the ACE Pyramid (Figure 5). The sequence is slow, often unstoppable, and is generally obscured by time, secrecy, and social taboo. Time does not heal in most of these instances. Because cause and effect usually lie within a family, it is understandably more comforting to demonize a chemical than to look within. We find that addiction overwhelmingly implies prior adverse life experiences.

The sequence in the ACE Pyramid supports psychoanalytic observations that addiction is primarily a consequence of adverse childhood experiences. Moreover, it does so by a population based study, thereby escaping the potential selection bias of individual case reports.

Addiction is not a brain disease, nor is it caused by chemical imbalance or genetics. Addiction is best viewed as an understandable, unconscious, compulsive use of psychoactive materials in response to abnormal prior life experiences, most of which are concealed by shame, secrecy, and social taboo.

Our findings show that childhood experiences profoundly and causally shape adult life. ‘Chemical imbalances’, whether genetically modulated or not, are the necessary intermediary mechanisms by which these causal life experiences are translated into manifest effect. It is important to distinguish between cause and mechanism. Uncertainty and confusion between the two will lead to needless polemics and misdirected efforts for preventing or treating addiction, whether on a social or an individual scale.

Our findings also make it clear that studying any one category of adverse experience, be it domestic violence, childhood sexual abuse, or another form of family dysfunction, is a conceptual error. None occurs in a vacuum; they are part of a complex systems failure: one does not grow up with an alcoholic where everything else in the household is fine.

Treatment

If we are to improve the current unhappy situation, we must routinely screen in medical settings, at the earliest possible point, for adverse childhood experiences. It is feasible and acceptable to carry out mass screening for ACEs in the context of comprehensive medical evaluation. This identifies cases early and allows treatment of basic causes rather than vainly treating the symptom of the moment. We have screened over 450,000 adult members of Kaiser Health Plan for these eight categories of adverse childhood experiences. Our initial screening is by an expanded Review of Systems questionnaire; patients certainly do not spontaneously volunteer this information. ‘Yes’ answers then are pursued with conventional history taking: “I see that you were molested as a child. Tell me how that has affected you later in your life.”

Such screening has demonstrable value. Before we screened for adverse childhood experiences, our standardized comprehensive medical evaluation led to a 12% reduction in medical visits during the subsequent year. Later, in a pilot study, an on site psychoanalyst conducted a one time interview of depressed patients; this produced a 50% reduction in utilization by this subset during the subsequent year. However, the reduction occurred only in those depressed patients who were high utilizers of medical care because of somatization disorders.

Recently, we evaluated our current approach by a neural net analysis of the records of 135,000 patients who were screened for adverse childhood experiences as part of our redesigned comprehensive medical evaluation. This entire cohort showed an overall reduction of 35% in doctor office visits during the year subsequent to evaluation.

Our experience asking these questions indicates that the magnitude of the ACE problem is so great that primary prevention is ultimately the only realistic solution. Primary prevention requires the development of a beneficial and acceptable intrusion into the closed realm of personal and family experience. Techniques for accomplishing such change en masse are yet to be developed because each of us, fearing the new and unknown as a potential crisis in self esteem, often adjusts to the status quo. However, one possible approach to primary prevention lies in the mass media: the story lines of movies and television serials present a major therapeutic opportunity, unexploited thus far, for contrasting desirable and undesirable parenting skills in various life situations.

Because addiction is experience dependent and not substance dependent, and because compulsive use of only one substance is actually uncommon, one also might restructure treatment programs to deal with underlying causes rather than to focus on substance withdrawal. We have begun using this approach with benefit in our Obesity Program, and plan to do so with some of the more conventionally accepted addictions.

Conclusion

The current concept of addiction is ill founded. Our study of the relationship of adverse childhood experiences to adult health status in over 17,000 persons shows addiction to be a readily understandable, although largely unconscious, attempt to gain relief from well concealed prior life traumas by using psychoactive materials. Because it is difficult to get enough of something that doesn’t quite work, the attempt is ultimately unsuccessful, apart from its risks. What we have shown will not surprise most psychoanalysts, although the magnitude of our observations is new, and our conclusions are sometimes vigorously challenged by other disciplines.

The evidence supporting our conclusions about the basic cause of addiction is powerful and its implications are daunting. The prevalence of adverse childhood experiences and their long term effects are clearly a major determinant of the health and social well being of the nation. This is true whether looked at from the standpoint of social costs, the economics of health care, the quality of human existence, the focus of medical treatment, or the effects of public policy.

Adverse childhood experiences are difficult issues, made more so because they strike close to home for many of us. Taking them on will create an ordeal of change, but will also provide for many the opportunity to have a better life.


Adverse Childhood Experiences Study

Wikipedia

The Adverse Childhood Experiences Study (ACE Study) is a research study conducted by the American health maintenance organization Kaiser Permanente and the Centers for Disease Control and Prevention. Participants were recruited to the study between 1995 and 1997 and have been in long-term follow up for health outcomes. The study has demonstrated an association of adverse childhood experiences (ACEs) (aka childhood trauma) with health and social problems across the lifespan. The study is frequently cited as a notable landmark in epidemiological research, and has produced many scientific articles and conference and workshop presentations that examine ACEs.

Background

In the 1980s, the dropout rate of participants at Kaiser Permanente’s obesity clinic in San Diego, California, was about 50%, despite all of the dropouts having successfully lost weight under the program. Vincent Felitti, head of Kaiser Permanente’s Department of Preventive Medicine in San Diego, conducted interviews with people who had left the program, and discovered that a majority of the 286 people he interviewed had experienced childhood sexual abuse. The interview findings suggested to Felitti that weight gain might be a coping mechanism for depression, anxiety, and fear.

Felitti and Robert Anda from the Centers for Disease Control and Prevention (CDC) went on to survey childhood trauma experiences of over 17,000 Kaiser Permanente patient volunteers. The 17,337 participants were volunteers from approximately 26,000 consecutive Kaiser Permanente members. About half were female; 74.8% were white; the average age was 57; 75.2% had attended college; all had jobs and good health care, because they were members of the Kaiser health maintenance organization. Participants were asked about 10 types of childhood trauma that had been identified in earlier research literature:

– Physical abuse

– Sexual abuse

– Emotional abuse

– Physical or emotional neglect

– Exposure to domestic violence

– Household substance abuse

– Household mental illness

– Family member (attempted) suicide

– Parental separation or divorce

– Incarcerated household member

In one way or another, all ten questions speak to family dysfunction.

Findings

The ACE Pyramid represents the conceptual framework for the ACE Study, which has uncovered how adverse childhood experiences are strongly related to various risk factors for disease throughout the lifespan, according to the Centers for Disease Control and Prevention.

According to the United States’ Substance Abuse and Mental Health Services Administration, the ACE study found that:

Adverse childhood experiences are common. For example, 28% of study participants reported physical abuse and 21% reported sexual abuse. Many also reported experiencing a divorce or parental separation, or having a parent with a mental and/or substance use disorder.

Adverse childhood experiences often occur together. Almost 40% of the original sample reported two or more ACEs and 12.5% experienced four or more. Because ACEs occur in clusters, many subsequent studies have examined the cumulative effects of ACEs rather than the individual effects of each.

Adverse childhood experiences have a dose response relationship with many health problems. As researchers followed participants over time, they discovered that a person’s cumulative ACEs score has a strong, graded relationship to numerous health, social, and behavioral problems throughout their lifespan, including substance use disorders. Furthermore, many problems related to ACEs tend to be comorbid, or co-occurring.

About two-thirds of individuals reported at least one adverse childhood experience; 87% of individuals who reported one ACE reported at least one additional ACE. The number of ACEs was strongly associated with adulthood high-risk health behaviors such as smoking, alcohol and drug abuse, promiscuity, and severe obesity, and correlated with ill-health including depression, heart disease, cancer, chronic lung disease and shortened lifespan.

Compared to an ACE score of zero, having four adverse childhood experiences was associated with a sevenfold (700%) increase in alcoholism, a doubling of the risk of being diagnosed with cancer, and a fourfold increase in emphysema; an ACE score above six was associated with a 30-fold (3,000%) increase in attempted suicides.

The ACE study’s results suggest that maltreatment and household dysfunction in childhood contribute to health problems decades later. These include chronic diseases, such as heart disease, cancer, stroke, and diabetes, that are the most common causes of death and disability in the United States. The study’s findings, while relating to a specific population within the United States, might reasonably be assumed to reflect similar trends in other parts of the world, according to the World Health Organization. The study was initially published in the American Journal of Preventive Medicine.

Subsequent surveys

The ACE Study has produced more than 50 articles that look at the prevalence and consequences of ACEs. It has been influential in several areas. Subsequent studies have confirmed the high frequency of adverse childhood experiences, or found even higher incidences in urban or youth populations.

The original study questions have been used to develop a 10-item screening questionnaire. Numerous subsequent surveys have confirmed that adverse childhood experiences are frequent.

The CDC runs the Behavioral Risk Factor Surveillance System (BRFSS), an annual survey conducted by individual state health departments in all 50 states. An expanded survey instrument used in several states found similar ACE prevalences across those states. Some states have collected additional local data. Adverse childhood experiences were even more frequent in studies in urban Philadelphia, and in a survey of young mothers (mostly younger than 19). Internationally, an Adverse Childhood Experiences International Questionnaire (ACE-IQ) is undergoing validation testing. Surveys of adverse childhood experiences have been conducted in Romania, the Czech Republic, the Republic of Macedonia, Norway, the Philippines, the United Kingdom, Canada, China and Jordan.

Child Trends used data from the 2011/12 National Survey of Children’s Health (NSCH) to analyze ACEs prevalence in children nationally, and by state. The NSCH’s list of “adverse family experiences” includes a measure of economic hardship and shows that this is the most common ACE reported nationally.

Neurobiology of Stress

Cognitive and neuroscience researchers have examined possible mechanisms that might explain the negative consequences of adverse childhood experiences on adult health. Adverse childhood experiences can alter the structural development of neural networks and the biochemistry of the neuroendocrine system, and may have long term effects on the body, including speeding up the processes of disease and aging and compromising immune systems.

Allostatic load refers to the adaptive processes that maintain homeostasis during times of toxic stress through the production of mediators such as adrenalin, cortisol and other chemical messengers. According to researcher Bruce S. McEwen, who coined the term:

“These mediators of the stress response promote adaptation in the aftermath of acute stress, but they also contribute to allostatic overload, the wear and tear on the body and brain that result from being ‘stressed out.‘ This conceptual framework has created a need to know how to improve the efficiency of the adaptive response to stressors while minimizing overactivity of the same systems, since such overactivity results in many of the common diseases of modern life. This framework has also helped to demystify the biology of stress by emphasizing the protective as well as the damaging effects of the body’s attempts to cope with the challenges known as stressors.”

Additionally, epigenetic transmission may occur due to stress during pregnancy or during interactions between mother and newborns. Maternal stress, depression, and exposure to partner violence have all been shown to have epigenetic effects on infants.

Implementing practices

As knowledge about the prevalence and consequences of adverse childhood experiences increases, trauma informed and resilience building practices based on the research are being implemented in communities, education, public health departments, social services, faith-based organizations and criminal justice. A few states are considering legislation.

Communities

As knowledge about the prevalence and consequences of ACEs increases, more communities seek to integrate trauma informed and resilience building practices into their agencies and systems. Tarpon Springs, Florida, became the first trauma informed community in 2011. Trauma informed initiatives in Tarpon Springs include trauma awareness training for the local housing authority, changes in programs for ex-offenders, and new approaches to educating students with learning difficulties.

Education

Children who are exposed to adverse childhood experiences may become overloaded with stress hormones, leaving them in a constant state of arousal and alertness to environmental and relational threats. Therefore, they may have difficulty focusing on school work and consolidating new memories, making it harder for them to learn at school.

Approximately one in three or four children has experienced significant ACEs. A study by the Area Health Education Center of Washington State University found that students with at least three ACEs are three times as likely to experience academic failure, six times as likely to have behavioral problems, and five times as likely to have attendance problems. These students may have trouble trusting teachers and other adults, and may have difficulty creating and maintaining relationships.

The trauma informed school movement aims to train teachers and staff to help children self-regulate, and to address the family problems that underlie a child’s normal reactions to trauma, rather than simply jumping to punishment. It also seeks to provide behavioral consequences that will not retraumatize a child. Punishment is often ineffective, and better results can often be achieved with positive reinforcement. Out of school suspensions can be particularly bad for students with difficult home lives; forcing students to remain at home may increase their distrust of adults.

Trauma sensitive, or compassionate, schooling has become increasingly popular in Washington, Massachusetts, and California. Lincoln High School in Walla Walla, Washington, adopted a trauma informed approach to discipline and reduced its suspensions by 85%. Rather than standard punishment, students are taught to recognize their reaction to stress and learn to control it.

Spokane, Washington, schools conducted a research study that demonstrated that academic risk was correlated with students’ experiences of traumatic events known to their teachers. The same school district has begun a study to test the impact of trauma informed intervention programs, in an attempt to reduce the impact of toxic stress.

In Brockton, Massachusetts, a community wide meeting led to a trauma informed approach being adopted by the Brockton School District. So far, all of the district’s elementary schools have implemented trauma informed improvement plans, and there are plans to do the same in the middle school and high school. About one-fifth of the district teachers have participated in a course on teaching traumatized students. Police alert schools when they have arrested someone at, or been called to, a student’s address.

Massachusetts state legislation has sought to require all schools to develop plans to create “safe and supportive schools”.

At El Dorado, an elementary school in San Francisco, California, trauma-informed practices were associated with a suspension reduction of 89%.

Social services

Social service providers, including welfare systems, housing authorities, homeless shelters, and domestic violence centers are adopting trauma informed approaches that help to prevent ACEs or minimize their impact. Utilizing tools that screen for trauma can help a social service worker direct their clients to interventions that meet their specific needs. Trauma informed practices can also help social service providers look at how trauma impacts the whole family.

Trauma informed approaches can improve child welfare services by 1) openly discussing trauma and 2) addressing parental trauma.

The New Hampshire Division for Children Youth and Families (DCYF) is taking a trauma informed approach to their foster care services by educating staff about childhood trauma, screening children entering foster care for trauma, using trauma informed language to mitigate further traumatization, mentoring birth parents and involving them in collaborative parenting, and training foster parents to be trauma informed.

In Albany, New York, the HEARTS Initiative has led to local organizations developing trauma informed practice. Senior Hope Inc., an organization serving adults over the age of 50, began implementing the 10 question ACE survey and talking with their clients about childhood trauma. The LaSalle School, which serves orphaned and abandoned boys, began looking at delinquent boys from a trauma informed perspective and began administering the ACE questionnaire to their clients.

Housing authorities are also becoming trauma informed. Supportive housing can sometimes recreate control and power dynamics associated with clients’ early trauma. This can be reduced through trauma informed practices, such as training staff to be respectful of clients’ space by scheduling appointments and not letting themselves into clients’ private spaces, and understanding that an aggressive response may be a trauma related coping strategy.

The housing authority in Tarpon Springs provided trauma awareness training to staff so they could better understand and react to their clients’ stress and anger resulting from poor employment, health, and housing.

A survey of 200 homeless individuals in California and New York demonstrated that more than 50% had experienced at least four ACEs. In Petaluma, California, the Committee on the Shelterless (COTS) uses a trauma informed approach called Restorative Integral Support (RIS) to reduce intergenerational homelessness. RIS increases awareness of and knowledge about ACEs, and calls on staff to be compassionate and focus on the whole person. COTS now consider themselves ACE informed and focus on resiliency and recovery.

Health care services

Screening for or talking about ACEs with parents and children can help to foster healthy physical and psychological development and can help doctors understand the circumstances that children and their parents are facing. By screening for ACEs in children, pediatric doctors and nurses can better understand behavioral problems.

Some doctors have questioned whether some behaviors resulting in attention deficit hyperactivity disorder (ADHD) diagnoses are in fact reactions to trauma. Children who have experienced four or more ACEs are three times as likely to take ADHD medication when compared with children with fewer than four ACEs.

Screening parents for their ACEs allows doctors to provide the appropriate support to parents who have experienced trauma, helping them to build resilience, foster attachment with their children, and prevent a family cycle of ACEs. Trauma informed pediatric care also allows doctors to develop a more trusting relationship with parents, opening the lines of communication.

At Montefiore Medical Center, ACEs screenings will soon be implemented in 22 pediatric clinics. In a pilot program, any child with one parent who has an ACE score of four or higher is offered enrollment and receives a variety of services. For families enrolled in the program, parents report fewer ER visits and children have healthier emotional and social development, compared with those not enrolled.

Public health

Most American doctors as of 2015 do not use ACE surveys to assess patients. Objections to doing so include that there are no randomized controlled trials that show that such surveys can be used to actually improve health outcomes, there are no standard protocols for how to use the information gathered, and that revisiting negative childhood experiences could be emotionally traumatic. Other obstacles to adoption include that the technique is not taught in medical schools, is not billable, and the nature of the conversation makes some doctors personally uncomfortable.

Some public health centers see ACEs as an important way, especially for mothers and children, to target health interventions for individuals during sensitive periods of development early in their life, or even in utero.

For example, the Jefferson County Public Health clinic in Port Townsend, Washington, now screens pregnant women, their partners, parents of children with special needs, and parents involved with CPS for ACEs. With regard to patient counseling, the clinic treats ACEs like other health risks such as smoking or alcohol consumption.

Resiliency

Resilience is not a trait that people either have or do not have. It involves behaviors, thoughts and actions that can be learned and developed in anyone.

According to the American Psychological Association (2017) resilience is the ability to adapt in the face of adversity, tragedy, threats or significant stress such as family and relationship problems, serious health problems or workplace and financial stressors. Resilience refers to bouncing back from difficult experiences in life. There is nothing extraordinary about resilience. People often demonstrate resilience in times of adversity. However, being resilient does not mean that a person will not experience difficulty or distress as emotional pain is common for people when they suffer from a major adversity or trauma. In fact, the path to resilience often involves considerable emotional pain.

Resilience is labeled as a protective factor. Having resilience can benefit children who have been exposed to trauma and have a higher ACE score. Children who learn to develop it can use resilience to build themselves up after trauma. A child who has not developed resilience will have a harder time coping with the challenges that can come in adult life. People and children who are resilient embrace the thinking that adverse experiences do not define who they are. They can also think about past events in their lives that were traumatic and try to reframe them in a way that is constructive. They are able to find strength in their struggle and ultimately can overcome the challenges and adversity that they faced in childhood.

In childhood, resiliency can come from having a caring adult in a child’s life. Resiliency can also come from having meaningful moments such as an academic achievement or getting praise from teachers or mentors. In adulthood, resilience takes the form of self-care. If you are taking care of yourself and taking the necessary time to reflect and build on your experiences, then you will have a higher capacity for taking care of others.

Adults can also use this skill to counteract some of the trauma they have experienced. Self-care can mean a variety of things. One example of self-care is knowing when you are beginning to feel burned out and then taking a step back to rest and recuperate. Another component of self-care is practicing mindfulness or engaging in some form of meditation. If you are able to take the time to reflect upon your experiences, then you will be able to build a greater level of resiliency moving forward.

All of these strategies put together can help to build resilience and counteract some of the childhood trauma that was experienced. With these strategies children can begin to heal after experiencing adverse childhood experiences. This aspect of resiliency is so important because it enables people to find hope in their traumatic past.

When first looking at the ACE study and the different correlations that come with having four or more traumas, it is easy to feel defeated. It is even possible for this information to encourage people to adopt unhealthy coping behaviors. Introducing resilience, and the data supporting its positive outcomes in regard to trauma, allows for a light at the end of the tunnel. It gives people the opportunity to be proactive instead of reactive when it comes to addressing the traumas in their past.

Criminal justice

Since research suggests that incarcerated individuals are much more likely to have been exposed to violence and suffer from posttraumatic stress disorder (PTSD), a trauma informed approach may better help to address some of these criminogenic risk factors and can create a less traumatizing criminal justice experience. Programs, like Seeking Safety, are often used to help individuals in the criminal justice system learn how to better cope with trauma, PTSD, and substance abuse.

Juvenile courts better help deter children from crime and delinquency when they understand the trauma many of these children have experienced.

The criminal justice system itself can also retraumatize individuals. This can be prevented by creating safer facilities where correctional and police officers are properly trained to keep incidents from escalating. Partnerships between police and mental health providers can also reduce the possible traumatizing effects of police intervention and help provide families with the proper mental health and social services.

The Women’s Community Correctional Center of Hawaii began a Trauma Informed Care Initiative that aims to train all employees to be aware and sensitive to trauma, to screen all women in their facility for trauma, to assess those who have experienced trauma, and begin providing trauma informed mental health care to those women identified.

Faith based organizations

Some faith based organizations offer spiritual services in response to traumas identified by ACE surveys. For example, the founder of ACE Overcomers combined the epidemiology of ACEs, the neurobiology of toxic stress and principles of the Christian Bible into a workbook and 12-week course used by clergy in several states.

Another example of this integration of faith based principles and ACEs science is the work of Intermountain Residential’s chaplain, who has created a curriculum called “Bruised Reeds and Smoldering Wicks,” a six week study meant to introduce the science behind ACEs and early childhood trauma within the context of Christian theology and ministry practice. Published in 2017, it has been used by ministry professionals in 30 states, the District of Columbia, and two Canadian provinces.

Faith based organizations also participate in the online group ACES Connection Network.

The Faith and Health Connection Ministry also applies principles of Christian theology to address childhood traumas.

Legislation

Vermont has passed a bill, Act 43 (H.508), an act relating to building resilience for individuals experiencing adverse childhood experiences, which acknowledges the life span effects of ACEs on health outcomes, seeks wide use of ACE screening by health providers, and aims to educate medical and health school students about ACEs.

“Vermont first state to propose bill to screen for ACEs in health care”, ACEs Connection, 18 March 2014

Previously Washington State passed legislation to set up a public-private partnership to further community development of trauma informed and resilience building practices that had begun in that state; but it was not adequately funded.

On August 18, 2014, California lawmakers unanimously passed ACR No. 155, which encourages policies reducing children’s exposure to adverse experiences.

Recent Massachusetts legislation supports a trauma informed school movement as part of The Reduction of Gun Violence bill (No. 4376). This bill aims to create “safe and supportive schools” through services and initiatives focused on physical, social, and emotional safety.

Childhood Adversity Can Change Your Brain. How People Recover From Post Childhood Adversity Syndrome – Donna Jackson Nakazawa * Future Directions in Childhood Adversity and Youth Psychopathology – Katie A. McLaughlin.

Childhood Adversity: exposure during childhood or adolescence to environmental circumstances that are likely to require significant psychological, social, or neurobiological adaptation by an average child and that represent a deviation from the expectable environment.

Early emotional trauma changes who we are, but we can do something about it.

The brain and body are never static; they are always in the process of becoming and changing.

Findings from epidemiological studies indicate clearly that exposure to childhood adversity powerfully shapes risk for psychopathology.

This research tells us that what doesn’t kill you doesn’t necessarily make you stronger; far more often, the opposite is true.

Donna Jackson Nakazawa

If you’ve ever wondered why you’ve been struggling a little too hard for a little too long with chronic emotional and physical health conditions that just won’t abate, feeling as if you’ve been swimming against some invisible current that never ceases, a new field of scientific research may offer hope, answers, and healing insights.

In 1995, physicians Vincent Felitti and Robert Anda launched a large scale epidemiological study that probed the child and adolescent histories of 17,000 subjects, comparing their childhood experiences to their later adult health records. The results were shocking: Nearly two thirds of individuals had encountered one or more Adverse Childhood Experiences (ACEs), a term Felitti and Anda coined to encompass the chronic, unpredictable, and stress inducing events that some children face. These included growing up with a depressed or alcoholic parent; losing a parent to divorce or other causes; or enduring chronic humiliation, emotional neglect, or sexual or physical abuse. These forms of emotional trauma went beyond the typical, everyday challenges of growing up.

The number of Adverse Childhood Experiences an individual had had predicted the amount of medical care she’d require as an adult with surprising accuracy:

– Individuals who had faced 4 or more categories of ACEs were twice as likely to be diagnosed with cancer as individuals who hadn’t experienced childhood adversity.

– For each ACE Score a woman had, her risk of being hospitalized with an autoimmune disease rose by 20 percent.

– Someone with an ACE Score of 4 was 460 percent more likely to suffer from depression than someone with an ACE Score of 0.

– An ACE Score greater than or equal to 6 shortened an individual’s lifespan by almost 20 years.

The ACE Study tells us that experiencing chronic, unpredictable toxic stress in childhood predisposes us to a constellation of chronic conditions in adulthood. But why? Today, in labs across the country, neuroscientists are peering into the once inscrutable brain-body connection, and breaking down, on a biochemical level, exactly how the stress we face when we’re young catches up with us when we’re adults, altering our bodies, our cells, and even our DNA. What they’ve found may surprise you.

Some of these scientific findings can be a little overwhelming to contemplate. They compel us to take a new look at how emotional and physical pain are intertwined.

1. Epigenetic Shifts

When we’re thrust over and over again into stress inducing situations during childhood or adolescence, our physiological stress response shifts into overdrive, and we lose the ability to respond appropriately and effectively to future stressors 10, 20, even 30 years later. This happens due to a process known as gene methylation, in which small chemical markers, or methyl groups, adhere to the genes involved in regulating the stress response, and prevent these genes from doing their jobs.

As the function of these genes is altered, the stress response becomes re-set on “high” for life, promoting inflammation and disease.

This can make us more likely to overreact to the everyday stressors we meet in our adult life: an unexpected bill, a disagreement with a spouse, or a car that swerves in front of us on the highway, creating more inflammation. This, in turn, predisposes us to a host of chronic conditions, including autoimmune disease, heart disease, cancer, and depression.

Indeed, Yale researchers recently found that children who’d faced chronic, toxic stress showed changes “across the entire genome,” in genes that not only oversee the stress response, but also in genes implicated in a wide array of adult diseases. This new research on early emotional trauma, epigenetic changes, and adult physical disease breaks down longstanding delineations between what the medical community has long seen as “physical” disease versus what is “mental” or “emotional.”

2. Size and Shape of the Brain

Scientists have found that when the developing brain is chronically stressed, it releases a hormone that actually shrinks the size of the hippocampus, an area of the brain responsible for processing emotion and memory and managing stress. Recent magnetic resonance imaging (MRI) studies suggest that the higher an individual’s ACE Score, the less gray matter he or she has in other key areas of the brain, including the prefrontal cortex, an area related to decision making and self regulatory skills, and the amygdala, or fear-processing center. Kids whose brains have been changed by their Adverse Childhood Experiences are more likely to become adults who find themselves over-reacting to even minor stressors.

3. Neural Pruning

Children have an overabundance of neurons and synaptic connections; their brains are hard at work, trying to make sense of the world around them. Until recently, scientists believed that the pruning of excess neurons and connections was achieved solely in a “use-it-or-lose-it” manner, but a surprising new player has appeared on the scene: microglia, non-neuronal brain cells that make up one-tenth of all the cells in the brain and are actually part of the immune system, participate in the pruning process. These cells prune synapses like a gardener prunes a hedge. They also engulf and digest entire cells and cellular debris, thereby playing an essential housekeeping role.

But when a child faces unpredictable, chronic stress of Adverse Childhood Experiences, microglial cells “can get really worked up and crank out neurochemicals that lead to neuroinflammation,” says Margaret McCarthy, PhD, whose research team at the University of Maryland Medical Center studies the developing brain. “This below-the-radar state of chronic neuroinflammation can lead to changes that reset the tone of the brain for life.”

That means that kids who come into adolescence with a history of adversity and lack the presence of a consistent, loving adult to help them through it may become more likely to develop mood disorders or have poor executive functioning and decision-making skills.

4. Telomeres

Early trauma can make children seem “older,” emotionally speaking, than their peers. Now, scientists at Duke University; the University of California, San Francisco; and Brown University have discovered that Adverse Childhood Experiences may prematurely age children on a cellular level as well. Adults who’d faced early trauma show greater erosion in what are known as telomeres, the protective caps that sit on the ends of DNA strands, like the caps on shoelaces, to keep the genome healthy and intact. As our telomeres erode, we’re more likely to develop disease, and our cells age faster.

5. Default Mode Network

Inside each of our brains, a network of neurocircuitry, known as the “default mode network,” quietly hums along, like a car idling in a driveway. It unites areas of the brain associated with memory and thought integration, and it’s always on standby, ready to help us to figure out what we need to do next. “The dense connectivity in these areas of the brain help us to determine what’s relevant or not relevant, so that we can be ready for whatever our environment is going to ask of us,” explains Ruth Lanius, neuroscientist, professor of psychiatry, and director of the Post-Traumatic Stress Disorder (PTSD) Research Unit at the University of Ontario.

But when children face early adversity and are routinely thrust into a state of fight-or-flight, the default mode network starts to go offline; it’s no longer helping them to figure out what’s relevant, or what they need to do next.

According to Lanius, kids who’ve faced early trauma have less connectivity in the default mode network, even decades after the trauma occurred. Their brains don’t seem to enter that healthy idling position, and so they may have trouble reacting appropriately to the world around them.

6. Brain-Body Pathway

Until recently, it’s been scientifically accepted that the brain is “immune-privileged,” or cut off from the body’s immune system. But that turns out not to be the case, according to a groundbreaking study conducted by researchers at the University of Virginia School of Medicine. Researchers found that an elusive pathway travels between the brain and the immune system via lymphatic vessels. The lymphatic system, which is part of the circulatory system, carries lymph, a liquid that helps to eliminate toxins, and moves immune cells from one part of the body to another. Now we know that the immune system pathway includes the brain.

The results of this study have profound implications for ACE research. For a child who’s experienced adversity, the relationship between mental and physical suffering is strong: the inflammatory chemicals that flood a child’s brain when she’s chronically stressed aren’t confined to the brain alone; they’re shuttled from head to toe.

7. Brain Connectivity

Ryan Herringa, neuropsychiatrist and assistant professor of child and adolescent psychiatry at the University of Wisconsin, found that children and teens who’d experienced chronic childhood adversity showed weaker neural connections between the prefrontal cortex and the hippocampus. Girls also displayed weaker connections between the prefrontal cortex and the amygdala. The prefrontal cortex-amygdala relationship plays an essential role in determining how emotionally reactive we’re likely to be to the things that happen to us in our day-to-day life, and how likely we are to perceive these events as stressful or dangerous.

According to Herringa:

“If you are a girl who has had Adverse Childhood Experiences and these brain connections are weaker, you might expect that in just about any stressful situation you encounter as life goes on, you may experience a greater level of fear and anxiety.”

Girls with these weakened neural connections, Herringa found, stood at a higher risk for developing anxiety and depression by the time they reached late adolescence. This may, in part, explain why females are nearly twice as likely as males to suffer from later mood disorders.

This science can be overwhelming, especially to those of us who are parents. So, what can you do if you or a child you love has been affected by early adversity?

The good news is that, just as our scientific understanding of how adversity affects the developing brain is growing, so is our scientific insight into how we can offer the children we love resilient parenting, and how we can all take small steps to heal body and brain. Just as physical wounds and bruises heal, just as we can regain our muscle tone, we can recover function in under-connected areas of the brain. The brain and body are never static; they are always in the process of becoming and changing.

Donna Jackson Nakazawa

8 Ways People Recover From Post Childhood Adversity Syndrome

New research leads to new approaches with wide benefits.

In this infographic, I show the link between Adverse Childhood Experiences, later physical adult disease, and what we can do to heal.

Cutting edge research tells us that experiencing childhood emotional trauma can play a large role in whether we develop physical disease in adulthood. In Part 1 of this series we looked at the growing scientific link between childhood adversity and adult physical disease. This research tells us that what doesn’t kill you doesn’t necessarily make you stronger; far more often, the opposite is true.

Adverse Childhood Experiences (ACEs), which include emotional or physical neglect, harm developing brains, predisposing them to autoimmune disease, heart disease, cancer, depression, and a number of other chronic conditions decades after the trauma took place.

Recognizing that chronic childhood stress can play a role, along with genetics and other factors, in developing adult illnesses and relationship challenges, can be enormously freeing. If you have been wondering why you’ve been struggling a little too hard for a little too long with your emotional and physical wellbeing, feeling as if you’ve been swimming against some invisible current that never ceases, this “aha” can come as a welcome relief. Finally, you can begin to see the current and understand how it’s been working steadily against you all of your life.

Once we understand how the past can spill into the present, and how a tough childhood can become a tumultuous, challenging adulthood, we have a new possibility of healing. As one interviewee in my new book, Childhood Disrupted: How Your Biography Becomes Your Biology, and How You Can Heal, said, when she learned about Adverse Childhood Experiences for the first time, “Now I understand why I’ve felt all my life as if I’ve been trying to dance without hearing any music.” Suddenly, she felt the possibility that by taking steps to heal from the emotional wounds of the past she might find a new layer of healing in the present.

There is truth to the old saying that knowledge is power. Once you understand that your body and brain have been harmed by the biological impact of early emotional trauma, you can at last take the necessary, science based steps to remove the fingerprints that early adversity left on your neurobiology. You can begin a journey to healing, to reduce your proclivity to inflammation, depression, addiction, physical pain, and disease.

Science tells us that biology does not have to be destiny. ACEs can last a lifetime but they don’t have to. We can reboot our brains. Even if we have been set on high reactive mode for decades or a lifetime, we can still dial it down. We can respond to life’s inevitable stressors more appropriately and shift away from an overactive inflammatory response. We can become neurobiologically resilient. We can turn bad epigenetics into good epigenetics and rescue ourselves.

Today, researchers recognize a range of promising approaches to help create new neurons (known as neurogenesis), make new synaptic connections between those neurons (known as synaptogenesis), promote new patterns of thoughts and reactions, bring underconnected areas of the brain back online, and reset our stress response so that we decrease the inflammation that makes us ill.

We have the capacity, within ourselves, to create better health. We might call this brave undertaking “the neurobiology of awakening.”

There can be no better time than now to begin your own awakening, to proactively help yourself and those you love, embrace resilience, and move forward toward growth, even transformation.

Here are 8 steps to try:

1. Take the ACE Questionnaire

The single most important step you can take toward healing and transformation is to fill out the ACE Questionnaire for yourself and share your results with your health care practitioner. For many people, taking the 10-question survey “helps to normalize the conversation about Adverse Childhood Experiences and their impact on our lives,” says Vincent Felitti, co-founder of the ACE Study. “When we make it okay to talk about what happened, it removes the power that secrecy so often has.”
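For readers who want to see the arithmetic spelled out, the ACE score is simply the number of “yes” answers across the questionnaire’s 10 items. The short Python sketch below is purely illustrative; the function name and example answers are placeholders for demonstration, not an official scoring tool:

def ace_score(answers):
    # answers: a list of 10 booleans, one per ACE question (True = "yes")
    if len(answers) != 10:
        raise ValueError("The ACE Questionnaire has exactly 10 items.")
    # The score is just the count of "yes" answers, from 0 to 10.
    return sum(1 for answer in answers if answer)

# Example: answering "yes" to three items yields an ACE score of 3.
print(ace_score([True, False, False, True, False, False, True, False, False, False]))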

You’re not asking your healthcare practitioner to act as your therapist, or to change your prescriptions; you’re simply acknowledging that there might be a link between your past and your present. Ideally, given the recent discoveries in the field of ACE research, your doctor will also acknowledge that this link is plausible, and add some of the following modalities to your healing protocol.

2. Begin Writing to Heal.

Think about writing down your story of childhood adversity, using a technique psychologists call “writing to heal.” James Pennebaker, professor of psychology at the University of Texas, Austin, developed this assignment, which demonstrates the effects of writing as a healing modality. He suggests: “Over the next four days, write down your deepest emotions and thoughts about the emotional upheaval that has been influencing your life the most. In your writing, really let go and explore the event and how it has affected you. You might tie this experience to your childhood, your relationship with your parents, people you have loved or love now…Write continuously for twenty minutes a day.”

When Pennebaker had students complete this assignment, their grades went up. When adults wrote to heal, they made fewer doctors’ visits and demonstrated changes in their immune function. The exercise of writing about your secrets, even if you destroy what you’ve written afterward, has been shown to have positive health effects.

3. Practice Mindfulness Meditation.

A growing body of research indicates that individuals who’ve practiced mindfulness meditation and Mindfulness-Based Stress Reduction (MBSR) show an increase in gray matter in the same parts of the brain that are damaged by Adverse Childhood Experiences, as well as shifts in the genes that regulate their physiological stress response.

According to Trish Magyari, LCPC, a mindfulness-based psychotherapist and researcher who specializes in trauma and illness, adults with a history of abuse who took part in a “trauma-sensitive” MBSR program had less anxiety and depression and demonstrated fewer PTSD symptoms, even two years after taking the course.

Many meditation centers offer MBSR classes and retreats, but you can practice anytime in your own home. Choose a time and place to focus on your breath as it enters and leaves your nostrils; the rise and fall of your chest; the sensations in your hands or through the whole body; or sounds within or around you. If you get distracted, just come back to your anchor.

There are many medications you can take that dampen the sympathetic nervous system (which ramps up your stress response when you come into contact with a stressor), but there aren’t any medications that boost the parasympathetic nervous system (which helps to calm your body down after the stressor has passed).

Your breath is the best natural calming treatment, and it has no side effects.

4. Yoga

When children face ACEs, they often carry the physical tension of a chronic fight, flight, or freeze state in their bodies for decades. PET scans show that yoga decreases blood flow to the amygdala, the brain’s alarm center, and increases blood flow to the frontal lobe and prefrontal cortex, which help us to react to stressors with a greater sense of equanimity.

Yoga has also been found to increase levels of GABA, or gamma-aminobutyric acid, a chemical that improves brain function, promotes calm, and helps to protect us against depression and anxiety.

5. Therapy

Sometimes, the long lasting effects of childhood trauma are just too great to tackle on our own. In these cases, says Jack Kornfield, psychologist and meditation teacher, “meditation is not always enough.” We need to bring unresolved issues into a therapeutic relationship, and get backup in unpacking the past.

When we partner with a skilled therapist to address the adversity we may have faced decades ago, those negative memories become paired with the positive experience of being seen by someone who accepts us as we are, and a new window to healing opens.

Part of the power of therapy lies in the presence of a safe, accepting person. A therapist’s unconditional acceptance helps us to modify the circuits in our brain that tell us that we can’t trust anyone, and grow new, healthier neural connections.

It can also help us to heal the underlying, cellular damage of traumatic stress, down to our DNA. In one study, patients who underwent therapy showed changes in the integrity of their genome, even a year after their regular sessions ended.

6. EEG Neurofeedback

Electroencephalographic (EEG) Neurofeedback is a clinical approach to healing childhood trauma in which patients learn to influence their thoughts and feelings by watching their brain’s electrical activity in real-time, on a laptop screen. Someone hooked up to the computer via electrodes on his scalp might see an image of a field; when his brain is under-activated in a key area, the field, which changes in response to neural activity, may appear to be muddy and gray, the flowers wilted; but when that area of the brain reactivates, it triggers the flowers to burst into color and birds to sing. With practice, the patient learns to initiate certain thought patterns that lead to neural activity associated with pleasant images and sounds.

You might think of a licensed EEG Neurofeedback therapist as a musical conductor, who’s trying to get different parts of the orchestra to play a little more softly in some cases, and a little louder in others, in order to achieve harmony. After just one EEG Neurofeedback session, patients showed greater neural connectivity and improved emotional resilience, making it a compelling option for those who’ve suffered the long lasting effects of chronic, unpredictable stress in childhood.

7. EMDR Therapy

Eye Movement Desensitization and Reprocessing (EMDR) is a potent form of psychotherapy that helps individuals to remember difficult experiences safely and relate those memories in ways that no longer cause pain in the present.

Here’s how it works:

EMDR-certified therapists help patients to trigger painful emotions. As these emotions lead the patients to recall specific difficult experiences, they are asked to shift their gaze back and forth rapidly, often by following a pattern of lights or a wand that moves from right to left, right to left, in a movement that simulates the healing action of REM sleep.

The repetitive directing of attention in EMDR induces a neurobiological state that helps the brain to re-integrate neural connections that have been dysregulated by chronic, unpredictable stress and past experiences. This re-integration can, in turn, lead to a reduction in the episodic, traumatic memories we store in the hippocampus, and downshift the amygdala’s activity. Other studies have shown that EMDR increases the volume of the hippocampus.

EMDR therapy has been endorsed by the World Health Organization as one of only two forms of psychotherapy for children and adults in natural disasters and war settings.

8. Rally Community Healing

Often, ACEs stem from bad relationships: neglectful relatives, schoolyard bullies, abusive partners. But the right kinds of relationships can help to make us whole again. When we find people who support us, when we feel “tended and befriended,” our bodies and brains have a better shot at healing. Research has found that having strong social ties improves outcomes for women with breast cancer, multiple sclerosis, and other diseases. In part, that’s because positive interactions with others boost our production of oxytocin, a feel-good hormone that dials down the inflammatory stress response.

If you’re at a loss for ways to connect, try a mindfulness meditation community or an MBSR class, or pass along the ACE Questionnaire or even my newest book, Childhood Disrupted: How Your Biography Becomes Your Biology, and How You Can Heal, to family and friends to spark important, meaningful conversations.

You’re Not Alone

Whichever modalities you and your physician choose to implement, it’s important to keep in mind that you’re not alone. When you begin to understand that your feelings of loss, shame, guilt, anxiety, or grief are shared by so many others, you can lend support and swap ideas for healing.

When you embrace the process of healing despite your Adverse Childhood Experiences, you don’t just become who you might have been if you hadn’t encountered childhood suffering in the first place. You gain something better: the hard-earned gift of life wisdom, which you bring forward into every arena of your life. The recognition that you have lived through hard times drives you to develop deeper empathy, seek more intimacy, value life’s sweeter moments, and treasure your connectedness to others and to the world at large. This is the hard-won benefit of having known suffering.

Best of all, you can find ways to start right where you are, no matter where you find yourself.

Future Directions in Childhood Adversity and Youth Psychopathology

Katie A. McLaughlin, Department of Psychology, University of Washington

Abstract

Despite long standing interest in the influence of adverse early experiences on mental health, systematic scientific inquiry into childhood adversity and developmental outcomes has emerged only recently. Existing research has amply demonstrated that exposure to childhood adversity is associated with elevated risk for multiple forms of youth psychopathology.

In contrast, knowledge of developmental mechanisms linking childhood adversity to the onset of psychopathology, and whether those mechanisms are general or specific to particular kinds of adversity, remains cursory.

Greater understanding of these pathways and identification of protective factors that buffer children from developmental disruptions following exposure to adversity is essential to guide the development of interventions to prevent the onset of psychopathology following adverse childhood experiences.

This article provides recommendations for future research in this area. In particular, use of a consistent definition of childhood adversity, integration of studies of typical development with those focused on childhood adversity, and identification of distinct dimensions of environmental experience that differentially influence development are required to uncover mechanisms that explain how childhood adversity is associated with numerous psychopathology outcomes (i.e., multifinality) and identify moderators that shape divergent trajectories following adverse childhood experiences.

A transdiagnostic model that highlights disruptions in emotional processing and poor executive functioning as key mechanisms linking childhood adversity with multiple forms of psychopathology is presented as a starting point in this endeavour. Distinguishing between general and specific mechanisms linking childhood adversity with psychopathology is needed to generate empirically informed interventions to prevent the long term consequences of adverse early environments on children’s development.

The lasting influence of early experience on mental health across the lifespan has been emphasized in theories of the etiology of psychopathology since the earliest formulations of mental illness. In particular, the roots of mental disorder have often been argued to be a consequence of adverse environmental experiences occurring in childhood. Despite this long standing interest, systematic scientific inquiry into the effects of childhood adversity on health and development has emerged only recently.

Prior work on childhood adversity focused largely on individual types of adverse experiences, such as death of a parent, divorce, sexual abuse, or poverty, and research on these topics evolved as relatively independent lines of inquiry. The transition to considering these types of adversities as indicators of the same underlying construct was prompted, in part, by the findings of a seminal study examining childhood adversity as a determinant of adult physical and mental health and advances in theoretical conceptualizations of stress. Specifically, findings from the Adverse Childhood Experiences (ACE) Study documented high levels of co-occurrence of multiple forms of childhood adversity and strong associations of exposure to adverse childhood experiences with a wide range of adult health outcomes (Dong et al., 2004; Edwards, Holden, Felitti, & Anda, 2003; Felitti et al., 1998).

Around the same time, the concept of allostatic load was introduced as a comprehensive neurobiological model of the effects of stress (McEwen, 1998, 2000). Allostatic load provided a framework for explaining the neurobiological mechanisms linking a variety of adverse social experiences to health. Together, these discoveries sparked renewed interest in the childhood determinants of physical and mental health. Since that time there has been a veritable explosion of research into the impact of childhood adversity on developmental outcomes, including psychopathology.

CHILDHOOD ADVERSITY AND PSYCHOPATHOLOGY

Over the past two decades, hundreds of studies have examined the associations between exposure to childhood adversity and risk for psychopathology (Evans, Li, & Whipple, 2013). Here, I briefly review this evidence, focusing specifically on findings from epidemiological studies designed to allow inferences to be drawn at the population level. These studies have documented five general patterns with regard to childhood adversity and the distribution of mental disorders in the population.

First, despite differences across studies in the prevalence of specific types of adversity, all population based studies indicate that exposure to childhood adversity is common. The prevalence of exposure to childhood adversity is estimated at about 50% in the U.S. population across numerous epidemiological surveys (Green et al., 2010; Kessler, Davis, & Kendler, 1997; McLaughlin, Conron, Koenen, & Gilman, 2010; McLaughlin, Green et al., 2012). Remarkably similar prevalence estimates have been documented in other high income countries, as well as in low and middle income countries worldwide (Kessler et al., 2010).

Second, individuals who have experienced childhood adversity are at elevated risk for developing a lifetime mental disorder compared to individuals without such exposure, and the odds of developing a lifetime mental disorder increase as exposure to adversity increases (Edwards et al., 2003; Green et al., 2010; Kessler et al., 1997; Kessler et al., 2010; McLaughlin, Conron, et al., 2010; McLaughlin, Green, et al., 2012).

Third, exposure to childhood adversity confers vulnerability to psychopathology that persists across the life course. Childhood adversity exposure is associated not only with risk of mental disorder onset in childhood and adolescence (McLaughlin, Green, et al., 2012) but also with elevated odds of developing a first onset mental disorder in adulthood, which persists after adjustment for mental disorders beginning at earlier stages of development (Green et al., 2010; Kessler et al., 1997; Kessler et al., 2010).

Fourth, the associations of childhood adversity with different types of commonly occurring mental disorders are largely nonspecific. Individuals who have experienced childhood adversity experience greater odds of developing mood, anxiety, substance use, and disruptive behavior disorders, with little meaningful variation in the strength of associations across disorder classes (Green et al., 2010; Kessler et al., 1997; Kessler et al., 2010; McLaughlin, Green, et al., 2012).

Recent epidemiological findings suggest that the associations of child maltreatment, a commonly measured form of adversity, with lifetime mental disorders operate entirely through a latent liability to experience internalizing and externalizing psychopathology with no direct effects on specific mental disorders that are not explained by this latent vulnerability (Caspi et al., 2014; Keyes et al., 2012).

Finally, exposure to childhood adversity explains a substantial proportion of mental disorder onsets in the population, both in the United States and cross-nationally (Afifi et al., 2008; Green et al., 2010; Kessler et al., 2010; McLaughlin, Green, et al., 2012). This reflects both the high prevalence of exposure to childhood adversity and the strong association of childhood adversity with the onset of psychopathology.

Together, findings from epidemiological studies indicate clearly that exposure to childhood adversity powerfully shapes risk for psychopathology in the population.

As such, it is time for the field to move beyond these types of basic descriptive studies to research designs aimed at identifying the underlying developmental mechanisms linking childhood adversity to psychopathology. Although ample research has been conducted examining mechanisms linking individual types of adversity to psychopathology (e.g., sexual abuse; Trickett, Noll, & Putnam, 2011), far less is known about which of these mechanisms are common across different types of adversity versus specific to particular types of experiences. Greater understanding of these pathways, as well as the identification of protective factors that buffer children from disruptions in emotional, cognitive, social, and neurobiological development following exposure to adversity, is essential to guide the development of interventions to prevent the onset of psychopathology in children exposed to adversity, a critical next step for the field.

However, persistent issues regarding the definition and measurement of childhood adversity must be addressed before meaningful progress on mechanisms, protective factors, and prevention of psychopathology following childhood adversity will be possible.

FUTURE DIRECTIONS IN CHILDHOOD ADVERSITY AND YOUTH PSYCHOPATHOLOGY

This article has two primary goals. The first is to provide recommendations for future research on childhood adversity and youth psychopathology. These recommendations relate to the definition and measurement of childhood adversity, the integration of studies of typical development with those on childhood adversity, and the importance of distinguishing between general and specific mechanisms linking childhood adversity to psychopathology.

The second goal is to provide a transdiagnostic model of mechanisms linking childhood adversity and youth psychopathology that incorporates each of these recommendations.

Defining Childhood Adversity

Childhood adversity is a construct in search of a definition. Despite the burgeoning interest and research attention devoted to childhood adversity, there is a surprising lack of consistency with regard to the definition and measurement of the construct. Key issues remain unaddressed in the literature regarding the definition of childhood adversity and the boundary conditions of the construct. To what does the construct of childhood adversity refer? What types of experiences qualify as childhood adversity and what types do not?

Where do we draw the line between normative experiences of stress and those that qualify as an adverse childhood experience? How does the construct of childhood adversity differ from other constructs that have been linked to psychopathology risk, including stress, toxic stress, and trauma? It will be critical to gain clarity on these definitional issues before more complex questions regarding mechanisms and protective factors can be systematically examined.

Even in the seminal ACE Study that spurred much of the recent research into childhood adversity, a concrete definition of adverse childhood experience is not provided. The original article from the study argues for the importance of understanding the lasting health effects of child abuse and “household dysfunction,” the latter of which is never defined specifically (Felitti et al., 1998). The CDC website for the ACE Study indicates that the ACE score, a count of the total number of adversities experienced, is designed to assess “the total amount of stress experienced during childhood.”

Why has a concrete definition of childhood adversity remained elusive? As I see it, there is a relatively simple explanation for this notable gap in the literature. Childhood adversity is difficult to define but fairly obvious to most observers, making the construct an example of the classic standard of “you know it when you see it.” Although this has allowed a significant scientific knowledge base on childhood adversity to emerge within a relatively short period, the lack of an agreed upon definition of the construct represents a significant impediment to future progress in the field.

How can we begin to build scientific consensus on the definition of childhood adversity? Critically, we must come to an agreement about what childhood adversity is and what it is not. Adversity is defined as “a state or instance of serious or continued difficulty or misfortune; a difficult situation or condition; misfortune or tragedy” (“Adversity,” 2015).

This provides a reasonable starting point. Adversity is an environmental event that must be serious (i.e., severe) or a series of events that continues over time (i.e., chronic).

Building on Scott Monroe’s (2008) definition of life stress and models of experience-expectant brain development (Baumrind, 1993; Fox, Levitt, & Nelson, 2010), I propose that childhood adversity should be defined as experiences that are likely to require significant adaptation by an average child and that represent a deviation from the expectable environment. The expectable environment refers to a wide range of species-typical environmental inputs that the human brain requires to develop normally. These include sensory inputs (e.g., variation in patterned light information that is required for normal development of the visual system), exposure to language, and the presence of a sensitive and responsive caregiver (Fox et al., 2010).

As I have argued elsewhere (McLaughlin, Sheridan, & Lambert, 2014; Sheridan & McLaughlin, 2014), deviations from the expectable environment often take two primary forms: an absence of expected inputs (e.g., limited exposure to language or the absence of a primary caregiver), or the presence of unexpected inputs that represent significant threats to the physical integrity or well-being of the child (e.g., exposure to violence).

A similar approach to classifying key forms of child adversity has been articulated by others as well (Farah et al., 2008; Humphreys & Zeanah, 2015). These experiences can either be chronic (e.g., prolonged neglect) or involve single events that are severe enough to represent a deviation from the expectable environment (e.g., sexual abuse).

Together, this provides a working definition of childhood adversity: exposure during childhood or adolescence to environmental circumstances that are likely to require significant psychological, social, or neurobiological adaptation by an average child and that represent a deviation from the expectable environment.

This definition provides some clarity about what childhood adversity is not. The clearest boundary condition involves the developmental timing of exposure; experiences classified as childhood adversity must occur prior to adulthood, either during childhood or adolescence. Most research on childhood adversity has taken a broad definition of childhood, including events occurring during either childhood or adolescence. Although the demarcation between adolescence and adulthood is itself a point of debate, relative consensus exists regarding the onset of adult roles as the end of adolescence (Steinberg, 2014).

Second, childhood adversity refers to an event or ongoing events in the environment. Childhood adversity thus refers only to specific environmental circumstances or events and not to an individual child’s response to those circumstances.

Third, childhood adversity refers to environmental conditions that are likely to require significant psychological, social, or neurobiological adaptation by an average child; therefore, events that represent transient or minor hassles should not qualify.

What types of events should be considered severe enough to warrant classification as adversity? Although there is no absolute rule or formula that can be used to distinguish circumstances or events requiring significant adaptation from those that are less severe or impactful, childhood adversity should include conditions or events that are likely to have a meaningful and lasting impact on developmental processes for most children who experience them. In other words, experiences that could alter fundamental aspects of development in emotional, cognitive, social, or neurobiological domains are the types of experiences that should qualify as adversity.

Studies of childhood adversity should clearly define the study specific decision rules used to distinguish between adversity and more normative stressors.

Finally, environmental circumstances or stressors that do not represent deviations from the expectable environment should not be classified as childhood adversity. In other words, childhood adversity should not include any and all stressors that occur during childhood or adolescence. Two examples of childhood stressors that would likely not qualify as childhood adversity based on this definition, because they do not meet the condition of representing a deviation from the expectable environment, are moving to a new school and death of an elderly grandparent. Each of these childhood stressors would require adaptation by an average child and could influence mental health and development. However, neither represents a deviation from the expectable childhood environment and therefore does not meet the proposed definition of childhood adversity.

A key question for the field is whether the definition of childhood adversity should be narrow or broad. This question will determine whether other common forms of adversity or stress should be considered as indicators of childhood adversity. For example, many population based studies have included parental psychopathology and divorce as forms of adversity (Felitti et al., 1998; Green et al., 2010). Given the high prevalence of psychopathology and divorce in the population, consideration of any form of parental psychopathology or any type of divorce as a form of adversity results in a fairly broad definition of adversity; certainly, not all cases of parental psychopathology or all divorces result in significant adversity for children. A more useful approach might be to consider only those cases of parental psychopathology or divorce that result in parenting behavior that deviates from the expectable environment (i.e., consistent unavailability, unresponsiveness, or insensitive care) or that generate other types of significant adversity for children (e.g., economic adversity, emotional abuse, etc.) as meeting the threshold for childhood adversity. Providing these types of boundary conditions is important to prevent the construct of childhood adversity from meaning everything and nothing at the same time.

Finally, how does childhood adversity differ from related constructs, including stress, toxic stress, and trauma that can also occur during childhood? What is unique about the construct of childhood adversity that is not captured in definitions of these similar constructs?

First, how is childhood adversity different from stress? The prevailing conceptualization of life stress defines the construct as the adaptation of an organism to specific circumstances that change over time (Monroe, 2008). This definition includes three primary components that interact with one another: environment (the circumstance or event that requires adaptation by the organism), organism (the response to the environmental stimulus), and time (the interactions between the organism and the environment over time; Monroe, 2008). In contrast, childhood adversity refers only to the first of these three components, the environmental aspect of stress.

Second, how is adversity different from toxic stress, a construct recently developed by Jack Shonkoff and colleagues (Shonkoff & Garner, 2012)? Toxic stress refers to the second component of stress just described, the response of the organism. Specifically, toxic stress refers to exaggerated, frequent, or prolonged activation of physiological stress response systems in response to an accumulation of multiple adversities over time in the absence of protection from a supportive caregiver (Shonkoff & Garner, 2012). The concept of toxic stress is conceptually similar to the construct of allostatic load as defined by McEwen (2000) and focuses on a different aspect of stress than childhood adversity.

Finally, how is childhood adversity distinct from trauma? Trauma is defined as exposure to actual or threatened death, serious injury, or sexual violence, either by directly experiencing or witnessing such events or by learning of such events occurring to a close relative or friend (American Psychiatric Association, 2013). Traumatic events occurring in childhood represent one potential form of childhood adversity, but not all types of childhood adversity are traumatic. Examples of adverse childhood experiences that would not be considered traumatic are neglect; poverty; and the absence of a stable, supportive caregiver.

The first concrete recommendation for future research is that the field must utilize a consistent definition of childhood adversity. A useful definition must have clarity about what childhood adversity is and what it is not, provide guidance about decision rules for applying the definition in specific contexts, and increase consistency in the measurement and application of childhood adversity across studies. The definition proposed here, that childhood adversity involves experiences that are likely to require significant adaptation by an average child and that represent a deviation from the expectable environment, represents a starting point in this endeavor, although consideration of alternative definitions and scholarly debate about the relative merits of different definitions is encouraged.

Integrating Studies of Typical and Atypical Development

A developmental psychopathology perspective emphasizes the reciprocal and integrated nature of our understanding of normal and abnormal development (Cicchetti, 1996; Cicchetti & Lynch, 1993; Lynch & Cicchetti, 1998). Normal developmental patterns must be characterized to identify developmental deviations, and abnormal developmental outcomes shed light on the normal developmental processes that lead to maladaptation when disrupted (Cicchetti, 1993; Sroufe, 1990). Maladaptive outcomes, including psychopathology, are considered to be the product of developmental processes (Sroufe, 1997, 2009). This implies that in order to uncover mechanisms linking childhood adversity to psychopathology, the developmental trajectory of the candidate emotional, cognitive, social, or neurobiological process under typical circumstances must first be characterized before examining how exposure to an adverse environment alters that trajectory. This approach has been utilized less frequently than would be expected in the literature on childhood adversity.

Recent work from Nim Tottenham’s lab on functional connectivity between the amygdala and medial prefrontal cortex (mPFC) highlights the utility of this strategy. In an initial study, Gee, Humphreys, et al. (2013) demonstrated age related changes in amygdala-mPFC functional connectivity in a typically developing sample of children during a task involving passive viewing of fearful and neutral faces. Specifically, they observed a developmental shift from a pattern of positive amygdala-mPFC functional connectivity during early and middle childhood to a pattern of negative connectivity (i.e., higher mPFC activity, lower amygdala activity) beginning in the prepubertal period and continuing throughout adolescence (Gee, Humphreys, et al., 2013). Next, they examined how exposure to institutional rearing in infancy influenced these age related changes, documenting a more mature pattern of negative functional connectivity among young children with a history of institutionalization (Gee, Gabard-Durnam, et al., 2013).

Utilizing this type of approach is important not only to advance knowledge of developmental mechanisms underlying childhood adversity-psychopathology associations but also to leverage research on adverse environmental experiences to inform our understanding of typical development. Specifically, as frequently argued by Cicchetti (Cicchetti & Toth, 2009), research on atypical or aberrant developmental processes can provide a window into typical development not available through other means. This is particularly relevant in studies of some forms of childhood adversity that involve an absence of expected inputs from the environment, such as institutional rearing and child neglect (McLaughlin, Sheridan, & Lambert, 2014; Sheridan & McLaughlin, 2014). Examining the developmental consequences associated with deprivation in a particular type of input from the environment (e.g., the presence of an attachment figure, exposure to complex language) can provide insights into the types of environmental inputs that are required for a system or set of competencies to develop normally.

Evidence on the developmental trajectories of children raised in institutional settings provides an illustrative example. Institutions for abandoned and orphaned children vary widely, but a common feature across them is the absence of an attachment figure who provides sensitive and responsive care for each child (Smyke et al., 2007; Tottenham, 2012; Zeanah et al., 2003). Developmental research on children raised in institutional settings has provided ample evidence about the importance of the attachment relationship in early development for shaping numerous aspects of development. Unsurprisingly, most children raised in institutions fail to develop a secure attachment relationship to a caregiver; this is particularly true if children remain in institutional care past the age of 2 years (Smyke, Zeanah, Fox, Nelson, & Guthrie, 2010; Zeanah, Smyke, Koga, Carlson, & The Bucharest Early Intervention Project Core Group, 2005).

Children reared in institutional settings also exhibit social skills deficits, delays in language development, lasting disruptions in executive functioning skills, decrements in IQ, and atypical patterns of emotional processing (Almas et al., 2012; Bos, Fox, Zeanah, & Nelson, 2009; Nelson et al., 2007; Tibu et al., 2016; Tottenham et al., 2011; Windsor et al., 2011). Institutional rearing also has wide-ranging impacts on patterns of brain development, including neural structure and function (Gee et al., 2013; McLaughlin, Fox, Zeanah, & Nelson, 2011; McLaughlin, Sheridan, Winter, et al., 2014; Sheridan, Fox, Zeanah, McLaughlin, & Nelson, 2012; Tottenham et al., 2011).

Although children raised in institutional settings often experience deprivation in environmental inputs of many kinds, it is likely that the absence of a primary attachment figure in early development explains many of the downstream consequences of institutionalization on developmental outcomes. Indeed, recent evidence suggests that disruptions in attachment may be a causal mechanism linking institutional rearing with the onset of anxiety and depression in children. Specifically, in a randomized controlled trial of foster care as an intervention for orphaned children in Romania, improvements in attachment security were a mechanism underlying the preventive effects of the intervention on the onset of anxiety and depression in children (McLaughlin, Zeanah, Fox, & Nelson, 2012). By examining the developmental consequences of the absence of an expected input from the environment, namely, the presence of a primary attachment figure, studies of institutional rearing provide strong evidence for the centrality of the early attachment relationship in shaping numerous aspects of development.

Sensitive Periods

The integration of studies on typical and atypical development may be particularly useful in the identification of sensitive periods. Developmental psychopathology emphasizes the cumulative and hierarchical nature of development (Gottlieb, 1991a, 1991b; Sroufe, 2009; Sroufe, Egeland, & Kreutzer, 1990; Werner & Kaplan, 1963). Learning and acquisition of competencies at one point in development provide the scaffolding upon which subsequent skills and competencies are built, such that capabilities from previous periods are consolidated and reorganized in a dynamic, unfolding process across time. The primary developmental tasks occurring at the time of exposure to a risk factor are thought to be the most likely to be interrupted or disrupted by the experience. Developmental deviations from earlier periods are then carried forward and have consequences for children’s ability to successfully accomplish developmental tasks in a later period (Cicchetti & Toth, 1998; Sroufe, 1997). In other words, early experiences constrain future learning of patterns or associations that represent departures from those that were previously learned (Kuhl, 2004).

This concept points to a critical area for future research on childhood adversity involving the identification of sensitive periods of emotional, cognitive, social, and neurobiological development when inputs from the environment are particularly influential. Sensitive periods have been identified both in sensory development and in the development of complex social-cognitive skills, including language (Hensch, 2005; Kuhl, 2004).

Emerging evidence from cognitive neuroscience also suggests the presence of developmental periods when specific regions of the brain are most sensitive to the effects of stress and adversity (Andersen et al., 2008).

However, identification of sensitive periods has remained elusive in other domains of emotional and social development, potentially reflecting the fact that sensitive periods exist for fewer processes in these domains. Even so, determining how anomalous or atypical environmental inputs influence developmental processes differently based on the timing of exposure provides a unique opportunity to identify sensitive periods in development; in this way, research on adverse environments can inform our understanding of typical development by highlighting the environmental inputs that are necessary to foster adaptive development.

Identifying sensitive periods of emotional and social development requires detailed information on the timing of exposure to atypical or adverse environments, which is challenging to measure. To date, studies of institutional rearing have provided the best opportunity for studying sensitive periods in human emotional and social development, as it is straightforward to determine the precise period during which the child lived in the institutional setting.

Studies of institutional rearing have identified a sensitive period for the development of a secure attachment relationship at around 2 years of age; the majority of children placed into stable family care before that time ultimately develop secure attachments to a caregiver, whereas the majority of children placed after 2 years fail to develop secure attachments (Smyke et al., 2010).

Of interest, a sensitive period occurring around 2 years of age has also been identified for other domains, including reactivity of the autonomic nervous system and hypothalamic-pituitary-adrenal (HPA) axis to the environment and a neural marker of affective style (i.e., frontal electroencephalogram asymmetry; McLaughlin et al., 2011; McLaughlin, Sheridan, et al., 2015), suggesting the importance of the early attachment relationship in shaping downstream aspects of emotional and neurobiological development.

The second concrete recommendation for future research is to integrate studies of typical development with those focused on understanding the impact of childhood adversity. In particular, research that can shed light on sensitive periods in emotional, social, cognitive, and neurobiological development is needed. Identifying the developmental processes that are disrupted by exposure to particular types of adverse environments will be facilitated by first characterizing the typical developmental trajectories of the processes in question. In turn, studies of atypical or adverse environments should be leveraged to inform our understanding of the types of environmental inputs that are required, and when, for particular systems to develop normally.

Given the inherent problems in retrospective assessment of timing of exposure to particular environmental experiences, longitudinal studies with repeated measurements of environmental experience and acquisition of developmental competencies are likely to be most informative. Alternatively, the occurrence of exogenous events like natural disasters, terrorist attacks, and changes in policies or the availability of resources (e.g., the opening of a casino on a Native American reservation; Costello, Compton, Keeler, & Angold, 2003) provides additional opportunities to study sensitive periods of development. Identifying sensitive periods is likely to yield critical insights into the points in development when particular capabilities are most likely to be influenced by environmental experience, an issue of central importance for understanding both typical and atypical development. Such information can be leveraged to inform decisions about the points in time when psychosocial interventions for children exposed to adversity are likely to be maximally efficacious.

Explaining Multifinality

The principle of multifinality is central to developmental psychopathology (Cicchetti, 1993). Multifinality refers to the process by which the same risk and/or protective factors may ultimately lead to different developmental outcomes (Cicchetti & Rogosch, 1996).

It has been repeatedly demonstrated that most forms of childhood adversity are associated with elevated risk for the onset of virtually all commonly occurring mental disorders (Green et al., 2010; McLaughlin, Green, et al., 2012). As noted earlier, recent evidence suggests that child maltreatment is associated with a latent liability for psychopathology that explains entirely the associations of maltreatment with specific mental disorders (Caspi et al., 2014; Keyes et al., 2012). However, the mechanisms that explain how child maltreatment, or other forms of adversity, influence a generalized liability to psychopathology have not been specified. To date, there have been few attempts to articulate a model explaining how childhood adversity leads to the diversity of mental disorders with which it is associated (i.e., multifinality). What are the mechanisms that explain this generalized vulnerability to psychopathology arising from adverse early experiences? Are these mechanisms shared across multiple forms of childhood adversity, or are they specific to particular types of adverse experience?

Identifying general versus specific mechanisms will require changes in the way we conceptualize and measure childhood adversity. Prior research has followed one of two strategies. The first involves studying individual types of childhood adversity, such as parental death, physical abuse, neglect, or poverty (Chase-Lansdale, Cherlin, & Kiernan, 1995; Dubowitz, Papas, Black, & Starr, 2002; Fristad, Jedel, Weller, & Weller, 1993; Mullen, Martin, Anderson, Romans, & Herbison, 1993; Noble, McCandliss, & Farah, 2007; Wolfe, Sas, & Wekerle, 1994). However, most individuals exposed to childhood adversity have experienced multiple adverse experiences (Dong et al., 2004; Finkelhor, Ormrod, & Turner, 2007; Green et al., 2010; McLaughlin, Green, et al., 2012). This presents challenges for studies focusing on a single type of adversity, as it is unclear if any observed associations represent the downstream effects of the focal adversity in question (e.g., poverty) or the consequences of other co-occurring experiences (e.g., exposure to violence) that might have different developmental consequences.

Increasing recognition of the co-occurring nature of adverse childhood experiences has resulted in a shift from focusing on single types of adversity to examining the associations between a number of adverse childhood experiences and developmental outcomes, the core strategy of the ACE approach (Arata, Langhinrichsen-Rohling, Bowers, & O’Brien, 2007; Dube et al., 2003; Edwards et al., 2003; Evans et al., 2013). There has been a proliferation of research utilizing this approach in recent years, and it has proved useful in documenting the importance of childhood adversity as a risk factor for a wide range of negative mental health outcomes. However, this approach implicitly assumes that very different kinds of experiences, ranging from violence exposure to material deprivation (e.g., food insecurity) to parental loss, influence psychopathology through similar mechanisms. Although there is likely to be some overlap in the mechanisms linking different forms of adversity to psychopathology, the count approach oversimplifies the boundaries between distinct types of environmental experience that may have unique developmental consequences.

An alternative approach that is likely to meet with more success involves identifying dimensions of environmental experience that underlie multiple forms of adversity and are likely to influence development in similar ways. In recent work, my colleague Margaret Sheridan and I have proposed two such dimensions that cut across multiple forms of adversity: threat and deprivation (McLaughlin, Sheridan, & Lambert, 2014; Sheridan & McLaughlin, 2014).

Threat involves exposure to events involving harm or threat of harm, consistent with the definition of trauma in the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; American Psychiatric Association, 2013). Threat is a central dimension underlying multiple commonly studied forms of adversity, including physical abuse, sexual abuse, some forms of emotional abuse (i.e., that involve threats of physical violence and coercion), exposure to domestic violence, and other forms of violent victimization in home, school, or community settings.

Deprivation, in contrast, involves the absence of expected cognitive and social inputs from the environment, resulting in reduced opportunities for learning. Deprivation in expected environmental inputs is common to multiple forms of adversity including emotional and physical neglect, institutional rearing, and poverty. Critically, we do not propose that exposure to deprivation and threat occurs independently for children, as these experiences are highly co-occurring, or that these are the only important dimensions of experience involved in childhood adversity.

Instead we propose, first, that these are two important dimensions that can be measured separately and, second, that the mechanisms linking these experiences to the onset of psychopathology are likely to be at least partially distinct (McLaughlin, Sheridan, & Lambert, 2014; Sheridan & McLaughlin, 2014). I describe some of these key mechanisms in the transdiagnostic model presented later. Recently, others have argued for the importance of taking this type of dimensional approach as well (Hamby & Grych, 2013; Humphreys & Zeanah, 2015).

Specific recommendations are for future research to (a) identify key dimensions of environmental experience that might differentially influence developmental outcomes and (b) measure multiple such dimensions in studies of childhood adversity to distinguish between general and specific underlying mechanisms linking different forms of adversity to psychopathology. Fine grained measurement of the dimensions of threat and deprivation has often not been conducted within the same study.

Studies focusing on specific types of exposure (e.g., abuse) without measuring or adjusting for co-occurring exposures (e.g., neglect) are unable to distinguish between common and specific mechanisms linking different dimensions of adverse experiences to psychopathology. The only way to determine whether such specificity exists is to measure and model these dimensions of experience together in future studies.

Characterizing the Interplay of Risk and Protective Factors

Although psychopathology is common among children exposed to a wide range of adverse environments, many children exhibit adaptation and resilience following adversity (Masten, 2001; Masten, Best, & Garmezy, 1990). For example, studies of resilience suggest that children who have a positive relationship with a caring and competent adult; are good at learning, problem solving, and self-regulation; are socially engaging; and have a positive self-image are more likely to exhibit positive adaptation after exposure to adversity than children without these characteristics (Luthar, Cicchetti, & Becker, 2000; Masten, 2001; Masten et al., 1990).

However, in contrast to the consistent pattern of associations between childhood adversity and psychopathology, evidence for protective factors varies widely across studies, and in most cases children exposed to adversity exhibit adaptive functioning in some domains but not others: even within a single domain, children may be functioning well at one point in time but not at others (Luthar et al., 2000). This is not surprising given that the degree to which a particular factor is protective depends heavily upon context, including the specific risk factors with which it is interacting (Cicchetti & Lynch, 1993; Sameroff, Gutman, & Peck, 2003).

For example, authoritative parenting has been shown to be associated with adaptive outcomes for children raised in stable contexts that are largely free of significant adversity (Steinberg, Elmen, & Mounts, 1989; Steinberg, Lamborn, Dornbusch, & Darling, 1992; Steinberg, Mounts, Lamborn, & Dornbusch, 1991); in contrast, authoritarian parenting appears to be protective for children being raised in environments characterized by low resources and/or high degrees of violence and other threats (Flouri, 2007; Gonzales, Cauce, Friedman, & Mason, 1996).

The degree to which variation in specific genetic polymorphisms moderates the impact of childhood adversity on developmental outcomes is also highly variable across studies; although genetic variation clearly contributes to developmental trajectories of adaptation and maladaptation following childhood adversity, this topic has been reviewed extensively elsewhere (Heim & Binder, 2012; McCrory, De Brito, & Viding, 2010; Uher & McGuffin, 2010) and is not discussed further. This complexity has contributed to the widely variable findings regarding protective factors and resilience.

Progress in identifying protective factors that buffer children from maladaptive outcomes following childhood adversity might be achieved by shifting the focus from downstream outcomes to more proximal mechanisms known to underlie the relationship between adverse childhood experiences and psychopathology. Research on resilience has often focused on distal outcomes, such as the absence of psychopathology, the presence of high-quality peer relationships, or good academic performance as markers of adaptive functioning in children with exposure to adversity (Bolger, Patterson, & Kupersmidt, 1999; Collishaw et al., 2007; Fergusson & Lynskey, 1996; Luthar, 1991).

Just as there are numerous mechanisms through which exposure to adverse environments leads to psychopathology and other downstream outcomes, there are likely to be a wide range of mechanisms through which protective factors buffer children from maladaptation following childhood adversity. Indeed, modern conceptualizations of resilience describe it as a developmental process that unfolds over time as an ongoing transaction between a child and the multiple contexts in which he or she is embedded (Luthar et al., 2000).

Rather than examining protective factors that buffer children from developing psychopathology following adverse childhood experiences, an alternative approach is to focus on factors that moderate the association of childhood adversity with the developmental processes that serve as mechanisms linking adversity with psychopathology (e.g., emotion regulation, executive functioning) or that moderate the link between these developmental processes and the onset of psychopathology. Deconstructing the pathways linking childhood adversity to psychopathology allows moderators to be examined separately at different stages of these pathways and may yield greater information about how protective factors ultimately exert their effects on downstream outcomes, including psychopathology.

Accordingly, a fourth recommendation is that future research should focus on identifying protective factors that buffer children from the negative consequences of adversity at two levels: (a) factors that modify the association between childhood adversity and the maladaptive patterns of emotional, cognitive, social, and neurobiological development that serve as intermediate phenotypes linking adversity with psychopathology, and (b) factors that moderate the influence of intermediate phenotypes on the emergence of psychopathology, leading to divergent trajectories of adaptation across children.

To understand resilience, we first need to understand the developmental processes that are disrupted following exposure to adversity and how certain characteristics either prevent or compensate for those developmental disruptions or reduce their impact on risk for psychopathology.

A TRANSDIAGNOSTIC MODEL OF CHILDHOOD ADVERSITY AND PSYCHOPATHOLOGY

The remainder of the article outlines a transdiagnostic model of mechanisms linking childhood adversity with youth psychopathology. Two core developmental mechanisms are proposed that, in part, explain patterns of multifinality: emotional processing and executive functioning.

The model builds on a framework described by Nolen-Hoeksema and Watkins (2011) for identifying transdiagnostic processes. Of importance, the model is not intended to be comprehensive in delineating all mechanisms linking childhood adversity with psychopathology but rather focuses on two candidate mechanisms linking childhood adversity to multiple forms of psychopathology. At the same time, these mechanisms are also specific in that each is most likely to emerge following exposure to specific dimensions of adverse early experience.

The model is specific with regard to the underlying dimensions of adverse experience considered and identifies several key moderators that might explain divergent developmental trajectories among children following exposure to adversity. Future research is needed to expand this framework to incorporate other key dimensions of the adverse environmental experience, developmental mechanisms linking those dimensions of adversity with psychopathology, and moderators of those associations.

Distal Risk Factors

Within the proposed model, core dimensions of environmental experience that underlie multiple forms of adversity are conceptualized as distal risk factors for psychopathology. Specifically, experiences of threat and deprivation constitute the first component of the proposed transdiagnostic model of childhood adversity and psychopathology.

Experiences of threat and deprivation meet each of Nolen-Hoeksema and Watkins’s (2011) criteria for a distal risk factor. They represent environmental conditions largely outside the control of the child that are linked to the onset of psychopathology only through intervening causal mechanisms that represent more proximal risk factors. Although they are probabilistically related to psychopathology, exposure to threat and deprivation does not invariably lead to mental disorders. These experiences influence proximal risk factors primarily through learning mechanisms that ultimately shape patterns of information processing, emotional responses to the environment, and higher-order control processes that influence both cognitive and emotional processing.

Proximal Risk Factors

The developmental processes that are altered following exposure to adverse environmental experiences represent proximal risk factors, or intermediate phenotypes, linking them to the onset of psychopathology. These proximal risk factors represent the second component of the proposed transdiagnostic model. Nolen-Hoeksema and Watkins (2011) argued that proximal risk factors are within-person factors that mediate the relationship between distal risk factors, including aspects of environmental context that are difficult to modify, such as childhood adversity, and the emergence of psychopathology. Proximal risk factors directly influence symptoms, are temporally closer to symptom onset, and are often easier to modify than distal risk factors (Nolen-Hoeksema & Watkins, 2011).

Identifying modifiable within-person factors that link adverse environmental experiences with the onset of symptoms is the key to developing interventions to prevent the onset of psychopathology in children who have experienced adversity.

The model includes two primary domains of proximal risk factors: emotional processing and executive functioning.

Emotional processing refers to information processing of emotional stimuli (e.g., attention, memory), emotional reactivity, and both automatic (e.g., habituation, fear extinction) and effortful (e.g., cognitive reappraisal) forms of emotion regulation. These processes all represent responses to emotional stimuli, and many involve interactions of cognition with emotion.

Executive functions comprise a set of cognitive processes that support the ability to learn new knowledge and skills; hold in mind goals and information; and create and execute complex, future-oriented plans. Executive functioning comprises the ability to hold information in mind and focus on currently relevant information (working memory), inhibit actions and information not currently relevant (inhibition), and switch flexibly between representations or goals (cognitive flexibility; Miyake & Friedman, 2012; Miyake, Friedman, Rettinger, Shah, & Hegarty, 2001).

Together these skills allow the creation and execution of future-oriented plans and the inhibition of behaviors that do not serve these plans, providing the foundation for healthy decision making and self-regulation. Many of the diverse mechanisms linking childhood adversity to psychopathology are subsumed within these two broad domains.

Emotional processing

Stable patterns of emotional processing, emotional responding to the environment, and emotion regulation represent the first core domain of proximal risk factors. Experiences of uncontrollable threat are associated with strong learning of specific contingencies and overgeneralization of that learning to novel contexts, which facilitates the processing of salient emotional cues in the environment (e.g., biased attention to threat). Given the importance of quickly identifying potential threats for children growing up in environments characterized by legitimate danger, these learning processes should produce information-processing biases that promote rapid identification of potential threats. Indeed, evidence suggests that children with abuse histories, an environment characterized by high levels of threat, exhibit attention biases toward facial displays of anger, identify anger with little perceptual information, have difficulty disengaging from angry faces, and display anticipatory monitoring of the environment following interpersonal displays of anger (Pollak, Cicchetti, Hornung, & Reed, 2000; Pollak & Sinha, 2002; Pollak & Tolley-Schell, 2003; Pollak, Vardi, Putzer Bechner, & Curtin, 2005; Shackman, Shackman, & Pollak, 2007).

Given the relevance of anger as a signal of potential threat, these findings suggest that exposure to threatening environments results in stable patterns of information processing that facilitate threat identification and maintenance of attention to threat cues. These attention biases are specific to children who have experienced violence; for example, children who have been neglected (i.e., an environment characterized by deprivation in social and cognitive inputs) experience difficulty discriminating facial expressions of emotion but do not exhibit attention biases toward threat (Pollak, Klorman, Thatcher, & Cicchetti, 2001; Pollak et al., 2005).

In addition to attention biases, children who have been the victims of violence are also more likely to generate attributions of hostility to others in socially ambiguous situations (Dodge, Bates, & Pettit, 1990; Dodge, Pettit, Bates, & Valente, 1995; Weiss, Dodge, Bates, & Pettit, 1992), a pattern of social information processing tuned to be overly sensitive to potential threats in the environment. Finally, some evidence suggests that exposure to threatening environments is associated with memory biases for overgeneral autobiographical memories in both children and adults (Crane et al., 2014; Williams et al., 2007).

Children with trauma histories also exhibit meaningful differences in patterns of emotional responding that are consistent with these patterns of information processing. For example, children who have experienced interpersonal violence exhibit greater activation in the amygdala and other nodes of the salience network (e.g., anterior insula, putamen, thalamus) to a wide range of negative emotional stimuli (McCrory et al., 2013; McCrory et al., 2011; McLaughlin, Peverill, Gold, Alves, & Sheridan, 2015), suggesting heightened salience of information that could predict threat.

These findings build on earlier work using evoked response potentials documenting amplified neural responses to angry faces in children who were physically abused (Pollak, Cicchetti, Klorman, & Brumaghim, 1997; Pollak et al., 2001) and suggest that exposure to threatening experiences heightens the salience of negative emotional information, given its potential relevance for detecting novel threats.

Heightened amygdala response to negative emotional cues could also reflect fear learning processes, whereby previously neutral stimuli that have become associated with traumatic events begin to elicit conditioned fear responses, or the result of deficits in automatic emotion regulation processes like fear extinction and habituation, which are mediated through connections between the ventromedial prefrontal cortex and amygdala. Recent findings of poor resting-state functional connectivity between the ventromedial prefrontal cortex and amygdala among female adolescents with abuse histories provide some evidence for this latter pathway (Herringa et al., 2013).

In addition to heightened neural responses in regions involved in salience processing, consistent associations between exposure to threatening environments and elevations in self-reported emotional reactivity to the environment have been observed in our lab and elsewhere (Glaser, Van Os, Portegijs, & Myin-Germeys, 2006; Heleniak, Jenness, Van Der Stoep, McCauley, & McLaughlin, in press; McLaughlin, Kubzansky, et al., 2010).

Atypical physiological responses to emotional cues have also been documented consistently among children who have experienced trauma, although the specific pattern of findings has varied across studies depending on the specific physiological measures and emotion-eliciting paradigms employed. We recently applied a theoretical model drawn from social psychology on adaptive and maladaptive responses to stress to examine physiological responses to stress among maltreated youths. We observed a pattern of increased vascular resistance and blunted cardiac output reactivity among youths who had been physically or sexually abused relative to participants with no history of violence exposure (McLaughlin, Sheridan, Alves, & Mendes, 2014). This pattern of autonomic nervous system reactivity reflects an inefficient cardiovascular response to stress that has been shown in numerous studies to occur when individuals are in a state of heightened threat and is associated with threat appraisals and maladaptive cognitive and behavioral responses to stress (Jamieson, Mendes, Blackstock, & Schmader, 2010; Jamieson, Nock, & Mendes, 2012; Mendes, Blascovich, Major, & Seery, 2001; Mendes, Major, McCoy, & Blascovich, 2008). Using data from a large population-based cohort of adolescents, we recently replicated the association between childhood trauma exposure and blunted cardiac output reactivity during acute stress (Heleniak, Riese, Ormel, & McLaughlin, 2016).

Together, converging evidence across multiple levels of analysis indicates that exposure to trauma is associated with a persistent pattern of information processing involving biased attention toward potential threats in the environment, heightened neural and subjective responses to negative emotional cues, and a pattern of autonomic nervous system reactivity consistent with heightened threat perception. This heightened reactivity to negative emotional cues may make it more difficult for children who have been exposed to threatening environments to regulate emotional responses. Indeed, a recent study from my lab found that when trying to regulate emotional responses using cognitive reappraisal, children who had been abused recruited regions of the prefrontal cortex involved in effortful control to a greater degree than children who had never experienced violence (McLaughlin, Peverill, et al., 2015). This pattern suggests that attempts to modulate emotional responses to negative cues require more cognitive resources for children with abuse histories, meaning that effective regulation may break down more easily in the face of stress. Evidence that the negative emotional effects of stressful events are heightened among those with maltreatment histories is consistent with this possibility (Glaser et al., 2006; McLaughlin, Conron, et al., 2010).

In addition to alterations in patterns of emotional reactivity to environmental cues, childhood trauma has been associated with maladaptive patterns of responding to distress. For example, exposure to threatening environments early in development is associated with habitual engagement in rumination, a response style characterized by passive focus on feelings of distress along with their causes and consequences without attempts to actively resolve the causes of distress (Nolen-Hoeksema, Wisco, & Lyubomirsky, 2008). High reliance on rumination as a strategy for responding to distress has been observed in adolescents and adults who were abused as children (Conway, Mendelson, Giannopoulos, Csank, & Holm, 2005; Heleniak et al., in press; Sarin & Nolen-Hoeksema, 2010), in adolescents who experienced victimization by peers (McLaughlin, Hatzenbuehler, & Hilt, 2009), and in both adolescents and adults exposed to a wide range of negative life events (McLaughlin & Hatzenbuehler, 2009; Michl, McLaughlin, Shepherd, & Nolen-Hoeksema, 2013), although the latter findings are not specific to threat per se.

Although evidence for disruptions in emotional processing comes primarily from studies examining children exposed to environments characterized by high degrees of threat, deprived environments are also likely to have downstream effects on emotional development that are at least partially distinct from those associated with threat. As noted previously, children who have been neglected experience difficulties discriminating facial displays of emotion (Pollak et al., 2001; Pollak et al., 2005), although some studies of neglected children have found few differences in neural responses to facial emotion in early childhood (Moulson, Fox, Zeanah, & Nelson, 2009; Slopen, McLaughlin, Fox, Zeanah, & Nelson, 2012). However, recent work suggests that children raised in deprived early environments exhibit elevated amygdala response to facial emotion and a mature pattern of functional connectivity between the amygdala and mPFC during emotional processing tasks (Gee et al., 2013; Tottenham et al., 2011). Finally, children who were neglected or raised in deprived institutions tend to exhibit blunted physiological responses to stress, including in the autonomic nervous system and HPA axis (Gunnar, Frenn, Wewerka, & Van Ryzin, 2009; McLaughlin, Sheridan, et al., 2015).

Much of the existing work on childhood adversity and emotional responding has focused on responses to negative emotional cues. However, a growing body of evidence also suggests that responses to appetitive and rewarding cues are disrupted in children exposed to adversity. For example, children raised in deprived early environments exhibit blunted ventral striatal response to the anticipation of reward (Mehta et al., 2010), and a similar pattern has been observed in a sample of adults exposed to abuse during childhood (Dillon et al., 2009). In a recent study, an increase in ventral striatum response to happy emotional faces occurred from childhood to adolescence in typically developing children but not in children reared in deprived institutions (Goff et al., 2013). In recent work in our lab, we have also observed blunted reward learning among children exposed to institutional rearing (Sheridan, McLaughlin, et al., 2016).

Although the mechanisms underlying the link between diverse forms of childhood adversity and responsiveness to reward have yet to be clearly identified, it has been suggested that repeated activation of the HPA axis in early childhood can attenuate expression of brain-derived neurotrophic factor, which in turn regulates the mesolimbic dopamine system that underlies reward learning (Goff & Tottenham, 2014). These reductions in brain-derived neurotrophic factor expression may contribute to a pattern of blunted ventral striatum response to reward anticipation or receipt.

Alternatively, given the central role of the mesolimbic dopamine system in attachment-related behavior (Strathearn, 2011), the absence or unpredictability of an attachment figure in early development may reduce opportunities for learning about the rewarding nature of affiliative interactions and social bonds; the absence of this type of stimulus-reward learning early in development, when sensitive and responsive caregiving from a primary attachment figure is an expected environmental input, may ultimately contribute to biased processing of rewarding stimuli later in development. If social interactions in early life are either absent or unrewarding, expectations about the hedonic value of social relationships and other types of rewards might be altered in the long term, culminating in attenuated responsiveness to anticipation of reward. Future research is needed to identify the precise mechanisms through which adverse early environments ultimately shape reward learning and responses to rewarding stimuli.

Links between emotional processing and psychopathology

An extensive and growing body of work suggests that disruptions in emotional processing, emotional responding, and emotion regulation represent transdiagnostic factors associated with virtually all commonly occurring forms of psychopathology (Aldao, Nolen-Hoeksema, & Schweizer, 2010). Specifically, attention biases to threat and overgeneral autobiographical memory biases have been linked to anxiety and depression, respectively, in numerous studies (Bar-Haim, Lamy, Bakermans-Kranenburg, Pergamin, & van IJzendoorn, 2007; Williams et al., 2007), and attributions of hostility and other social information processing biases associated with trauma exposure are associated with risk for the onset of conduct problems and aggression (Dodge et al., 1990; Dodge et al., 1995; Weiss et al., 1992).

Heightened emotional responses to negative environmental cues are associated with both internalizing and externalizing psychopathology in laboratory-based paradigms examining self-reported emotional and physiological responses to emotional stimuli (Boyce et al., 2001; Carthy, Horesh, Apter, Edge, & Gross, 2010; Hankin, Badanes, Abela, & Watamura, 2010; McLaughlin, Kubzansky, et al., 2010; McLaughlin, Sheridan, Alves, et al., 2014; Rao, Hammen, Ortiz, Chen, & Poland, 2008), MRI studies examining neural response to facial emotion (Sebastian et al., 2012; Siegle, Thompson, Carter, Steinhauer, & Thase, 2007; Stein, Simmons, Feinstein, & Paulus, 2007; Suslow et al., 2010; Thomas et al., 2001), and experience-sampling studies that measure emotional responses in real-world situations (Myin-Germeys et al., 2003; Silk, Steinberg, & Morris, 2003).

Habitual engagement in rumination has also been linked to heightened risk for anxiety, depression, eating disorders, and problematic substance use (McLaughlin & Nolen-Hoeksema, 2011; Nolen-Hoeksema, 2000; Nolen-Hoeksema, Stice, Wade, & Bohon, 2007). Together, evidence from numerous studies examining emotional processing at multiple levels of analysis suggests that disruptions in emotional processing are a key transdiagnostic factor in psychopathology that may explain patterns of multifinality following exposure to threatening early environments.

Executive functioning

Disruptions in executive functioning represent the second key proximal risk factor in the model. A growing body of evidence suggests that environmental deprivation is associated with lasting alterations in executive functioning skills. Poor executive functioning, including problems with working memory, inhibitory control, planning ability, and cognitive flexibility, has consistently been documented among children raised in deprived environments ranging from institutional settings to low socioeconomic status (SES) families.

Children raised in institutional settings exhibit a range of deficits in cognitive functions, including general intellectual ability (Nelson et al., 2007; O’Connor, Rutter, Beckett, Keaveney, & Kreppner, 2000), expressive and receptive language (Albers, Johnson, Hostetter, Iverson, & Miller, 1997; Windsor et al., 2011), and executive function skills (Bos et al., 2009; Tibu et al., 2016). In contrast to other domains of cognitive ability, however, deficits in executive functioning and marked elevations in the prevalence of attention deficit hyperactivity disorder (ADHD), which is characterized by executive functioning problems, are persistent over time even after placement into a stable family environment (Bos et al., 2009; Tibu et al., 2016; Zeanah et al., 2009).

Similar patterns of executive functioning deficits have also been observed among children raised in low SES families, including problems with working memory, inhibitory control, and cognitive flexibility (Blair, 2002; Farah et al., 2006; Noble et al., 2007; Noble, Norman, & Farah, 2005; Raver, Blair, Willoughby, & the Family Life Project Key Investigators, 2013), as well as deficits in language abilities (Fernald, Marchman, & Weisleder, 2013; Weisleder & Fernald, 2013). Poor cognitive flexibility among children raised in low SES environments has been observed as early as infancy (Clearfield & Niman, 2012). Relative to children who have been abused, children exposed to neglect are at greater risk for cognitive deficits (Hildyard & Wolfe, 2002) similar to those observed in poverty and institutionalization (Dubowitz et al., 2002; Spratt et al., 2012).

The lateral PFC is recruited during a wide variety of executive functioning tasks, including working memory (Wager & Smith, 2003), inhibition (Aron, Robbins, & Poldrack, 2004), and cognitive flexibility (Rougier, Noelle, Braver, Cohen, & O’Reilly, 2005), and is one of the brain regions most centrally involved in executive functioning. In addition to exhibiting poor performance on executive functioning tasks, children from low SES families also have different patterns of lateral PFC recruitment during these tasks as compared to children from middle-class families (Kishiyama, Boyce, Jimenez, Perry, & Knight, 2009; Sheridan, Sarsour, Jutte, D’Esposito, & Boyce, 2012). A similar pattern of poor inhibitory control and altered lateral PFC recruitment during an inhibition task has also been observed in children raised in institutional settings (Mueller et al., 2010).

These studies provide some clues about where to look with regard to the types of environmental inputs that might be necessary for the development of adaptive executive functions. In particular, environmental inputs that are absent or atypical among children raised in institutional settings, as well as among children raised in poverty, are promising candidates. Institutional rearing is associated with an absence of environmental inputs of numerous kinds, including the presence of an attachment figure, variation in daily routines and activities, access to age-appropriate enriching cognitive stimulation from books, toys, and interactions with adults, and complex language exposure (Smyke et al., 2007; Zeanah et al., 2003).

Some of these dimensions of environmental experience have also been shown to be deprived among children raised in poverty, including access to cognitively enriching activities such as books, toys, and puzzles; learning opportunities outside the home (e.g., museums) and within the context of the parent-child relationship (e.g., parental encouragement of learning colors, words, and numbers, and reading to the child); and variation in environmental complexity and stimulation, as well as the amount and complexity of language input (Bradley, Corwyn, Burchinal, McAdoo, & Coll, 2001; Bradley, Corwyn, McAdoo, & Coll, 2001; Dubowitz et al., 2002; Garrett, Ng’andu, & Ferron, 1994; Hart & Risley, 1995; Hoff, 2003; Linver, Brooks-Gunn, & Kohen, 2002).

Together, these distinct lines of research suggest that enriching cognitive activities and exposure to complex language might provide the scaffolding that children require to develop executive functions. Some indirect evidence supports this notion. For example, degree of environmental stimulation in the home and amount and quality of maternal language each predict the development of language skills in early childhood (Farah et al., 2008; Hoff, 2003), and children raised in both institutional settings and low SES families exhibit deficits in expressive and receptive language (Albers et al., 1997; Hoff, 2003; Noble et al., 2007; Noble et al., 2005; Windsor et al., 2011), in addition to problems with executive functioning skills. Moreover, a recent study found that atypical patterns of PFC activation during executive function tasks among children from low SES families are explained by degree of complex language exposure in the home (Sheridan et al., 2012). Finally, children raised in bilingual environments appear to have improved performance on executive function tasks (Carlson & Meltzoff, 2008).

These findings suggest that the environmental inputs that are required for language development (i.e., complex language directed at the child) may also be critical for the development of executive function skills. Language provides an opportunity to develop multiple such skills, ranging from working memory (e.g., holding in mind the first part of a sentence as you wait for the speaker to finish) and inhibitory control (e.g., waiting your turn in a conversation) to cognitive flexibility (e.g., switching between grammatical and syntactic rules).

Lack of consistent rules, routines, structure, and parental scaffolding behaviors may be another mechanism explaining deficits in executive functioning among children from low SES families. This lack of environmental predictability is more common among low SES than middle-class families (Deater-Deckard, Chen, Wang, & Bell, 2012; Evans, Gonnella, Marcynyszyn, Gentile, & Salpekar, 2005; Evans & Wachs, 2009). The absence of consistent rules, routines, and contingencies in the environment may interfere with children’s ability to learn abstract rules and to develop the capacity for self-regulation. Indeed, higher levels of parental scaffolding, or provision of support to allow the child to solve problems autonomously, have been prospectively linked with the development of better executive function skills in early childhood (Bernier, Carlson, & Whipple, 2010; Hammond, Muller, Carpendale, Bibok, & Liebermann-Finestone, 2012; Landry, Miller-Loncar, Smith, & Swank, 2002).

These findings suggest that environmental unpredictability is an additional mechanism linking low SES environments to poor executive functioning in children. However, given the highly structured and routinized nature of most institutional settings, environmental unpredictability is an unlikely explanation for executive functioning deficits among institutionally reared children.

Deficits in executive functioning skills have sometimes been observed in children with exposure to trauma (DePrince, Weinzierl, & Combs, 2009; Mezzacappa, Kindlon, & Earls, 2001) as well as children with high levels of exposure to stressful life events (Hanson et al., 2012), although some studies have found associations between trauma exposure and working memory but not inhibition or cognitive flexibility (Augusti & Melinder, 2013).

There are two possible explanations for these findings.

First, for children exposed to threat, it may be that deficits in executive functions emerge primarily in emotional contexts, such that the heightened perceptual sensitivity and reactivity to emotional stimuli in children exposed to threat draws attention to emotional stimuli (Shackman et al., 2007), making it more difficult to hold other stimuli in mind, effectively inhibit responses to emotional stimuli, or flexibly allocate attention to nonemotional stimuli. Indeed, in a recent study in my lab, we observed that exposure to trauma (both maltreatment and community violence) was associated with deficits in inhibitory control only in the context of emotional stimuli (i.e., a Stroop task involving emotional faces) and not when stimuli were neutral (i.e., shapes), and had no association with cognitive flexibility (Lambert, King, Monahan, & McLaughlin, 2016). In contrast, deprivation exposure was associated with deficits in inhibition to both neutral and emotional stimuli and poor cognitive flexibility. Although this suggests there may be specificity in the association of trauma exposure with executive functions, further research is needed to understand these links.

Second, studies examining exposure to trauma seldom measure indices of deprivation, nor do they adjust for deprivation exposure (just as studies of deprivation rarely assess or control for trauma exposure). Disentangling the specific effects of these two types of experiences on executive functioning processes is a critical goal for future research.

Links between executive functioning and psychopathology

Executive functioning deficits are a central feature of ADHD (Martinussen, Hayden, Hogg-Johnson, & Tannock, 2005; Sergeant, Geurts, & Oosterlaan, 2002; Willcutt, Doyle, Nigg, Faraone, & Pennington, 2005). Problems with executive functions have also been observed in children with externalizing psychopathology, including conduct disorder and oppositional defiant disorder, even after accounting for comorbid ADHD (Hobson, Scott, & Rubia, 2011). They are also associated with elevated risk for the onset of substance use problems and other types of risky behavior (Crews & Boettiger, 2009; Patrick, Blair, & Maggs, 2008), including criminal behavior (Moffitt et al., 2011) and the likelihood of becoming incarcerated (Yechiam et al., 2008).

Although executive functioning deficits figure less prominently in theoretical models of the etiology of internalizing psychopathology, when these deficits emerge in the context of emotional processing (e.g., poor inhibition of negative emotional information) they are more strongly linked to internalizing problems, including depression (Goeleven, De Raedt, Baert, & Koster, 2006; Joormann & Gotlib, 2010). Executive functioning deficits also contribute to other proximal risk factors, such as rumination (Joormann, 2006), that are well-established risk factors for depression and anxiety disorders. Patterns of executive functioning in childhood have lasting implications for health and development beyond effects on psychopathology. Recent work suggests that executive functioning measured in early childhood predicts a wide range of outcomes in adulthood in the domains of health, SES, and criminal behavior, over and above the effects of IQ (Moffitt et al., 2011).

Mechanisms Linking Distal Risk Factors to Proximal Risk Factors

How do experiences of threat and deprivation come to influence proximal risk factors? Learning mechanisms are the most obvious pathways linking these experiences with changes in emotional processing and executive functioning, although other mechanisms (e.g., the development of stable beliefs and schemas) are also likely to play an important role. Specifically, the impact of threatening and deprived early environments on the development of patterns of emotional processing and emotional responding may be mediated, at least in part, through emotional learning pathways. The associative learning mechanisms and neural circuitry underlying fear learning and reward learning have been well characterized in both animals and humans and reviewed elsewhere (Delgado, Olsson, & Phelps, 2006; Flagel et al., 2011; Johansen, Cain, Ostroff, & LeDoux, 2011; O’Doherty, 2004).

Exposure to threatening or deprived environments early in development results in the presence (i.e., in the case of threat) or absence (i.e., in the case of deprivation) of opportunities for emotional learning; these learning experiences, in turn, have lasting downstream effects on emotional processing. Specifically, early learning histories can influence the salience of environmental stimuli as either potential threats or incentives, shape the magnitude of emotional responses to environmental stimuli, particularly those that represent either threat or reward, and alter motivation to avoid threats or pursue rewards. Thus, fear learning mechanisms and their downstream consequences explain, in part, the association of threatening environments with alterations in emotional processing (McLaughlin et al., 2014; Sheridan & McLaughlin, 2014).

Similarly, the effects of deprived early environments on emotional processing are likely to be partially explained through reward learning pathways. Pathways linking threatening early environments to habitual patterns of responding to distress, such as rumination, may also involve learning mechanisms including both observational (e.g., modeling responses utilized by caregivers) and instrumental (e.g., reinforcement of passive responses to distress when emotional displays are met with dismissive or punishing reactions from caregivers) learning.

Learning mechanisms may also be central to the association between deprived early environments and the development of executive functioning. In particular, deprived environments such as institutional rearing, neglect, and poverty are characterized by the absence of learning opportunities, which is thought to directly contribute to later difficulties with complex higher-order cognition. Specifically, reduced opportunities for learning, due to the absence of complex and varied stimulus-response contingencies or of consistent rules, routines, and structures that allow children to learn concrete and abstract rules, may influence the development of both cognitive and behavioral aspects of self-regulation.

Moderators of the Link Between Distal and Proximal Risk Factors

Children vary markedly in their sensitivity to environmental context. Advances in theoretical conceptualizations of individual differences in sensitivity to context can be leveraged to understand variability in developmental processes among children exposed to adverse environments. A growing body of evidence suggests that certain characteristics make children particularly responsive to environmental influences; such factors confer not only vulnerability in the context of adverse environments but also benefits in the presence of supportive environments (Belsky, Bakermans-Kranenburg, & van IJzendoorn, 2007; Belsky & Pluess, 2009; Boyce & Ellis, 2005; Ellis, Essex, & Boyce, 2005). Highly reactive temperament, vagal tone, and genetic polymorphisms that regulate the dopaminergic and serotonergic systems have been identified as markers of plasticity and susceptibility to both negative and positive environmental influences (Belsky & Pluess, 2009). These plasticity markers represent potential moderators of the link between childhood adversity and disruptions in emotional processing and executive functioning.

Developmental timing of exposure to adversity also plays a meaningful role in moderating the impact of childhood adversity on emotional processing and executive functioning. For example, in recent work we have shown that early environmental deprivation has a particularly pronounced impact on the development of stress response systems during the first 2 years of life (McLaughlin et al., 2015). These findings suggest the possibility of an early sensitive period during which the environment exerts a disproportionate effect on the development of neurobiological systems that regulate responses to stress. As noted in the beginning of this article, additional research is needed to identify developmental periods of heightened plasticity in specific subdomains of emotional processing and executive functioning and to determine the degree to which disruptions in these domains vary as a function of the timing of exposure to childhood adversity.

Moderators of Trajectories From Proximal Risk Factors to Psychopathology

A key component of Nolen-Hoeksema and Watkins’s (2011) transdiagnostic model of psychopathology involves moderators that determine the specific type of psychopathology that someone with a particular proximal risk factor will develop. Specifically, their model argues that ongoing environmental context and neurobiological factors can moderate the impact of proximal risk factors on psychopathology by raising concerns or themes that are acted upon by proximal risk factors and by shaping responses to and altering the reinforcement value of particular types of stimuli.

For example, the nature of ongoing environmental experiences might determine whether someone with an underlying vulnerability (e.g., neuroticism) develops anxiety or depression. Specifically, a person with high neuroticism who experiences a stressor involving a high degree of threat or danger (e.g., a mugging or a car accident) might develop an anxiety disorder, whereas a person with high neuroticism who experiences a loss (e.g., an unexpected death of a loved one) might develop major depression (Nolen-Hoeksema & Watkins, 2011).

Neurobiological factors that influence the reinforcement value of certain stimuli (e.g., alcohol and other substances, food, social rejection) can also serve as moderators. For example, individual differences in rejection sensitivity might determine whether a child who is bullied develops an anxiety disorder. Although a review of these factors is beyond the scope of the current article, greater understanding of the role of ongoing environmental context as a moderator of the link between proximal risk factors and the emergence of psychopathology has relevance for research on childhood adversity. In particular, environmental factors that buffer against the emergence of psychopathology in children with disruptions in emotional processing and executive functioning can point to potential targets for preventive interventions for children exposed to adversity.

CONCLUSION

Exposure to childhood adversity represents one of the most potent risk factors for the onset of psychopathology. Recognition of the strong and pervasive influence of childhood adversity on risk for psychopathology throughout the life course has generated a burgeoning field of research focused on understanding the links between adverse early experience, developmental processes, and mental health. This article provides recommendations for future research in this area. In particular, future research must develop and utilize a consistent definition of childhood adversity across studies, as it is critical for the field to agree upon what the construct of childhood adversity represents and what types of experiences do and do not qualify.

Progress in identifying developmental mechanisms linking childhood adversity to psychopathology requires integration of studies of typical development with those focused on childhood adversity in order to characterize how experiences of adversity disrupt developmental trajectories in emotion, cognition, social behavior, and the neural circuits that support these processes, as well as greater efforts to distinguish between distinct dimensions of adverse environmental experience that differentially influence these domains of development. Greater understanding of the developmental pathways linking childhood adversity to the onset of psychopathology can inform efforts to identify protective factors that buffer children from the negative consequences of adversity by allowing a shift in focus from downstream outcomes like psychopathology to specific developmental processes that serve as intermediate phenotypes (i.e., mechanisms) linking adversity with psychopathology.

Progress in these domains will generate clinically useful knowledge regarding the mechanisms that explain how childhood adversity is associated with a wide range of psychopathology outcomes (i.e., multifinality) and identify moderators that shape divergent trajectories following adverse childhood experiences. This knowledge can be leveraged to develop and refine empirically informed interventions to prevent the long-term consequences of adverse early environments on children’s development. Greater understanding of modifiable developmental processes underlying the associations of diverse forms of childhood adversity with psychopathology will provide critical information regarding the mechanisms that should be specifically targeted by intervention. Determining whether these mechanisms are general or specific is essential, as it is unlikely that a one-size-fits-all approach to intervention will be effective for preventing the onset of psychopathology following all types of childhood adversity. Identifying processes that are disrupted following specific forms of adversity, but not others, will allow interventions to be tailored to address the developmental mechanisms that are most relevant for children exposed to particular types of adversity. Identification of moderators that buffer children either from disruptions in core developmental domains or from developing psychopathology in the presence of developmental disruptions, for example, among children with heightened emotional reactivity or poor executive functioning, will provide additional targets for intervention.

Finally, uncovering sensitive periods when emotional, cognitive, and neurobiological processes are most likely to be influenced by the environment will provide key information about when interventions are most likely to be successful. Together, these advances will help the field to generate innovative new approaches for preventing the onset of psychopathology among children who have experienced adversity.

The Deepest Well: Healing the Long-Term Effects of Childhood Adversity – Dr. Nadine Burke Harris.

“Well, her asthma does seem to get worse whenever her dad punches a hole in the wall. Do you think that could be related?”

At five o’clock on an ordinary Saturday morning, a forty-three-year-old man, we’ll call him Evan, wakes up. His wife, Sarah, is breathing softly beside him, curled in her usual position, arm slung over her forehead. Without thinking much about it, Evan tries to roll over and slide out of bed to get to the bathroom, but something’s off. He can’t roll over, and it feels like his right arm has gone numb.

Ugh, must have slept on it too long, he thinks, bracing himself for those mean, hot tingles you get when the circulation starts again.

He tries to wiggle his fingers to get the blood flowing, but no dice. The aching pressure in his bladder isn’t going to wait, though, so he tries again to get up. Nothing happens.

What the. . .

His right leg is still exactly where he left it, despite the fact that he tried to move it the same way he has been moving it all his life without thinking.

He tries again. Nope.

Looks like this morning, it doesn’t want to cooperate. It’s weird, this whole body-not-doing-what-you-want-it-to thing, but the urge to pee feels like a much bigger problem right now.

“Hey, baby, can you help me? I gotta pee. Just push me out of bed so I don’t do it right here,” he says to Sarah, half joking about the last part.

“What’s wrong, Evan?” says Sarah, lifting her head and squinting at him. “Evan?”

Her voice rises as she says his name the second time.

He notices she’s looking at him with deep concern in her eyes. Her face wears the expression she gets when the boys have fevers or wake up sick in the middle of the night. Which is ridiculous because all he needs is a little push. It’s five in the morning, after all. No need for a full-blown conversation.

“Honey, I just gotta go pee,” he says.

“What’s wrong? Evan? What’s wrong?”

In an instant, Sarah is up. She’s got the lights on and is peering into Evan’s face as though she is reading a shocking headline in the Sunday paper.

“It’s all right, baby. I just need to pee. My leg is asleep. Can you help me real quick?” he says.

He figures that maybe if he can put some pressure on his left side, he can shift position and jump-start his circulation. He just needs to get out of the bed.

It is in that moment that he realizes it isn’t just the right arm and leg that are numb; it’s his face too.

In fact, it’s his whole right side.

What is happening to me?

Then Evan feels something warm and wet on his left leg.

He looks down to see his boxers are soaked. Urine is seeping into the bed sheets.

“Oh my God!” Sarah screams. In that instant, seeing her husband wet the bed, Sarah realizes the gravity of the situation and leaps into action. She jumps out of bed and Evan can hear her running to their teenage son’s bedroom. There are a few muffled words that he can’t make out through the wall and then she’s back. She sits on the bed next to him, holding him and caressing his face.

“You’re okay,” Sarah says. “It’s gonna be okay.” Her voice is soft and soothing.

“Babe, what’s going on?” Evan asks, looking at his wife. As he gazes up at her, it dawns on him that she can’t understand anything he’s saying. He’s moving his lips and words are coming out of his mouth, but she doesn’t seem to be getting any of it.

Just then, a ridiculous cartoon commercial with a dancing heart bouncing along to a silly song starts playing in his mind.

F stands for face drooping. Bounce. Bounce.

A stands for arm weakness. Bounce. Bounce.

S stands for speech difficulty.

T stands for time to call 911. Learn to identify signs of a stroke. Act FAST!

Holy crap!

Despite the early hour, Evan’s son Marcus comes briskly to the doorway and hands his mom the phone. As father and son lock eyes, Evan sees a look of alarm and worry that makes his heart clench in his chest. He tries to tell his son it will be okay, but it’s clear from the boy’s expression that his attempt at reassurance is only making things worse. Marcus’s face contorts with fear, and tears start streaming down his cheeks.

On the phone with the 911 operator, Sarah is clear and forceful.

“I need an ambulance right now, right now! My husband is having a stroke. Yes, I’m sure! He can’t move his entire right side. Half of his face won’t move. No, he can’t speak. It’s totally garbled. His speech doesn’t make any sense. Just hurry up. Please send an ambulance right away!”

The first responders, a team of paramedics, make it there inside of five minutes. They bang on the door and ring the bell. Sarah runs downstairs and lets them in. Their younger son is still in his bedroom asleep, and she’s worried that the noise will wake him, but fortunately, he doesn’t stir.

Evan stares up at the crown molding and tries to calm down. He feels himself starting to drift off, getting further away from the current moment. This isn’t good.

The next thing he knows, he is on a stretcher being carried down the stairs. As the paramedics negotiate the landing, they pause to shift positions. In that slice of a second, Evan glances up and catches one of the medics watching him with an expression that makes him go cold. It’s a look of recognition and pity. It says, Poor guy. I’ve seen this before and it ain’t good.

As they are passing through the doorway, Evan wonders whether he will ever come back to this house. Back to Sarah and his boys. From the way that medic looked at him, Evan thinks the answer might not be yes.

When they get to the emergency room, Sarah is peppered with questions about Evan’s medical history. She tells them every detail of Evan’s life she thinks might be relevant. He’s a computer programmer. He goes mountain biking every weekend. He loves playing basketball with his boys. He’s a great dad. He’s happy. At his last checkup the doctor said everything looked great. At one point, she overhears one of the doctors relating Evan’s case to a colleague over the phone: “Forty-three-year-old male, nonsmoker, no risk factors.”

But unbeknownst to Sarah, Evan, and even Evan’s doctors, he did have a risk factor. A mighty big one. In fact, Evan was more than twice as likely to have a stroke as a person without this risk factor. What no one in the ER that day knew was that, for decades, an invisible biological process had been at work, one involving Evan’s cardiovascular, immune, and endocrine systems. One that might very well have led to the events of this moment. The risk factor and its potential impact never came up in all of the regular checkups Evan had had over the years.

What put Evan at increased risk for waking up with half of his body paralyzed (and for numerous other diseases as well) is not rare. It’s something two-thirds of the nation’s population is exposed to, something so common it’s hiding in plain sight.

So what is it? Lead? Asbestos? Some toxic packing material?

It’s childhood adversity.

Most people wouldn’t suspect that what happens to them in childhood has anything to do with stroke or heart disease or cancer. But many of us do recognize that when someone experiences childhood trauma, there may be an emotional and psychological impact. For the unlucky (or some say the “weak”), we know what the worst of the fallout looks like: substance abuse, cyclical violence, incarceration, and mental health problems. But for everyone else, childhood trauma is the bad memory that no one talks about until at least the fifth or sixth date. It’s just drama, baggage.

Childhood adversity is a story we think we know.

Children have faced trauma and stress in the form of abuse, neglect, violence, and fear since God was a boy. Parents have been getting trashed, getting arrested, and getting divorced for almost as long. The people who are smart and strong enough are able to rise above the past and triumph through the force of their own will and resilience.

Or are they?

We’ve all heard the Horatio Alger-like stories about people who have experienced early hardships and have either overcome or, better yet, been made stronger by them. These tales are embedded in Americans’ cultural DNA. At best, they paint an incomplete picture of what childhood adversity means for the hundreds of millions of people in the United States (and the billions around the world) who have experienced early life stress. More often, they take on moral overtones, provoking feelings of shame and hopelessness in those who struggle with the lifelong impacts of childhood adversity. But there is a huge part of the story missing.

Twenty years of medical research has shown that childhood adversity literally gets under our skin, changing people in ways that can endure in their bodies for decades. It can tip a child’s developmental trajectory and affect physiology. It can trigger chronic inflammation and hormonal changes that can last a lifetime. It can alter the way DNA is read and how cells replicate, and it can dramatically increase the risk for heart disease, stroke, cancer, diabetes, even Alzheimer’s.

This new science gives a startling twist to the Horatio Alger tale we think we know so well; as the studies reveal, years later, after having “transcended” adversity in amazing ways, even bootstrap heroes find themselves pulled up short by their biology. Despite rough childhoods, plenty of folks got good grades and went to college and had families. They did what they were supposed to do. They overcame adversity and went on to build successful lives, and then they got sick. They had strokes. Or got lung cancer, or developed heart disease, or sank into depression. Since they hadn’t engaged in high-risk behavior like drinking, overeating, or smoking, they had no idea where their health problems had come from. They certainly didn’t connect them to the past, because they’d left the past behind. Right?

The truth is that despite all their hard work, people like Evan who have had adverse childhood experiences are still at greater risk for developing chronic illnesses like cardiovascular disease and cancer.

But why? How does exposure to stress in childhood crop up as a health problem in middle age or even retirement? Are there effective treatments? What can we do to protect our health and our children’s health?

In 2005, when I finished my pediatrics residency at Stanford, I didn’t even know to ask these questions. Like everyone else, I had only part of the story. But then, whether by chance or by fate, I caught glimpses of a story yet to be told. It started in exactly the place you might expect to find high levels of adversity: a low-income community of color with few resources, tucked inside a wealthy city with all the resources in the world. In the Bayview Hunters Point neighborhood of San Francisco, I started a community pediatric clinic. Every day I witnessed my tiny patients dealing with overwhelming trauma and stress; as a human being, I was brought to my knees by it. As a scientist and a doctor, I got up off those knees and began asking questions.

My journey gave me, and I hope this book will give you, a radically different perspective on the story of childhood adversity, the whole story, not just the one we think we know. Through these pages, you will better understand how childhood adversity may be playing out in your life or in the life of someone you love, and, more important, you will learn the tools for healing that begins with one person or one community but has the power to transform the health of nations.

Chapter 1

Discovery

Something’s Just Not Right

As I walked into an exam room at the Bayview Child Health Center to meet my next patient, I couldn't help but smile. My team and I had worked hard to make the clinic as inviting and family-friendly as possible. The room was painted in pastel colors and had a matching checkered floor. Cartoons of baby animals paraded across the wall above the sink and marched toward the door. If you didn't know better, you'd think you were in a pediatric office in the affluent Pacific Heights neighborhood of San Francisco instead of in struggling Bayview, which was exactly the point. We wanted our clinic to be a place where people felt valued.

When I came through the door, Diego's eyes were glued to the baby giraffes. What a super-cutie, I thought as he moved his attention to me, flashed me a smile, and checked me out through a mop of shaggy black hair. He was perched on the chair next to his mother, who held his three-year-old sister in her lap. When I asked him to climb onto the exam table, he obediently hopped up and started swinging his legs back and forth. As I opened his chart, I saw his birth date and looked up at him again. Diego was a cutie, and a shorty.

Quickly I flipped through the chart, looking for some objective data to back up my initial impression. I plotted Diego's height on the growth curve, then I double-checked to be sure I hadn't made a mistake. My newest patient was at the 50th percentile for height for a four-year-old.

Which would have been fine, except that Diego was seven years old.

That’s weird, I thought, because otherwise, Diego looked like a totally normal kid. I scooted my chair over to the table and pulled out my stethoscope. As I got closer I could see thickened, dry patches of eczema at the creases of his elbows, and when I listened to his lungs, I heard a distinct wheezing. Diego’s school nurse had referred him for evaluation for attention deficit hyperactivity disorder (ADHD), a chronic condition characterized by hyperactivity, inattention, and impulsivity. Whether or not Diego was one of the millions of children affected by ADHD remained to be seen, but already I could see his primary diagnoses would be more along the lines of persistent asthma, eczema, and growth failure.

Diego’s mom, Rosa, watched nervously as I examined her son. Her eyes were fixed on Diego and filled with concern; little Selena’s gaze was darting around the room as she checked out all the shiny gadgets.

"Do you prefer English o Español?" I asked Rosa.

Relief crossed her face and she leaned forward.

After we talked in Spanish through the medical history that she had filled out in the waiting room, I asked the same question I always do before jumping into the results of the physical exam: Is there anything specific going on that I should know about?

Concern gathered her forehead like a stitch.

“He’s not doing well in school, and the nurse said medicine could help. Is that true? What medicine does he need?”

“When did you notice he’d started having trouble in school?” I asked.

There was a slight pause as her face morphed from tense to tearful.

"¡Ay, Doctora!" she said, and began the story in a torrent of Spanish.

I put my hand on her arm, and before she could get much further, I poked my head out the door and asked my medical assistant to take Selena and Diego to the waiting room.

The story I heard from Rosa was not a happy one. She spent the next ten minutes telling me about an incident of sexual abuse that had happened to Diego when he was four years old. Rosa and her husband had taken in a tenant to help offset the sky-high San Francisco rent. It was a family friend, someone her husband knew from his work in construction. Rosa noticed that Diego became more clingy and withdrawn after the man arrived, but she had no idea why until she came home one day to find the man in the shower with Diego.

While they had immediately kicked the man out and filed a police report, the damage was done. Diego started having trouble in preschool, and as he moved up, he lagged further and further behind academically. Making matters worse, Rosa’s husband blamed himself and seemed angry all the time. While he had always drunk more than she liked, after the incident it got a lot worse. She recognized the tension and drinking weren’t good for the family but didn’t know what she could do about it. From what she told me about her state of mind, I strongly suspected she was suffering from depression.

I assured her that we could help Diego with the asthma and eczema and that I’d look into the ADHD and growth failure. She sighed and seemed at least a little relieved.

We sat in silence for a moment, my mind zooming around. I believed, ever since we’d opened the clinic in 2007, that something medical was happening with my patients that I couldn’t quite understand. It started with the glut of ADHD cases that were referred to me. As with Diego’s, most of my patients’ ADHD symptoms didn’t just come out of the blue. They seemed to occur at the highest rates in patients who were struggling with some type of life disruption or trauma, like the twins who were failing classes and getting into fights at school after witnessing an attempted murder in their home or the three brothers whose grades fell precipitously after their parents’ divorce turned violently acrimonious, to the point where the family was ordered by the court to do their custody swaps at the Bayview police station.

Many patients were already on ADHD medication; some were even on antipsychotics. For a number of patients, the medication seemed to be helping, but for many it clearly wasn't. Most of the time I couldn't make the ADHD diagnosis. The diagnostic criteria for ADHD told me I had to rule out other explanations for ADHD symptoms (such as pervasive developmental disorders, schizophrenia, or other psychotic disorders) before I could diagnose ADHD. But what if there was a more nuanced answer? What if the cause of these symptoms (the poor impulse control, the inability to focus, the difficulty sitting still) was not a mental disorder, exactly, but a biological process that worked on the brain to disrupt normal functioning? Weren't mental disorders simply biological disorders? Trying to treat these children felt like jamming unmatched puzzle pieces together; the symptoms, causes, and treatments were close, but not close enough to give that satisfying click.

I mentally scrolled back, cataloging all the patients like Diego and the twins that I'd seen over the past year. My mind went immediately to Kayla, a ten-year-old whose asthma was particularly difficult to control. After the last flare-up, I sat down with mom and patient to meticulously review Kayla's medication regimen. When I asked if Kayla's mom could think of any asthma triggers that we hadn't already identified (we had reviewed everything from pet hair to cockroaches to cleaning products), she responded,

“Well, her asthma does seem to get worse whenever her dad punches a hole in the wall. Do you think that could be related?”

Kayla and Diego were just two patients, but they had plenty of company. Day after day I saw infants who were listless and had strange rashes. I saw kindergartners whose hair was falling out. Epidemic levels of learning and behavioral problems. Kids just entering middle school had depression. And in unique cases, like Diego’s, kids weren’t even growing. As I recalled their faces, I ran an accompanying mental checklist of disorders, diseases, syndromes, and conditions, the kinds of early setbacks that could send disastrous ripples throughout the lives to come.

If you looked through a certain percentage of my charts, you would see not only a plethora of medical problems but story after story of heart-wrenching trauma. In addition to the blood pressure reading and the body mass index in the chart, if you flipped all the way to the Social History section, you would find parental incarcerations, multiple foster-care placements, suspected physical abuse, documented abuse, and family legacies of mental illness and substance abuse. A week before Diego, I'd seen a six-year-old girl with type 1 diabetes whose dad was high for the third visit in a row. When I asked him about it, he assured me I shouldn't worry because the weed helped to quiet the voices in his head.

In the first year of my practice, seeing roughly a thousand patients, I diagnosed not one but two kids with autoimmune hepatitis, a rare disorder that typically affects fewer than three children in one hundred thousand. Both cases coincided with significant histories of adversity.

I asked myself again and again: What’s the connection?

If it had been just a handful of kids with both overwhelming adversity and poor health outcomes, maybe I could have seen it as a coincidence. But Diego’s situation was representative of hundreds of kids I had seen over the past year. The phrase statistical significance kept echoing through my head. Every day I drove home with a hollow feeling. I was doing my best to care for these kids, but it wasn’t nearly enough. There was an underlying sickness in Bayview that I couldn’t put my finger on, and with every Diego that I saw, the gnawing in my stomach got worse.

For a long time the possibility of an actual biological link between childhood adversity and damaged health came to me as a question that lingered for only a moment before it was gone. I wonder… What if… It seems like… These questions kept popping up, but part of the problem in putting the pieces together was that they would emerge from situations occurring months or sometimes years apart. Because they didn’t fit logically or neatly into my worldview at those discrete moments in time, it was difficult to see the story behind the story. Later it would feel obvious that all of these questions were simply clues pointing to a deeper truth, but like a soap-opera wife whose husband was stepping out with the nanny, I would understand it only in hindsight. It wasn’t hotel receipts and whiffs of perfume that clued me in, but there were plenty of tiny signals that eventually led me to the same thought: How could I not have seen this? It was right in front of me the whole damn time.

I lived in that state of not-quite-getting-it for years because I was doing my job the way I had been trained to do it. I knew that my gut feeling about this biological connection between adversity and health was just a hunch. As a scientist, I couldn’t accept these kinds of associations without some serious evidence. Yes, my patients were experiencing extremely poor health outcomes, but wasn’t that endemic to the community they lived in? Both my medical training and my public health education told me that this was so.

That there is a connection between poor health and poor communities is well documented. We know that it’s not just how you live that affects your health, it’s also where you live. Public health experts and researchers refer to communities as “hot spots” if poor health outcomes on the whole are found to be extreme in comparison to the statistical norm. The dominant view is that health disparities in populations like Bayview occur because these folks have poor access to health care, poor quality of care, and poor options when it comes to things like healthy, affordable food and safe housing. When I was at Harvard getting my master’s degree in public health, I learned that if I wanted to improve people’s health, the best thing I could do was find a way to provide accessible and better health care for these communities.

Straight out of my medical residency, I was recruited by the California Pacific Medical Center (CPMC) in the Laurel Heights area of San Francisco to do my dream job: create programs specifically targeted to address health disparities in the city. The hospital's CEO, Dr. Martin Brotman, personally sat me down to reinforce his commitment to that. My second week on the job, my boss came into my office and handed me a 147-page document, the 2004 Community Health Assessment for San Francisco. Then he promptly went on vacation, giving me very little direction and leaving me to my own ambitious devices (in hindsight, this was either genius or crazy on his part). I did what any good public health nerd would do: I looked at the numbers and tried to assess the situation. I had heard that Bayview Hunters Point in San Francisco, where much of San Francisco's African American population lived, was a vulnerable community, but when I looked at the 2004 assessment, I was floored. One way the report grouped people was by their zip code. The leading cause of early death in seventeen out of twenty-one zip codes in San Francisco was ischemic heart disease, which is the number-one killer in the United States. In three zip codes it was HIV/AIDS. But Bayview Hunters Point was the only zip code where the number one cause of early death was violence. Right next to Bayview (94124) in the table was the zip code for the Marina district (94123), one of the city's more affluent neighborhoods. As I ran my finger down the rows of numbers, my jaw dropped. What they showed me was that if you were a parent raising your baby in the Bayview zip code, your child was two and a half times as likely to develop pneumonia as a child in the Marina district. Your child was also six times as likely to develop asthma. And once that baby grew up, he or she was twelve times as likely to develop uncontrolled diabetes.

I had been hired by CPMC to address disparities. And, boy, now I saw why.

Looking back, I think it was probably a combination of naiveté and youthful enthusiasm that spurred me to spend the two weeks that my boss was gone drawing up a business plan for a clinic in the heart of the community with the greatest need. I wanted to bring services to the people of Bayview rather than asking them to come to us. Luckily, when my boss and I gave the plan to Dr. Brotman, he didn’t fire me for excessive idealism. Instead, he helped me make the clinic a reality, which still kind of blows my mind.

The numbers in that report had given me a good idea of what the people of Bayview were up against, but it wasn’t until March of 2007, when we opened the doors to CPMC’s Bayview Child Health Center, that I saw the full shape of it. To say that life in Bayview isn’t easy would be an understatement. It’s one of the few places in San Francisco where drug deals happen in plain sight of kindergartners on their way to school and where grandmas sometimes sleep in bathtubs because they’re afraid of stray bullets coming through the walls. It’s always been a rough place and not only because of violence. In the 1960s, the U.S. Navy decontaminated radioactive boats in the shipyard, and up until the early 2000s, the toxic byproducts from a nearby power plant were routinely dumped in the area. In a documentary about the racial strife and marginalization of the neighborhood, writer and social critic James Baldwin said, “This is the San Francisco that America pretends does not exist.”

My day-to-day experience working in Bayview tells me that the struggles are real and ever present, but it also tells me that's not the whole story. Bayview is the oily concrete you skin your knee on, but it's also the flower growing up between the cracks. Every day I see families and communities that lovingly support each other through some of the toughest experiences imaginable. I see beautiful kids and doting parents. They struggle and they laugh and then they struggle some more. But no matter how hard parents work for their kids, the lack of resources in the community is crushing. Before we opened the Bayview Child Health Center, there was only one pediatrician in practice for over ten thousand children. These kids face serious medical and emotional problems. So do their parents. And their grandparents. In many cases, the kids fare better because they are eligible for government-assisted health insurance. Poverty, violence, substance abuse, and crime have created a multigenerational legacy of ill health and frustration. But still, I believed we could make a difference. I opened my practice there because I wasn't okay with pretending the people of Bayview didn't exist.

from

The Deepest Well: Healing the Long-Term Effects of Childhood Adversity

by Dr. Nadine Burke Harris

get it at Amazon.com

Epigenetics: The Evolution Revolution – Israel Rosenfield and Edward Ziff * The Epigenetics Revolution – Nessa Carey.

So something that happened in one pregnant population affected their children’s children. This raised the really puzzling question of how these effects were passed on to subsequent generations.

These effects arise from a newly recognized genetic mechanism called epigenesis, which enables the environment to make long lasting changes in the way genes are expressed.

That’s what happens when cells read the genetic code that’s in DNA. The same script can result in different productions.

Why is it that humans contain trillions of cells in hundreds of complex organs, and microscopic worms contain about a thousand cells and only rudimentary organs, but we and the worm have the same number of genes?

We are finally starting to unravel the missing link between nature and nurture; how our environment talks to us and alters us, sometimes forever.

Israel Rosenfield and Edward Ziff

At the end of the eighteenth century, the French naturalist Jean-Baptiste Lamarck noted that life on earth had evolved over long periods of time into a striking variety of organisms. He sought to explain how they had become more and more complex. Living organisms not only evolved, Lamarck argued; they did so very slowly, “little by little and successively.” In Lamarckian theory, animals became more diverse as each creature strove toward its own “perfection,” hence the enormous variety of living things on earth. Man is the most complex life form, therefore the most perfect, and is even now evolving.

In Lamarck’s view, the evolution of life depends on variation and the accumulation of small, gradual changes. These are also at the center of Darwin’s theory of evolution, yet Darwin wrote that Lamarck’s ideas were “veritable rubbish.” Darwinian evolution is driven by genetic variation combined with natural selection, the process whereby some variations give their bearers better reproductive success in a given environment than other organisms have. Lamarckian evolution, on the other hand, depends on the inheritance of acquired characteristics. Giraffes, for example, got their long necks by stretching to eat leaves from tall trees, and stretched necks were inherited by their offspring, though Lamarck did not explain how this might be possible.

When the molecular structure of DNA was discovered in 1953, it became dogma in the teaching of biology that DNA and its coded information could not be altered in any way by the environment or a person’s way of life. The environment, it was known, could stimulate the expression of a gene. Having a light shone in one’s eyes or suffering pain, for instance, stimulates the activity of neurons and in doing so changes the activity of genes those neurons contain, producing instructions for making proteins or other molecules that play a central part in our bodies.

The structure of the DNA neighboring the gene provides a list of instructions, a gene program, that determines under what circumstances the gene is expressed. And it was held that these instructions could not be altered by the environment. Only mutations, which are errors introduced at random, could change the instructions or the information encoded in the gene itself and drive evolution through natural selection. Scientists discredited any Lamarckian claims that the environment can make lasting, perhaps heritable alterations in gene structure or function.

But new ideas closely related to Lamarck's eighteenth-century views have become central to our understanding of genetics. In the past fifteen years these ideas, which belong to a developing field of study called epigenetics, have been discussed in numerous articles and several books, including Nessa Carey's 2012 study The Epigenetics Revolution and The Deepest Well, a recent work on childhood trauma by the physician Nadine Burke Harris.

The developing literature surrounding epigenetics has forced biologists to consider the possibility that gene expression could be influenced by some heritable environmental factors previously believed to have had no effect over it, like stress or deprivation. “The DNA blueprint,” Carey writes,

isn't a sufficient explanation for all the sometimes wonderful, sometimes awful, complexity of life. If the DNA sequence was all that mattered, identical twins would always be absolutely identical in every way. Babies born to malnourished mothers would gain weight as easily as other babies who had a healthier start in life.

That might seem a commonsensical view. But it runs counter to decades of scientific thought about the independence of the genetic program from environmental influence. What findings have made it possible?

In 1975, two English biologists, Robin Holliday and John Pugh, and an American biologist, Arthur Riggs, independently suggested that methylation, a chemical modification of DNA that is heritable and can be induced by environmental influences, had an important part in controlling gene expression. How it did this was not understood, but the idea that through methylation the environment could, in fact, alter not only gene expression but also the genetic program rapidly took root in the scientific community.

As scientists came to better understand the function of methylation in altering gene expression, they realized that extreme environmental stress, the results of which had earlier seemed self-explanatory, could have additional biological effects on the organisms that suffered it. Experiments with laboratory animals have now shown that these outcomes are based on the transmission of acquired changes in genetic function. Childhood abuse, trauma, famine, and ethnic prejudice may, it turns out, have long term consequences for the functioning of our genes.

These effects arise from a newly recognized genetic mechanism called epigenesis, which enables the environment to make long lasting changes in the way genes are expressed.

Epigenesis does not change the information coded in the genes or a person's genetic makeup (the genes themselves are not affected); instead, it alters the manner in which they are "read," blocking access to certain genes and preventing their expression.

This mechanism can be the hidden cause of our feelings of depression, anxiety, or paranoia. Perhaps most surprising of all, this alteration could, in some cases, be passed on to future generations who have never directly experienced the stresses that caused their forebears' depression or ill health.

Numerous clinical studies have shown that childhood trauma, arising from parental death or divorce, neglect, violence, abuse, lack of nutrition or shelter, or other stressful circumstances, can give rise to a variety of health problems in adults: heart disease, cancer, mood and dietary disorders, alcohol and drug abuse, infertility, suicidal behavior, learning deficits, and sleep disorders.

Since the publication in 2003 of an influential paper by Rudolf Jaenisch and Adrian Bird, we have started to understand the genetic mechanisms that explain why this is the case. The body and the brain normally respond to danger and frightening experiences by releasing a hormone, a glucocorticoid that controls stress. This hormone prepares us for various challenges by adjusting heart rate, energy production, and brain function; it binds to a protein called the glucocorticoid receptor in nerve cells of the brain.

Normally, this binding shuts off further glucocorticoid production, so that when one no longer perceives a danger, the stress response abates. However, as Gustavo Turecki and Michael Meaney note in a 2016 paper surveying more than a decade’s worth of findings about epigenetics, the gene for the receptor is inactive in people who have experienced childhood stress; as a result, they produce few receptors. Without receptors to bind to, glucocorticoids cannot shut off their own production, so the hormone keeps being released and the stress response continues, even after the threat has subsided.

“The term for this is disruption of feedback inhibition,” Harris writes. It is as if “the body’s stress thermostat is broken. Instead of shutting off this supply of ‘heat’ when a certain point is reached, it just keeps on blasting cortisol through your system.”

It is now known that childhood stress can deactivate the receptor gene by an epigenetic mechanism, namely, by creating a physical barrier to the information for which the gene codes. What creates this barrier is DNA methylation, by which methyl groups known as methyl marks (composed of one carbon and three hydrogen atoms) are added to DNA.

DNA methylation is long-lasting and keeps chromatin, the DNA-protein complex that makes up the chromosomes containing the genes, in a highly folded structure that blocks access to select genes by the gene expression machinery, effectively shutting the genes down. The long-term consequences are chronic inflammation, diabetes, heart disease, obesity, schizophrenia, and major depressive disorder.
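To make the "broken thermostat" picture concrete, here is a minimal toy sketch (not taken from the article or the book; the parameter values are arbitrary assumptions chosen only to show the qualitative behaviour). A cortisol-like level rises in response to a stressor, and the fraction of working glucocorticoid receptors sets how strongly a high level suppresses further release, as in the feedback loop described above.

```python
# Toy model of glucocorticoid feedback inhibition (illustrative only;
# the numbers are arbitrary, not physiological values).

def simulate(receptor_fraction, hours=24, stress_until=4):
    """Return hourly 'cortisol' levels for a given fraction of working receptors."""
    cortisol = 1.0                                            # arbitrary starting level
    levels = []
    for hour in range(hours):
        stress_drive = 2.0 if hour < stress_until else 0.0    # stressor during the early hours
        suppression = receptor_fraction * cortisol * 0.5      # receptor-mediated brake
        release = max(0.0, 1.0 + stress_drive - suppression)  # basal + stress - feedback
        cortisol = 0.7 * cortisol + 0.3 * release             # level drifts toward current drive
        levels.append(round(cortisol, 2))
    return levels

print("intact feedback  :", simulate(receptor_fraction=1.0))
print("reduced receptors:", simulate(receptor_fraction=0.3))
```

With the full complement of receptors, the level climbs during the stressor and then settles back down; with most receptors silenced, it climbs higher and settles at a higher resting level afterwards, which is the qualitative pattern of disrupted feedback inhibition described above.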

Such epigenetic effects have been demonstrated in experiments with laboratory animals. In a typical experiment, rat or mouse pups are subjected to early-life stress, such as repeated maternal separation. Their behavior as adults is then examined for evidence of depression, and their genomes are analyzed for epigenetic modifications. Likewise, pregnant rats or mice can be exposed to stress or nutritional deprivation, and their offspring examined for behavioral and epigenetic consequences.

Experiments like these have shown that even animals not directly exposed to traumatic circumstances, those still in the womb when their parents were put under stress, can have blocked receptor genes. It is probably the transmission of glucocorticoids from mother to fetus via the placenta that alters the fetus in this way. In humans, prenatal stress affects each stage of the child’s maturation: for the fetus, a greater risk of preterm delivery, decreased birth weight, and miscarriage; in infancy, problems of temperament, attention, and mental development; in childhood, hyperactivity and emotional problems; and in adulthood, illnesses such as schizophrenia and depression.

What is the significance of these findings?

Until the mid-1970s, no one suspected that the way in which the DNA was "read" could be altered by environmental factors, or that the nervous systems of people who grew up in stress-free environments would develop differently from those of people who did not. One's development, it was thought, was guided only by one's genetic makeup.

As a result of epigenesis, a child deprived of nourishment may continue to crave and consume large amounts of food as an adult, even when he or she is being properly nourished, leading to obesity and diabetes. A child who loses a parent or is neglected or abused may have a genetic basis for experiencing anxiety and depression and possibly schizophrenia.

Formerly, it had been widely believed that Darwinian evolutionary mechanisms, variation and natural selection, were the only means for introducing such long lasting changes in brain function, a process that took place over generations. We now know that epigenetic mechanisms can do so as well, within the lifetime of a single person.

It is by now well established that people who suffer trauma directly during childhood or who experience their mother’s trauma indirectly as a fetus may have epigenetically based illnesses as adults. More controversial is whether epigenetic changes can be passed on from parent to child.

Methyl marks are stable when DNA is not replicating, but when it replicates, the methyl marks must be introduced into the newly replicated DNA strands to be preserved in the new cells. Researchers agree that this takes place when cells of the body divide, a process called mitosis, but it is not yet fully established under which circumstances marks are preserved when cell division yields sperm and egg, a process called meiosis, or when mitotic divisions of the fertilized egg form the embryo. Transmission at these two latter steps would be necessary for epigenetic changes to be transmitted in full across generations.

The most revealing instances for studies of intergenerational transmission have been natural disasters, famines, and atrocities of war, during which large groups have undergone trauma at the same time. These studies have shown that when women are exposed to stress in the early stages of pregnancy, they give birth to children whose stress response systems malfunction. Among the most widely studied of such traumatic events is the Dutch Hunger Winter. In 1944 the Germans prevented any food from entering the parts of Holland that were still occupied. The Dutch resorted to eating tulip bulbs to overcome their stomach pains. Women who were pregnant during this period, Carey notes, gave birth to a higher proportion of obese and schizophrenic children than one would normally expect. These children also exhibited epigenetic changes not observed in similar children, such as siblings, who had not experienced famine at the prenatal stage.

During the Great Chinese Famine (1958-1961), millions of people died, and children born to young women who experienced the famine were more likely to become schizophrenic, to have impaired cognitive function, and to suffer from diabetes and hypertension as adults. Similar studies of the 1932-1933 Ukrainian famine, in which many millions died, revealed an elevated risk of type II diabetes in people who were in the prenatal stage of development at the time. Although prenatal and early childhood stress both induce epigenetic effects and adult illnesses, it is not known if the mechanism is the same in both cases.

Whether epigenetic effects of stress can be transmitted over generations needs more research, both in humans and in laboratory animals. But recent comprehensive studies by several groups using advanced genetic techniques have indicated that epigenetic modifications are not restricted to the glucocorticoid receptor gene. They are much more extensive than had been realized, and their consequences for our development, health, and behavior may also be great.

It is as though nature employs epigenesis to make long lasting adjustments to an individual’s genetic program to suit his or her personal circumstances, much as in Lamarck’s notion of “striving for perfection.”

In this view, the ill health arising from famine or other forms of chronic, extreme stress would constitute an epigenetic miscalculation on the part of the nervous system. Because the brain prepares us for adult adversity that matches the level of stress we suffer in early life, psychological disease and ill health persist even when we move to an environment with a lower stress level.

Once we recognize that there is an epigenetic basis for diseases caused by famine, economic deprivation, war related trauma, and other forms of stress, it might be possible to treat some of them by reversing those epigenetic changes. “When we understand that the source of so many of our society’s problems is exposure to childhood adversity,” Harris writes,

the solutions are as simple as reducing the dose of adversity for kids and enhancing the ability of caregivers to be buffers. From there, we keep working our way up, translating that understanding into the creation of things like more effective educational curricula and the development of blood tests that identify biomarkers for toxic stress, things that will lead to a wide range of solutions and innovations, reducing harm bit by bit, and then leap by leap.

Epigenetics has also made clear that the stress caused by war, prejudice, poverty, and other forms of childhood adversity may have consequences both for the persons affected and for their future unborn children, not only for social and economic reasons but also for biological ones.

The Epigenetics Revolution

Nessa Carey

DNA.
Sometimes, when we read about biology, we could be forgiven for thinking that those three letters explain everything. Here, for example, are just a few of the statements made on 26 June 2000, when researchers announced that the human genome had been sequenced:

Today we are learning the language in which God created life. US President Bill Clinton

We now have the possibility of achieving all we ever hoped for from medicine. UK Science Minister Lord Sainsbury

Mapping the human genome has been compared with putting a man on the moon, but I believe it is more than that. This is the outstanding achievement not only of our lifetime, but in terms of human history. Michael Dexter, The Wellcome Trust

From these quotations, and many others like them, we might well think that researchers could have relaxed a bit after June 2000 because most human health and disease problems could now be sorted out really easily. After all, we had the blueprint for humankind. All we needed to do was get a bit better at understanding this set of instructions, so we could fill in a few details. Unfortunately, these statements have proved at best premature. The reality is rather different.

We talk about DNA as if it’s a template, like a mould for a car part in a factory. In the factory, molten metal or plastic gets poured into the mould thousands of times and, unless something goes wrong in the process, out pop thousands of identical car parts.

But DNA isn’t really like that. It’s more like a script. Think of Romeo and Juliet, for example. In 1936 George Cukor directed Leslie Howard and Norma Shearer in a film version. Sixty years later Baz Luhrmann directed Leonardo DiCaprio and Claire Danes in another movie version of this play. Both productions used Shakespeare’s script, yet the two movies are entirely different. Identical starting points, different outcomes.

That’s what happens when cells read the genetic code that’s in DNA. The same script can result in different productions.

The implications of this for human health are very wide-ranging, as we will see from the case studies we are going to look at in a moment. In all these case studies it's really important to remember that nothing happened to the DNA blueprint of the people involved. Their DNA didn't change (mutate), and yet their life histories altered irrevocably in response to their environments.

Audrey Hepburn was one of the 20th century’s greatest movie stars. Stylish, elegant and with a delicately lovely, almost fragile bone structure, her role as Holly Golightly in Breakfast at Tiffany’s has made her an icon, even to those who have never seen the movie. It’s startling to think that this wonderful beauty was created by terrible hardship. Audrey Hepburn was a survivor of an event in the Second World War known as the Dutch Hunger Winter. This ended when she was sixteen years old but the after effects of this period, including poor physical health, stayed with her for the rest of her life.

The Dutch Hunger Winter lasted from the start of November 1944 to the late spring of 1945. This was a bitterly cold period in Western Europe, creating further hardship in a continent that had been devastated by four years of brutal war. Nowhere was this worse than in the Western Netherlands, which at this stage was still under German control. A German blockade resulted in a catastrophic drop in the availability of food to the Dutch population. At one point the population was trying to survive on only about 30 per cent of the normal daily calorie intake. People ate grass and tulip bulbs, and burned every scrap of furniture they could get their hands on, in a desperate effort to stay alive. Over 20,000 people had died by the time food supplies were restored in May 1945.

The dreadful privations of this time also created a remarkable scientific study population. The Dutch survivors were a well defined group of individuals all of whom suffered just one period of malnutrition, all of them at exactly the same time. Because of the excellent healthcare infrastructure and record keeping in the Netherlands, epidemiologists have been able to follow the long term effects of the famine. Their findings were completely unexpected.

One of the first aspects they studied was the effect of the famine on the birth weights of children who had been in the womb during that terrible period. If a mother was well fed around the time of conception and malnourished only for the last few months of the pregnancy, her baby was likely to be born small. If, on the other hand, the mother suffered malnutrition for the first three months of the pregnancy only (because the baby was conceived towards the end of this terrible episode), but then was well fed, she was likely to have a baby with a normal body weight. The foetus ‘caught up’ in body weight.

That all seems quite straightforward, as we are all used to the idea that foetuses do most of their growing in the last few months of pregnancy. But epidemiologists were able to study these groups of babies for decades and what they found was really surprising. The babies who were born small stayed small all their lives, with lower obesity rates than the general population. For forty or more years, these people had access to as much food as they wanted, and yet their bodies never got over the early period of malnutrition. Why not? How did these early life experiences affect these individuals for decades? Why weren’t these people able to go back to normal, once their environment reverted to how it should be?

Even more unexpectedly, the children whose mothers had been malnourished only early in pregnancy had higher obesity rates than normal. Recent reports have shown a greater incidence of other health problems as well, including poorer performance on certain tests of mental activity. Even though these individuals had seemed perfectly healthy at birth, something had happened to their development in the womb that affected them for decades after. And it wasn't just the fact that something had happened that mattered, it was when it happened. Events that take place in the first three months of development, a stage when the foetus is really very small, can affect an individual for the rest of their life.

Even more extraordinarily, some of these effects seem to be present in the children of this group, i.e. in the grandchildren of the women who were malnourished during the first three months of their pregnancy.

So something that happened in one pregnant population affected their children’s children. This raised the really puzzling question of how these effects were passed on to subsequent generations.

Let’s consider a different human story. Schizophrenia is a dreadful mental illness which, if untreated, can completely overwhelm and disable an affected person. Patients may present with a range of symptoms including delusions, hallucinations and enormous difficulties focusing mentally. People with schizophrenia may become completely incapable of distinguishing between the ‘real world’ and their own hallucinatory and delusional realm. Normal cognitive, emotional and societal responses are lost. There is a terrible misconception that people with schizophrenia are likely to be violent and dangerous. For the majority of patients this isn’t the case at all, and the people most likely to suffer harm because of this illness are the patients themselves. Individuals with schizophrenia are fifty times more likely to attempt suicide than healthy individuals.

Schizophrenia is a tragically common condition. It affects between 0.5 per cent and 1 per cent of the population in most countries and cultures, which means that there may be over fifty million people alive today who are suffering from this condition. Scientists have known for some time that genetics plays a strong role in determining if a person will develop this illness. We know this because if one of a pair of identical twins has schizophrenia, there is a 50 per cent chance that their twin will also have the condition. This is much higher than the 1 per cent risk in the general population.

Identical twins have exactly the same genetic code as each other. They share the same womb and usually they are brought up in very similar environments. When we consider this, it doesn't seem surprising that if one of the twins develops schizophrenia, the chance that his or her twin will also develop the illness is very high. In fact, we have to start wondering why it isn't higher. Why isn't the figure 100 per cent? How is it that two apparently identical individuals can become so very different? An individual has a devastating mental illness, but will their identical twin suffer from it too? Flip a coin: heads they win, tails they lose. Variations in the environment are unlikely to account for this, and even if they did, how would these environmental effects have such profoundly different impacts on two genetically identical people?
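The figures quoted in the last two paragraphs can be checked with rough arithmetic (a back-of-the-envelope sketch; the world-population and prevalence values below are round assumptions, not numbers from the book):

```python
# Rough arithmetic behind the schizophrenia figures quoted above.
# Population and prevalence are round assumptions used for illustration.

world_population = 7_500_000_000
prevalence = 0.007                    # ~0.7%, between the quoted 0.5% and 1%
affected = world_population * prevalence
print(f"people affected worldwide: ~{affected / 1e6:.1f} million")   # roughly 50 million

background_risk = 0.01                # ~1% lifetime risk in the general population
cotwin_risk = 0.50                    # ~50% if one identical twin is affected
print(f"relative risk for an identical co-twin: ~{cotwin_risk / background_risk:.0f}x")
```

The point is simply that a 50 per cent co-twin risk against a roughly 1 per cent background risk is an enormous genetic signal, and yet it still leaves half of the co-twins unaffected, which is the puzzle the text goes on to address.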

Here’s a third case study. A small child, less than three years old, is abused and neglected by his or her parents. Eventually, the state intervenes and the child is taken away from the biological parents and placed with foster or adoptive parents. These new carers love and cherish the child, doing everything they can to create a secure home, full of affection. The child stays with these new parents throughout the rest of its childhood and adolescence, and into young adulthood.

Sometimes everything works out well for this person. They grow up into a happy, stable individual indistinguishable from all their peers who had normal, non-abusive childhoods. But often, tragically, it doesn't work out this way. Children who have suffered from abuse or neglect in their early years grow up with a substantially higher risk of adult mental health problems than the general population. All too often the child grows up into an adult at high risk of depression, self-harm, drug abuse and suicide.

Once again, we have to ask ourselves why. Why is it so difficult to override the effects of early childhood exposure to neglect or abuse?

Why should something that happened early in life have effects on mental health that may still be obvious decades later?

In some cases, the adult may have absolutely no recollection of the traumatic events, and yet they may suffer the consequences mentally and emotionally for the rest of their lives.

These three case studies seem very different on the surface. The first is mainly about nutrition, especially of the unborn child. The second is about the differences that arise between genetically identical individuals. The third is about long term psychological damage as a result of childhood abuse.

But these stories are linked at a very fundamental biological level. They are all examples of epigenetics. Epigenetics is the new discipline that is revolutionising biology. Whenever two genetically identical individuals are non-identical in some way we can measure, this is called epigenetics. When a change in environment has biological consequences that last long after the event itself has vanished into distant memory, we are seeing an epigenetic effect in action.

Epigenetic phenomena can be seen all around us, every day. Scientists have identified many examples of epigenetics, just like the ones described above, for many years. When scientists talk about epigenetics they are referring to all the cases where the genetic code alone isn't enough to describe what's happening; there must be something else going on as well.

This is one of the ways that epigenetics is described scientifically, where things which are genetically identical can actually appear quite different to one another. But there has to be a mechanism that brings out this mismatch between the genetic script and the final outcome. These epigenetic effects must be caused by some sort of physical change, some alterations in the vast array of molecules that make up the cells of every living organism. This leads us to the other way of viewing epigenetics, the molecular description.

In this model, epigenetics can be defined as the set of modifications to our genetic material that change the ways genes are switched on or off, but which don’t alter the genes themselves.
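As a rough illustration of that molecular definition (purely a toy sketch; the gene names below are made-up stand-ins, not real genes), the two 'cells' here share exactly the same gene list, and only the pattern of on/off marks layered on top of it differs:

```python
# Toy illustration: identical genetic script, different marks, different outcomes.
# Gene names are invented stand-ins, not real genes.

GENOME = ["oxygen_carrier", "toxin_neutraliser", "growth_signal"]   # same in every cell

marks_by_cell = {
    "red_blood_cell_precursor": {"oxygen_carrier": "on",  "toxin_neutraliser": "off", "growth_signal": "off"},
    "liver_cell":               {"oxygen_carrier": "off", "toxin_neutraliser": "on",  "growth_signal": "off"},
}

for cell, marks in marks_by_cell.items():
    expressed = [gene for gene in GENOME if marks[gene] == "on"]
    print(f"{cell}: same genome, expresses {expressed}")
```

Nothing in the shared gene list changes; the difference between the two cells lives entirely in the marks, which is all the molecular definition of epigenetics claims.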

Although it may seem confusing that the word ‘epigenetics’ can have two different meanings, it’s just because we are describing the same event at two different levels. It’s a bit like looking at the pictures in old newspapers with a magnifying glass, and seeing that they are made up of dots. If we didn’t have a magnifying glass we might have thought that each picture was just made in one solid piece and we’d probably never have been able to work out how so many new images could be created each day. On the other hand, if all we ever did was look through the magnifying glass, all we would see would be dots, and we’d never see the incredible image that they formed together and which we’d see if we could only step back and look at the big picture.

The revolution that has happened very recently in biology is that for the first time we are actually starting to understand how amazing epigenetic phenomena are caused. We’re no longer just seeing the large image, we can now also analyse the individual dots that created it.

Crucially, this means that we are finally starting to unravel the missing link between nature and nurture; how our environment talks to us and alters us, sometimes forever.

The ‘epi’ in epigenetics is derived from Greek and means at, on, to, upon, over or beside. The DNA in our cells is not some pure, unadulterated molecule. Small chemical groups can be added at specific regions of DNA. Our DNA is also smothered in special proteins. These proteins can themselves be covered with additional small chemicals. None of these molecular amendments changes the underlying genetic code. But adding these chemical groups to the DNA, or to the associated proteins, or removing them, changes the expression of nearby genes. These changes in gene expression alter the functions of cells, and the very nature of the cells themselves. Sometimes, if these patterns of chemical modifications are put on or taken off at a critical period in development, the pattern can be set for the rest of our lives, even if we live to be over a hundred years of age.

There’s no debate that the DNA blueprint is a starting point. A very important starting point and absolutely necessary, without a doubt. But it isn’t a sufficient explanation for all the sometimes wonderful, sometimes awful, complexity of life. If the DNA sequence was all that mattered, identical twins would always be absolutely identical in every way. Babies born to malnourished mothers would gain weight as easily as other babies who had a healthier start in life. And as we shall see in Chapter 1, we would all look like big amorphous blobs, because all the cells in our bodies would be completely identical.

Huge areas of biology are influenced by epigenetic mechanisms, and the revolution in our thinking is spreading further and further into unexpected frontiers of life on our planet. Some of the other examples we’ll meet in this book include why we can’t make a baby from two sperm or two eggs, but have to have one of each. What makes cloning possible? Why is cloning so difficult? Why do some plants need a period of cold before they can flower? Since queen bees and worker bees are genetically identical, why are they completely different in form and function? Why are all tortoiseshell cats female?

Why is it that humans contain trillions of cells in hundreds of complex organs, and microscopic worms contain about a thousand cells and only rudimentary organs, but we and the worm have the same number of genes?

Scientists in both the academic and commercial sectors are also waking up to the enormous impact that epigenetics has on human health. It’s implicated in diseases from schizophrenia to rheumatoid arthritis, and from cancer to chronic pain. There are already two types of drugs that successfully treat certain cancers by interfering with epigenetic processes. Pharmaceutical companies are spending hundreds of millions of dollars in a race to develop the next generation of epigenetic drugs to treat some of the most serious illnesses afflicting the industrialised world. Epigenetic therapies are the new frontiers of drug discovery.

In biology, Darwin and Mendel came to define the 19th century as the era of evolution and genetics; Watson and Crick defined the 20th century as the era of DNA, and the functional understanding of how genetics and evolution interact. But in the 21st century it is the new scientific discipline of epigenetics that is unravelling so much of what we took as dogma and rebuilding it in an infinitely more varied, more complex and even more beautiful fashion.

The world of epigenetics is a fascinating one. It’s filled with remarkable subtlety and complexity, and in Chapters 3 and 4 we’ll delve deeper into the molecular biology of what’s happening to our genes when they become epigenetically modified. But like so many of the truly revolutionary concepts in biology, epigenetics has at its basis some issues that are so simple they seem completely self evident as soon as they are pointed out. Chapter 1 is the single most important example of such an issue. It’s the investigation which started the epigenetics revolution.

Notes on nomenclature

There is an international convention on the way that the names of genes and proteins are written, which we adhere to in this book.

Gene names and symbols are written in italics. The proteins encoded by the genes are written in plain text. The symbols for human genes and proteins are written in upper case. For other species, such as mice, the symbols are usually written with only the first letter capitalised.

This is summarised for a hypothetical gene in the following table.

Like all rules, however, there are a few quirks in this system and while these conventions apply in general we will encounter some exceptions in this book.

Chapter 1

An Ugly Toad and an Elegant Man

Like the toad, ugly and venomous, wears yet a precious jewel in his head. William Shakespeare

Humans are composed of about 50 to 70 trillion cells. That’s right, 50,000,000,000,000 cells. The estimate is a bit vague but that’s hardly surprising. Imagine we somehow could break a person down into all their individual cells and then count those cells, at a rate of one cell every second. Even at the lower estimate it would take us about a million and a half years, and that’s without stopping for coffee or losing count at any stage. These cells form a huge range of tissues, all highly specialised and completely different from one another. Unless something has gone very seriously wrong, kidneys don’t start growing out of the top of our heads and there are no teeth in our eyeballs.
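Carey's million-and-a-half-years figure is easy to verify (a quick back-of-the-envelope check, not a calculation from the book):

```python
# Sanity check on the counting estimate: one cell per second, no coffee breaks.

seconds_per_year = 60 * 60 * 24 * 365
for cells in (50e12, 70e12):          # the lower and upper estimates quoted above
    years = cells / seconds_per_year
    print(f"{cells:.0e} cells -> about {years / 1e6:.1f} million years of counting")
```

The lower estimate works out to roughly 1.6 million years, in line with the "about a million and a half years" in the text.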

This seems very obvious but why don’t they? It’s actually quite odd, when we remember that every cell in our body was derived from the division of just one starter cell. This single cell is called the zygote. A zygote forms when one sperm merges with one egg.

This zygote splits in two; those two cells divide again and so on, to create the miraculous piece of work which is a full human body. As they divide the cells become increasingly different from one another and form specialised cell types. This process is known as differentiation. It’s a vital one in the formation of any multicellular organism.

If we look at bacteria down a microscope then pretty much all the bacteria of a single species look identical. Look at certain human cells in the same way (say, a food-absorbing cell from the small intestine and a neuron from the brain) and we would be hard pressed to say that they were even from the same planet. But so what? Well, the big 'what' is that these cells started out with exactly the same genetic material as one another. And we do mean exactly; this has to be the case, because they came from just one starter cell, that zygote. So the cells have become completely different even though they came from one cell with just one blueprint.

One explanation for this is that the cells are using the same information in different ways and that’s certainly true. But it’s not necessarily a statement that takes us much further forwards. In a 1960 adaptation of H. G. Wells’s The Time Machine, starring Rod Taylor as the time travelling scientist, there’s a scene where he shows his time machine to some learned colleagues (all male, naturally) and one asks for an explanation of how the machine works. Our hero then describes how the occupant of the machine will travel through time by the following mechanism:

In front of him is the lever that controls movement. Forward pressure sends the machine into the future. Backward pressure, into the past. And the harder the pressure, the faster the machine travels.

Everyone nods sagely at this explanation. The only problem is that this isn't an explanation, it's just a description. And that's also true of that statement about cells using the same information in different ways: it doesn't really tell us anything, it just re-states what we already knew in a different way.

What’s much more interesting is the exploration of how cells use the same genetic information in different ways. Perhaps even more important is how the cells remember and keep on doing it. Cells in our bone marrow keep on producing blood cells, cells in our liver keep on producing liver cells. Why does this happen? One possible and very attractive explanation is that as cells become more specialised they rearrange their genetic material, possibly losing genes they don’t require. The liver is a vital and extremely complicated organ. The website of the British Liver Trust states that the liver performs over 500 functions, including processing the food that has been digested by our intestines, neutralising toxins and creating enzymes that carry out all sorts of tasks in our bodies. But one thing the liver simply never does is transport oxygen around the body. That job is carried out by our red blood cells, which are stuffed full of a particular protein, haemoglobin. Haemoglobin binds oxygen in tissues where there’s lots available, like our lungs, and then releases it when the red blood cell reaches a tissue that needs this essential chemical, such as the tiny blood vessels in the tips of our toes. The liver is never going to carry out this function, so perhaps it just gets rid of the haemoglobin gene, which it simply never uses.

It's a perfectly reasonable suggestion: cells could simply lose genetic material they aren't going to use. As they differentiate, cells could jettison hundreds of genes they no longer need. There could of course be a slightly less drastic variation on this: maybe the cells shut down genes they aren't using. And maybe they do this so effectively that these genes can never ever be switched on again in that cell, i.e. the genes are irreversibly inactivated. The key experiments that examined these eminently reasonable hypotheses (loss of genes, or irreversible inactivation) involved an ugly toad and an elegant man.

Turning back the biological clock

The work has its origins in experiments performed many decades ago in England by John Gurdon, first in Oxford and subsequently in Cambridge. Now Professor Sir John Gurdon, he still works in a lab in Cambridge, albeit these days in a gleaming modern building that has been named after him. He's an engaging, unassuming and striking man who, 40 years on from his ground-breaking work, continues to publish research in a field that he essentially founded.

John Gurdon cuts an instantly recognisable figure around Cambridge. Now in his seventies, he is tall, thin and has a wonderful head of swept-back blonde hair. He looks like the quintessential older English gentleman of American movies, and fittingly he went to school at Eton. There is a lovely story that John Gurdon still treasures, a school report from his biology teacher at that institution which says, 'I believe Gurdon has ideas about becoming a scientist. On his present showing, this is quite ridiculous.' The teacher's comments were based on his pupil's dislike of mindless rote learning of unconnected facts. But as we shall see, for a scientist as wonderful as John Gurdon, memory is much less important than imagination.

In 1937 the Hungarian biochemist Albert Szent-Gyorgyi won the Nobel Prize for Physiology or Medicine, his achievements including the discovery of vitamin C. In a phrase that has various subtly different translations but one consistent interpretation, he defined discovery as, ‘To see what everyone else has seen but to think what nobody else has thought’. It is probably the best description ever written of what truly great scientists do. And John Gurdon is truly a great scientist, and may well follow in Szent-Gyorgyi’s Nobel footsteps.

In 2009 he was a co-recipient of the Lasker Prize, which is to the Nobel what the Golden Globes are so often to the Oscars. John Gurdon’s work is so wonderful that when it is first described it seems so obvious that anyone could have done it. The questions he asked, and the ways in which he answered them, have that scientifically beautiful feature of being so elegant that they seem entirely self-evident.

John Gurdon used non-fertilised toad eggs in his work. Any of us who has ever kept a tank full of frogspawn and watched this jelly-like mass develop into tadpoles and finally tiny frogs has been working, whether we thought about it in these terms or not, with fertilised eggs, i.e. ones into which sperm have entered and created a new complete nucleus. The eggs John Gurdon worked on were a little like these, but hadn’t been exposed to sperm.

There were good reasons why he chose to use toad eggs in his experiments. The eggs of amphibians are generally very big, are laid in large numbers outside the body and are see-through. All these features make amphibians a very handy experimental species in developmental biology, as the eggs are technically relatively easy to handle. Certainly a lot better than a human egg, which is hard to obtain, very fragile to handle, not transparent, and so small that we need a microscope just to see it.

John Gurdon worked on the African clawed toad (Xenopus laevis, to give it its official title), one of those John Malkovich ugly-handsome animals, and investigated what happens to cells as they develop and differentiate and age. He wanted to see if a tissue cell from an adult toad still contained all the genetic material it had started with, or if it had lost or irreversibly inactivated some as the cell became more specialised. The way he did this was to take a nucleus from the cell of an adult toad and insert it into an unfertilised egg that had had its own nucleus removed. This technique is called somatic cell nuclear transfer (SCNT), and will come up over and over again. ‘Somatic’ comes from the Greek word for ‘body’.

After he’d performed the SCNT, John Gurdon kept the eggs in a suitable environment (much like a child with a tank of frogspawn) and waited to see if any of these cultured eggs hatched into little toad tadpoles.

The experiments were designed to test the following hypothesis: ‘As cells become more specialised (differentiated) they undergo an irreversible loss/inactivation of genetic material.’ There were two possible outcomes to these experiments:

Either

The hypothesis was correct and the ‘adult’ nucleus has lost some of the original blueprint for creating a new individual. Under these circumstances an adult nucleus will never be able to replace the nucleus in an egg and so will never generate a new healthy toad, with all its varied and differentiated tissues.

Or

The hypothesis was wrong, and new toads can be created by removing the nucleus from an egg and replacing it with one from adult tissues.

Other researchers had started to look at this before John Gurdon decided to tackle the problem: two scientists called Briggs and King, using a different amphibian, the frog Rana pipiens. In 1952 they transplanted the nuclei from cells at a very early stage of development into an egg lacking its own original nucleus and they obtained viable frogs. This demonstrated that it was technically possible to transfer a nucleus from another cell into an ‘empty’ egg without killing the cell. However, Briggs and King then published a second paper using the same system but transferring a nucleus from a more developed cell type, and this time they couldn’t create any frogs. The difference in the cells used for the nuclei in the two papers seems astonishingly minor: just one day older, and no froglets. This supported the hypothesis that some sort of irreversible inactivation event had taken place as the cells differentiated. A lesser man than John Gurdon might have been put off by this. Instead he spent over a decade working on the problem.

The design of the experiments was critical. Imagine we have started reading detective stories by Agatha Christie. After we’ve read our first three we develop the following hypothesis: ‘The killer in an Agatha Christie novel is always the doctor.’ We read three more and the doctor is indeed the murderer in each. Have we proved our hypothesis? No. There’s always going to be the thought that maybe we should read just one more to be sure. And what if some are out of print, or unobtainable? No matter how many we read, we may never be entirely sure that we’ve read the entire collection. But that’s the joy of disproving hypotheses. All we need is one instance in which Poirot or Miss Marple reveals that the doctor was a man of perfect probity and the killer was actually the vicar, and our hypothesis is shot to pieces. And that is how the best scientific experiments are designed: to disprove, not to prove, an idea.

And that was the genius of John Gurdon’s work. When he performed his experiments what he was attempting was exceptionally challenging with the technology of the time. If he failed to generate toads from the adult nuclei this could simply mean his technique had something wrong with it. No matter how many times he did the experiment without getting any toads, this wouldn’t actually prove the hypothesis. But if he did generate live toads from eggs where the original nucleus had been replaced by the adult nucleus he would have disproved the hypothesis. He would have demonstrated beyond doubt that when cells differentiate, their genetic material isn’t irreversibly lost or changed. The beauty of this approach is that just one such toad would topple the entire theory, and topple it he did.

John Gurdon is incredibly generous in his acknowledgement of the collegiate nature of scientific research, and the benefits he obtained from being in dynamic laboratories and universities. He was lucky to start his work in a well set-up laboratory which had a new piece of equipment which produced ultraviolet light. This enabled him to kill off the original nuclei of the recipient eggs without causing too much damage, and also ‘softened up’ the cell so that he could use tiny glass hypodermic needles to inject donor nuclei.

Other workers in the lab had, in some unrelated research, developed a strain of toads which had a mutation with an easily detectable, but non-damaging effect. Like almost all mutations this was carried in the nucleus, not the cytoplasm. The cytoplasm is the thick liquid inside cells, in which the nucleus sits. So John Gurdon used eggs from one strain and donor nuclei from the mutated strain. This way he would be able to show unequivocally that any resulting toads had been coded for by the donor nuclei, and weren’t just the result of experimental error, as could happen if a few recipient nuclei had been left over after treatment.

John Gurdon spent around fifteen years, starting in the late 1950s, demonstrating that in fact nuclei from specialised cells are able to create whole animals if placed in the right environment, i.e. an unfertilised egg. The more differentiated/specialised the donor cell was, the less successful the process in terms of numbers of animals, but that’s the beauty of disproving a hypothesis: we might need a lot of toad eggs to start with, but we don’t need to end up with many live toads to make our case. Just one non-murderous doctor will do it, remember?

Sir John Gurdon showed us that although there is something in cells that can keep specific genes turned on or switched off in different cell types, whatever this something is, it can’t be loss or permanent inactivation of genetic material, because if he put an adult nucleus into the right environment, in this case an ‘empty’ unfertilised egg, it forgot all about this memory of which cell type it came from. It went back to being a naive nucleus from an embryo and started the whole developmental process again.

Epigenetics is the ‘something’ in these cells. The epigenetic system controls how the genes in DNA are used, in some cases for hundreds of cell division cycles, and the effects are inherited when cells divide. Epigenetic modifications to the essential blueprint exist over and above the genetic code, on top of it, and program cells for decades. But under the right circumstances, this layer of epigenetic information can be removed to reveal the same shiny DNA sequence that was always there. That’s what happened when John Gurdon placed the nuclei from fully differentiated cells into the unfertilised egg cells.

Did John Gurdon know what this process was when he generated his new baby toads? No. Does that make his achievement any less magnificent? Not at all. Darwin knew nothing about genes when he developed the theory of evolution through natural selection. Mendel knew nothing about DNA when, in an Austrian monastery garden, he developed his idea of inherited factors that are transmitted ‘true’ from generation to generation of peas. It doesn’t matter. They saw what nobody else had seen and suddenly we all had a new way of viewing the world.

The epigenetic landscape

Oddly enough, there was a conceptual framework that was in existence when John Gurdon performed his work. Go to any conference with the word ‘epigenetics’ in the title and at some point one of the speakers will refer to something called ‘Waddington’s epigenetic landscape’.

from

The Epigenetics Revolution

by Nessa Carey

get it at Amazon.com

Music and the Mind – Anthony Storr.

“Music’s the Medicine of the Mind” John Logan (1744-88)

“Since music is the only language with the contradictory attributes of being at once intelligible and untranslatable, the musical creator is a being comparable to the gods, and music itself the supreme mystery of the science of man.” Claude Levi-Strauss

Today, more people listen to music than ever before in the history of the world. The audience has increased enormously since the Second World War. Recordings, radio, and even television, have made music available to a wider range of the population than anyone could have predicted fifty years ago. In spite of dire warnings that recordings might empty opera houses and concert halls, the audience for live performances has also multiplied.

This book reflects my personal preference in that it is primarily concerned with classical or Western ‘art’ music, rather than with ‘popular’ music. That these two varieties of music should have become so divergent is regrettable. The demand for accessible musical entertainment grew during the latter half of the nineteenth century in response to the increased wealth of the middle class. It was met by Offenbach, both Johann Strausses, Chabrier, Sullivan, and other gifted composers of light music which still enchants us today. The tradition was carried on into the twentieth century by composers of the stature of Gershwin, Jerome Kern, and Irving Berlin. It is only since the 1950s that the gap between classical and popular music has widened into a canyon which is nearly unbridgeable.

In spite of its widespread diffusion, music remains an enigma. Music for those who love it is so important that to be deprived of it would constitute a cruel and unusual punishment. Moreover, the perception of music as a central part of life is not confined to professionals or even to gifted amateurs. It is true that those who have studied the techniques of musical composition can more thoroughly appreciate the structure of a musical work than those who have not. It is also true that people who can play an instrument, or who can sing, can actively participate in music in ways which enrich their understanding of it. Playing in a string quartet, or even singing as one anonymous voice in a large choir, are both life-enhancing activities which those who take part in them find irreplaceable.

But even listeners who cannot read musical notation and who have never attempted to learn an instrument may be so deeply affected that, for them, any day which passes without being seriously involved with music in one way or another is a day wasted.

In the context of contemporary Western culture, this is puzzling. Many people assume that the arts are luxuries rather than necessities, and that words or pictures are the only means by which influence can be exerted on the human mind. Those who do not appreciate music think that it has no significance other than providing ephemeral pleasure. They consider it a gloss upon the surface of life; a harmless indulgence rather than a necessity.

This, no doubt, is why our present politicians seldom accord music a prominent place in their plans for education. Today, when education is becoming increasingly utilitarian, directed toward obtaining gainful employment rather than toward enriching personal experience, music is likely to be treated as an ‘extra’ in the school curriculum which only affluent parents can afford, and which need not be provided for pupils who are not obviously ‘musical’ by nature.

The idea that music is so powerful that it can actually affect both individuals and the state for good or ill has disappeared. In a culture dominated by the visual and the verbal, the significance of music is perplexing, and is therefore underestimated. Both musicians and lovers of music who are not professionally trained know that great music brings us more than sensuous pleasure, although sensuous pleasure is certainly part of musical experience.

Yet what it brings is hard to define. This book is an exploratory search; an attempt to discover what it is about music that so profoundly affects us, and why it is such an important part of our culture.

Chapter 1

Origins and Collective Functions

“Music is so naturally united with us that we cannot be free from it even if we so desired.” Boethius

No culture so far discovered lacks music. Making music appears to be one of the fundamental activities of mankind; as characteristically human as drawing and painting. The survival of Palaeolithic cave-paintings bears witness to the antiquity of this form of art; and some of these paintings depict people dancing. Flutes made of bone found in these caves suggest that they danced to some form of music. But, because music itself only survives when the invention of a system of notation has made a written record possible, or else when a living member of a culture recreates the sounds and rhythms which have been handed down to him by his forebears, we have no information about prehistoric music. We are therefore accustomed to regarding drawing and painting as integral parts of the life of early man, but less inclined to think of music in the same way. However, music, or musical sounds of some variety, are so interwoven with human life that they probably played a greater part in prehistory than can ever be determined.

When biologists consider complex human activities such as the arts, they tend to assume that their compelling qualities are derivations of basic drives. If any given activity can be seen to aid survival or facilitate adaptation to the environment, or to be derived from behaviour which does so, it ‘makes sense’ in biological terms. For example, the art of painting may originate from the human need to comprehend the external world through vision; an achievement which makes it possible to act upon the environment or influence it in ways which promote survival.

The Palaeolithic artists who drew and painted animals on the walls of their caves were using their artistic skills for practical reasons. Drawing is a form of abstraction which may be compared with the formation of verbal concepts. It enables the draughtsman to study an object in its absence; to experiment with various images of it, and thus, at least in phantasy, to exert power over it. These artists were magicians, who painted and drew animals in order to exercise magical charms upon them. By capturing the image of the animal, early humans probably felt that they could partially control it. Since the act of drawing sharpens the perceptions of the artist by making him pay detailed attention to the forms he is trying to depict, the Palaeolithic painter did in reality learn to know his prey more accurately, and therefore increased his chances of being successful in the hunt.

The art historian Herbert Read wrote:

“Far from being an expenditure of surplus energy, as earlier theories have supposed, art, at the dawn of human culture, was a key to survival, a sharpening of the faculties essential to the struggle for existence. Art, in my opinion, has remained a key to survival.”

The art of literature probably derived from that of the primitive story-teller. He was not merely providing entertainment, but passing down to his listeners a tradition of who they were, where they had come from, and what their lives signified. By making sense and order out of his listeners’ existence, he was enhancing their feeling of personal worth in the scheme of things and therefore increasing their capacity to deal effectively with the social tasks and relationships which made up their lives. The myths of a society usually embody its traditional values and moral norms. Repetition of these myths therefore reinforces the coherence and unity of the society, as well as giving each individual a sense of meaning and purpose. Both painting and literature can be understood as having developed from activities which, originally, were adaptively useful.

But what use is music?

Music can certainly be regarded as a form of communication between people; but what it communicates is not obvious. Music is not usually representational: it does not sharpen our perception of the external world, nor, allowing for some notable exceptions, does it generally imitate it. Nor is music propositional: it does not put forward theories about the world or convey information in the same way as does language.

There are two conventional ways in which one can approach the problem of the significance of music in human life. One is to examine its origins. Music today is highly developed, complex, various and sophisticated. If we could understand how it began, perhaps we could better understand its fundamental meaning. The second way is to examine how music has actually been used. What functions has music served in different societies throughout history?

There is no general agreement about the origins of music. Music has only tenuous links with the world of nature. Nature is full of sound, and some of nature’s sounds, such as running water, may give us considerable pleasure. A survey of sound preferences amongst people in New Zealand, Canada, Jamaica and Switzerland revealed that none disliked the sounds of brooks, rivers and waterfalls, and that a high proportion enjoyed them. But nature’s sounds, with the exception of bird-song and some other calls between animals, are irregular noises rather than the sustained notes of definable pitch which go to form music. This is why the sounds of which Western music is composed are referred to as ‘tones’: they are separable units with constant auditory waveforms which can be repeated and reproduced.

Although science can define the differences between tones in terms of pitch, loudness, timbre, and waveform, it cannot portray the relation between tones which constitutes music.

Whilst there is still considerable dispute concerning the origins, purpose, and significance of music, there is general agreement that it is only remotely related to the sounds and rhythms of the natural world.

Absence of external association makes music unique amongst the arts; but since music is closely linked with human emotions, it cannot be regarded as no more than a disembodied system of relationships between sounds.

Music has often been compared with mathematics; but, as G. H. Hardy pointed out, ‘Music can be used to stimulate mass emotion, while mathematics cannot.’

If music were merely a series of artificial constructs comparable with decorative visual patterns, it would induce a mild aesthetic pleasure, but nothing more. Yet music can penetrate the core of our physical being. It can make us weep, or give us intense pleasure. Music, like being in love, can temporarily transform our whole existence. But the links between the art of music and the reality of human emotions are difficult to define; so difficult that, as we shall see, many distinguished musicians have abandoned any attempt to do so, and have tried to persuade us that musical works consist of disembodied patterns of sound which have no connection with other forms of human experience.

Can music be related to the sounds made by other species? The most obviously ‘musical’ of such sounds are those found in bird-song. Birds employ both noises and tones in their singing; but the proportion of definable tones is often high enough for some people to rate some bird-songs as ‘music’. Bird-song has a number of different functions. By locating the singer, it both advertises a territory as desirable, and also acts as a warning to rivals. Birds in search of a mate sing more vigorously than those who are already mated, thus supporting Darwin’s notion that song was originally a sexual invitation. Bird-song is predominantly a male activity, dependent upon the production of the male sex hormone, testosterone, although duets between male and female occur in some species. Given sufficient testosterone, female birds who do not usually sing will master the same repertoire of songs as the males.

Charles Hartshorne, the American ornithologist and philosopher, claims that bird-song shows variation of both pitch and tempo: accelerando, crescendo, diminuendo, change of key, and variations on a theme. Some birds, like the Wood thrush Hylocichla mustelina, have a repertoire of as many as nine songs which can follow each other in a variety of different combinations. Hartshorne argues:

“Bird songs resemble human music both in the sound patterns and in the behavior setting. Songs illustrate the aesthetic mean between chaotic irregularity and monotonous regularity. The essential difference from human music is in the brief temporal span of the bird’s repeatable patterns, commonly three seconds or less, with an upper limit of about fifteen seconds. This limitation conforms to the concept of primitive musicality. Every simple musical device, even transposition and simultaneous harmony, occurs in bird music.”

He goes on to state that birds sing far more than is biologically necessary for the various forms of communication. He suggests that bird-song has partially escaped from practical usage to become an activity which is engaged in for its own sake: an expression of avian joie de vivre.

“Singing repels rival males, but only when nearby; and it attracts mates. It is persisted in without any obvious immediate result, and hence must be largely self-rewarding. It expresses no one limited emotional attitude and conveys more information than mere chirps or squeaks. In all these ways song functions like music.”

Other observers disagree, claiming that bird-song is so biologically demanding that it is unlikely to be produced unless it is serving some useful function.

Is it possible that human music originated from the imitation of bird-song?

Géza Révész, who was a professor of Psychology at the University of Amsterdam and a friend of Béla Bartok, dismisses this possibility on two counts. First, if human music really began in this way, we should be able to point to examples of music resembling birdsong in isolated pre-literate communities. Instead, we find complex rhythmic patterns bearing no resemblance to avian music. Second, bird-song is not easily imitated. Slowing down modern recordings of birdsongs has demonstrated that they are even more complicated than previously supposed; but one only has to listen to a thrush singing in the garden to realize that imitation of his song is technically difficult.

Liszt’s ‘Légende’ for solo piano, ‘St Francois d’Assise: La Prédication aux oiseaux’, manages to suggest the twittering of birds in ways which are both ingenious and musically convincing. I have heard a tape of American bird-song which persuasively suggests that Dvorak incorporated themes derived from it following his sojourn in the Czech community in Spillville, Iowa. Olivier Messiaen made more use of bird-song in his music than any other composer. But these are sophisticated, late developments in the history of music. It is probable that early man took very little notice of birdsong, since it bore scant relevance to his immediate concerns.

Levi-Strauss affirms that music is in a special category compared with the other arts, and also agrees that bird-song cannot be the origin of human music.

“If, through lack of verisimilitude, we dismiss the whistling of the wind through the reeds of the Nile, which is referred to by Diodorus, we are left with little but bird song, Lucretius’ liquidas avium voces, that can serve as a natural model for music. Although ornithologists and acousticians agree about the musicality of the sounds uttered by birds, the gratuitous and unverifiable hypothesis of the existence of a genetic relation between bird song and music is hardly worth discussing.”

Stravinsky points out that natural sounds, like the murmur of the breeze in the trees, the rippling of a brook or the song of a bird, suggest music to us but are not themselves music: ‘I conclude that tonal elements become music only by virtue of their being organized, and that such organization presupposes a conscious human act.’

It is not surprising that Stravinsky emphasizes organization as the leading feature of music, since he himself was one of the most meticulous, orderly, and obsessionally neat composers in the history of music. But his emphatic statement is surely right. Bird-song has some elements of music in it, but, although variations upon inherited patterns occur, it is too obviously dependent upon in-built templates to be compared with human music.

In general, music bears so little resemblance to the sounds made by other species that some scholars regard it as an entirely separate phenomenon. This is the view of the ethnomusicologist John Blacking, who was, until his untimely death, Professor of Social Anthropology at the Queen’s University of Belfast, as well as being an accomplished musician.

“There is so much music in the world that it is reasonable to suppose that music, like language and possibly religion, is a species-specific trait of man. Essential physiological and cognitive processes that generate musical composition and performance may even be genetically inherited, and therefore present in almost every human being.”

If music is indeed species-specific, there might seem to be little point in comparing it with the sounds made by other species. But those who have studied the sounds made by subhuman primates, and who have discovered what functions these sounds serve, find interesting parallels with human music. Gelada monkeys produce a wide variety of sounds of different pitches which accompany all their social interactions. They also use many different rhythms, accents, and types of vocalization. The particular type of sound which an individual produces indicates his emotional state at the time and, in the longer term, aids the development of stable bonds between different individuals. When tensions between individuals exist, these can sometimes be resolved by synchronizing and coordinating vocal expressions.

“Human beings, like geladas, also use rhythm and melody to resolve emotional conflicts. This is perhaps the main social function served by group singing in people. Music is the ‘language’ of emotional and physiological arousal. A culturally agreed upon pattern of rhythm and melody, ie, a song, that is sung together, provides a shared form of emotion that, at least during the course of the song, carries along the participants so that they experience their bodies responding emotionally in very similar ways. This is the source of the feeling of solidarity and good will that comes with choral singing: people’s physiological arousals are in synchrony and in harmony, at least for a brief period. It seems possible that during the course of human evolution the use of rhythm and melody for the purposes of speaking sentences grew directly out of its use in choral singing. It also seems likely that geladas singing their sound sequences together synchronously and harmoniously also perhaps experience such a temporary physiological synchrony.”

We shall return to the subject of group arousal in the next chapter. Meanwhile, let us consider some other speculations about the origin of music.

One theory is that music developed from the lalling of infants. All infants babble, even if they are born deaf or blind. During the first year of life, babbling includes tones as well as approximations to words: the precursors of music and language cannot be separated. According to the Harvard psychologist Howard Gardner, who has conducted research into the musical development of small children:

“The first melodic fragments produced by children around the age of a year or fifteen months have no strong musical identity. Their undulating patterns, going up and down over a very brief interval or ambitus, are more reminiscent of waves than of particular pitch attacks. Indeed, a quantum leap, in an almost literal sense, occurs at about the age of a year and a half, when for the first time children can intentionally produce discrete pitches. It is as if diffuse babbling had been supplanted by stressed words.”

During the next year, children make habitual use of discrete pitches, chiefly using seconds, minor thirds, and major thirds. By the age of two or two and a half, children are beginning to notice and learn songs sung by others. Révész is quite sure that the lalling melodies produced by children in their second year are already conditioned by songs which they have picked up from the environment or by other music to which they have been exposed. If lalling melodies are in fact dependent upon musical input from the environment, it is obviously inadmissible to suggest that music itself developed from infant lalling.

Ellen Dissanayake, who teaches at the New School for Social Research in New York and who has lived in Sri Lanka, Nigeria, and Papua New Guinea, persuasively argues that music originated in the ritualized verbal exchanges which go on between mothers and babies during the first year of life. In this type of interchange, the most important components of language are those which are concerned with emotional expressiveness rather than with conveying factual information. Metre, rhythm, pitch, volume, lengthening of vowel sounds, tone of voice, and other variables are all characteristic of a type of utterance which has much in common with poetry. She writes:

“No matter how important lexico-grammatical meaning eventually becomes, the human brain is first organized or programmed to respond to emotional/intonational aspects of the human voice.”

Since infants in the womb react both to unstructured noise and to music with movements which their mothers can feel, it seems likely that auditory perception prompts the baby’s first realization that there is something beyond itself to which it is nevertheless related. After birth, vocal interchange between mother and infant continues to reinforce mutual attachment, although vision soon becomes equally important. The crooning, cooing tones and rhythms which most mothers use when addressing babies are initially more significant in cementing the relationship between them than the words which accompany these vocalizations. This type of communication continues throughout childhood.

If, for example, I play with a child of eighteen months who can only utter a few words, we can communicate in all kinds of ways which require no words at all. It is probable that both of us will make noises: we will chuckle, grunt, and make the kinds of sounds which accompany chasing and hiding games. We may establish, at least for the time being, a relationship which is quite intimate, but nothing which passes between us needs to be expressed in words. Moreover, although relationships between adults usually involve verbal interchange, they do not always do so. We can establish relationships with people who do not speak the same language, and our closest physical relationships need not make use of words, although they usually do so. Many people regard physical intimacy with another person as impossible to verbalize, as deeper than anything which words can convey.

Linguistic analysts distinguish prosodic features of speech from syntactic: stress, pitch, volume, emphasis, and any other features conveying emotional significance, as opposed to grammatical structure or literal meaning. There are many similarities between prosodic communication and music. Infants respond to the rhythm, pitch, intensity, and timbre of the mother’s voice; all of which are part of music.

Such elements are manifestly important in poetry, but they can also be important in prose. As a modern example, we can consider James Joyce’s experiments with the sound of words which are particularly evident in his later works.

“But even in his earliest stories the meaning of a word did not necessarily depend on the object it denoted but on the sonority and intonation of the speaker’s voice; for even then Joyce addressed the listener rather than the reader.”

It will be recalled that Joyce had an excellent voice and considered becoming a professional singer. He described using the technical resources of music in writing the Sirens chapter of Ulysses. Joyce portrays Molly Bloom as comprehending the hurdy-gurdy boy without understanding a word of his language.

One popular Victorian notion was that music gradually developed from adult speech through a separation of the prosodic elements from the syntactic. William Pole wrote in The Philosophy of Music:

“The earliest forms of music probably arose out of the natural inflections of the voice in speaking. It would be very easy to sustain the sound of the voice on one particular note, and to follow this by another sustained note at a higher or lower pitch. This, however rude, would constitute music.

We may further easily conceive that several persons might be led to join in a rude chant of this kind. If one acted as leader, others, guided by the natural instinct of their ears, would imitate him, and thus we might get a combined unison song.”

Dr Pole’s original lectures, on which his book is based, were given in 1877, and bear the impress of their time, with frequent references to savages, barbarians, and the like. Although The Philosophy of Music is still useful, Pole shows little appreciation of the fact that music amongst pre-literate peoples might be as complex as our own.

Twenty years earlier, in 1857, Herbert Spencer had advanced a similar theory of the origins of music, which was published in Fraser’s Magazine: Spencer noted that when speech became emotional the sounds produced spanned a greater tonal range and thus came closer to music. He therefore proposed that the sounds of excited speech became gradually uncoupled from the words which accompanied them, and so came to exist as separate sound entities, forming a ‘language’ of their own.

Darwin came to an opposite conclusion. He supposed that music preceded speech and arose as an elaboration of mating calls. He observed that male animals which possess a vocal apparatus generally use their voices most when under the influence of sexual feelings. A sound which was originally used to attract the attention of a potential mate might gradually be modified, elaborated, and intensified.

“The suspicion does not appear improbable that the progenitors of man, either the males or the females, or both sexes, before they had acquired the power of expressing their mutual love in articulate language, endeavoured to charm each other with musical notes and rhythm. The impassioned orator, bard, or musician, when with his various tones and cadences he excites the strongest emotions in his hearers, little suspects that he uses the same means by which, at an extremely remote period, his half-human ancestors aroused each other’s ardent passions during their mutual courtship and rivalry.”

*

from

Music and the Mind

by Anthony Storr

get it at Amazon.com

How to Be Alone – Sara Maitland.

What changed was that I got fascinated by silence; by what happens to the human spirit, to identity and personality when the talking stops, when you press the off-button, when you venture out into that enormous emptiness.

Sara Maitland: ‘My subconscious was cleverer than my conscious in choosing to live alone’

The author of How to Be Alone on the joys of solitude, Skyping and why having a dog isn’t really cheating…

How did you come to live the solitary life? Was it a sudden decision or did it evolve gradually?

I didn’t seek solitude, it sought me. It evolved gradually after my marriage broke down. I found myself living on my own in a small country village. At first I was miserable and cross. It took me between six months and a year before I noticed that I had become phenomenally happy. And this was about being alone not about being away from my husband. I found out, for instance, how much I liked being in my garden. My subconscious was cleverer than my conscious in choosing to live alone. The discovery about solitude was a surprise in waiting.

Yet isn’t writing a book such as How to Be Alone a way of communicating with others, of not being alone?

It is. Anthony Storr [author of Solitude: A Return to the Self] is right about companionship through writing and creative work. In my book about silence [A Book of Silence, 2008] I conclude that complete silence and writing are incompatible.

How would you distinguish between solitude and loneliness?

Solitude is a description of a fact: you are on your own. Loneliness is a negative emotional response to it. People think they will be lonely and that is the problem; the expectation is also now a cultural assumption.

If someone has not chosen to be alone, is bereaved or divorced, do you think they can make solitude feel like a choice?

It is possible. That has been my autobiography. They need more knowledge about it, to read about the lives of solitaries who have enjoyed it, to take it on, see what is good in it. Since I wrote about silence, many bereaved people have written asking: how do I do it? The largest groups of people living alone are women over 65 and separated men in their 40s. A lot of solitude is not chosen. It may come to any of us.

Do you ever feel lonely?

Very seldom because I have good friends and there are telephones and Skype. But broadband was down for a week over Christmas. I couldn’t Skype the kids and did find myself asking: why didn’t I go to my brother who had warmly invited me?

So what was Christmas like on your own in rural Galloway?

It was bliss. On Christmas Eve the tiny village five miles away has a nativity play. Young adults come home, it’s a very happy event. On the day itself I drank a little bit more than I should have done sitting in front of my fire. I had a long walk. It was lovely…

How much do you use the internet and social media?

Social media not at all. But when broadband went I realised how excessively I use it. Without it, I read more. I’m making a big patchwork quilt. I did more that week than in the past three months. It made me realise I have got to get this online thing under control. When I first came here I had it switched off three days a week but that has slipped.

You seem to lead a non-materialistic life. What three things would you most hate to lose from your shepherd’s cottage?

Last Christmas my son gave me a dragon hoodie, bright green with pink spikes. I’d be sad to lose it. I’d hate to lose photos of my children. And I’d be seriously sad to lose Zoe, my border collie. I took her on because she got out of control in an urban community. She was seeking a wilder, freer life.

Yet in the book you suggest it’s cheating on the solitary life to have a dog when you walk…?

The pure soul probably doesn’t have a dog. I have a dog but no television.

You mention having suffered depression earlier in your life. Was this related to lack of solitude?

That is a correct reading, although I would not use it diagnostically. I’m deeply fond of my family but they put a high value on extroversion. I come from an enormous family and have spent a lot of time pretending I wasn’t introverted.

Yet deciding whether one is extrovert or introvert is not straightforward?

Everyone has a differing need for solitude. I feel we haven’t created space for children to find out what they need. I’ve never heard of being sent to your room as a reward. In my childhood I had a happy home; being alone was thought weird. I’d like people to be offered solitude as an ordinary thing.

Does being alone teach children to be alone?

Yes, just as talk is the teacher of talk.

You write: ‘Most of us have a dream of doing something in particular which we have never been able to find anyone to do with us. And the answer is simple really: do it yourself.‘ What dream have you realised by yourself?

The one thing I really don’t like doing by myself is changing a double duvet… But I went up Merrick, the highest hill in the area, on my own a week after my mother died. A little voice kept saying: this is not safe, it is stupid. What happens if you break your ankle? What happens if you get lost? Doing it was a breakthrough. Another dream I am sad about. My brother and I used to sail a dinghy. He died and I wanted to sail alone. I went on a dinghy course only to discover I’m not physically strong enough to right the dinghy were it to tip over.

How does love fit into the solitary life?

How much loving are people doing if they’re socialising 24/7? And if the loving is only to be loved, what is unselfish about that? The fact you’re on your own does not mean you are not loving.

Your book is part of a self-help series. What book has helped you most?

What an interesting question. Lots of stuff. Anything good. I have just been reading Alan Garner’s phenomenally brilliant Boneland and A Voyage for Madmen [by Peter Nichols], an account of the people who sailed in the 1968 solo round-the-world race. They had the same circumstances: ill-equipped boats, not enough money, plenty of anxiety. Yet different people had different responses to the same thing. People are not righter or wronger, they’re different. I’ve struggled with this all my life and, God, it’s hard to grasp.

How to Be Alone

by Sara Maitland.

You have just started to read a book that claims, at least, to tell you how to be alone.

Why?

It is extremely easy to be alone; you do not need a book. Here are some suggestions:

Go into the bathroom; lock the door, take a shower. You are alone.

Get in your car and drive somewhere (or walk, jog, bicycle, even swim). You are alone.

Wake yourself in the middle of the night (you are of course completely and absolutely alone while you are asleep, even if you share your bed with someone else, but you are almost certainly not conscious of it, so let’s ignore that one for the moment); don’t turn your lights on; just sit in the dark. You are alone.

Now push it a bit. Think about doing something that you normally do in company (go to the cinema or a restaurant, take a walk in the country or even a holiday abroad) by yourself. Plan it out; the logistics are not difficult. You know how to do these things. You would be alone.

So what is the problem? Why are you reading this book?

And of course I do not know the answer. Not in your case, at least. But I can imagine some possible motives:

For some reason, good or bad, of which bereavement is perhaps the bitterest, your normal circle of not-aloneness has been broken up; you have to tackle unexpected isolation, you doubt your resources and are courageously trying to explore possible options. You will be a member of a fast-growing group: single-occupancy households in the UK have increased from 12 per cent of all households in 1961 to nearly 30 per cent in 2011.

Someone you thought you knew well has opted for more solitude, they have gone off alone to do something that excludes you, temporarily or for a longer period; you cannot really feel jealous, because it excludes everyone else too; you are a little worried about them; you cannot comprehend why they would do anything so weird or how they will manage. You want to understand.

You want to get something done, something that feels important to you. It is quite likely, in this case, that it is something creative. But you find it difficult to concentrate; constant interruptions, the demands of others, your own busy-ness and sociability, endless connections and contacts and conversations make it impossible to focus. You realize that you will not be able to pay proper attention unless you find some solitude, but you are not sure how this might work out for you.

You want to get something done, something that feels important to you and of its very nature has to be done alone (single-handed sailing, solo mountaineering and becoming a hermit are three common examples, but there are others). The solitude is secondary to you, but necessary, so you are looking for a briefing. This group is quite small, I think; most of the people who seriously want to do these sorts of things tend to be experienced and comfortable with a degree of aloneness before they become committed to their project.

You have come to the disagreeable awareness that you do not much like the people you are spending time with; yet you sense that you are somehow addicted to them, that it will be impossible to change; that any relationship, however impoverished, unsatisfying, lacking in value and meaning, is better than no relationship; is better than being alone. But you aren’t sure. You are worried by the very negative responses you get whenever you bring the subject up.

You are experiencing a growing ecological passion and love of nature. You want to get out there, and increasingly you want to get out there on your own. You are not sure why this new love seems to be pulling you away from sociability and are looking for explanations.

You are one of those courageous people who want to dare to live; and to do so believe you have to explore the depths of yourself, undistracted and unprotected by social conventions and norms. You agree with Richard Byrd, the US admiral and explorer, who explained why he went to spend the winter alone on the southern polar ice cap in 1934:

‘I wanted to go for experience’s sake; one man’s desire to know that kind of experience to the full . . . to be able to live exactly as I chose, obedient to no necessities but those imposed by wind and night and cold, and to no man’s laws but my own.’

You do not, of course, need to go all the way to Antarctica to achieve this, but you do need to go all the way into yourself. You feel that if you have not lived with yourself alone, you have not fully lived. You want to get some clues about what you might encounter in this solitary space.

You feel, and do not fully understand the feeling, that you are missing something. You have an inchoate, inarticulate, groping feeling that there is something else, something more, something that may be scary but may also be beautiful. You know that other people, across the centuries and from a wide range of cultures and countries, have found this something, and they have usually found it alone, in solitude. You want it. Whatever it is.

You are reading this book not because you want to know how to be alone, which is perfectly easy as soon as you think about it, but because you want to know why you might want to be alone; why the whole subject fills you with both longing and deep unease. You want to know what is going on here.

But actually the most likely reason why you are reading this book (like most books) is curiosity: why would someone write this book?

And I can answer that question, so that is where I am going to begin.

I live alone. I have lived alone for over twenty years now. I do not just mean that I am single; I live in what might seem to many people to be ‘isolation’ rather than simply ‘solitude’. My home is in a region of Scotland with one of the lowest population densities in Europe, and I live in one of the emptiest parts of it: the average population density of the UK is 674 people per square mile (246 per square kilometre). In my valley, though, we have (on average) over three square miles each. The nearest shop is ten miles away, and the nearest supermarket over twenty. There is no mobile phone connection and very little through traffic uses the single-track road that runs a quarter of a mile below my house. Often I do not see another person all day. I love it.

But I have not always lived alone. I grew up in a big family, one of six children, very close together in age, and in lots of ways a bit like a litter of puppies. It was not a household much given to reflection or introversion: we were emotional, argumentative, warm, interactive. We did things together. I am still deeply and affectionately involved with all my siblings. I became a student in 1968 and was fully involved in all the excitement and hectic optimism of those years. Then I was married and had two children. I became a writer. I have friends; friendship remains one of the core values of my life. None of this looked like a life of solitude, nor a good preparation for living up a back road on a huge, austere Scottish moor.

What changed was that I got fascinated by silence; by what happens to the human spirit, to identity and personality when the talking stops, when you press the off-button, when you venture out into that enormous emptiness. I was interested in silence as a lost cultural phenomenon, as a thing of beauty and as a space that had been explored and used over and over again by different individuals, for different reasons and with wildly differing results. I began to use my own life as a sort of laboratory to test some ideas and to find out what it felt like. Almost to my surprise, I found I loved silence. It suited me. I got greedy for more. In my hunt for more silence, I found this valley and built a house here, on the ruins of an old shepherd’s cottage. I moved into it in 2007.

In 2008 I published a book about silence. A Book of Silence was always meant to be a ‘hybrid’ book: it is both a cultural history and a personal memoir and it uses the forms and conventions of both genres melded into a single narrative. But it turned out to be a hybrid in another way that I had not intended. Although it was meant to be about silence, it turned out to be also about solitude and there was extensive and, I now think, justifiable criticism of the fact that it never explicitly distinguished between the two.

Being silent and being alone were allowed to blur into each other in ways that were confusing to readers. For example, one of the things I looked at in A Book of Silence was the actual physical and mental effects of silence, ranging from a heightened sensory awareness (how good food tasted, how extreme the experiences of heat and cold were), through to some curious phenomena like voice-hearing and a profound loss of inhibition. These effects were both reported frequently by other people engaged in living silent lives and experienced by me personally in specific places like deserts or mountains. However, a number of commentators felt that these were not effects of silence per se, but of solitude, of being alone.

After the book was published I also began to get letters from readers wanting advice . . . and more often than I had anticipated, it was not advice on being silent but on being alone.

Some of this was because there are at least two separate meanings to the word silence. Even the Oxford English Dictionary gives two definitions which are mutually exclusive: silence is defined as both the absence of any sounds and the absence of language. For many people, often including me, ‘natural noises’ like wind and running water do not ‘break’ silence, while talking does. And somewhere in between is the emotional experience that human-made noises (aeroplanes overhead, cars on distant roads) do kill silence even where the same volume of natural sound does not.

But it was not just a question of definitions. I came to see that although for me silence and solitude were so closely connected that I had never really needed to distinguish them, they did not need to be, and for many people they were not by any means the same. The proof cases are the communities where people are silent together, like Trappist monasteries or Quaker meetings.

The bedrock of the Quaker way is the silent meeting for worship. We seek a communal gathered stillness, where we can be open to inspiration from the Spirit of God and find peace of mind, a renewed sense of purpose for living, and joy to wonder at God’s creation.

*

from

How to Be Alone

by Sara Maitland.

get it at Amazon.com

Scientific Facts About Mindfulness from a Recovered Ruminator – Ruby Wax.

The real reason I began to practise mindfulness seriously was because of the empirical evidence of what happens in the brain. It wasn’t good enough that mindfulness helped me deal with the depression or that it brought me calm in the storm; ever the sceptic, I demanded hard-core proof. It appeared I didn’t trust my own feelings as much as I did science.

There is so much data to show the practice doesn’t just ameliorate physical and emotional pain, it sharpens your concentration and focus and therefore gives you the edge when others are floundering in the mud. (If that’s what you’re after.)

Here is just some of the evidence that swung the jury in favour of mindfulness (for me):


Connection to Feelings

A number of studies have found mindfulness results in increased blood flow to the insula and an increased volume and density of grey matter. This is a crucial area that gives the ability to focus into your body, and connects you to your feelings, such as butterflies in your stomach, or a blow to the heart. Strengthening your insula enhances introspection, which is the key to mindfulness.


Self Control

Researchers found that increased blood flow to the anterior cingulate cortex after just six 30-minute meditation sessions strengthened connections to this area, which is crucial for controlling impulses, and may help explain why mindfulness is effective in helping with self-control, for example with addictions.


Counteracting High Anxiety

Researchers from Stanford found that after an eight-week mindfulness course participants had less reactivity in their amygdala and reported feeling fewer negative emotions.


Quietening the Mind

The brain stem produces neurotransmitters which regulate attention, mood and sleep, and meditation is associated with changes in this area too. These changes may explain why meditators perform better on tests of attention, are less likely to suffer from anxiety and depression and often have improved sleep patterns.


Regulating Emotions

The hippocampus is involved in learning and memory and helps regulate our reactivity to stress. Increased density of neurons in this area may help explain why meditators are more emotionally stable and less anxious.


Regulating Thoughts

Changes in the cerebellum are likely to contribute to meditators’ increased ability to respond to life events in a positive way.


Curbing Addictive Behaviour

The prefrontal cortex is involved with self-regulation and decision making. Mindfulness has been found to increase blood flow to this area, which enhances self-awareness and self-control, helping you to make constructive choices and let go of harmful ones.


Curbing OCD

PET scans were performed on 18 OCD patients before and after 10 weeks of mindfulness practice; none took medication and all had moderate to severe symptoms. PET scans after treatment showed that activity in the orbital frontal cortex had fallen dramatically, meaning the ‘worry circuit’ was being unwired. It was the first study to show that mindfulness-based cognitive therapy has the power to systematically change brain chemistry in a well-identified brain circuit. Intentionally making a mindful effort can therefore alter brain function, and this induces neuroplasticity; it was the first demonstration that mindfulness is a form of experience that promotes neuroplasticity.

Orbital Frontal Cortex


A Quicker Brain

Researchers from UCLA have found that meditators have stronger connections between different areas in the brain. This greater connectivity is not limited to specific regions but found across the brain at large. It also increases the ability to rapidly relay information from one area to the next giving you a quicker and more agile brain.

Training Your Brain, As Well As Your Body

A trained mind is physically different from an untrained mind. You can retain inner strength even when the world around you is frantic and chaotic. People are trying to find antidotes to suffering, so it’s time we started doing the obvious: training our brains as we do our bodies. Changing the way you think changes the chemicals in your brain. For example, the less you work out, the lower your level of acetylcholine, and the less you have of this chemical, the poorer your ability to pay attention. Even with age-related losses, almost every physical aspect of the brain can recover and new neurons can bloom.


More Positive Research on Mindfulness

Research from Harvard University suggests that we spend nearly 50% of our day mind-wandering, typically lost in negative thoughts about what might happen, or what has already happened, to us. There is a mind-wandering network in the brain, which generates thoughts centred around ‘me’ and is focused in an area called the medial prefrontal cortex. Research has shown that when we practise mindfulness, activity in this ‘me’ centre decreases. Furthermore, it has been shown that when experienced practitioners’ minds do wander, monitoring areas (such as the lateral prefrontal cortex) become active to keep an eye on where the mind is going and if necessary bring attention back to the present, which results in less worrying and more living.

Medial Prefrontal Cortex


Researchers from the University of Montreal investigated the differences in how meditators and non-meditators experience pain and how this relates to brain structure. They found that the more experienced the meditators were, the thicker their anterior cingulate cortex and the lower their sensitivity to pain.


Researchers from Emory University found that the decline in cognitive abilities that typically occurs as we age, such as slower reaction times and speed of thinking, was not found in elderly meditators. Using fMRI, they also established that the physical thinning of grey matter that usually comes with ageing had actually been remarkably diminished.


Researchers from UCLA found that when people become aware of their anger and label it as ‘anger’ then the part of the brain that generates negative emotions, the amygdala, calms down. It’s almost as if once the emotional message has been delivered to the conscious mind it can quieten down a little.


Mindfulness activates the ‘rest and digest’ part of our nervous system, and increases blood flow to parts of our brains that help us regulate our emotions, such as the hippocampus, anterior cingulate cortex and the lateral parts of the prefrontal cortex. Our heart rate slows, our respiration slows and our blood pressure drops. A researcher from Harvard coined the term ‘relaxation response’ for the changes in the body that meditation evokes, basically the opposite of the ‘stress response’. While the stress response is extremely detrimental to the body, the relaxation response is extremely salutary and is probably at the root of the wide-ranging benefits mindfulness has been found to have, both mentally and physically.


Mindfulness and the Body

Researchers from the University of Wisconsin-Madison investigated the effects of mindfulness on immune system response. Participants were given a flu vaccine at the end of an eight-week course, and the mindfulness group mounted a significantly stronger immune response than the others.


Scientists at UCLA found mindfulness to be extremely effective at maintaining the immune system of HIV sufferers. Over an eight-week period, the group who weren’t taught mindfulness had a 25% fall in their CD4 T-cells (the ‘brains’ of the immune system), whereas the group taught mindfulness maintained their levels.


Researchers from the University of California, Davis, found that improved psychological wellbeing fostered by meditation may reduce cellular ageing. People who live to more than 100 have been found to have more active telomerase, an enzyme involved in cell replication. The researchers found that the meditators had a 30% increase in this enzyme linked to longevity following a three-month retreat.

Telomerase


Skin disorders are a common symptom of stress. The University of Massachusetts taught mindfulness to psoriasis sufferers and found their skin problems cleared four times faster than those who weren’t taught the technique.


Researchers from the University of North Carolina have found mindfulness to be an effective method of treating irritable bowel syndrome. Over a period of eight weeks, participants were either taught mindfulness or attended a support group. Three months later, the researchers found that on a standard 500-point IBS symptom questionnaire, the support group’s score had dropped by 30 points. The mindfulness group’s score had fallen by more than 100 points.


Researchers from Emory University investigated whether training in compassion meditation could reduce physiological responses to stress. Participants were stressed by being requested to perform a public speaking task. The researchers found that the participants who had practised the most had the lowest physiological responses to stress, as measured by reduced pro-inflammatory cytokines and also reported the lowest levels of psychological distress.


Researchers investigated the physiological effects of an eight-week mindfulness programme on patients suffering from breast cancer and prostate cancer. In addition to the patients reporting reduced stress, they found significant reductions in physiological markers of stress, such as reduced cortisol levels, pro-inflammatory cytokines, heart rate and systolic blood pressure. A follow-up study a year later found these improvements had been maintained or enhanced further.


Mindfulness and Emotions

Researchers from the University of Massachusetts Medical School investigated the effects of an eight-week mindfulness course on generalized anxiety disorder. 90% of those taught the technique reported significant reductions in anxiety.


Studies from the University of Wisconsin suggest that meditators’ calmness is not a result of becoming emotionally numb; in fact, they may be able to experience emotions more fully. If asked to enter into a state of compassion, then played an emotionally evocative sound, such as a woman screaming, they showed increased activity in the emotional areas of the brain compared to novices. However, if asked to enter into a state of deep concentration, they showed reduced activity in the emotional areas of the brain compared with novices. The key is that they were better able to control their emotional reactions depending on the mental state they chose to be in.


Optimists and resilient people have been found to have more activity in the front of their brains (prefrontal cortex) on the left-hand side, whereas those more prone to rumination and anxiety have more on the right. Researchers from the University of Wisconsin found that after eight weeks of mindfulness practice, participants had been able to shift their baseline levels of activity towards left-hand activation. This suggests that mindfulness can help us change our baseline levels of happiness and optimism.


If you suffer from recurring depression, scientists suggest that mindfulness might be a way to keep you free from it. Researchers from Toronto and Exeter in the UK recently found that learning mindfulness, while tapering off anti-depressants, was as effective as remaining on medication.


Researchers from Stanford University have found that mindfulness can help with social anxiety by reducing reactivity in the amygdala, an area of the brain that is typically overactive in those with anxiety problems.


Researchers at the University of Manchester tested meditators’ response to pain by heating their skin with a laser. They found that the more meditation the subject had done, the less pain they experienced. They also found that meditators had less neural activity in anticipation of pain than controls, which is likely to be due to their increased ability to remain in the present rather than worry about the future.


A recent study from Wake Forest University found that just four daily sessions of 20 minutes of mindfulness training reduced pain sensitivity by 57%, an even greater reduction than that produced by drugs such as morphine.


Numerous studies have found that mindfulness, on its own or in combination with medication, can be effective in dealing with addictive behaviours, from drug abuse through to binge eating. Recently, researchers from Yale School of Medicine found that mindfulness training of less than 20 minutes per day was more effective at helping smokers quit than the American Lung Association’s gold-standard treatment. Over a period of four weeks there was, on average, a 90% reduction in the number of cigarettes smoked, from 18 per day to two per day, with 35% of smokers quitting completely. When the researchers checked four months later, over 30% had maintained abstinence.


Researchers investigated the impact of mindfulness on the psychological health of 90 cancer patients. After seven weeks of daily practice, the patients reported a 65% reduction in mood disturbances including depression, anxiety, anger and confusion. They also reported a 31% reduction in symptoms of stress and less stress-related heart and stomach pain.


Researchers from the University of California, San Diego investigated the impact of a four-week mindfulness programme on the psychological well-being of students, in comparison to a body relaxation technique. They found that both techniques reduced distress; however, mindfulness was more effective at developing positive states of mind and at reducing distracting and ruminative thoughts. This research suggests that training the mind with mindfulness delivers benefits over and above simple relaxation.


Mindfulness and Thoughts/Cognition

Researchers from Wake Forest University investigated how four 20-minute sessions of mindfulness practice could affect critical cognitive abilities. They found that the mindfulness practitioners were significantly better than the control group at maintaining their attention and performed especially well at stressful tasks under time pressure. [This is another study demonstrating that significant benefits can be gained from relatively little practice.]


Researchers from the University of Pennsylvania wanted to investigate how mindfulness could help improve thinking in the face of stress. So, they taught it to marines prior to their deployment in Iraq. In cognitive tests, they found that the marines who practised for more than 10 minutes a day managed to maintain their mental abilities in spite of a stressful deployment period, whereas the control group and those practising less than 10 minutes could not.


Researchers from UCLA conducted a pilot to investigate the effectiveness of an eight-week mindfulness course for adults and adolescents with ADHD. Over 75% of the participants reported a reduction in their total ADHD symptoms, with about a third reporting clinically significant reductions in their symptoms of more than 30%.


Researchers conducted a pilot study to investigate the efficacy of mindfulness in treating OCD. Sixty per cent of the participants experienced clinically significant reductions in their symptoms of over 30%. The researchers suggest that the increased ability to ‘let go’ of thoughts and feelings helps stop the negative rumination process that is so prevalent in OCD.


I hope the above has not put you to sleep, but it makes me feel I’m in well-researched hands. If it’s good enough for Harvard, UCLA, the University of Pennsylvania, Yale School of Medicine and Stanford, it’s good enough for me.

Critical Thinking Skills. Why more highly educated people are less into conspiracy theories – Christian Jarrett * Suspicious Minds. Why We Believe Conspiracy Theories – Rob Brotherton.

In this era of “fake news” and rising populism, encountering conspiracy theories is becoming a daily phenomenon.

People usually shrug them off, finding them too simplistic, biased or far-fetched, but others are taken in. And if a person believes one kind of conspiracy theory, they usually believe others.

Psychologists are very interested in why some people are more inclined to believe in conspiracy theories, especially since the consequences can be harmful: for example, by avoiding getting their kids vaccinated, believers in vaccination conspiracies can harm wider public health; in other cases, a belief in a conspiracy against one’s own ethnic or religious group can foment radicalism.

One of the main differences between conspiracy believers and nonbelievers that’s cropped up in multiple studies is that nonbelievers tend to be more highly educated. For a new study in Applied Cognitive Psychology, Jan-Willem van Prooijen at VU Amsterdam has conducted two large surveys to try to dig into just what it is about being more educated that seems to inoculate against belief in conspiracy.

For the first survey, Van Prooijen recruited over 4,000 readers of a popular science journal in the Netherlands, with an average age of 32. He asked them about their formal education level and their belief in various well-known conspiracy theories, such as that the moon landings were a hoax; he also tested their feelings of powerlessness; their subjective sense of their social class (they located their position on a social ladder); and their belief in simple solutions, such as that “most problems in society are easy to solve”.

The more highly educated a participant, the less likely they were to endorse the conspiracy theories.

Importantly, several of the other measures were linked to education and contributed to the association between education and less belief in conspiracy: feeling less powerlessness (or more in control), feelings of higher social status, and being sceptical of simple solutions.

A second survey was similar, but this time Van Prooijen quizzed nearly 1000 participants, average age 50, selected to be representative of the wider Dutch population. Also, there were two phases: for the first, participants answered questions about their education level; feelings of power; subjective social class; belief in simple solutions; and they took some basic tests of their analytical thinking skills. Then two weeks later, the participants rated their belief in various conspiracy theories.

Once again, more education was associated with less belief in conspiracy theories, and this seemed to be explained in part by more educated participants feeling more in control, having less belief in simple solutions, and having stronger analytical skills. Subjective social class wasn’t relevant in this survey.

Taken together, Van Prooijen said the results suggest that “the relationship between education and belief in conspiracy theories cannot be reduced to a single psychological mechanism but is the product of the complex interplay of multiple psychological processes.”

The nature of his study means we can’t infer that education or the related factors he measured actually cause less belief in conspiracies. But it makes theoretical sense that they might be involved: for example, more education usually increases people’s sense of control over their lives (though there are exceptions, for instance among people from marginalized groups), while feelings of powerlessness are one of the things that often attract people to conspiracy theories.

Importantly, Van Prooijen said his findings help make sense of why education can contribute to “a less paranoid society” even when conspiracy theories are not explicitly challenged.

“By teaching children analytical thinking skills along with the insight that societal problems often have no simple solutions, by stimulating a sense of control, and by promoting a sense that one is a valued member of society, education is likely to install the mental tools that are needed to approach far-fetched conspiracy theories with a healthy dose of skepticism.”

Christian Jarrett (@Psych_Writer) is Editor of BPS Research Digest

Suspicious Minds

WHY WE BELIEVE CONSPIRACY THEORIES

Rob Brotherton

Down The Rabbit Hole

ALL is not as it seems. There is a hidden side to reality, a secret realm buzzing with clandestine activity and covert operations. This invisible network constantly screens, sifts, and manipulates information. It conjures up comforting lies to hide the real, bewildering truth. It steers what we think and believe, even shapes the decisions we make, molding our perception to its own agenda. Our understanding of the world, in short, is an illusion. Who is behind this incredible scheme? Some sinister secret society? Psychopathic bureaucrats in smoke-filled boardrooms? The Queen of England? The intergalactic shape-shifting lizards who she works for? All of the above?

No. This is an inside job. It’s not them, it’s us. More specifically, it’s you. More specifically, it’s your brain.

Everything Is a Conspiracy

There’s a conspiracy theory for everything. Ancient Atlanteans built the pyramids. Abraham Lincoln was assassinated on the orders of his vice president, Andrew Johnson. The Apollo moon landings were filmed on a sound stage in Arizona. Area 51 is home to advanced technology of alien origin. Alex Jones, a conspiracy-minded radio host based out of Austin, Texas, is actually the alter ego of comedian Bill Hicks (who faked his death in the early 1990s to pursue a career in conspiracism). And then there’s Big Pharma, black helicopters, the Bilderberg Group, Bohemian Grove . . .

The rabbit hole runs deep. The conspiracy allegedly extends to the air we breathe (tainted by chem-trails), the food we eat (monkeyed with by Monsanto), the medicine we take (filled with deadly toxins), and the water we drink (spiked with mind-warping fluoride). Elections are rigged, politics is a sham, and President Obama is a communist Muslim from Kenya.

These are a few of the theories, but who are the theorists? According to cliché, conspiracy theorists are a rare breed, a small but dedicated lunatic fringe of basement dwelling, middle aged men, intelligent outsiders with an idiosyncratic approach to research (and, often, a stockpile of Reynolds Wrap).

Most elements of the stereotype, however, don’t hold up. On the whole, women are just as conspiracy minded as men. Education and income don’t make much difference either. The ranks of conspiracy theorists include slightly more high school dropouts than college graduates, but even professors, presidents, and Nobel Prize winners can succumb to conspiracism. And conspiracy theories appeal to all ages. Senior citizens are no more or less conspiracy minded than Millennials, on average. At the low end of the age bracket, legions of American teens suspect that Louis Tomlinson and Harry Styles of the inordinately popular boy band One Direction are secretly an item, and that the band’s corporate overlords invented a fake girlfriend for Louis as part of the cover-up.

As for the idea that conspiracy theories are a fringe affair, nothing could be farther from the truth. All told, huge numbers of people are conspiracy theorists when it comes to one issue or another. According to polls conducted over the last decade or so, around half of Americans think their government is probably hiding the truth about the 9/11 attacks. Almost four in ten suspect that climate change is a scientific fraud. Something like a third believe the government is likely hiding evidence of aliens. More than a quarter are worried about the New World Order.

In a 2013 survey, 4 percent of the people polled (which, extended to the entire population of the United States, would mean twelve million people) said they think “shape-shifting reptilian people control our world by taking on human form and gaining political power to manipulate our societies.” A further 7 percent said they just weren’t sure.

These sorts of public opinion polls, it’s worth bearing in mind, only provide a rough indication of any particular theory’s popularity. Estimates vary depending on exactly who you ask, how you ask them, and when. But this much is crystal clear: There are more conspiracy theorists out there than you might expect. Chances are you know some. Chances are you are one.

It’s not just Americans. People in the United Kingdom and Europe are similarly suspicious. And it’s not just Westerners. Conspiracism is a global phenomenon. According to a 2011 Pew Research Center survey, between half and three quarters of people in various Middle Eastern countries doubt that Arab hijackers pulled off the 9/11 attacks. In many parts of the world, vaccines and other Western medicines are viewed with suspicion. Four out of ten Russians think that America faked the moon landings, according to a 2011 poll. In India, shortly after the country’s prime minister, Indira Gandhi, was assassinated in 1984, her successor told an audience of a hundred thousand people gathered in New Delhi, “the assassination of Indira Gandhi is the doing of a vast conspiracy whose object is to weaken and divide India.” And in Brazil, a popular conspiracy theory asserts that the American military is planning to invade the Amazon rain forest and take control of its rich natural resources. As part of the propaganda campaign to prepare American citizens for the impending invasion, the theory goes, maps of South America in American junior high school textbooks show a huge swath of the Amazon under the control of the United Nations.

So, was there a gunman on the Grassy Knoll? Is Elvis alive, relaxing by the pool with Jim Morrison, Marilyn Monroe, and Princess Diana in some secret resort for aggressively reclusive stars? Who really rules the world, and what did they do with flight MH370?

If you’re looking for answers to these questions, then I’m afraid you’ve picked up the wrong book. The truth might be out there, but it’s not in here. If there really are sinister schemes taking shape behind closed doors at this very moment, if the real perpetrators of atrocities have not yet been brought to justice, if everything we think we know is a lie, it’d be nice to know. But there are plenty of other books dedicated to compiling evidence of some alleged conspiracy, and almost as many books that purport to tear the theories to shreds. That’s not what this one is about. In fact, this book isn’t really about conspiracy theories at all (though we’ll encounter plenty of theories along the way).

It’s about conspiracy thinking, about what psychology can reveal about how we decide what is reasonable and what is ridiculous, and why some people believe things that, to other people, seem completely unbelievable.

Of course, if you ask someone why they believe, or why they don’t believe, some theory or other, they’ll probably tell you it’s simple: They’ve made up their mind based on the evidence. But psychology tells a different story. It turns out that we’re not always the best judge of why we believe what we believe.

Tidy Desk, Tidy Mind (or: The Unexpected Virtue of Neatness)

In a recent experiment, psychologists at the University of Amsterdam had students think about something that they felt ambivalent about, any topic about which they had both positive and negative feelings. Imagine, for instance, eating an entire tub of ice cream. It would be a nice way to spend twenty minutes, but it’d also be pretty bad for you in the long run. You know there are pros and cons. That’s ambivalence.

Each student sat at a computer, thought about whatever it was that made him or her feel ambivalent, and typed up a few of the pros and cons. At that point, an error message appeared on the screen. Fear not-it was all part of the psychologists’ devious plan. The researcher monitoring the experiment feigned surprise, and told the participant that they would have to complete the next (ostensibly unrelated) questionnaire at a different desk. The unwitting subject was led to a cubicle across the room, where they encountered a desk in disarray, strewn with pens, books, magazines, and crumpled pieces of paper. Then, nestled comfortably amid the detritus, the participant was shown a series of pictures.

Some pictures had a faintly discernible image, in this case a sailboat. Others consisted of nothing but random splotches. The students weren’t told which were which; they simply had to say whether they saw a pattern in the static. Pretty much everyone spotted the boat and all the other real pictures. More interestingly, a lot of the time people said they saw images where, in reality, there was only randomness. There were twelve pictures that contained nothing but random blobs. On average, the students saw imaginary images in nine of them.

At least, that’s how the experiment went for one group of students. For another group, things started out pretty much the same. They had to think about something that made them ambivalent, they saw an error message, they were led to the messy cubicle. Then there was one crucial difference. Before carrying on with the experiment, the experimenter asked each student to help tidy up the mess. Once the desk was straightened up, the students saw those same pictures. Compared to students who had worked amid the clutter, these students consistently saw fewer phantom images. They saw imaginary patterns in just five of the twelve meaningless pictures, on average, which was about the same number as people who hadn’t been made to feel ambivalent at the start of the experiment.

Feeling conflicting emotions about something is unpleasant, the researchers explained. We habitually seek order and consistency, and to be ambivalent is to experience disorder and conflict. When that happens, we might try to change our beliefs, or simply ignore the issue. Or we can use more roundabout strategies to deal with our unwanted emotions. Ambivalence threatens our sense of order, so, to compensate, we can seek order elsewhere. This is why the first group of students saw so many imaginary images. Seeing meaning in the ambiguous splotches, connecting the dots, allowed them to satisfy the craving for order that had been triggered by their sense of ambivalence. And it also explains why the second group of students saw fewer imaginary images. The simple act of tidying the desk, transforming the chaos into order, had already satisfied their craving. They were no longer on the lookout for patterns in the static. They didn’t need the dots to be connected.

What does this have to do with conspiracy theories? In another experiment, the researchers again made people feel ambivalent. This time, instead of looking at strange pictures, the students were asked to imagine they had been passed over for a promotion at work. What are the chances, the researchers asked, that a conniving co-worker had a hand in the boss’s decision? Compared to a group of people who hadn’t been made to feel ambivalent, the ambivalent students were more likely to suspect that a conspiracy was afoot.

Sometimes, it would seem, buying into a conspiracy is the cognitive equivalent of seeing meaning in randomness.

A bit of clutter isn’t the only thing that can subtly influence our beliefs. In another recent study, almost two hundred students at a college in London were asked simply to rate how plausible they found a handful of popular conspiracy theories. For half of the students, the allegations were written in an easy-to-read font, regular old Arial, size twelve.

For the other half of the students, however, the allegations were written in a font that was a little harder to read.

The students who read the theories in the clear, legible font consistently rated them more likely to be true. The students who had the harder to read font found the claims harder to believe.

The remarkable thing is that if you were to ask the students who took part why they rated the conspiracy theories the way they did, they might have told you something like “I heard a rumor about the New World Order the other day,” or “Conspiracies happen all the time,” or “It just makes sense that people are up to no good.” None of the Dutch students would have told you that feeling ambivalent about a bowl of ice cream had influenced their judgment. None of the Londoners thought to themselves, “This is an attractive font, so I suppose the New World Order really is planning to take over.” They didn’t consciously choose to see the theories as more or less plausible. Their brains did most of the work behind the scenes.

Who Is Pulling the Strings?

As neuroscientist David Eagleman points out in Incognito: The Secret Lives of the Brain, there is a complicated network of machinery hidden just beneath your skin. Your body is chock full of organs, each with its own special job to do, all working together to keep you alive and healthy, and they manage it without any conscious input from you. Whether you’re paying attention or not, your heart keeps on beating, your blood vessels expand and contract, and your spleen does whatever it does. Our detailed scientific understanding of how the body works is a relatively recent development, and yet, for some reason, the idea that our organs can go about their business without us telling them to do it, or even being aware of what they’re up to, doesn’t strike us as particularly hard to believe.

Your brain seems different, though. The brain is the most complicated organ of them all. It is made up of billions of specialized cells, each one in direct communication with thousands of others, all ceaselessly firing off electrical signals in cascading flurries of activity. Somehow (it’s still largely a mystery) out of this chaos arises consciousness: our experience of being us, of being a thinking, feeling, deciding person, residing just behind our eyes, looking out on the world, making important decisions like when to cross the road and where to go for lunch.

Consciousness is all we know about what’s going on inside our head, and it feels like it’s all there is to know. Masses of psychological studies, however, lead to a surprising conclusion. Consciousness is not the whole story. We are not privy to everything, or even most, of what our brain is up to.

The brain, like its fellow organs, is primarily in the business of keeping us alive, and, also like its less mysterious colleagues, the brain doesn’t need much input from us to get the job done. All sorts of activity goes on behind the scenes, outside of our conscious awareness and entirely beyond our control.

But just because our brain doesn’t let us in on all of its antics doesn’t mean its subconscious processes are unimportant or inconsequential. On the contrary, our perception, thoughts, beliefs, and decisions are all shaped by our brain’s secret shenanigans. Imaginative psychologists have come up with various metaphors for our mistaken intuition that we’re aware of, and in control of, everything that happens in our brain. As David Eagleman put it:

“Your consciousness is like a tiny stowaway on a transatlantic steamship, taking credit for the journey without acknowledging the massive engineering underfoot.”

Social scientist Jonathan Haidt likened consciousness to a rider on the back of an elephant: The rider can coax and cajole the elephant to go one direction or another by pulling on the reins, but at the end of the day, the elephant has whims of its own, and it’s bigger than we are.

Daniel Kahneman, one of the pioneers of the psychology of our brain’s hidden biases and shortcuts, described the division of labor between our conscious and unconscious mental processes in cinematic terms. “In the unlikely event” of a movie being made in which our brain’s two modes of activity were the main characters, consciousness “would be a supporting character who believes herself to be the hero,” Kahneman wrote.

I’d like to propose a similar metaphor, one more in keeping with our theme.

We imagine ourselves to be puppet masters, in full control of our mental faculties. In reality, however, we’re the puppet, tethered to our silent subconscious by invisible strings, dancing to its whims and then taking credit for the choreography ourselves.

Suspicious Minds

Does this mean that conspiracy theories are inherently irrational, nutty, harebrained, confused, crackpot, or pathological? Some pundits enthusiastically heap this kind of scorn and ridicule on conspiracy theories, painting them as the product of faulty thinking, which disbelievers are presumably immune to. Because of this dim view, tensions between conspiracy theorists and their critics can run high. As far as some conspiracy theorists are concerned, looking for psychological reasons for believing conspiracy theories is worse than simply challenging them on their facts. It can seem like an attempt to smear believers’ credibility, or even to write conspiracy theorists off as mentally unbalanced.

That’s not my goal.

This book isn’t about listing conspiracy theories like some catalog of bizarre beliefs. It’s not about singling out conspiracy theorists as a kind of alien species, or as a cautionary tale about how not to think.

The scientific findings we’ve amassed over the last few years tell a much more interesting story, one that has implications for us all. Michael Billig, an early trailblazer of research into conspiracy thinking, warned that when it comes to conspiracism, “it is easy to overemphasise its eccentricities at the expense of noticing what is psychologically commonplace.”

Conspiracy theories might be a result of some of our brain’s quirks and foibles, but, as we’ll see, they are by no means unique in that regard. Most of our quirks simply slide by unnoticed. Psychology can tell us a lot, not only about why people believe theories about grand conspiracies, but about how everyone’s mind works, and about why we believe anything at all.

So here’s my theory: We are each at the mercy of a hundred billion tiny conspirators, a cabal of conspiring neurons.

Throughout this book, we’ll be pulling back the curtain, shining a light into the shadowy recesses of our mind, and revealing how our brain’s secret shenanigans can shape the way we think about conspiracy theories, and a whole lot else besides.

Whether conspiracy theories reflect what’s really going on in the world or not, they tell us a lot about our secret selves. Conspiracy theories resonate with some of our brain’s built-in biases and shortcuts, and tap into some of our deepest desires, fears, and assumptions about the world and the people in it. We have innately suspicious minds. We are all natural born conspiracy theorists.

Chapter 1

The Age Of Conspiracy

“THIS is the age of conspiracy,” a character in Don DeLillo’s Running Dog intones, ominously, “the age of connections, links, secret relationships.” The quote has featured in countless books and essays on contemporary conspiracism, reflecting a belief, widely held among laypeople and scholars alike, that conspiracy theories have never been more popular than they are right now. As one scholar put it, “other centuries have only dabbled in conspiracy like amateurs. It is our century which has established conspiracy as a system of thought and a method of action.”

There’s no shortage of guesses about what ushered in this alleged golden age of conspiracism. The prime suspect, as far as many twenty-first-century pundits are concerned, is the rise of the Internet. Political scientist Jodi Dean began an article published in the year 2000 by asserting that