BEING DEAD BUT YET ALIVE. The psychological secrets of suicide – Britt Mann * A Very Human Ending: How Suicide Haunts Our Species – Jesse Bering.

There’s a tipping point where the agony of living becomes worse than the pain of dying. Many of us would rather go to our graves keeping up appearances than reveal we’re secretly coming undone. We are the only species on earth that deliberately ends its own life. Depression is a secret tomb that no one sees but you, being dead but yet alive.

Statistically we’re far more likely to perish intentionally by our own hand than to die of causes that are more obviously outside of our control. In fact, historically, suicide has accounted for more deaths than all wars and homicides combined.

“Never kill yourself while you are suicidal.” Edwin Shneidman, suicidologist

The suicidal mind is cognitively distorted, and unreliable when it comes to intelligent decision making. As such, waiting out a dark night of the soul, especially if you’re a teenager, a demographic more likely to kill themselves impulsively, can yield a brighter tomorrow.
Even if the act of killing oneself could be considered rational, the “tremendous urge” to do so rarely lasts longer than 24 hours.

Understanding suicidal urges, from a scientific perspective, can keep many people alive, at least in the short term. My hope is that knowing how it all works will help us to short-circuit the powerful impetus to die when things look calamitous.

It’s that everyday person dealing with suicidal thoughts, the suicidal person in all of us, who is the main subject of this book.

American writer and research psychologist Jesse Bering was considering taking his own life before he was offered a job in New Zealand.

Bering found himself fantasising about a tree near his house in upstate New York, which had a particular bough “crooked as an elbow” that seemed a perfect place from which to hang himself.
So goes the opening anecdote in his latest book, A Very Human Ending: How Suicide Haunts Our Species.

In New Zealand, his desire to die has subsided, but the spectre of suicide still emits a “low hum” in his life. His new book, born from a need to understand his own psyche, explores why people decide to kill themselves and aims to prompt those on the edge to think twice before stepping off.

“The best predictor of future behaviour is past behaviour, and unfortunately that’s the case with suicidal thinking and especially suicide attempts. The likelihood of me being in that state again is pretty high… I think of the book as this is me having a conversation with my future self, to talk me out of this.”

Stuff.co.nz

A Very Human Ending: How Suicide Haunts Our Species

Jesse Bering

‘This book touches on some deep questions relevant to us all… A fascinating, thoughtful, unflinching meditation on one of the most intriguing and curious aspects of the human condition.’ Dr Frank Tallis

Why do people want to kill themselves? Despite the prevalence of suicide in the developed world, it’s a question most of us fail to ask. On hearing news of a suicide we are devastated, but overwhelmingly we feel disbelief.

In A Very Human Ending, research psychologist Jesse Bering lifts the lid on this taboo subject, examining the suicidal mindset from the inside out to reveal the subtle tricks the mind can play when we’re easy emotional prey. In raising challenging questions Bering tests our contradictory superstitions about the act itself.

Combining cutting-edge research with investigative journalism and first-person testimony, Bering also addresses the history of suicide and its evolutionary inheritance to offer a personal, accessible, yet scientifically sound examination of why we are the only species on earth that deliberately ends its own life.

This penetrating analysis aims to demystify a subject that knows no cultural or demographic boundaries.

FOR THE SUICIDAL PERSON IN ALL OF US

And so far forth death’s terror doth affright,

He makes away himself, and hates the light

To make an end of fear and grief of heart,

He voluntarily dies to ease his smart.

Robert Burton, The Anatomy of Melancholy (1621)

Given the sensitive nature of the material in this book, I have not used any real names (unless otherwise stated), and I have changed physical descriptions, locations, and other features to ensure that no one is identifiable and their story is protected. This is because this is not a book about the individuals I have described, but about what we can learn from them and how they shape our lives.

1

the call to oblivion

“Just as life had been strange a few minutes before, so death was now as strange. The moth having righted himself now lay most decently and uncomplainingly composed. O yes, he seemed to say, death is stronger than I am.” Virginia Woolf, “The Death of the Moth” (1942)

Just behind my former home in upstate New York, in a small, dense pocket of woods, stood an imposing lichen-covered oak tree built by a century of sun and dampness and frost, its hardened veins crisscrossing on the forest floor. It was just one of many such specimens in this copse of dappled shadows, birds, and well-worn deer tracks, but this particular tree held out a single giant limb crooked as an elbow, a branch so deliberately poised that whenever I’d stroll past it while out with the dogs on our morning walks, it beckoned me.

It was the perfect place, I thought, to hang myself.

I’d had fleeting suicidal feelings since my late teenage years. But now I was being haunted day and night by what was, in fact, a not altogether displeasing image of my corpse spinning ever so slowly from a rope tied around this creaking, pain-relieving branch. It’s an absurd thought, that I could have observed my own dead body as if I’d casually stumbled upon it. And what good would my death serve if it meant having to view it through the eyes of the very same head that I so desperately wanted to escape from in the first place?

Nonetheless, I couldn’t help but fixate on this hypothetical scene of the lifeless, pirouetting dummy, this discarded sad sack whose long-suffering owner had been liberated from a world in which he didn’t truly belong.

Globally, a million people a year kill themselves, and many times that number try to do so. That’s probably a hugely conservative estimate, too; for reasons such as stigma and prohibitive insurance claims, suicides and attempts are notoriously underreported when it comes to the official statistics. Roughly, though, these figures translate to the fact that someone takes their own life every forty seconds. Between now and the time you finish reading the next paragraph, someone, somewhere, will decide that death is a more welcoming prospect than breathing another breath in this world and will permanently remove themselves from the population.

The specific issues leading any given person to become suicidal are as different, of course, as their DNA, involving chains of events that one expert calls “dizzying in their variety”. But that doesn’t mean there aren’t common currents pushing one toward this fatal act. We’re going to get a handle on those elusive themes in this book and, ultimately, begin to make sense of what remains one of the greatest riddles of all time: Why would an otherwise healthy person, someone even in the prime of their life, “go against nature” by hastening their death? After all, on the surface, suicide wouldn’t appear to be a very smart Darwinian tactic, given that being alive would seem to be the first order of business when it comes to survival of the fittest.

But like most scientific questions, it turns out it’s a little more complicated than that.

We won’t be dealing here with “doctor-assisted suicide” or medical euthanasia, what Derek Humphry in Final Exit regarded as “not suicide [but] self-deliverance, thoughtful, accelerated death to avoid further suffering from a physical disease.” I consider such merciful instances of death almost always to be ethical and humane. Instead, we’ll be focusing in the present book on those self-killings precipitated by fleeting or ongoing mental distress, namely, those that aren’t the obvious result of physical pain or infirmity.

Our primary analysis will center on the suicides of otherwise normal folks who are battling periodic depression or who suddenly find themselves in unexpected and overwhelming social circumstances. Plenty of suicides are linked to major psychiatric conditions (in which the person has a tenuous grasp of reality, such as in schizophrenia), but plenty aren’t. And it’s that everyday person dealing with suicidal thoughts, the suicidal person in all of us, who is the main subject of this book.

Benjamin Franklin famously quipped that “nine men in ten are would-be suicides.” Maybe so, but some of us will lapse into this state more readily. It’s now believed that around 43 percent of the variability in suicidal behavior among the general population can be explained by genetics, while the remaining 57 percent is attributable to environmental factors. When people who have a genetic predisposition for suicidality find themselves assaulted by a barrage of challenging life events, they are particularly vulnerable.

The catchall mental illness explanation only takes us so far. The vast majority of those who die by suicide, with some estimates as high as 90 percent, have underlying psychiatric conditions, especially mood disorders such as depressive illness and bipolar disorder. (I have frequently battled the former, coupled with social anxiety.) But it’s also true that not everyone with depression is suicidal, nor, believe it or not, is everyone who commits suicide depressed. According to one estimate, around 5 percent of depressed people will die by suicide, but about half a percent of the nondepressed population will end up taking their own lives too.

As for my own recurring compulsion to end my life, which flares up like a sore tooth at the whims of bad fortune, subsides for a while, yet always threatens to throb again, the types of problems that trigger these dangerous desires change over time. Edwin Shneidman, the famous suicidologist (yes, that’s an actual occupation), had an apt term for this acute, intolerable feeling that makes people want to die: “psychache,” he called it. It’s like what Winona Ryder’s character in the film Girl, Interrupted said after throwing back a fistful of aspirin in a botched suicide attempt: she just wanted “to make the shit stop.” And like a toothache, which can be set off by any number of packaged treats at our fingertips, psychache can be caused by an almost unlimited number of things in our modern world.

What made me suicidal as a teenager, the ever-looming prospect of being outed as gay in an intolerant small midwestern town, isn’t what pushes those despairing buttons in me now. I’ve been out of the closet for twenty years and with my partner, Juan, for over a decade. I do sometimes still wince at the memory of my adolescent fear regarding my sexual orientation, but the constant worry and anxiety about being forced prematurely out of the closet are gone now.

Still, other seemingly unsolvable problems continue to crop up as a matter of course.

“Psychache”

Psychache is a term first used by pioneer suicidologist Edwin Shneidman to refer to psychological pain that has become unbearable. The pain is deeper and more vicious than depression, although depression may be present as well.

What drew me to those woods behind my house not so long ago was my unemployment. I was sorely unprepared for it. Not long before, I’d enjoyed a fairly high status in the academic world. Frankly, I was spoiled. And lucky. That part I didn’t realize until much later. I’d gotten my first faculty position at the University of Arkansas straight out of grad school. Then, at the age of thirty, I moved to Northern Ireland, where I ran my own research center for several years at the Queen’s University Belfast.

Somewhere along the way, though, my scholarly ambitions began to wear thin.

It was a classic case of career burnout. By the time I was thirty-five, I’d already done most of what I’d set out to do: I was publishing in the best journals, speaking at conferences all over the world, scoring big grants, and writing about my research (in religion and psychology) for popular outlets. If I were smart, I’d have kept my nose to the grindstone. Instead, I grew restless. “Now what?” I asked myself.

The prospect of doing slight iterations of the same studies over and over became a nightmare, the academic’s equivalent of being stuck in a never-ending time loop. Besides, although controversial issues like religion are never definitively settled, I’d already answered my main research question, at least to my own satisfaction. (Question: “What are the odds that religious ideas are a product of the human mind?” Answer: “Pretty darn high.”)

With my professorial aspirations languishing, I began devoting more and more time to writing popular science essays for outfits such as Scientific American, Slate, Playboy, and a few others. My shtick was covering the salacious science beat. If you’d ever wondered about the relationship between gorilla fur, crab lice, and human pubic hair, about the mysterious psychopharmacological properties of semen, or why our species’ peculiar penis is shaped like it is, I was your man. In fact, I wrote that very book: Why Is the Penis Shaped Like That?

The next book I was to write had an even more squirm-inducing title: Perv: The Sexual Deviant in All of Us. Ever wonder why amputees turn on some folks, others can’t keep from having an orgasm when an attractive passerby lapses into a sneezing fit, or why women are generally kinkier than men? Again, I was your clickable go-to source.

Now, perhaps I should have thought more about how, in a conservative and unforgiving academic world, such subject matter would link my name inexorably with unspeakable things. Sure, my articles got page clicks. My books made you blush at Barnes & Noble. But these titles aren’t exactly ones that university deans and provosts like to boast about to donors. Once you go public with the story of how you masturbated as a teenager to a wax statue of an anatomically correct Neanderthal (I swear it made sense in context), there is no going back. You can pretty much forget about ever getting inducted into the Royal Society. “Oh good riddance,” I thought. Being finally free to write in a manner that suited me, and with my very own soapbox to say the things I’d long wanted to say about society’s soul-crushing hypocrisy, was incredibly appealing.

There was also the money. I wasn’t getting rich, but I’d earned large enough advances with my book deals to quit my academic job, book a one-way ticket from Belfast back to the U.S., and put a deposit down on an idyllic little cottage next to a babbling brook just outside of Ithaca. Back then, the dark patch of forest behind the house didn’t seem so sinister; it was just a great place to walk our two border terriers, Gulliver and Uma, our rambunctious Irish imports. The whole domestic setting seemed the perfect little place to build the perfect little writing life, a fairy tale built on the foundations of other people’s “deviant” sexualities.

You can probably see where this is heading. Juan, the more practical of us, raised his eyebrows early on over such an impulsive and drastic career move. By that I mean he was resolutely set against it. “What are you going to do after you finish the book?” he’d ask, sensing doom on the horizon.

“Write another book I guess. Maybe do freelance. I can always go back to teaching, right? C’mon, don’t be such a pessimist!”

“I don’t know,” Juan would say worriedly. But he also realized how unhappy I was in Northern Ireland, so he went along, grudgingly, with my loosely laid plans.

I wouldn’t say my fall from grace was spectacular. But it was close. If nothing else, it was deeply embarrassing. It’s hard to talk about it even now that I’m, literally, out of the woods.

That’s the thing. Much of what makes people suicidal is hard to talk about. Shame plays a major role. Even suicide notes, as we’ll learn, don’t always key us in to the real reason someone opts out of existence. (Forgive the glib euphemisms; there are only so many times one can write the word “suicide” without expecting readers’ eyes to glaze over.) If I’ll be asking others in this book to be honest about their feelings, though, it would be unfair for me to hide the reasons for my own self-loathing and sense of irredeemable failure during this dark period.

It’s often at our very lowest that we cling most desperately to our points of pride, as though we’re trying to convince not only others, but also ourselves, that we still have value.

Once, long ago, when I was about twenty, I met an old man of about ninety who carried around with him an ancient yellowed letter everywhere he went. People called him “the Judge.”

“I want to show you something, young man,” he said to me after a dinner party, reaching a shaky hand into his vest pocket to retrieve the letter. “See that?” he asked, beaming. A twisted arthritic finger was pointing to a typewritten line from the Prohibition era. As I tried to make sense of the words on the page, he studied my gaze under his watery pink lids to be sure it was really sinking in. “It’s a commendation from Franklin D. Roosevelt, the governor of New York back then. Says here, see, says right here I was the youngest Supreme Court Justice in the state. Twenty. Eight. Years. Old.” With each punctuated word, he gave the paper a firm tap. “Whaddaya think of that?”

“That’s incredibly impressive,” I said.

And it was. In fact, I remember being envious of him. Not because of his accomplished legal career, but because, as I so often have been in my life, I was suicidal at the time; and unlike me, he hadn’t long to go before slipping gently off into that good night.

One of the cruelest tricks played on the genuinely suicidal mind is that time slows to a crawl. When each new dawn welcomes what feels like an eternity of mental anguish, the yawning expanse between youth and old age might as well be interminable Hell itself.

But the point is that when we’re thrown against our wishes into a liminal state, that reluctant space between activity and senescence, employed and unemployed, married and single, closeted and out, citizen and prisoner, wife and widow, healthy person and patient, wealthy and broke, celebrity and has-been, and so on, it’s natural to take refuge in the glorified past of our previous selves. And to try to remind others of this eclipsed identity as well.

Alas, it’s a lost cause. Deep down, we know there’s no going back. Our identities have changed permanently in the minds of others. In the real world (the one whose axis doesn’t turn on cheap clichés and self-help canons about other people’s opinions of us not mattering), we’re inextricably woven into the fabric of society.

For better or worse, our well-being is hugely dependent on what others think we are.

Social psychologist Roy Baumeister, whom we’ll meet again later on, argues that idealistic life conditions actually heighten suicide risk because they create unreasonable standards for personal happiness. When things get a bit messy, people who have led mostly privileged lives, those seen by society as having it made, have a harder time coping with failures. “A reverse of fortune, as society is constituted,” wrote the eighteenth-century thinker Madame de Staël, “produces a most acute unhappiness, which multiplies itself in a thousand different ways. The most cruel of all, however, is the loss of the rank we occupied in the world. Imagination has as much to do with the past, as with the future, and we form with our possessions an alliance, whose rupture is most grievous.”

Like the Judge, I was dangerously proud of my earlier status. The precipitous drop between my past and my present job footing was discombobulating. I wouldn’t have admitted it then, or even known I was guilty of such a cognitive crime, but I also harbored an unspoken sense of entitlement. Now, I felt like Jean-Baptiste Clamence in The Fall by Albert Camus. In the face of a series of unsettling events, the successful Parisian defense attorney watches as his career, and his entire sense of meaning, goes up in smoke. Only when sifting through the ashes are his biases made clear. “As a result of being showered with blessings,” Clamence observes of his worldview till then,

“I felt, I hesitate to admit, marked out. Personally marked out, among all, for that long uninterrupted success. I refused to attribute that success to my own merits and could not believe that the conjunction in a single person of such different and such extreme virtues was the result of chance alone. This is why in my happy life I felt somehow that that happiness was authorized by some higher decree. When I add that I had no religion you can see even better how extraordinary that conviction was.”

Similarly, what I had long failed to fully appreciate were the many subtle and incalculable forces behind my earlier success, forces that had always been beyond my control. I felt somehow, what is the word, charmed is too strong, more like fatalistic. The reality was that I was like everyone else, simply held upright by the brittle bones of chance. And now, they threatened to give way. I’d worked hard, sure, but again, I’d been lucky. Back when I’d earned my doctoral degree, the economy wasn’t so gloomy and there were actually opportunities. I was also doing research on a hot new topic, my PhD dissertation was on children’s reasoning about the afterlife, and I was eager to make a name for myself in a burgeoning field. Now, eleven years later, having turned my back on the academy, fresh out of book ideas, along with a name pretty much synonymous with penises and pervs, it was a very different story. Career burnout? Please. That’s a luxury for the employed.

I just needed a steady paycheck.

The rational part of my brain assured me that my present dilemma was not the end of the world. Still, the little that remained of my book advance was drying up quickly, and my freelance writing gigs, feverishly busy as they kept me, didn’t pay enough to live on. Juan, who’d been earning his master’s degree in library science, was forced to take on a minimum-wage cashier job at the grocery store. He never said “I told you so.” He didn’t have to.

I knew going in that the grass wouldn’t necessarily be greener on the other side of a staid career, but never did I think it could be scorched earth. That perfect little cottage? It came with a mortgage. We didn’t have kids, but we did have two bright-eyed terriers and a cat named Tommy to feed and care for. Student loans. Taxes. Fuel. Credit cards. Electricity. Did I mention I was an uninsured Type I diabetic on an insulin pump? My blinkered pursuit of freedom to write at any cost was starting to have potentially fatal consequences.

Doing what you love for a living is great. But you know what’s even more fun? Food.

The irrational part of my brain couldn’t see how this state of affairs, which I’d stupidly, selfishly put us into, could possibly turn out well. Things were only going to get worse. Cue visions of foreclosure; confused, sad-faced, whimpering pets torn asunder and kenneled (or worse); loving family members, stretched to the limit already themselves, arguing with each other behind closed doors over how to handle the “situation with Jesse.” Everyone, including me, would be better off without me; I just needed to get the animals placed in a loving home and Juan to start a fresh, unimpeded life back in Santa Fe, where he’d been living when we first met.

“You’re such a loser,” I’d scold myself. “You had it made. Now look at you.”

Asshole though this internal voice could be, it did make some good points. What if that was the rational part of my brain, I began to wonder, and the more optimistic side, the one telling me it was all going to be okay, was delusional? After all, in the fast-moving world of science, I was now a dinosaur. I hadn’t taught or done research for years. I’d also burned a lot of bridges due to my, er, penchant for sensationalism. An air of Schadenfreude, which I’m sure I’d rightfully earned from some of my critics, would soon be palpable.

Overall, I felt like persona non grata among all the proper citizens surrounding me, all those deeply rooted trees that so obviously belonged to this world. Even the weeds had their place. But me? I didn’t belong. I was, in point of fact, simultaneously over- and under-qualified for everything I could think of, saddled with an obscure advanced degree and absolutely no practical skills. And of course I might as well be a registered sex offender with the titles of my books and articles (among the ones I was working on at the time, “The Masturbatory Habits of Priests” and “Erotic Vomiting”). I envied the mailman, the store clerk, the landscaper, anyone with a clear purpose.

Meanwhile, the stark contrast between my private and public life only exacerbated my despondency. From a distance, it would appear that my star was rising. I was giving talks at the Sydney Opera House, being interviewed regularly by NPR and the BBC, and getting profiled in the Guardian and the New York Times. Morgan Freeman featured my earlier work on religion for his show Through the Wormhole. Meanwhile, over in the UK, the British illusionist Derren Brown did the same on his televised specials. My blog at Scientific American was nominated for a Webby Award. Dan Savage, the famous sex advice columnist, tapped me to be his substitute columnist when he went away on vacation for a week. I even did the late-night talk show circuit. Chelsea Handler brazenly asked me, on national television, if I’d have anal sex with her. (I said yes, by the way, but I was just being polite.) A big Hollywood producer acquired the film option rights to one of my Slate articles.

With such exciting things happening in my life, how could I possibly complain, let alone be suicidal? After all, most writers would kill (no pun intended) to attract the sort of publicity I was getting.

“Oh, boo-hoo,” I told myself. “You’ve sure got it rough. Let’s ask one of those new Syrian refugees how they feel about your dire straits, shall we? How about that nice old woman up the road vomiting her guts out from chemo?” A close friend from my childhood had just had a stroke and was posting inspirational status updates on his Twitter account as he learned how to walk again, #trulyblessed. What right did I have to be so unhappy?

This kind of internal self-flagellation, like reading a never-ending scroll of excoriating social media comments projected onto my mind’s eye, only made being me more insufferable. I ambled along for months this way, miserable, smiling like an idiot and popping Prozac, hoping the constant gray drizzle in my brain would lift before the dam finally flooded and I got washed up into the trees behind the house.

No one knew it. At least, not the full extent of it.

From the outside looking in, even to the few close friends I had, things were going swimmingly. “When are you going to be on TV again?” they’d ask. “Where to next on your book tour?” Or “Hey, um, interesting article on the history of autofellatio.”

All was illusion. The truth is these experiences offered little in the way of remuneration. The press didn’t pay. The public speaking didn’t amount to much. And the film still hasn’t been made.

My outward successes only made me feel like an impostor. Less than a week after I appeared as a guest on Conan, I was racking my head trying to think of someone, anyone, who could get me a gun to blow it off. Yet look hard as you might at a recording of that interview from October 16, 2013, and you won’t see a trace of my crippling worry and despair. What does a suicidal person look like? Me, in that Conan interview.

Here’s the trouble. We’re not all ragingly mad, violently unstable, or even obviously depressed. Sometimes, a suicide seems like it comes out of nowhere. But that’s only because so many of us would rather go to our graves keeping up appearances than reveal we’re secretly coming undone.

In response to an article in Scientific American in which I’d shared my personal experiences as a suicidal gay teenager (while keeping my current mental health issues carefully under wraps), one woman wrote to me about the torturous divide between her own public persona and private inner life. “It’s difficult to admit that at age 34,” she explained, “with a young daughter, a graduate degree in history, divorced, and remarried to my high school love, that I’m Googling suicide. But what the world doesn’t see is years of fertility issues, childhood rape, post-traumatic stress disorder, a failing marriage, a custody battle, nonexistent career, mounds of debt, and a general hatred of myself. Depression is a secret tomb that no one sees but you, being dead but yet alive.”

She’s far from alone. There are more people walking around this way, “dead but yet alive,” than anyone realizes.

In my case, being open about my persistent suicidal thoughts just wasn’t something I was willing to do at a time when readers’ perception of me as a good, clearheaded thinker meant the difference between a respectable middle age and moving into my elderly father’s basement and living off cans of SpaghettiOs. Who’d buy a book by an author with a mood disorder, a has-been academic, and a self-confessed sensationalist who can’t stop thinking about killing himself, and take him seriously as an authoritative voice of reason?

I don’t blame anyone for missing the signs. What signs? Anyway, regrettably, I’ve done the same. The man who’d designed my website, a sweet, introverted IT guy also struggling to find a job, overdosed while lying on his couch around this time. His landlord found him three days later with his two cats standing on his chest, meowing. I was unnerved to realize that despite our mutual email pleasantries, we’d both in fact wanted to die.

We’re more intuitive than we give ourselves credit for, but people aren’t mind readers. We come to trust appearances; we forget that others are self-contained universes just like us, and the deep rifts forming at the edges go unnoticed, until another unreachable cosmos “suddenly” collapses. In the semiautobiographical The Book of Disquiet, Fernando Pessoa describes being surprised upon learning that a young shop assistant at the tobacco store had killed himself. “Poor lad,” writes Pessoa, “so he existed too!”

“We had all forgotten that, all of us; we who knew him only about as well as those who didn’t know him at all …. But what is certain is that he had a soul, enough soul to kill himself. Passions? Worries? Of course. But for me, and for the rest of humanity, all that remains is the memory of a foolish smile above a grubby woollen jacket that didn’t fit properly at the shoulders. That is all that remains to me of someone who felt deeply enough to kill himself, because, after all[,] there’s no other reason to kill oneself.”

These dark feelings are inherently social in nature. In the vast majority of cases, people kill themselves because of other people. Social problems, especially a hypervigilant concern with what others think, or will think of us if only they knew what we perceive to be some unpalatable truth, stoke a deadly fire.

Fortunately, suicide isn’t inevitable. As for me, it’s funny how things turned out. (And I mean “funny” in the way a lunatic giggles into his hand, because this entire wayward career experience must have knocked about five years off my life.) Just as things looked most grim, I was offered a job in one of the most beautiful places on the planet: the verdant wild bottom of the South Island in New Zealand. In July 2014 Juan, Gulliver, Uma, Tommy, and I, the whole hairy, harried family, packed up all of our earthly possessions, drove across country in a rented van, and flew from Los Angeles to Dunedin, where I’d been hired as the writing coordinator in a new Science Communication department at the University of Otago.

Ironically, I wouldn’t have been much of a candidate had I not devoted a few solid nail-biting years to freelancing. I’ll never disentangle myself from my reputation as a purveyor of pervy knowledge, but the Kiwis took my frank approach to sex with good humor.

Outside our small home on the Otago Peninsula, I’m serenaded by tuis and bellbirds; just up the road, penguins waddle from the shores of an endless ocean each dusk to nest in cliff-side dens, octopuses bobble at the harbor’s edge, while dolphins frolic and giant albatrosses the size of small aircraft soar overhead. At night the Milky Way is so dense and bright against the inky black sky, I can almost reach up and stir it, and every once in a while, the aurora australis, otherwise known as the southern lights, puts on a spectacular multicolored display. The dogs are thriving. The cat is purring. Juan has a great new job.

I therefore whisper this to you as though the cortical gods might conspire against me still: I’m currently “happy” with life.

I use that word happy with trepidation. It defines not a permanent state of being but slippery moments of non-worry. All we can do, really, is try to maximize the occurrence of such anxiety-free moments throughout the course of our lives; a worrisome mind is a place where suicide’s natural breeding ground, depression, spreads like black mold.

Personally, I’m all too conscious of the fact that had things gone this way or that but by a hairbreadth, my own story might just as well have ended years ago at the end of a rope on a tree that grows 8,000 miles away. Whether I’d have gone through with it is hard to say. I don’t enjoy pain, but I certainly wanted to die, and there’s a tipping point where the agony of living becomes worse than the pain of dying. It would be naive of me to assume that just because I called the universe’s bluff back then, my suicidal feelings have been banished for good.

As I write this, I’m forty-two years of age, and so there’s likely plenty of time for those dark impulses to return. Perhaps they’re merely lying in wait for the next unmitigated crisis and will come back with a vengeance. Also, according to some of the science we’ll be examining, I possess almost a full complement of traits that make certain types of people more prone to suicide than others. Impulsive. Check. Perfectionist. Check. Sensitive. Shame-prone. Mood-disordered. Sexual minority. Self-blaming. Check.

Check. Check. Check. Check.

We’re used to safeguarding ourselves against external threats and preparing for unexpected emergencies. We diligently strap on our seat belts every time we get in a car. We lock our doors before bed. Some of us even carry weapons in case we’re attacked by a stranger. Ironic, then, that statistically we’re far more likely to perish intentionally by our own hand than to die of causes that are more obviously outside of our control. In fact, historically, suicide has accounted for more deaths than all wars and homicides combined.

When I get suicidal again, not if, but when, I want to be armed with an up-to-date scientific understanding that allows me to critically analyze my own doomsday thoughts or, at the very least, to be an informed consumer of my own oblivion. I want you to have that same advantage. That’s largely why I have written this book: to reveal the psychological secrets of suicide, the tricks our minds play on us when we’re easy emotional prey. It’s also about leaving our own preconceptions aside and instead considering the many different experiences of those who’ve found themselves affected somehow, whether that means getting into the headspaces of people who killed themselves or are actively suicidal, those bereaved by the suicide death of a loved one, researchers who must quarantine their own emotions to study suicide objectively, or those on the grueling front lines of prevention campaigns.

Finally, we’ll be exploring some challenging, but fundamental, questions about how we wrestle with the ethical questions surrounding suicide, and how our intellect is often at odds with our emotions when it comes to weighing the “rationality” of other people’s fatal decisions.

Unlike most books on the subject, this one doesn’t necessarily aim to prevent all suicides. My own position, for lack of a better word, is nuanced. In fact, I tend to agree with the Austrian scholar Josef Popper-Lynkeus, who remarked in his book The Right to Live and the Duty to Die (1878) that, for him, “the knowledge of always being free to determine when or whether to give up one’s life inspires me with the feeling of a new power and gives me a composure comparable to the consciousness of the soldier on the battlefield.”

The trouble is, being emotionally fraught with despair can also distort human decision making in ways that undermine a person’s ability to decide intelligently “when or whether” to act. Because despite our firm conviction that there’s absolutely no escape from that seemingly unsolvable, hopeless situation we may currently find ourselves in, we’re often, as I was, dead wrong in retrospect.

“Never kill yourself while you are suicidal” was one of the suicidologist Edwin Shneidman’s favorite maxims. Intellectualizing a personal problem is a well-known defense mechanism, and it’s basically what I’ll be doing in this book. Some might see this coldly scientific approach as a sort of evasion tactic for avoiding unpleasant emotions. Yet with suicide, I’m convinced that understanding suicidal urges, from a scientific perspective, can keep many people alive, at least in the short term. My hope is that knowing how it all works will help us to short-circuit the powerful impetus to die when things look calamitous. I want people to be able to recognize when they’re under suicide’s hypnotic spell and to wait it out long enough for that spell to wear off. Acute episodes of suicidal ideation rarely last longer than twenty-four hours.

Education may not always lead to prevention, but it certainly makes for good preparation. And for those of you trying to understand how someone you loved or cared about could have done such an inexplicable thing as to take their own life, my hope is that you’ll benefit, too, from this examination of the self-destructive mind and how we, as a society, think about suicide.

*

from

A Very Human Ending: How Suicide Haunts Our Species

by Jesse Bering

get it at Amazon.com

MIT Creates AI that Predicts Depression from Speech – Cami Rosso.

Depression is one of the most common disorders globally, affecting the lives of over 300 million people and contributing to nearly 800,000 suicides annually.

For a mental health professional, asking the right questions and interpreting the answers is a key factor in diagnosis. But what if a diagnosis could be achieved through natural conversation, rather than requiring the context of specific questions and answers?

An innovative Massachusetts Institute of Technology (MIT) research team has discovered a way for AI to detect depression in individuals through identifying patterns in natural conversation.
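The article doesn’t describe the MIT model itself, but the idea of scoring free-form conversation rather than structured Q&A can be illustrated with a toy sketch. The sketch below is purely hypothetical, not the MIT team’s method: the marker word list, weighting, and threshold are all invented for the example, standing in for the patterns a real sequence model would learn from transcripts and audio.

```python
# Toy illustration (NOT the MIT model): score free-form conversation turns
# by the rate of hypothetical depression-linked words, then squash the mean
# rate through a logistic function into a 0-1 "risk" score. A real system
# would learn its features from data rather than use a hand-picked lexicon.
from math import exp

# Invented marker words, for illustration only.
MARKERS = {"sad", "tired", "empty", "hopeless", "alone", "pointless"}

def marker_rate(turn: str) -> float:
    """Fraction of words in one conversational turn that are markers."""
    words = [w.strip(".,!?").lower() for w in turn.split()]
    if not words:
        return 0.0
    return sum(w in MARKERS for w in words) / len(words)

def depression_score(turns: list[str], weight: float = 8.0) -> float:
    """Logistic squash of the mean marker rate across all turns."""
    mean = sum(marker_rate(t) for t in turns) / len(turns)
    return 1.0 / (1.0 + exp(-weight * (mean - 0.1)))

conversation = [
    "I feel tired and empty most days.",
    "Everything seems pointless lately.",
]
print(round(depression_score(conversation), 3))
```

The point of the sketch is only the pipeline shape: extract features from unprompted speech, aggregate across the conversation, and map to a score, with no reliance on specific diagnostic questions.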

Psychology Today

Lost Connections. Uncovering the Real Causes of Depression and the Unexpected Solutions – Johann Hari.

“Even when the tears didn’t come, I had an almost constant anxious monologue thrumming through my mind. Then I would chide myself: It’s all in your head. Get over it. Stop being so weak.”

As she was speaking, I started to experience something strange. Her voice seemed to be coming from very far away, and the room appeared to be moving around me uncontrollably. Then, quite unexpectedly, I started to explode, all over her hut, like a bomb of vomit and faeces. When, some time later, I became aware of my surroundings again, the old woman was looking at me with what seemed to be sad eyes. “This boy needs to go to a hospital,” she said. “He is very sick.”

Although I couldn’t understand why, all through the time I was working on this book, I kept thinking of something that doctor said to me that day, during my unglamorous hour of poisoning.

“You need your nausea. It is a message. It will tell us what is wrong with you.”

It only became clear to me why in a very different place, thousands of miles away, at the end of my journey into what really causes depression and anxiety, and how we can find our way back.
In every book about depression or severe anxiety by someone who has been through it, there is a long stretch of pain-porn in which the author describes, in ever more heightened language, the depth of the distress they felt. We needed that once, when other people didn’t know what depression or severe anxiety felt like. Thanks to the people who have been breaking this taboo for decades now, I don’t have to write that book all over again. That is not what I am going to write about here. Take it from me, though: it hurts.

Prologue: The Apple

One evening in the spring of 2014, I was walking down a small side street in central Hanoi when, on a stall by the side of the road, I saw an apple. It was freakishly large and red and inviting. I’m terrible at haggling, so I paid three dollars for this single piece of fruit, and carried it into my room in the Very Charming Hanoi Hotel. Like any good foreigner who’s read his health warnings, I washed the apple diligently with bottled water, but as I bit into it, I felt a bitter, chemical taste fill my mouth. It was the flavor I imagined, back when I was a kid, that all food was going to have after a nuclear war. I knew I should stop, but I was too tired to go out for any other food, so I ate half, and then set it aside, repelled.

Two hours later, the stomach pains began. For two days, I sat in my room as it began to spin around me faster and faster, but I wasn’t worried: I had been through food poisoning before. I knew the script. You just have to drink water and let it pass through you.

On the third day, I realized my time in Vietnam was slipping away in this sickness-blur. I was there to track down some survivors of the war for another book project I’m working on, so I called my translator, Dang Hoang Linh, and told him we should drive deep into the countryside in the south as we had planned all along. As we traveled around, a trashed hamlet here, an Agent Orange victim there, I was starting to feel steadier on my feet.

The next morning, he took me to the hut of a tiny eighty-seven-year-old woman. Her lips were dyed bright red from the herb she was chewing, and she pulled herself toward me across the floor on a wooden plank that somebody had managed to attach some wheels to. Throughout the war, she explained, she had spent nine years wandering from bomb to bomb, trying to keep her kids alive. They were the only survivors from her village.

As she was speaking, I started to experience something strange. Her voice seemed to be coming from very far away, and the room appeared to be moving around me uncontrollably. Then, quite unexpectedly, I started to explode, all over her hut, like a bomb of vomit and faeces. When, some time later, I became aware of my surroundings again, the old woman was looking at me with what seemed to be sad eyes. “This boy needs to go to a hospital,” she said. “He is very sick.”

No, no, I insisted. I had lived in East London on a staple diet of fried chicken for years, so this wasn’t my first time at the E. coli rodeo. I told Dang to drive me back to Hanoi so I could recover in my hotel room in front of CNN and the contents of my own stomach for a few more days.

“No,” the old woman said firmly. “The hospital.”

“Look, Johann,” Dang said to me, “this is the only person, with her kids, who survived nine years of American bombs in her village. I am going to listen to her health advice over yours.” He dragged me into his car, and I heaved and convulsed all the way to a sparse building that I learned later had been built by the Soviets decades before. I was the first foreigner ever to be treated there. From inside, a group of nurses, half excited, half baffled, rushed to me and carried me to a table, where they immediately started shouting. Dang was yelling back at the nurses, and they were shrieking now, in a language that had no words I could recognize. I noticed then that they had put something tight around my arm.

I also noticed that in the corner, there was a little girl with her nose in plaster, alone. She looked at me. I looked back. We were the only patients in the room.

As soon as they got the results of my blood pressure, dangerously low, the nurse said, as Dang translated, they started jabbing needles into me. Later, Dang told me that he had falsely said that I was a Very Important Person from the West, and that if I died there, it would be a source of shame for the people of Vietnam. This went on for ten minutes, as my arm got heavy with tubes and track marks. Then they started to shout questions at me about my symptoms through Dang. It was a seemingly endless list about the nature of my pain.

As all this was unfolding, I felt strangely split. Part of me was consumed with nausea, everything was spinning so fast, and I kept thinking: stop moving, stop moving, stop moving. But another part of me, below or beneath or beyond this, was conducting a quite rational little monologue. Oh. You are close to death. Felled by a poisoned apple. You are like Eve, or Snow White, or Alan Turing.

Then I thought, is your last thought really going to be that pretentious?

Then I thought, if eating half an apple did this to you, what do these chemicals do to the farmers who work in the fields with them day in, day out, for years? That’d be a good story, some day.

Then I thought, you shouldn’t be thinking like this if you are on the brink of death. You should be thinking of profound moments in your life. You should be having flashbacks. When have you been truly happy? I pictured myself as a small boy, lying on the bed in our old house with my grandmother, cuddling up to her and watching the British soap opera Coronation Street. I pictured myself years later when I was looking after my little nephew, and he woke me up at seven in the morning and lay next to me on the bed and asked me long and serious questions about life. I pictured myself lying on another bed, when I was seventeen, with the first person I ever fell in love with. It wasn’t a sexual memory, just lying there, being held.

Wait, I thought. Have you only ever been happy lying in bed? What does this reveal about you? Then this internal monologue was eclipsed by a heave. I begged the doctors to give me something that would switch off this extreme nausea. Dang talked animatedly with the doctors. Then he told me finally: “The doctor says you need your nausea. It is a message, and we must listen to the message. It will tell us what is wrong with you.”

And with that, I began to vomit again.

Many hours later, a doctor, a man in his forties, came into my field of vision and said: “We have learned that your kidneys have stopped working. You are extremely dehydrated. Because of the vomiting and diarrhea, you have not absorbed any water for a very long time, so you are like a man who has been wandering in the desert for days.” Dang interjected: “He says if we had driven you back to Hanoi, you would have died on the journey.”

The doctor told me to list everything I had eaten for three days. It was a short list. An apple. He looked at me quizzically. “Was it a clean apple?” Yes, I said, I washed it in bottled water. Everybody burst out laughing, as if I had served up a killer Chris Rock punch line. It turns out that you can’t just wash an apple in Vietnam. They are covered in pesticides so they can stand for months without rotting. You need to cut off the peel entirely, or this can happen to you.

Although I couldn’t understand why, all through the time I was working on this book, I kept thinking of something that doctor said to me that day, during my unglamorous hour of poisoning.

“You need your nausea. It is a message. It will tell us what is wrong with you.”

It only became clear to me why in a very different place, thousands of miles away, at the end of my journey into what really causes depression and anxiety, and how we can find our way back.

“When I flushed away my final packs of Paxil, I found these mysteries waiting for me, like children on a train platform, waiting to be collected, trying to catch my eye. Why was I still depressed? Why were there so many people like me?”

Introduction: A Mystery

I was eighteen years old when I swallowed my first antidepressant. I was standing in the weak English sunshine, outside a pharmacy in a shopping center in London. The tablet was white and small, and as I swallowed, it felt like a chemical kiss.

That morning I had gone to see my doctor. I struggled, I explained to him, to remember a day when I hadn’t felt a long crying jag judder its way out of me. Ever since I was a small child, at school, at college, at home, with friends, I would often have to absent myself, shut myself away, and cry. They were not a few tears. They were proper sobs. And even when the tears didn’t come, I had an almost constant anxious monologue thrumming through my mind. Then I would chide myself: It’s all in your head. Get over it. Stop being so weak.

I was embarrassed to say it then; I am embarrassed to type it now.

In every book about depression or severe anxiety by someone who has been through it, there is a long stretch of pain-porn in which the author describes, in ever more heightened language, the depth of the distress they felt. We needed that once, when other people didn’t know what depression or severe anxiety felt like. Thanks to the people who have been breaking this taboo for decades now, I don’t have to write that book all over again. That is not what I am going to write about here. Take it from me, though: it hurts.

A month before I walked into that doctor’s office, I found myself on a beach in Barcelona, crying as the waves washed into me, when, quite suddenly, the explanation for why this was happening, and how to find my way back, came to me. I was in the middle of traveling across Europe with a friend, in the summer before I became the first person in my family to go to a fancy university. We had bought cheap student rail passes, which meant for a month we could travel on any train in Europe for free, staying in youth hostels along the way. I had visions of yellow beaches and high culture, the Louvre, a spliff, hot Italians. But just before we left, I had been rejected by the first person I had ever really been in love with, and I felt emotion leaking out of me, even more than usual, like an embarrassing smell.

The trip did not go as I planned. I burst into tears on a gondola in Venice. I howled on the Matterhorn. I started to shake in Kafka’s house in Prague.

For me, it was unusual, but not that unusual. I’d had periods in my life like this before, when pain seemed unmanageable and I wanted to excuse myself from the world. But then in Barcelona, when I couldn’t stop crying, my friend said to me, “You realize most people don’t do this, don’t you?”

And then I experienced one of the very few epiphanies of my life. I turned to her and said: “I am depressed! It’s not all in my head! I’m not unhappy, I’m not weak, I’m depressed!”

This will sound odd, but what I experienced at that moment was a happy jolt, like unexpectedly finding a pile of money down the back of your sofa.

There is a term for feeling like this! It is a medical condition, like diabetes or irritable bowel syndrome! I had been hearing this, as a message bouncing through the culture, for years, of course, but now it clicked into place. They meant me! And there is, I suddenly recalled in that moment, a solution to depression: antidepressants. So that’s what I need! As soon as I get home, I will get these tablets, and I will be normal, and all the parts of me that are not depressed will be unshackled. I had always had drives that have nothing to do with depression, to meet people, to learn, to understand the world. They will be set free, I said, and soon.

The next day, we went to the Parc Güell, in the center of Barcelona. It’s a park designed by the architect Antoni Gaudí to be profoundly strange: everything is out of perspective, as if you have stepped into a funhouse mirror. At one point you walk through a tunnel in which everything is at a rippling angle, as though it has been hit by a wave. At another point, dragons rise close to buildings made of ripped iron that almost appears to be in motion. Nothing looks like the world should. As I stumbled around it, I thought, this is what my head is like: misshapen, wrong. And soon it’s going to be fixed.

Like all epiphanies, it seemed to come in a flash, but it had in fact been a long time coming. I knew what depression was. I had seen it play out in soap operas, and had read about it in books. I had heard my own mother talking about depression and anxiety, and seen her swallowing pills for it. And I knew about the cure, because it had been announced by the global media just a few years before. My teenage years coincided with the Age of Prozac, the dawn of new drugs that promised, for the first time, to be able to cure depression without crippling side effects. One of the bestselling books of the decade explained that these drugs actually make you “better than well”: they make you stronger and healthier than ordinary people.

I had soaked all this up, without ever really stopping to think about it. There was a lot of talk like that in the late 1990s; it was everywhere. And now I saw, at last, that it applied to me.

My doctor, it was clear on the afternoon when I went to see him, had absorbed all this, too. In his little office, he explained patiently to me why I felt this way. There are some people who naturally have depleted levels of a chemical named serotonin in their brains, he said, and this is what causes depression, that weird, persistent, misfiring unhappiness that won’t go away. Fortunately, just in time for my adulthood, there was a new generation of drugs, Selective Serotonin Reuptake Inhibitors (SSRIs), that restore your serotonin to the level of a normal person’s. Depression is a brain disease, he said, and this is the cure. He took out a picture of a brain and talked to me about it.

He was saying that depression was indeed all in my head, but in a very different way. It’s not imaginary. It’s very real, and it’s a brain malfunction.

He didn’t have to push. It was a story I was already sold on. I left within ten minutes with my script for Seroxat (or Paxil, as it’s known in the United States).

It was only years later, in the course of writing this book, that somebody pointed out to me all the questions my doctor didn’t ask that day. Like: Is there any reason you might feel so distressed? What’s been happening in your life? Is there anything hurting you that we might want to change? Even if he had asked, I don’t think I would have been able to answer him. I suspect I would have looked at him blankly. My life, I would have said, was good. Sure, I’d had some problems; but I had no reason to be unhappy, certainly not this unhappy.

In any case, he didn’t ask, and I didn’t wonder why. Over the next thirteen years, doctors kept writing me prescriptions for this drug, and none of them asked either. If they had, I suspect I would have been indignant, and said, If you have a broken brain that can’t generate the right happiness-producing chemicals, what’s the point of asking such questions?

Isn’t it cruel? You don’t ask a dementia patient why they can’t remember where they left their keys. What a stupid thing to ask me. Haven’t you been to medical school?

The doctor had told me it would take two weeks for me to feel the effect of the drugs, but that night, after collecting my prescription, I felt a warm surge running through me, a light thrumming that I was sure consisted of my brain synapses groaning and creaking into the correct configuration. I lay on my bed listening to a worn-out mix tape, and I knew I wasn’t going to be crying again for a long time.

I left for the university a few weeks later. With my new chemical armor, I wasn’t afraid. There, I became an evangelist for antidepressants. Whenever a friend was sad, I would offer them some of my pills to try, and I’d tell them to get some from the doctor. I became convinced that I was not merely nondepressed, but in some better state; I thought of it as “antidepression.” I was, I told myself, unusually resilient and energetic. I could feel some physical side effects from the drug, it was true, I was putting on a lot of weight, and I would find myself sweating unexpectedly. But that was a small price to pay to stop hemorrhaging sadness on the people around me. And, look! I could do anything now.

Within a few months, I started to notice that there were moments of welling sadness that would come back to me unexpectedly. They seemed inexplicable, and manifestly irrational. I returned to my doctor, and we agreed that I needed a higher dose. So my 20 milligrams a day was upped to 30 milligrams a day; my white pills became blue pills.

And so it continued, all through my late teens, and all through my twenties. I would preach the benefits of these drugs; after a while, the sadness would return; so I would be given a higher dose; 30 milligrams became 40; 40 became 50; until finally I was taking two big blue pills a day, at 60 milligrams. Every time, I got fatter; every time, I sweated more; every time, I knew it was a price worth paying.

I explained to anyone who asked that depression is a disease of the brain, and SSRIs are the cure. When I became a journalist, I wrote articles in newspapers explaining this patiently to the public. I described the sadness returning to me as a medical process, clearly there was a running down of chemicals in my brain, beyond my control or comprehension. Thank God these drugs are remarkably powerful, I explained, and they work. Look at me. I’m the proof. Every now and then, I would hear a doubt in my head, but I would swiftly dismiss it by swallowing an extra pill or two that day.

I had my story. In fact, I realize now, it came in two parts. The first was about what causes depression: it’s a malfunction in the brain, caused by serotonin deficiency or some other glitch in your mental hardware. The second was about what solves depression: drugs, which repair your brain chemistry.

I liked this story. It made sense to me. It guided me through life.

I only ever heard one other possible explanation for why I might feel this way. It didn’t come from my doctor, but I read it in books and saw it discussed on TV. It said depression and anxiety were carried in your genes. I knew my mother had been depressed and highly anxious before I was born (and after), and that we had these problems in my family running further back than that. They seemed to me to be parallel stories. They both said, it’s something innate, in your flesh.

I started work on this book three years ago because I was puzzled by some mysteries, weird things that I couldn’t explain with the stories I had preached for so long, and that I wanted to find answers to.

Here’s the first mystery. One day, years after I started taking these drugs, I was sitting in my therapist’s office talking about how grateful I was that antidepressants exist and were making me better. “That’s strange,” he said. “Because to me, it seems you are still really quite depressed.” I was perplexed. What could he possibly mean? “Well,” he said, “you are emotionally distressed a lot of the time. And it doesn’t sound very different, to me, from how you describe being before you took the drugs.”

I explained to him, patiently, that he didn’t understand: depression is caused by low levels of serotonin, and I was having my serotonin levels boosted. What sort of training, I wondered, do these therapists get?

Every now and then, as the years passed, he would gently make this point again. He would point out that my belief that an increased dose of the drugs was solving my problem didn’t seem to match the facts, since I remained down and depressed and anxious a lot of the time. I would recoil, with a mixture of anger and prissy superiority.

“No matter how high a dose I jacked up my antidepressants to, the sadness would always outrun it.”

It was years before I finally heard what he was saying. By the time I was in my early thirties, I had a kind of negative epiphany, the opposite of the one I had that day on a beach in Barcelona so many years before. No matter how high a dose I jacked up my antidepressants to, the sadness would always outrun it. There would be a bubble of apparently chemical relief, and then that sense of prickling unhappiness would return. I would start once again to have strong recurring thoughts that said: life is pointless; everything you’re doing is pointless; this whole thing is a fucking waste of time. It would be a thrum of unending anxiety.

So the first mystery I wanted to understand was: How could I still be depressed when I was taking antidepressants? I was doing everything right, and yet something was still wrong. Why?

“Addictions to legal and illegal drugs are now so widespread that the life expectancy of white men is declining for the first time in the entire peacetime history of the United States.”

A curious thing has happened to my family over the past few decades.

From when I was a little kid, I have memories of bottles of pills laid out on the kitchen table, waiting, with inscrutable white medical labels on them. I’ve written before about the drug addiction in my family, and how one of my earliest memories was of trying to wake up one of my relatives and not being able to. But when I was very young, it wasn’t the banned drugs that were dominant in our lives, it was the ones handed out by doctors: old-style antidepressants and tranquilizers like Valium, the chemical tweaks and alterations that got us through the day.

That’s not the curious thing that happened to us. The curious thing is that as I grew up, Western civilization caught up with my family. When I was small and I stayed with friends, I noticed that nobody in their families swallowed pills with their breakfast, lunch, or dinner. Nobody was sedated or amped up or antidepressed. My family was, I realized, unusual.

And then gradually, as the years passed, I noticed the pills appearing in more and more people’s lives, prescribed, approved, recommended. Today they are all around us. Some one in five U.S. adults is taking at least one drug for a psychiatric problem; nearly one in four middle-aged women in the United States is taking antidepressants at any given time; around one in ten boys at American high schools is being given a powerful stimulant to make them focus; and addictions to legal and illegal drugs are now so widespread that the life expectancy of white men is declining for the first time in the entire peacetime history of the United States.

These effects have radiated out across the Western world: for example, as you read this, one in three French people is taking a legal psychotropic drug such as an antidepressant, while the UK has almost the highest use in all of Europe. You can’t escape it: when scientists test the water supply of Western countries, they always find it is laced with antidepressants, because so many of us are taking them and excreting them that they simply can’t be filtered out of the water we drink every day. We are literally awash in these drugs.

What once seemed startling has become normal. Without talking about it much, we’ve accepted that a huge number of the people around us are so distressed that they feel they need to take a powerful chemical every day to keep themselves together.

So the second mystery that puzzled me was: Why were so many more people apparently feeling depressed and severely anxious? What changed?

“We’ve accepted that a huge number of the people around us are so distressed that they feel they need to take a powerful chemical every day to keep themselves together.”

Then, when I was thirty-one years old, I found myself chemically naked for the first time in my adult life. For almost a decade, I had been ignoring my therapist’s gentle reminders that I was still depressed despite my drugs. It was only after a crisis in my life, when I felt unequivocally terrible and couldn’t shake it off, that I decided to listen to him. What I had been trying for so long wasn’t, it seemed, working. And so, when I flushed away my final packs of Paxil, I found these mysteries waiting for me, like children on a train platform, waiting to be collected, trying to catch my eye. Why was I still depressed? Why were there so many people like me?

And I realized there was a third mystery, hanging over all of it. Could something other than bad brain chemistry have been causing depression and anxiety in me, and in so many people all around me? If so, what could it be?

Still, I put off looking into it. Once you settle into a story about your pain, you are extremely reluctant to challenge it. It was like a leash I had put on my distress to keep it under some control. I feared that if I messed with the story I had lived with for so long, the pain would be like an unchained animal, and would savage me.

Over a period of several years, I fell into a pattern. I would begin to research these mysteries, by reading scientific papers, and talking to some of the scientists who wrote them, but I always backed away, because what they said made me feel disoriented, and more anxious than I had been at the start. I focused on the work for another book, Chasing the Scream: The First and Last Days of the War on Drugs, instead. It sounds ridiculous to say I found it easier to interview hit men for the Mexican drug cartels than to look into what causes depression and anxiety, but messing with my story about my emotions, what I felt, and why I felt it, seemed more dangerous, to me, than that.

And then, finally, I decided I couldn’t ignore it any longer. So, over a period of three years, I went on a journey of over forty thousand miles. I conducted more than two hundred interviews across the world, with some of the most important social scientists in the world, with people who had been through the depths of depression and anxiety, and with people who had recovered. I ended up in all sorts of places I couldn’t have guessed at in the beginning, an Amish village in Indiana, a Berlin housing project rising up in rebellion, a Brazilian city that had banned advertising, a Baltimore laboratory taking people back through their traumas in a totally unexpected way. What I learned forced me to radically revise my story, about myself, and about the distress spreading like tar over our culture.

“Everything that causes an increase in depression also causes an increase in anxiety, and the other way around. They rise and fall together.”

I want to flag up, right at the start, two things that shape the language I am going to use all through the book. Both were surprising to me.

I was told by my doctor that I was suffering from both depression and acute anxiety. I had believed that those were separate problems, and that is how they were discussed for the thirteen years I received medical care for them. But I noticed something odd as I did my research. Everything that causes an increase in depression also causes an increase in anxiety, and the other way around. They rise and fall together.

It seemed curious, and I began to understand it only when, in Canada, I sat down with Robert Kohlenberg, a professor of psychology. He, too, once thought that depression and anxiety were different things. But as he studied it, for over twenty years now, he discovered, he says, that “the data are indicating they’re not that distinct.” In practice, “the diagnoses, particularly depression and anxiety, overlap.” Sometimes one part is more pronounced than the other, you might have panic attacks this month and be crying a lot the next month. But the idea that they are separate in the way that (say) having pneumonia and having a broken leg are separate isn’t borne out by the evidence. It’s “messy,” he has proved.

Robert’s side of the argument has been prevailing in the scientific debate. In the past few years, the National Institutes of Health, the main body funding medical research in the United States, has stopped funding studies that present depression and anxiety as different diagnoses. “They want something more realistic that corresponds to the way people are in actual clinical practice,” he explains.

I started to see depression and anxiety as like cover versions of the same song by different bands. Depression is a cover version by a downbeat emo band, and anxiety is a cover version by a screaming heavy metal group, but the underlying sheet music is the same. They’re not identical, but they are twinned.

*

from

Lost Connections. Uncovering the Real Causes of Depression and the Unexpected Solutions

by Johann Hari

get it at Amazon.com

Anatomy of a teenage suicide: Leo’s death will count – Virginia Fallon.

In the past year, 668 people took their own lives in New Zealand: the highest number since records began and the fourth year in a row the number increased.

In 2016, some time after the magnitude 7.8 Kaikōura earthquake rattled the capital, the 18-year-old took his own life.

Stuff.co.nz

get help

24/7 Lifeline – 0800 543 354

World in mental health crisis of ‘monumental suffering’, say experts – Sarah Boseley.

“Mental health problems kill more young people than any other cause around the world.” Prof. Vikram Patel, Harvard Medical School
Lancet report says 13.5 million lives could be saved every year if mental illness were addressed.

Every country in the world is facing and failing to tackle a mental health crisis, from epidemics of anxiety and depression to conditions caused by violence and trauma, according to a review by experts that estimates the rising cost will hit $16tn (£12tn) by 2030.

A team of 28 global experts assembled by the Lancet medical journal says there is a “collective failure to respond to this global health crisis” which “results in monumental loss of human capabilities and avoidable suffering.”

The burden of mental ill-health is rising everywhere, says the Lancet Commission, in spite of advances in the understanding of the causes and options for treatment. “The quality of mental health services is routinely worse than the quality of those for physical health,” says their report, launched at a global ministerial mental health summit in London.

The Guardian

Towards a New Era for Mental Health

Prabha S Chandra, Prabhat Chand

The new Lancet Commission on global mental health and sustainable development raises important issues at a time when many countries in the Global South are re-examining their national priorities in mental health. With its broad vision, the Commission shows why mental health is a public good that is a crucial part of the Sustainable Development Goals (SDGs). The Commission’s report emphasises the need to take a dimensional approach to mental health problems and their treatment; to allocate resources where they will be most cost-effective; to consider a life-course approach; and to build on existing research that will pave the way for better understanding of the causes, prevention, and treatment of mental health problems.

The Lancet

‘Living hell’: Inside one man’s battle with anxiety and depression – Bruce Munro.

It is this panic about the panic, the fear that it will overwhelm and expose you, that is the deep demon of anxiety disorders.

Anxiety and depression are a plague on Western society, especially New Zealand, and the problem is only getting worse as both become ever more common.

About 17% of New Zealanders have been diagnosed with depression, anxiety, bipolar disorder or a bitter cocktail of the above, at some point in their lives. During the next 12 months, 228,000 Kiwis are predicted to experience a major depressive disorder.

Globally, the World Health Organisation believes mental illness will become the second leading cause of disability within two years.

A fear process had established itself in my brain.

For several years, my brain had been building an extensive back catalogue of experiences it interpreted as fearful. A mind loop was set up. The amygdala, the almond-shaped core of the primal brain, detected a threat. A flood of adrenaline and cortisol was released, creating a hyper-attentive state. The neocortex scanned memories for explanations of this arousal. If what was going on, no matter how mundane – a phone ringing, having a conversation, driving across a bridge – had been labelled “fearful” by a past experience, then fear was offered to my conscious brain as the appropriate emotion.

Deep ruts were created that ran directly from any stimuli, past, present and future to a fear response.

By simple, tragic repetition, I had trained my thinking to be scared of virtually everything. …

New Zealand Herald

Need Help? Want to Talk?

24/7 LIFELINE: 0800 543 354

Thinking, Fast And Slow – Daniel Kahneman.

This book presents my current understanding of judgment and decision making, which has been shaped by psychological discoveries of recent decades.

The idea that our minds are susceptible to systematic errors is now generally accepted. Our research on judgment had far more effect on social science than we thought possible when we were working on it.

We can be blind to the obvious, and we are also blind to our blindness.

Daniel Kahneman is a Senior Scholar at Princeton University, and Emeritus Professor of Public Affairs, Woodrow Wilson School of Public and International Affairs. He was awarded the Nobel Prize in Economics in 2002.

*

Every author, I suppose, has in mind a setting in which readers of his or her work could benefit from having read it. Mine is the proverbial office water-cooler, where opinions are shared and gossip is exchanged. I hope to enrich the vocabulary that people use when they talk about the judgments and choices of others, the company’s new policies, or a colleague’s investment decisions.

Why be concerned with gossip? Because it is much easier, as well as far more enjoyable, to identify and label the mistakes of others than to recognize our own. Questioning what we believe and want is difficult at the best of times, and especially difficult when we most need to do it, but we can benefit from the informed opinions of others. Many of us spontaneously anticipate how friends and colleagues will evaluate our choices; the quality and content of these anticipated judgments therefore matters. The expectation of intelligent gossip is a powerful motive for serious self-criticism, more powerful than New Year resolutions to improve one’s decision making at work and at home.

To be a good diagnostician, a physician needs to acquire a large set of labels for diseases, each of which binds an idea of the illness and its symptoms, possible antecedents and causes, possible developments and consequences, and possible interventions to cure or mitigate the illness. Learning medicine consists in part of learning the language of medicine. A deeper understanding of judgments and choices also requires a richer vocabulary than is available in everyday language.

The hope for informed gossip is that there are distinctive patterns in the errors people make. Systematic errors are known as biases, and they recur predictably in particular circumstances. When the handsome and confident speaker bounds onto the stage, for example, you can anticipate that the audience will judge his comments more favorably than he deserves. The availability of a diagnostic label for this bias, the halo effect, makes it easier to anticipate, recognize, and understand.

When you are asked what you are thinking about, you can normally answer. You believe you know what goes on in your mind, which often consists of one conscious thought leading in an orderly way to another. But that is not the only way the mind works, nor indeed is that the typical way. Most impressions and thoughts arise in your conscious experience without your knowing how they got there. You cannot trace how you came to the belief that there is a lamp on the desk in front of you, or how you detected a hint of irritation in your spouse’s voice on the telephone, or how you managed to avoid a threat on the road before you became consciously aware of it. The mental work that produces impressions, intuitions, and many decisions goes on in silence in our mind.

Much of the discussion in this book is about biases of intuition. However, the focus on error does not denigrate human intelligence, any more than the attention to diseases in medical texts denies good health. Most of us are healthy most of the time, and most of our judgments and actions are appropriate most of the time. As we navigate our lives, we normally allow ourselves to be guided by impressions and feelings, and the confidence we have in our intuitive beliefs and preferences is usually justified. But not always. We are often confident even when we are wrong, and an objective observer is more likely to detect our errors than we are.

So this is my aim for watercooler conversations: improve the ability to identify and understand errors of judgment and choice, in others and eventually in ourselves, by providing a richer and more precise language to discuss them. In at least some cases, an accurate diagnosis may suggest an intervention to limit the damage that bad judgments and choices often cause.

ORIGINS

This book presents my current understanding of judgment and decision making, which has been shaped by psychological discoveries of recent decades. However, I trace the central ideas to the lucky day in 1969 when I asked a colleague to speak as a guest to a seminar I was teaching in the Department of Psychology at the Hebrew University of Jerusalem. Amos Tversky was considered a rising star in the field of decision research, indeed, in anything he did, so I knew we would have an interesting time. Many people who knew Amos thought he was the most intelligent person they had ever met. He was brilliant, voluble, and charismatic. He was also blessed with a perfect memory for jokes and an exceptional ability to use them to make a point. There was never a dull moment when Amos was around. He was then thirty-two; I was thirty-five.

Amos told the class about an ongoing program of research at the University of Michigan that sought to answer this question: Are people good intuitive statisticians? We already knew that people are good intuitive grammarians: at age four a child effortlessly conforms to the rules of grammar as she speaks, although she has no idea that such rules exist. Do people have a similar intuitive feel for the basic principles of statistics? Amos reported that the answer was a qualified yes. We had a lively debate in the seminar and ultimately concluded that a qualified no was a better answer.

Amos and I enjoyed the exchange and concluded that intuitive statistics was an interesting topic and that it would be fun to explore it together. That Friday we met for lunch at Café Rimon, the favorite hangout of bohemians and professors in Jerusalem, and planned a study of the statistical intuitions of sophisticated researchers. We had concluded in the seminar that our own intuitions were deficient. In spite of years of teaching and using statistics, we had not developed an intuitive sense of the reliability of statistical results observed in small samples. Our subjective judgments were biased: we were far too willing to believe research findings based on inadequate evidence and prone to collect too few observations in our own research. The goal of our study was to examine whether other researchers suffered from the same affliction.

We prepared a survey that included realistic scenarios of statistical issues that arise in research. Amos collected the responses of a group of expert participants in a meeting of the Society of Mathematical Psychology, including the authors of two statistical textbooks. As expected, we found that our expert colleagues, like us, greatly exaggerated the likelihood that the original result of an experiment would be successfully replicated even with a small sample. They also gave very poor advice to a fictitious graduate student about the number of observations she needed to collect. Even statisticians were not good intuitive statisticians.

While writing the article that reported these findings, Amos and I discovered that we enjoyed working together. Amos was always very funny, and in his presence I became funny as well, so we spent hours of solid work in continuous amusement. The pleasure we found in working together made us exceptionally patient; it is much easier to strive for perfection when you are never bored. Perhaps most important, we checked our critical weapons at the door. Both Amos and I were critical and argumentative, he even more than I, but during the years of our collaboration neither of us ever rejected out of hand anything the other said. Indeed, one of the great joys I found in the collaboration was that Amos frequently saw the point of my vague ideas much more clearly than I did. Amos was the more logical thinker, with an orientation to theory and an unfailing sense of direction. I was more intuitive and rooted in the psychology of perception, from which we borrowed many ideas. We were sufficiently similar to understand each other easily, and sufficiently different to surprise each other. We developed a routine in which we spent much of our working days together, often on long walks. For the next fourteen years our collaboration was the focus of our lives, and the work we did together during those years was the best either of us ever did.

We quickly adopted a practice that we maintained for many years. Our research was a conversation, in which we invented questions and jointly examined our intuitive answers. Each question was a small experiment, and we carried out many experiments in a single day. We were not seriously looking for the correct answer to the statistical questions we posed. Our aim was to identify and analyze the intuitive answer, the first one that came to mind, the one we were tempted to make even when we knew it to be wrong. We believed, correctly, as it happened, that any intuition that the two of us shared would be shared by many other people as well, and that it would be easy to demonstrate its effects on judgments.

We once discovered with great delight that we had identical silly ideas about the future professions of several toddlers we both knew. We could identify the argumentative three-year-old lawyer, the nerdy professor, the empathetic and mildly intrusive psychotherapist. Of course these predictions were absurd, but we still found them appealing. It was also clear that our intuitions were governed by the resemblance of each child to the cultural stereotype of a profession. The amusing exercise helped us develop a theory that was emerging in our minds at the time, about the role of resemblance in predictions. We went on to test and elaborate that theory in dozens of experiments, as in the following example.

As you consider the next question, please assume that Steve was selected at random from a representative sample:

An individual has been described by a neighbor as follows: “Steve is very shy and withdrawn, invariably helpful but with little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail.” Is Steve more likely to be a librarian or a farmer?

The resemblance of Steve’s personality to that of a stereotypical librarian strikes everyone immediately, but equally relevant statistical considerations are almost always ignored. Did it occur to you that there are more than 20 male farmers for each male librarian in the United States? Because there are so many more farmers, it is almost certain that more “meek and tidy” souls will be found on tractors than at library information desks. However, we found that participants in our experiments ignored the relevant statistical facts and relied exclusively on resemblance. We proposed that they used resemblance as a simplifying heuristic (roughly, a rule of thumb) to make a difficult judgment. The reliance on the heuristic caused predictable biases (systematic errors) in their predictions.
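The arithmetic behind this point can be made explicit with Bayes’ rule. The 20-to-1 farmer-to-librarian base rate comes from the text; the likelihoods below (how well each group fits the “meek and tidy” description) are purely illustrative guesses, not figures from the study:

```python
# Bayes' rule applied to the Steve question. The base rate is from the text;
# the two description likelihoods are hypothetical, chosen to favor librarians.

def posterior_librarian(base_rate_librarian, p_desc_given_lib, p_desc_given_farmer):
    """P(librarian | description) via Bayes' rule."""
    p_lib = base_rate_librarian
    p_farm = 1.0 - base_rate_librarian
    numerator = p_desc_given_lib * p_lib
    return numerator / (numerator + p_desc_given_farmer * p_farm)

# 20 male farmers for every male librarian -> base rate of 1/21.
p = posterior_librarian(1 / 21, p_desc_given_lib=0.40, p_desc_given_farmer=0.05)
print(f"P(librarian | description) = {p:.2f}")  # prints 0.29
```

Even when the description fits a librarian eight times better than a farmer, the lopsided base rate means Steve is still more than twice as likely to be a farmer, which is exactly the statistical consideration that resemblance makes us ignore.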

On another occasion, Amos and I wondered about the rate of divorce among professors in our university. We noticed that the question triggered a search of memory for divorced professors we knew or knew about, and that we judged the size of categories by the ease with which instances came to mind. We called this reliance on the ease of memory search the availability heuristic. In one of our studies, we asked participants to answer a simple question about words in a typical English text:

Consider the letter K . Is K more likely to appear as the first letter in a word OR as the third letter?

As any Scrabble player knows, it is much easier to come up with words that begin with a particular letter than to find words that have the same letter in the third position. This is true for every letter of the alphabet. We therefore expected respondents to exaggerate the frequency of letters appearing in the first position, even those letters (such as K, L, N, R, V) which in fact occur more frequently in the third position. Here again, the reliance on a heuristic produces a predictable bias in judgments. For example, I recently came to doubt my long-held impression that adultery is more common among politicians than among physicians or lawyers. I had even come up with explanations for that “fact,” including the aphrodisiac effect of power and the temptations of life away from home. I eventually realized that the transgressions of politicians are much more likely to be reported than the transgressions of lawyers and doctors. My intuitive impression could be due entirely to journalists’ choices of topics and to my reliance on the availability heuristic.
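The frequency claim in the K question reduces to a simple positional tally. A real analysis would run over a large corpus; the tiny sample below is only a hand-picked illustration of the counting procedure:

```python
# A minimal sketch of the tally behind the letter-K question: count how often
# a letter appears in the first versus the third position of each word.
# The sample list is illustrative only; a real study would use a large corpus.

def position_counts(words, letter):
    first = sum(1 for w in words if len(w) >= 1 and w[0] == letter)
    third = sum(1 for w in words if len(w) >= 3 and w[2] == letter)
    return first, third

sample = ["kite", "king", "make", "lake", "bike", "joke", "ankle", "take"]
first, third = position_counts(sample, "k")
print(first, third)  # prints 2 6: in this sample K is commoner in third position
```

The availability heuristic predicts that people will nonetheless guess "first position", because first-letter words are far easier to retrieve from memory than third-letter ones.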

Amos and I spent several years studying and documenting biases of intuitive thinking in various tasks, assigning probabilities to events, forecasting the future, assessing hypotheses, and estimating frequencies. In the fifth year of our collaboration, we presented our main findings in Science magazine, a publication read by scholars in many disciplines. The article (which is reproduced in full at the end of this book) was titled “Judgment Under Uncertainty: Heuristics and Biases.” It described the simplifying shortcuts of intuitive thinking and explained some 20 biases as manifestations of these heuristics, and also as demonstrations of the role of heuristics in judgment.

Historians of science have often noted that at any given time scholars in a particular field tend to share basic assumptions about their subject. Social scientists are no exception; they rely on a view of human nature that provides the background of most discussions of specific behaviors but is rarely questioned. Social scientists in the 1970s broadly accepted two ideas about human nature. First, people are generally rational, and their thinking is normally sound. Second, emotions such as fear, affection, and hatred explain most of the occasions on which people depart from rationality. Our article challenged both assumptions without discussing them directly. We documented systematic errors in the thinking of normal people, and we traced these errors to the design of the machinery of cognition rather than to the corruption of thought by emotion.

Our article attracted much more attention than we had expected, and it remains one of the most highly cited works in social science (more than three hundred scholarly articles referred to it in 2010). Scholars in other disciplines found it useful, and the ideas of heuristics and biases have been used productively in many fields, including medical diagnosis, legal judgment, intelligence analysis, philosophy, finance, statistics, and military strategy.

For example, students of policy have noted that the availability heuristic helps explain why some issues are highly salient in the public’s mind while others are neglected. People tend to assess the relative importance of issues by the ease with which they are retrieved from memory, and this is largely determined by the extent of coverage in the media. Frequently mentioned topics populate the mind even as others slip away from awareness. In turn, what the media choose to report corresponds to their view of what is currently on the public’s mind. It is no accident that authoritarian regimes exert substantial pressure on independent media. Because public interest is most easily aroused by dramatic events and by celebrities, media feeding frenzies are common. For several weeks after Michael Jackson’s death, for example, it was virtually impossible to find a television channel reporting on another topic. In contrast, there is little coverage of critical but unexciting issues that provide less drama, such as declining educational standards or overinvestment of medical resources in the last year of life. (As I write this, I notice that my choice of “little-covered” examples was guided by availability. The topics I chose as examples are mentioned often; equally important issues that are less available did not come to my mind.)

We did not fully realize it at the time, but a key reason for the broad appeal of “heuristics and biases” outside psychology was an incidental feature of our work: we almost always included in our articles the full text of the questions we had asked ourselves and our respondents. These questions served as demonstrations for the reader, allowing him to recognize how his own thinking was tripped up by cognitive biases. I hope you had such an experience as you read the question about Steve the librarian, which was intended to help you appreciate the power of resemblance as a cue to probability and to see how easy it is to ignore relevant statistical facts.

The use of demonstrations provided scholars from diverse disciplines, notably philosophers and economists, an unusual opportunity to observe possible flaws in their own thinking. Having seen themselves fail, they became more likely to question the dogmatic assumption, prevalent at the time, that the human mind is rational and logical. The choice of method was crucial: if we had reported results of only conventional experiments, the article would have been less noteworthy and less memorable. Furthermore, skeptical readers would have distanced themselves from the results by attributing judgment errors to the familiar fecklessness of undergraduates, the typical participants in psychological studies. Of course, we did not choose demonstrations over standard experiments because we wanted to influence philosophers and economists. We preferred demonstrations because they were more fun, and we were lucky in our choice of method as well as in many other ways.

A recurrent theme of this book is that luck plays a large role in every story of success; it is almost always easy to identify a small change in the story that would have turned a remarkable achievement into a mediocre outcome. Our story was no exception.

The reaction to our work was not uniformly positive. In particular, our focus on biases was criticized as suggesting an unfairly negative view of the mind. As expected in normal science, some investigators refined our ideas and others offered plausible alternatives. By and large, though, the idea that our minds are susceptible to systematic errors is now generally accepted. Our research on judgment had far more effect on social science than we thought possible when we were working on it.

Immediately after completing our review of judgment, we switched our attention to decision making under uncertainty. Our goal was to develop a psychological theory of how people make decisions about simple gambles. For example: Would you accept a bet on the toss of a coin where you win $130 if the coin shows heads and lose $100 if it shows tails? These elementary choices had long been used to examine broad questions about decision making, such as the relative weight that people assign to sure things and to uncertain outcomes. Our method did not change: we spent many days making up choice problems and examining whether our intuitive preferences conformed to the logic of choice. Here again, as in judgment, we observed systematic biases in our own decisions, intuitive preferences that consistently violated the rules of rational choice. Five years after the Science article, we published “Prospect Theory: An Analysis of Decision Under Risk,” a theory of choice that is by some counts more influential than our work on judgment, and is one of the foundations of behavioral economics.
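The coin-toss bet makes the gap between expected value and felt value concrete. This is a simplified, linear loss-aversion sketch, not the full prospect theory value function (which also curves outcomes and reweights probabilities); the loss-aversion factor of 2.25 is Kahneman and Tversky’s commonly cited estimate, used here only as an illustration:

```python
# The coin-toss bet from the text: win $130 on heads, lose $100 on tails.
# expected_value is the objective calculation; loss_averse_value is a
# simplified sketch in which losses loom larger than gains by a factor lam.

def expected_value(p_win, win, loss):
    return p_win * win - (1 - p_win) * loss

def loss_averse_value(p_win, win, loss, lam=2.25):
    # lam = 2.25 is an illustrative loss-aversion coefficient
    return p_win * win - (1 - p_win) * lam * loss

ev = expected_value(0.5, 130, 100)        # +15.0: the bet is objectively favorable
felt = loss_averse_value(0.5, 130, 100)   # -47.5: why most people still decline
print(ev, felt)
```

A rational-choice model says to accept any positive-expected-value gamble; the systematic refusal of bets like this one is among the violations that prospect theory was built to explain.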

Until geographical separation made it too difficult to go on, Amos and I enjoyed the extraordinary good fortune of a shared mind that was superior to our individual minds and of a relationship that made our work fun as well as productive. Our collaboration on judgment and decision making was the reason for the Nobel Prize that I received in 2002, which Amos would have shared had he not died, aged fifty-nine, in 1996.

WHERE WE ARE NOW

This book is not intended as an exposition of the early research that Amos and I conducted together, a task that has been ably carried out by many authors over the years. My main aim here is to present a view of how the mind works that draws on recent developments in cognitive and social psychology. One of the more important developments is that we now understand the marvels as well as the flaws of intuitive thought.

Amos and I did not address accurate intuitions beyond the casual statement that judgment heuristics “are quite useful, but sometimes lead to severe and systematic errors.” We focused on biases, both because we found them interesting in their own right and because they provided evidence for the heuristics of judgment. We did not ask ourselves whether all intuitive judgments under uncertainty are produced by the heuristics we studied; it is now clear that they are not. In particular, the accurate intuitions of experts are better explained by the effects of prolonged practice than by heuristics. We can now draw a richer and more balanced picture, in which skill and heuristics are alternative sources of intuitive judgments and choices.

The psychologist Gary Klein tells the story of a team of firefighters that entered a house in which the kitchen was on fire. Soon after they started hosing down the kitchen, the commander heard himself shout, “Let’s get out of here!” without realizing why. The floor collapsed almost immediately after the firefighters escaped. Only after the fact did the commander realize that the fire had been unusually quiet and that his ears had been unusually hot. Together, these impressions prompted what he called a “sixth sense of danger.” He had no idea what was wrong, but he knew something was wrong. It turned out that the heart of the fire had not been in the kitchen but in the basement beneath where the men had stood.

We have all heard such stories of expert intuition: the chess master who walks past a street game and announces “White mates in three” without stopping, or the physician who makes a complex diagnosis after a single glance at a patient. Expert intuition strikes us as magical, but it is not. Indeed, each of us performs feats of intuitive expertise many times each day. Most of us are pitch-perfect in detecting anger in the first word of a telephone call, recognize as we enter a room that we were the subject of the conversation, and quickly react to subtle signs that the driver of the car in the next lane is dangerous. Our everyday intuitive abilities are no less marvelous than the striking insights of an experienced firefighter or physician, only more common.

The psychology of accurate intuition involves no magic. Perhaps the best short statement of it is by the great Herbert Simon, who studied chess masters and showed that after thousands of hours of practice they come to see the pieces on the board differently from the rest of us. You can feel Simon’s impatience with the mythologizing of expert intuition when he writes: “The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition.”

We are not surprised when a two-year-old looks at a dog and says “doggie!” because we are used to the miracle of children learning to recognize and name things. Simon’s point is that the miracles of expert intuition have the same character. Valid intuitions develop when experts have learned to recognize familiar elements in a new situation and to act in a manner that is appropriate to it. Good intuitive judgments come to mind with the same immediacy as “doggie!”

Unfortunately, professionals’ intuitions do not all arise from true expertise. Many years ago I visited the chief investment officer of a large financial firm, who told me that he had just invested some tens of millions of dollars in the stock of Ford Motor Company. When I asked how he had made that decision, he replied that he had recently attended an automobile show and had been impressed. “Boy, do they know how to make a car!” was his explanation. He made it very clear that he trusted his gut feeling and was satisfied with himself and with his decision. I found it remarkable that he had apparently not considered the one question that an economist would call relevant: Is Ford stock currently underpriced? Instead, he had listened to his intuition; he liked the cars, he liked the company, and he liked the idea of owning its stock. From what we know about the accuracy of stock picking, it is reasonable to believe that he did not know what he was doing.

The specific heuristics that Amos and I studied provide little help in understanding how the executive came to invest in Ford stock, but a broader conception of heuristics now exists, which offers a good account. An important advance is that emotion now looms much larger in our understanding of intuitive judgments and choices than it did in the past. The executive’s decision would today be described as an example of the affect heuristic, where judgments and decisions are guided directly by feelings of liking and disliking, with little deliberation or reasoning.

When confronted with a problem, choosing a chess move or deciding whether to invest in a stock, the machinery of intuitive thought does the best it can. If the individual has relevant expertise, she will recognize the situation, and the intuitive solution that comes to her mind is likely to be correct. This is what happens when a chess master looks at a complex position: the few moves that immediately occur to him are all strong. When the question is difficult and a skilled solution is not available, intuition still has a shot: an answer may come to mind quickly, but it is not an answer to the original question. The question that the executive faced (should I invest in Ford stock?) was difficult, but the answer to an easier and related question (do I like Ford cars?) came readily to his mind and determined his choice. This is the essence of intuitive heuristics: when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.

The spontaneous search for an intuitive solution sometimes fails: neither an expert solution nor a heuristic answer comes to mind. In such cases we often find ourselves switching to a slower, more deliberate and effortful form of thinking. This is the slow thinking of the title. Fast thinking includes both variants of intuitive thought, the expert and the heuristic, as well as the entirely automatic mental activities of perception and memory, the operations that enable you to know there is a lamp on your desk or retrieve the name of the capital of Russia.

The distinction between fast and slow thinking has been explored by many psychologists over the last twenty-five years. For reasons that I explain more fully in the next chapter, I describe mental life by the metaphor of two agents, called System 1 and System 2, which respectively produce fast and slow thinking. I speak of the features of intuitive and deliberate thought as if they were traits and dispositions of two characters in your mind. In the picture that emerges from recent research, the intuitive System 1 is more influential than your experience tells you, and it is the secret author of many of the choices and judgments you make. Most of this book is about the workings of System 1 and the mutual influences between it and System 2.

WHAT COMES NEXT

The book is divided into five parts. Part 1 presents the basic elements of a two-systems approach to judgment and choice. It elaborates the distinction between the automatic operations of System 1 and the controlled operations of System 2, and shows how associative memory, the core of System 1, continually constructs a coherent interpretation of what is going on in our world at any instant. I attempt to give a sense of the complexity and richness of the automatic and often unconscious processes that underlie intuitive thinking, and of how these automatic processes explain the heuristics of judgment. A goal is to introduce a language for thinking and talking about the mind.

Part 2 updates the study of judgment heuristics and explores a major puzzle: Why is it so difficult for us to think statistically? We easily think associatively, we think metaphorically, we think causally, but statistics requires thinking about many things at once, which is something that System 1 is not designed to do.

The difficulties of statistical thinking contribute to the main theme of Part 3, which describes a puzzling limitation of our mind: our excessive confidence in what we believe we know, and our apparent inability to acknowledge the full extent of our ignorance and the uncertainty of the world we live in. We are prone to overestimate how much we understand about the world and to underestimate the role of chance in events. Overconfidence is fed by the illusory certainty of hindsight. My views on this topic have been influenced by Nassim Taleb, the author of The Black Swan. I hope for watercooler conversations that intelligently explore the lessons that can be learned from the past while resisting the lure of hindsight and the illusion of certainty.

The focus of Part 4 is a conversation with the discipline of economics on the nature of decision making and on the assumption that economic agents are rational. This section of the book provides a current view, informed by the two-system model, of the key concepts of prospect theory, the model of choice that Amos and I published in 1979. Subsequent chapters address several ways human choices deviate from the rules of rationality. I deal with the unfortunate tendency to treat problems in isolation, and with framing effects, where decisions are shaped by inconsequential features of choice problems. These observations, which are readily explained by the features of System 1, present a deep challenge to the rationality assumption favored in standard economics.

Part 5 describes recent research that has introduced a distinction between two selves, the experiencing self and the remembering self, which do not have the same interests. For example, we can expose people to two painful experiences. One of these experiences is strictly worse than the other, because it is longer. But the automatic formation of memories, a feature of System 1, has its rules, which we can exploit so that the worse episode leaves a better memory. When people later choose which episode to repeat, they are, naturally, guided by their remembering self and expose themselves (their experiencing self) to unnecessary pain. The distinction between two selves is applied to the measurement of well-being, where we find again that what makes the experiencing self happy is not quite the same as what satisfies the remembering self. How two selves within a single body can pursue happiness raises some difficult questions, both for individuals and for societies that view the well-being of the population as a policy objective.

A concluding chapter explores, in reverse order, the implications of three distinctions drawn in the book: between the experiencing and the remembering selves, between the conception of agents in classical economics and in behavioral economics (which borrows from psychology), and between the automatic System 1 and the effortful System 2. I return to the virtues of educating gossip and to what organizations might do to improve the quality of judgments and decisions that are made on their behalf.

Two articles I wrote with Amos are reproduced as appendixes to the book. The first is the review of judgment under uncertainty that I described earlier. The second, published in 1984, summarizes prospect theory as well as our studies of framing effects. The articles present the contributions that were cited by the Nobel committee and you may be surprised by how simple they are. Reading them will give you a sense of how much we knew a long time ago, and also of how much we have learned in recent decades.

PART ONE, TWO SYSTEMS

1: The Characters of the Story

To observe your mind in automatic mode, glance at the image below.

Figure 1

Your experience as you look at the woman’s face seamlessly combines what we normally call seeing and intuitive thinking. As surely and quickly as you saw that the young woman’s hair is dark, you knew she is angry.

Furthermore, what you saw extended into the future. You sensed that this woman is about to say some very unkind words, probably in a loud and strident voice. A premonition of what she was going to do next came to mind automatically and effortlessly. You did not intend to assess her mood or to anticipate what she might do, and your reaction to the picture did not have the feel of something you did. It just happened to you. It was an instance of fast thinking.

Now look at the following problem:

17×24

You knew immediately that this is a multiplication problem, and probably knew that you could solve it, with paper and pencil, if not without. You also had some vague intuitive knowledge of the range of possible results. You would be quick to recognize that both 12,609 and 123 are implausible. Without spending some time on the problem, however, you would not be certain that the answer is not 568. A precise solution did not come to mind, and you felt that you could choose whether or not to engage in the computation. If you have not done so yet, you should attempt the multiplication problem now, completing at least part of it.

You experienced slow thinking as you proceeded through a sequence of steps. You first retrieved from memory the cognitive program for multiplication that you learned in school, then you implemented it. Carrying out the computation was a strain. You felt the burden of holding much material in memory, as you needed to keep track of where you were and of where you were going, while holding on to the intermediate result. The process was mental work: deliberate, effortful, and orderly, a prototype of slow thinking. The computation was not only an event in your mind; your body was also involved. Your muscles tensed up, your blood pressure rose, and your heart rate increased. Someone looking closely at your eyes while you tackled this problem would have seen your pupils dilate. Your pupils contracted back to normal size as soon as you ended your work, when you found the answer (which is 408, by the way) or when you gave up.
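The sequence of steps just described can be made concrete. Below is a minimal sketch of the schoolbook procedure for 17 × 24; the code and variable names are illustrative only, not something the text prescribes:

```python
# The "cognitive program" for multiplication retrieved from memory:
# multiply by the ones digit, then by the tens, and hold the
# intermediate results in mind while adding them up.
a, b = 17, 24
ones_part = a * (b % 10)       # 17 * 4  = 68
tens_part = a * (b - b % 10)   # 17 * 20 = 340
answer = ones_part + tens_part
print(answer)  # 408
```

Note that the strain Kahneman describes comes precisely from the two intermediate values (68 and 340) that must be held in working memory before the final addition.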

TWO SYSTEMS

Psychologists have been intensely interested for several decades in the two modes of thinking evoked by the picture of the angry woman and by the multiplication problem, and have offered many labels for them. I adopt terms originally proposed by the psychologists Keith Stanovich and Richard West, and will refer to two systems in the mind, System 1 and System 2.

– System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control.

– System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.

The labels of System 1 and System 2 are widely used in psychology, but I go further than most in this book, which you can read as a psychodrama with two characters.

When we think of ourselves, we identify with System 2, the conscious, reasoning self that has beliefs, makes choices, and decides what to think about and what to do. Although System 2 believes itself to be where the action is, the automatic System 1 is the hero of the book. I describe System 1 as effortlessly originating impressions and feelings that are the main sources of the explicit beliefs and deliberate choices of System 2. The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps. I also describe circumstances in which System 2 takes over, overruling the freewheeling impulses and associations of System 1. You will be invited to think of the two systems as agents with their individual abilities, limitations, and functions.

In rough order of complexity, here are some examples of the automatic activities that are attributed to System 1:

– Detect that one object is more distant than another.

– Orient to the source of a sudden sound.

– Complete the phrase “bread and …”

– Make a “disgust face” when shown a horrible picture.

– Detect hostility in a voice.

– Answer 2 + 2 = ?

– Read words on large billboards.

– Drive a car on an empty road.

– Find a strong move in chess (if you are a chess master).

– Understand simple sentences.

– Recognize that a “meek and tidy soul with a passion for detail” resembles an occupational stereotype.

All these mental events belong with the angry woman: they occur automatically and require little or no effort. The capabilities of System 1 include innate skills that we share with other animals. We are born prepared to perceive the world around us, recognize objects, orient attention, avoid losses, and fear spiders. Other mental activities become fast and automatic through prolonged practice. System 1 has learned associations between ideas (the capital of France?); it has also learned skills such as reading and understanding nuances of social situations. Some skills, such as finding strong chess moves, are acquired only by specialized experts. Others are widely shared. Detecting the similarity of a personality sketch to an occupational stereotype requires broad knowledge of the language and the culture, which most of us possess. The knowledge is stored in memory and accessed without intention and without effort.

Several of the mental actions in the list are completely involuntary. You cannot refrain from understanding simple sentences in your own language or from orienting to a loud unexpected sound, nor can you prevent yourself from knowing that 2 + 2 = 4 or from thinking of Paris when the capital of France is mentioned. Other activities, such as chewing, are susceptible to voluntary control but normally run on automatic pilot. The control of attention is shared by the two systems. Orienting to a loud sound is normally an involuntary operation of System 1, which immediately mobilizes the voluntary attention of System 2. You may be able to resist turning toward the source of a loud and offensive comment at a crowded party, but even if your head does not move, your attention is initially directed to it, at least for a while. However, attention can be moved away from an unwanted focus, primarily by focusing intently on another target.

The highly diverse operations of System 2 have one feature in common: they require attention and are disrupted when attention is drawn away. Here are some examples:

– Brace for the starter gun in a race.

– Focus attention on the clowns in the circus.

– Focus on the voice of a particular person in a crowded and noisy room.

– Look for a woman with white hair.

– Search memory to identify a surprising sound.

– Maintain a faster walking speed than is natural for you.

– Monitor the appropriateness of your behavior in a social situation.

– Count the occurrences of the letter a in a page of text.

– Tell someone your phone number.

– Park in a narrow space (for most people except garage attendants).

– Compare two washing machines for overall value.

– Fill out a tax form.

– Check the validity of a complex logical argument.

In all these situations you must pay attention, and you will perform less well, or not at all, if you are not ready or if your attention is directed inappropriately. System 2 has some ability to change the way System 1 works, by programming the normally automatic functions of attention and memory. When waiting for a relative at a busy train station, for example, you can set yourself at will to look for a white-haired woman or a bearded man, and thereby increase the likelihood of detecting your relative from a distance. You can set your memory to search for capital cities that start with N or for French existentialist novels. And when you rent a car at London’s Heathrow Airport, the attendant will probably remind you that “we drive on the left side of the road over here.” In all these cases, you are asked to do something that does not come naturally, and you will find that the consistent maintenance of a set requires continuous exertion of at least some effort.

The often-used phrase “pay attention” is apt: you dispose of a limited budget of attention that you can allocate to activities, and if you try to go beyond your budget, you will fail. It is the mark of effortful activities that they interfere with each other, which is why it is difficult or impossible to conduct several at once. You could not compute the product of 17 x 24 while making a left turn into dense traffic, and you certainly should not try. You can do several things at once, but only if they are easy and undemanding. You are probably safe carrying on a conversation with a passenger while driving on an empty highway, and many parents have discovered, perhaps with some guilt, that they can read a story to a child while thinking of something else.

Everyone has some awareness of the limited capacity of attention, and our social behavior makes allowances for these limitations. When the driver of a car is overtaking a truck on a narrow road, for example, adult passengers quite sensibly stop talking. They know that distracting the driver is not a good idea, and they also suspect that he is temporarily deaf and will not hear what they say.

Intense focusing on a task can make people effectively blind, even to stimuli that normally attract attention.

The most dramatic demonstration was offered by Christopher Chabris and Daniel Simons in their book The Invisible Gorilla. They constructed a short film of two teams passing basketballs, one team wearing white shirts, the other wearing black. The viewers of the film are instructed to count the number of passes made by the white team, ignoring the black players. This task is difficult and completely absorbing. Halfway through the video, a woman wearing a gorilla suit appears, crosses the court, thumps her chest, and moves on. The gorilla is in view for 9 seconds. Many thousands of people have seen the video, and about half of them do not notice anything unusual. It is the counting task, and especially the instruction to ignore one of the teams, that causes the blindness. No one who watches the video without that task would miss the gorilla. Seeing and orienting are automatic functions of System 1, but they depend on the allocation of some attention to the relevant stimulus. The authors note that the most remarkable observation of their study is that people find its results very surprising. Indeed, the viewers who fail to see the gorilla are initially sure that it was not there, they cannot imagine missing such a striking event. The gorilla study illustrates two important facts about our minds: we can be blind to the obvious, and we are also blind to our blindness.

PLOT SYNOPSIS

The interaction of the two systems is a recurrent theme of the book, and a brief synopsis of the plot is in order.

In the story I will tell, Systems 1 and 2 are both active whenever we are awake. System 1 runs automatically and System 2 is normally in a comfortable low-effort mode, in which only a fraction of its capacity is engaged. System 1 continuously generates suggestions for System 2: impressions, intuitions, intentions, and feelings. If endorsed by System 2, impressions and intuitions turn into beliefs, and impulses turn into voluntary actions. When all goes smoothly, which is most of the time, System 2 adopts the suggestions of System 1 with little or no modification. You generally believe your impressions and act on your desires, and that is fine, usually.

When System 1 runs into difficulty, it calls on System 2 to support more detailed and specific processing that may solve the problem of the moment. System 2 is mobilized when a question arises for which System 1 does not offer an answer, as probably happened to you when you encountered the multiplication problem 17 x 24. You can also feel a surge of conscious attention whenever you are surprised. System 2 is activated when an event is detected that violates the model of the world that System 1 maintains. In that world, lamps do not jump, cats do not bark, and gorillas do not cross basketball courts. The gorilla experiment demonstrates that some attention is needed for the surprising stimulus to be detected. Surprise then activates and orients your attention: you will stare, and you will search your memory for a story that makes sense of the surprising event.

System 2 is also credited with the continuous monitoring of your own behavior, the control that keeps you polite when you are angry, and alert when you are driving at night. System 2 is mobilized to increased effort when it detects an error about to be made. Remember a time when you almost blurted out an offensive remark and note how hard you worked to restore control. In summary, most of what you (your System 2) think and do originates in your System 1, but System 2 takes over when things get difficult, and it normally has the last word.

The division of labor between System 1 and System 2 is highly efficient: it minimizes effort and optimizes performance. The arrangement works well most of the time because System 1 is generally very good at what it does: its models of familiar situations are accurate, its short-term predictions are usually accurate as well, and its initial reactions to challenges are swift and generally appropriate. System 1 has biases, however: systematic errors that it is prone to make in specified circumstances. As we shall see, it sometimes answers easier questions than the one it was asked, and it has little understanding of logic and statistics. One further limitation of System 1 is that it cannot be turned off. If you are shown a word on the screen in a language you know, you will read it, unless your attention is totally focused elsewhere.

CONFLICT

Figure 2 is a variant of a classic experiment that produces a conflict between the two systems. You should try the exercise before reading on.

Figure 2

You were almost certainly successful in saying the correct words in both tasks, and you surely discovered that some parts of each task were much easier than others. When you identified upper and lowercase, the left-hand column was easy and the right-hand column caused you to slow down and perhaps to stammer or stumble. When you named the position of words, the left-hand column was difficult and the right-hand column was much easier.

These tasks engage System 2, because saying “upper/lower” or “right/left” is not what you routinely do when looking down a column of words. One of the things you did to set yourself for the task was to program your memory so that the relevant words (upper and lower for the first task) were “on the tip of your tongue.” The prioritizing of the chosen words is effective and the mild temptation to read other words was fairly easy to resist when you went through the first column. But the second column was different, because it contained words for which you were set, and you could not ignore them. You were mostly able to respond correctly, but overcoming the competing response was a strain, and it slowed you down. You experienced a conflict between a task that you intended to carry out and an automatic response that interfered with it.

Conflict between an automatic reaction and an intention to control it is common in our lives. We are all familiar with the experience of trying not to stare at the oddly dressed couple at the neighboring table in a restaurant. We also know what it is like to force our attention on a boring book, when we constantly find ourselves returning to the point at which the reading lost its meaning. Where winters are hard, many drivers have memories of their car skidding out of control on the ice and of the struggle to follow well-rehearsed instructions that negate what they would naturally do: “Steer into the skid, and whatever you do, do not touch the brakes!” And every human being has had the experience of not telling someone to go to hell. One of the tasks of System 2 is to overcome the impulses of System 1. In other words, System 2 is in charge of self-control.

ILLUSIONS

To appreciate the autonomy of System 1, as well as the distinction between impressions and beliefs, take a good look at figure 3.

Figure 3

This picture is unremarkable: two horizontal lines of different lengths, with fins appended, pointing in different directions. The bottom line is obviously longer than the one above it. That is what we all see, and we naturally believe what we see. If you have already encountered this image, however, you recognize it as the famous Müller-Lyer illusion. As you can easily confirm by measuring them with a ruler, the horizontal lines are in fact identical in length.

Now that you have measured the lines, you, your System 2, the conscious being you call “I”, have a new belief: you know that the lines are equally long. If asked about their length, you will say what you know. But you still see the bottom line as longer. You have chosen to believe the measurement, but you cannot prevent System 1 from doing its thing; you cannot decide to see the lines as equal, although you know they are. To resist the illusion, there is only one thing you can do: you must learn to mistrust your impressions of the length of lines when fins are attached to them. To implement that rule, you must be able to recognize the illusory pattern and recall what you know about it. If you can do this, you will never again be fooled by the Müller-Lyer illusion. But you will still see one line as longer than the other.

Not all illusions are visual. There are illusions of thought, which we call cognitive illusions. As a graduate student, I attended some courses on the art and science of psychotherapy. During one of these lectures, our teacher imparted a morsel of clinical wisdom. This is what he told us:

“You will from time to time meet a patient who shares a disturbing tale of multiple mistakes in his previous treatment. He has been seen by several clinicians, and all failed him. The patient can lucidly describe how his therapists misunderstood him, but he has quickly perceived that you are different. You share the same feeling, are convinced that you understand him, and will be able to help.” At this point my teacher raised his voice as he said, “Do not even think of taking on this patient! Throw him out of the office! He is most likely a psychopath and you will not be able to help him.”

Many years later I learned that the teacher had warned us against psychopathic charm, and the leading authority in the study of psychopathy confirmed that the teacher’s advice was sound. The analogy to the Müller-Lyer illusion is close. What we were being taught was not how to feel about that patient. Our teacher took it for granted that the sympathy we would feel for the patient would not be under our control; it would arise from System 1. Furthermore, we were not being taught to be generally suspicious of our feelings about patients. We were told that a strong attraction to a patient with a repeated history of failed treatment is a danger sign, like the fins on the parallel lines. It is an illusion, a cognitive illusion, and I (System 2) was taught how to recognize it and advised not to believe it or act on it.

The question that is most often asked about cognitive illusions is whether they can be overcome. The message of these examples is not encouraging. Because System 1 operates automatically and cannot be turned off at will, errors of intuitive thought are often difficult to prevent. Biases cannot always be avoided, because System 2 may have no clue to the error. Even when cues to likely errors are available, errors can be prevented only by the enhanced monitoring and effortful activity of System 2.

As a way to live your life, however, continuous vigilance is not necessarily good, and it is certainly impractical. Constantly questioning our own thinking would be impossibly tedious, and System 2 is much too slow and inefficient to serve as a substitute for System 1 in making routine decisions. The best we can do is a compromise: learn to recognize situations in which mistakes are likely and try harder to avoid significant mistakes when the stakes are high. The premise of this book is that it is easier to recognize other people’s mistakes than our own.

USEFUL FICTIONS

You have been invited to think of the two systems as agents within the mind, with their individual personalities, abilities, and limitations. I will often use sentences in which the systems are the subjects, such as, “System 2 calculates products.”

The use of such language is considered a sin in the professional circles in which I travel, because it seems to explain the thoughts and actions of a person by the thoughts and actions of little people inside the person’s head. Grammatically the sentence about System 2 is similar to “The butler steals the petty cash.” My colleagues would point out that the butler’s action actually explains the disappearance of the cash, and they rightly question whether the sentence about System 2 explains how products are calculated. My answer is that the brief active sentence that attributes calculation to System 2 is intended as a description, not an explanation. It is meaningful only because of what you already know about System 2. It is shorthand for the following: “Mental arithmetic is a voluntary activity that requires effort, should not be performed while making a left turn, and is associated with dilated pupils and an accelerated heart rate.”

Similarly, the statement that “highway driving under routine conditions is left to System 1” means that steering the car around a bend is automatic and almost effortless. It also implies that an experienced driver can drive on an empty highway while conducting a conversation. Finally, “System 2 prevented James from reacting foolishly to the insult” means that James would have been more aggressive in his response if his capacity for effortful control had been disrupted (for example, if he had been drunk).

System 1 and System 2 are so central to the story I tell in this book that I must make it absolutely clear that they are fictitious characters. Systems 1 and 2 are not systems in the standard sense of entities with interacting aspects or parts. And there is no one part of the brain that either of the systems would call home. You may well ask: What is the point of introducing fictitious characters with ugly names into a serious book? The answer is that the characters are useful because of some quirks of our minds, yours and mine. A sentence is understood more easily if it describes what an agent (System 2) does than if it describes what something is, what properties it has. In other words, “System 2” is a better subject for a sentence than “mental arithmetic.” The mind, especially System 1, appears to have a special aptitude for the construction and interpretation of stories about active agents, who have personalities, habits, and abilities. You quickly formed a bad opinion of the thieving butler, you expect more bad behavior from him, and you will remember him for a while. This is also my hope for the language of systems.

Why call them System 1 and System 2 rather than the more descriptive “automatic system” and “effortful system”? The reason is simple: “Automatic system” takes longer to say than “System 1” and therefore takes more space in our working memory. This matters, because anything that occupies your working memory reduces your ability to think. You should treat “System 1” and “System 2” as nicknames, like Bob and Joe, identifying characters that you will get to know over the course of this book. The fictitious systems make it easier for me to think about judgment and choice, and will make it easier for you to understand what I say.

SPEAKING OF SYSTEM 1 AND SYSTEM 2

“He had an impression, but some of his impressions are illusions.”

“This was a pure System 1 response. She reacted to the threat before she recognized it.”

“This is your System 1 talking. Slow down and let your System 2 take control.”

Two

Attention and Effort

In the unlikely event of this book being made into a film, System 2 would be a supporting character who believes herself to be the hero. The defining feature of System 2, in this story, is that its operations are effortful, and one of its main characteristics is laziness, a reluctance to invest more effort than is strictly necessary. As a consequence, the thoughts and actions that System 2 believes it has chosen are often guided by the figure at the center of the story, System 1. However, there are vital tasks that only System 2 can perform because they require effort and acts of self-control in which the intuitions and impulses of System 1 are overcome.

MENTAL EFFORT

If you wish to experience your System 2 working at full tilt, the following exercise will do; it should bring you to the limits of your cognitive abilities within 5 seconds. To start, make up several strings of 4 digits, all different, and write each string on an index card. Place a blank card on top of the deck. The task that you will perform is called Add-1. Here is how it goes:

Start beating a steady rhythm (or better yet, set a metronome at 1/sec). Remove the blank card and read the four digits aloud. Wait for two beats, then report a string in which each of the original digits is incremented by 1. If the digits on the card are 5294, the correct response is 6305. Keeping the rhythm is important.

Few people can cope with more than four digits in the Add-1 task, but if you want a harder challenge, please try Add-3.
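The transformation rule is implied by the text's example: incrementing 5294 by 1 yields 6305, so 9 wraps around to 0. Assuming that mod-10 rule, the Add-1 and Add-3 transforms can be sketched in a few lines (the function name `add_n` is my own, not from the book):

```python
# A minimal sketch of the Add-1 / Add-3 transform, assuming each digit
# wraps modulo 10 (the text's example 5294 -> 6305 shows 9 becoming 0).
def add_n(digits: str, n: int = 1) -> str:
    """Increment every digit in the string by n, wrapping past 9."""
    return "".join(str((int(d) + n) % 10) for d in digits)

print(add_n("5294", 1))  # -> 6305 (Add-1, as in the text)
print(add_n("5294", 3))  # -> 8527 (Add-3)
```

Computing this on paper, to a metronome beat, while holding the untransformed digits in memory is what makes the task so punishing; the arithmetic itself is trivial.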

If you would like to know what your body is doing while your mind is hard at work, set up two piles of books on a sturdy table, place a video camera on one and lean your chin on the other, get the video going, and stare at the camera lens while you work on Add-1 or Add-3 exercises. Later, you will find in the changing size of your pupils a faithful record of how hard you worked.

I have a long personal history with the Add-1 task. Early in my career I spent a year at the University of Michigan, as a visitor in a laboratory that studied hypnosis. Casting about for a useful topic of research, I found an article in Scientific American in which the psychologist Eckhard Hess described the pupil of the eye as a window to the soul. I reread it recently and again found it inspiring.

It begins with Hess reporting that his wife had noticed his pupils widening as he watched beautiful nature pictures, and it ends with two striking pictures of the same good looking woman, who somehow appears much more attractive in one than in the other. There is only one difference: the pupils of the eyes appear dilated in the attractive picture and constricted in the other.

Hess also wrote of belladonna, a pupil-dilating substance that was used as a cosmetic, and of bazaar shoppers who wear dark glasses in order to hide their level of interest from merchants.

One of Hess’s findings especially captured my attention. He had noticed that the pupils are sensitive indicators of mental effort: they dilate substantially when people multiply two-digit numbers, and they dilate more if the problems are hard than if they are easy. His observations indicated that the response to mental effort is distinct from emotional arousal. Hess’s work did not have much to do with hypnosis, but I concluded that the idea of a visible indication of mental effort had promise as a research topic. A graduate student in the lab, Jackson Beatty, shared my enthusiasm and we got to work.

Beatty and I developed a setup similar to an optician’s examination room, in which the experimental participant leaned her head on a chin-and-forehead rest and stared at a camera while listening to prerecorded information and answering questions on the recorded beats of a metronome. The beats triggered an infrared flash every second, causing a picture to be taken. At the end of each experimental session, we would rush to have the film developed, project the images of the pupil on a screen, and go to work with a ruler. The method was a perfect fit for young and impatient researchers: we knew our results almost immediately, and they always told a clear story.

Beatty and I focused on paced tasks, such as Add-1, in which we knew precisely what was on the subject’s mind at any time. We recorded strings of digits on beats of the metronome and instructed the subject to repeat or transform the digits one by one, maintaining the same rhythm. We soon discovered that the size of the pupil varied second by second, reflecting the changing demands of the task. The shape of the response was an inverted V. As you experienced it if you tried Add-1 or Add-3, effort builds up with every added digit that you hear, reaches an almost intolerable peak as you rush to produce a transformed string during and immediately after the pause, and relaxes gradually as you “unload” your short-term memory.

The pupil data corresponded precisely to subjective experience: longer strings reliably caused larger dilations, the transformation task compounded the effort, and the peak of pupil size coincided with maximum effort. Add-1 with four digits caused a larger dilation than the task of holding seven digits for immediate recall. Add-3, which is much more difficult, is the most demanding that I ever observed. In the first 5 seconds, the pupil dilates by about 50% of its original area and heart rate increases by about 7 beats per minute. This is as hard as people can work; they give up if more is asked of them. When we exposed our subjects to more digits than they could remember, their pupils stopped dilating or actually shrank.

We worked for some months in a spacious basement suite in which we had set up a closed-circuit system that projected an image of the subject’s pupil on a screen in the corridor; we also could hear what was happening in the laboratory. The diameter of the projected pupil was about a foot; watching it dilate and contract when the participant was at work was a fascinating sight, quite an attraction for visitors in our lab. We amused ourselves and impressed our guests by our ability to divine when the participant gave up on a task. During a mental multiplication, the pupil normally dilated to a large size within a few seconds and stayed large as long as the individual kept working on the problem; it contracted immediately when she found a solution or gave up.

As we watched from the corridor, we would sometimes surprise both the owner of the pupil and our guests by asking, “Why did you stop working just now?” The answer from inside the lab was often, “How did you know?” to which we would reply, “We have a window to your soul.”

The casual observations we made from the corridor were sometimes as informative as the formal experiments. I made a significant discovery as I was idly watching a woman’s pupil during a break between two tasks. She had kept her position on the chin rest, so I could see the image of her eye while she engaged in routine conversation with the experimenter. I was surprised to see that the pupil remained small and did not noticeably dilate as she talked and listened. Unlike the tasks that we were studying, the mundane conversation apparently demanded little or no effort, no more than retaining two or three digits. This was a eureka moment: I realized that the tasks we had chosen for study were exceptionally effortful. An image came to mind: mental life, today I would speak of the life of System 2, is normally conducted at the pace of a comfortable walk, sometimes interrupted by episodes of jogging and on rare occasions by a frantic sprint. The Add-1 and Add-3 exercises are sprints, and casual chatting is a stroll.

We found that people, when engaged in a mental sprint, may become effectively blind. The authors of The Invisible Gorilla had made the gorilla “invisible” by keeping the observers intensely busy counting passes. We reported a rather less dramatic example of blindness during Add-1. Our subjects were exposed to a series of randomly flashing letters while they worked. They were told to give the task complete priority, but they were also asked to report, at the end of the digit task, whether the letter K had appeared at any time during the trial. The main finding was that the ability to detect and report the target letter changed in the course of the 10 seconds of the exercise. The observers almost never missed a K that was shown at the beginning or near the end of the Add-1 task, but they missed the target almost half the time when mental effort was at its peak, although we had pictures of their wide-open eye staring straight at it. Failures of detection followed the same inverted-V pattern as the dilating pupil. The similarity was reassuring: the pupil was a good measure of the physical arousal that accompanies mental effort, and we could go ahead and use it to understand how the mind works.

Much like the electricity meter outside your house or apartment, the pupils offer an index of the current rate at which mental energy is used. The analogy goes deep. Your use of electricity depends on what you choose to do, whether to light a room or toast a piece of bread. When you turn on a bulb or a toaster, it draws the energy it needs but no more. Similarly, we decide what to do, but we have limited control over the effort of doing it. Suppose you are shown four digits, say, 9462, and told that your life depends on holding them in memory for 10 seconds. However much you want to live, you cannot exert as much effort in this task as you would be forced to invest to complete an Add-3 transformation on the same digits.

System 2 and the electrical circuits in your home both have limited capacity, but they respond differently to threatened overload. A breaker trips when the demand for current is excessive, causing all devices on that circuit to lose power at once. In contrast, the response to mental overload is selective and precise: System 2 protects the most important activity, so it receives the attention it needs; “spare capacity” is allocated second by second to other tasks.

In our version of the gorilla experiment, we instructed the participants to assign priority to the digit task. We know that they followed that instruction, because the timing of the visual target had no effect on the main task. If the critical letter was presented at a time of high demand, the subjects simply did not see it. When the transformation task was less demanding, detection performance was better.

The sophisticated allocation of attention has been honed by a long evolutionary history. Orienting and responding quickly to the gravest threats or most promising opportunities improved the chance of survival, and this capability is certainly not restricted to humans. Even in modern humans, System 1 takes over in emergencies and assigns total priority to self-protective actions. Imagine yourself at the wheel of a car that unexpectedly skids on a large oil slick. You will find that you have responded to the threat before you became fully conscious of it.

Beatty and I worked together for only a year, but our collaboration had a large effect on our subsequent careers. He eventually became the leading authority on “cognitive pupillometry,” and I wrote a book titled Attention and Effort, which was based in large part on what we learned together and on follow-up research I did at Harvard the following year. We learned a great deal about the working mind, which I now think of as System 2, from measuring pupils in a wide variety of tasks.

As you become skilled in a task, its demand for energy diminishes. Studies of the brain have shown that the pattern of activity associated with an action changes as skill increases, with fewer brain regions involved. Talent has similar effects. Highly intelligent individuals need less effort to solve the same problems, as indicated by both pupil size and brain activity. A general “law of least effort” applies to cognitive as well as physical exertion. The law asserts that if there are several ways of achieving the same goal, people will eventually gravitate to the least demanding course of action. In the economy of action, effort is a cost, and the acquisition of skill is driven by the balance of benefits and costs. Laziness is built deep into our nature.

The tasks that we studied varied considerably in their effects on the pupil. At baseline, our subjects were awake, aware, and ready to engage in a task, probably at a higher level of arousal and cognitive readiness than usual. Holding one or two digits in memory or learning to associate a word with a digit (3 = door) produced reliable effects on momentary arousal above that baseline, but the effects were minuscule, only 5% of the increase in pupil diameter associated with Add-3. A task that required discriminating between the pitch of two tones yielded significantly larger dilations. Recent research has shown that inhibiting the tendency to read distracting words (as in figure 2 of the preceding chapter) also induces moderate effort. Tests of short-term memory for six or seven digits were more effortful. As you can experience, the request to retrieve and say aloud your phone number or your spouse’s birthday also requires a brief but significant effort, because the entire string must be held in memory as a response is organized. Mental multiplication of two-digit numbers and the Add-3 task are near the limit of what most people can do.

What makes some cognitive operations more demanding and effortful than others? What outcomes must we purchase in the currency of attention? What can System 2 do that System 1 cannot? We now have tentative answers to these questions.

Effort is required to maintain simultaneously in memory several ideas that require separate actions, or that need to be combined according to a rule, rehearsing your shopping list as you enter the supermarket, choosing between the fish and the veal at a restaurant, or combining a surprising result from a survey with the information that the sample was small, for example. System 2 is the only one that can follow rules, compare objects on several attributes, and make deliberate choices between options. The automatic System 1 does not have these capabilities. System 1 detects simple relations (“they are all alike,” “the son is much taller than the father”) and excels at integrating information about one thing, but it does not deal with multiple distinct topics at once, nor is it adept at using purely statistical information. System 1 will detect that a person described as “a meek and tidy soul, with a need for order and structure, and a passion for detail” resembles a caricature librarian, but combining this intuition with knowledge about the small number of librarians is a task that only System 2 can perform, if System 2 knows how to do so, which is true of few people.
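The point about the small number of librarians is a base-rate argument, and Bayes' rule makes it concrete. The numbers below are hypothetical illustrations, not figures from the text: even when the description fits librarians far better than it fits everyone else, the rarity of librarians keeps the posterior probability low.

```python
# Hypothetical numbers, purely for illustration; the text gives no figures.
# Suppose 1 adult in 500 is a librarian, the "meek and tidy" description
# fits 80% of librarians, and it also fits 10% of non-librarians.
p_librarian = 1 / 500        # base rate: P(librarian)
p_desc_given_lib = 0.80      # P(description | librarian)
p_desc_given_not = 0.10      # P(description | not librarian)

# Bayes' rule: P(librarian | description)
numerator = p_desc_given_lib * p_librarian
posterior = numerator / (numerator + p_desc_given_not * (1 - p_librarian))
print(f"{posterior:.3f}")  # about 0.016: the resemblance barely moves the odds
```

In the terms of this chapter, System 1 supplies the resemblance (the likelihoods); combining it with the base rate is exactly the kind of rule-following computation that only System 2 can perform.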

A crucial capability of System 2 is the adoption of “task sets”: it can program memory to obey an instruction that overrides habitual responses. Consider the following: Count all occurrences of the letter f in this page. This is not a task you have ever performed before and it will not come naturally to you, but your System 2 can take it on. It will be effortful to set yourself up for this exercise, and effortful to carry it out, though you will surely improve with practice. Psychologists speak of “executive control” to describe the adoption and termination of task sets, and neuroscientists have identified the main regions of the brain that serve the executive function. One of these regions is involved whenever a conflict must be resolved. Another is the prefrontal area of the brain, a region that is substantially more developed in humans than in other primates, and is involved in operations that we associate with intelligence.

Now suppose that at the end of the page you get another instruction: count all the commas in the next page. This will be harder, because you will have to overcome the newly acquired tendency to focus attention on the letter f. One of the significant discoveries of cognitive psychologists in recent decades is that switching from one task to another is effortful, especially under time pressure. The need for rapid switching is one of the reasons that Add-3 and mental multiplication are so difficult. To perform the Add-3 task, you must hold several digits in your working memory at the same time, associating each with a particular operation: some digits are in the queue to be transformed, one is in the process of transformation, and others, already transformed, are retained for reporting. Modern tests of working memory require the individual to switch repeatedly between two demanding tasks, retaining the results of one operation while performing the other. People who do well on these tests tend to do well on tests of general intelligence. However, the ability to control attention is not simply a measure of intelligence; measures of efficiency in the control of attention predict performance of air traffic controllers and of Israeli Air Force pilots beyond the effects of intelligence.

Time pressure is another driver of effort. As you carried out the Add-3 exercise, the rush was imposed in part by the metronome and in part by the load on memory. Like a juggler with several balls in the air, you cannot afford to slow down; the rate at which material decays in memory forces the pace, driving you to refresh and rehearse information before it is lost. Any task that requires you to keep several ideas in mind at the same time has the same hurried character. Unless you have the good fortune of a capacious working memory, you may be forced to work uncomfortably hard. The most effortful forms of slow thinking are those that require you to think fast.

You surely observed as you performed Add-3 how unusual it is for your mind to work so hard. Even if you think for a living, few of the mental tasks in which you engage in the course of a working day are as demanding as Add-3, or even as demanding as storing six digits for immediate recall. We normally avoid mental overload by dividing our tasks into multiple easy steps, committing intermediate results to long-term memory or to paper rather than to an easily overloaded working memory. We cover long distances by taking our time and conduct our mental lives by the law of least effort.

SPEAKING OF ATTENTION AND EFFORT

“I won’t try to solve this while driving. This is a pupil-dilating task. It requires mental effort!”

“The law of least effort is operating here. He will think as little as possible.”

“She did not forget about the meeting. She was completely focused on something else when the meeting was set and she just didn’t hear you.”

“What came quickly to my mind was an intuition from System 1. I’ll have to start over and search my memory deliberately.”

Three

The Lazy Controller

I spend a few months each year in Berkeley, and one of my great pleasures there is a daily four-mile walk on a marked path in the hills, with a fine view of San Francisco Bay. I usually keep track of my time and have learned a fair amount about effort from doing so. I have found a speed, about 17 minutes for a mile, which I experience as a stroll. I certainly exert physical effort and burn more calories at that speed than if I sat in a recliner, but I experience no strain, no conflict, and no need to push myself. I am also able to think and work while walking at that rate. Indeed, I suspect that the mild physical arousal of the walk may spill over into greater mental alertness.

System 2 also has a natural speed. You expend some mental energy in random thoughts and in monitoring what goes on around you even when your mind does nothing in particular, but there is little strain. Unless you are in a situation that makes you unusually wary or self-conscious, monitoring what happens in the environment demands little effort. Experiments in which people are primed with the idea of money suggest that money-primed individuals become more independent and more selfish, showing a reluctance to be involved with others, to depend on others, or to accept demands from others.

The psychologist who has done this remarkable research, Kathleen Vohs, has been laudably restrained in discussing the implications of her findings, leaving the task to her readers. Her experiments are profound; her findings suggest that living in a culture that surrounds us with reminders of money may shape our behavior and our attitudes in ways that we do not know about and of which we may not be proud. Some cultures provide frequent reminders of respect, others constantly remind their members of God, and some societies prime obedience by large images of the Dear Leader. Can there be any doubt that the ubiquitous portraits of the national leader in dictatorial societies not only convey the feeling that “Big Brother Is Watching” but also lead to an actual reduction in spontaneous thought and independent action?

The evidence of priming studies suggests that reminding people of their mortality increases the appeal of authoritarian ideas, which may become reassuring in the context of the terror of death. Other experiments have confirmed Freudian insights about the role of symbols and metaphors in unconscious associations. For example, consider the ambiguous word fragments W_ _ H and S_ _ P. People who were recently asked to think of an action of which they are ashamed are more likely to complete those fragments as WASH and SOAP and less likely to see WISH and SOUP. Furthermore, merely thinking about stabbing a coworker in the back leaves people more inclined to buy soap, disinfectant, or detergent than batteries, juice, or candy bars. Feeling that one’s soul is stained appears to trigger a desire to cleanse one’s body, an impulse that has been dubbed the “Lady Macbeth effect.”

“The world makes much less sense than you think. The coherence comes mostly from the way your mind works.”

“They were primed to find flaws, and this is exactly what they found.”

“His System 1 constructed a story, and his System 2 believed it. It happens to all of us.”

“I made myself smile and I’m actually feeling better!”

. . .

from

Thinking, Fast and Slow

by Daniel Kahneman


“Which of the me’s is me?” An Unquiet Mind. A Memoir of Moods and Madness – Kay Redfield Jamison.

Kay Jamison’s story is not of someone who has succeeded despite having a severe disorder, but of someone whose particular triumphs are a consequence of her disorder. She would not have observed what she has if she had not experienced what she did. The fact that she has endured such battles helps her to understand them in others.

“The disease that has, on several occasions, nearly killed me does kill tens of thousands of people every year: most are young, most die unnecessarily, and many are among the most imaginative and gifted that we as a society have. The major clinical problem in treating manic-depressive illness is not that there are not effective medications, there are, but that patients so often refuse to take them. Freedom from the control imposed by medication loses its meaning when the only alternatives are death and insanity.”

Her remarkable achievements are a beacon of hope to those who imagine that they cannot survive their condition, much less thrive with it.

I doubt sometimes whether a quiet & unagitated life would have suited me, yet I sometimes long for it. – Byron.

For centuries, the prevailing wisdom had been that having a mental illness would prevent a doctor from providing competent care, would indicate a vulnerability that would undermine the requisite aura of medical authority, and would kill any patient’s trust. In the face of this intense stigmatization, many bright and compassionate people with mental illness avoided the field of medicine, and many physicians with mental illness lived in secrecy. Kay Redfield Jamison herself led a closeted life for many years, even as she coauthored the standard medical textbook on bipolar illness. She suffered from the anguish inherent in her illness and from the pain that comes of living a lie.

With the publication of An Unquiet Mind, she left that lie behind, revealing her condition not only to her immediate colleagues and patients, but also to the world, and becoming the first clinician ever to describe travails with bipolar illness in a memoir. It was an act of extraordinary courage, a grand risk taking infused with a touch of manic exuberance, and it broke through a firewall of prejudice. You can have bipolar illness and be a brilliant clinician; you can have bipolar illness and be a leading authority on the condition, informed by your experiences rather than blinded by them. You can have bipolar illness and still have a joyful, fruitful, and dignified life.

Kay Jamison’s story is not of someone who has succeeded despite having a severe disorder, but of someone whose particular triumphs are a consequence of her disorder. Ovid said, “The wounded doctor heals best,” and Jamison’s openhearted declarations have been a salve for the wounded psyches of untold thousands of people; her unquiet mind has often soothed the minds of others. Her discernments come from a rare combination of observation and experience: she would not have observed what she has if she had not experienced what she did. The fact that she has endured such battles helps her to understand them in others, and her frankness about them offers an antidote to the pervasive shame that cloisters so many mentally ill people in fretful isolation.

Her remarkable achievements are a beacon of hope to those who imagine that they cannot survive their condition, much less thrive with it. Those who address mental illnesses tend to do so with either rigor or empathy; Jamison attains a rare marriage of the two. Just as her clinical work has been strengthened by her personal experience, her personal experience has been informed by her academic insights.

It is different to go through manic and depressive episodes when you know everything there is to know about your condition than it is to go through them in ignorance, constantly ambushed by the apparently inexplicable.

Like many people with mental illness, Jamison has had to reckon with the impossibility of separating her personality from her condition. “Which of the me’s is me?” she asks rhetorically in these pages. She kept up a nearly willful self-ignorance for years before she succumbed to knowledge; she resisted remedy at first because she feared she might lose some of her essential self to it. It took repeated descents and ascents into torment to instigate a kind of acquiescence. She has become glad of that surrender; it has saved a life that turns out to be well worth living. As this book bleached away her erstwhile denial, it has mediated her readers’ denial, too. As a professor of psychiatry at Johns Hopkins University and in her frequent lectures around the globe, Kay Jamison has taught a younger generation of doctors how to make sense of their patients: not merely how to treat them, but how to help them.

Though An Unquiet Mind does not provide diagnostic criteria or propose specific courses of treatment, it remains very much a book about medicine, with a touchingly fond portrait of science. Jamison expresses enormous gratitude to the doctors who have treated her and to the researchers who established the modes of treatment that have kept her alive. She engages medicine’s resonant clarities, and she tolerates the relative primitivism of our understanding of the brain.

Appreciating the biology of her illness and the mechanisms of its therapies allowed her to achieve a truce with her bipolar illness, and science informed her choice to speak openly about her skirmishes with it. That peace has not entirely precluded further episodes, but it makes them easier to tolerate when they come. Equally, it has given her the courage to stay on medication and the resilience to sustain other forms of self-care.

You can feel in Jamison’s writing a bracing honesty unmarred by self-pity. It seems clear Jamison is not by nature an exhibitionist, and making so much of her private life into public property cannot have been easy for her. On every page, you sense the resolve it has required. Her book differs from much confessional writing in that, although she describes certain experiences in agonizing detail, she maintains a vocabulary of discretion. An Unquiet Mind may have been intended as a book about an illness, not about a life, but it is both. There is satisfaction in making your affliction useful to other people; it redeems what seemed in the instance to be useless experiences. That insistence on making something good out of something bad is the vital force in her writing.

I met Kay Jamison in 1995, shortly after the publication of An Unquiet Mind, when I had first decided to write about depression. I contacted her to request an interview, and she suggested we have lunch; she then invited me to my first serious scientific conference, a suicide symposium she had organized, attended by the leading figures in the field. Her kindness to me in the early stages of my research points to a personal generosity that mirrors the brave generosity of her books. The forbearance that has made her a good clinician and a good writer also makes her a good friend.

In the years since then, Jamison has produced a corpus of work that, in a very different kind of bipolarity, limns the glittering revelations of psychosis only to return to its perilous ordeals. Touched with Fire (1993) had already chronicled the artistic achievements of people with bipolar illness; Night Falls Fast (1999) tackles the impossible subject of suicide; Exuberance (2004) tells us how unipolar mania has generated many intellectual and artistic breakthroughs; and Nothing Was the Same (2009) is a closely observed and deeply personal account of losing her second husband to cancer, a journey complicated by her unreliable moods. Her illness runs through these books even when it is not her explicit topic. But that recurrent theme does not narrow the books into ego studies; instead, it makes them startlingly, powerfully intimate.

Jamison consistently evinces a romantic attachment to language itself. Her sentences flow out in an often poetic rapture, and she displays a sustaining love for the poetry of others, quoting it by the apposite yard. Few doctors know poetry so well, and few poets understand so much biology, and Jamison serves as a translator between humanism and science, which are so often disparate vocabularies for the same phenomena. While poetry inflects her literary voice, it sits comfortably beside a sense of humor. Irony is among her best defenses against gloom, and the zing of her comic asides makes reading about unbearable things a great deal more bearable. The crossing point of precision, luminosity, and hilarity may be the safest domain for an inconsistent mind, a nexus of relief for someone whose stoicism cannot fully assuage her distress.

Two decades after its publication, An Unquiet Mind remains fresh. There’s been a bit more science in the field and a great deal of social change regarding mental illness, change this book helped to create: a society in which what was relentlessly shameful is more easily and frequently acknowledged. The book delineates not how to treat the condition, but how to live with the condition and its treatments, and that remains relevant even as actual treatments evolve.

Jamison does not stint on her own despair, but she has constructed meaning and built an identity from it. While she might not have opted for this illness, neither does she entirely regret it; she prefers, as she writes so movingly, a life of passionate turbulence to one of tedious calm. Learning to appreciate the things you also regret makes for a good way forward. If you have bipolar illness, this book will help you to forgive yourself for everything that has gone awry; if you do not, it will perhaps show how a steely tenacity can imbue disasters with value, a capacity that stands to enrich any and every life.

Andrew Solomon

Kay Redfield Jamison

Prologue

When it’s two o’clock in the morning, and you’re manic, even the UCLA Medical Center has a certain appeal. The hospital, ordinarily a cold clotting of uninteresting buildings, became for me, that fall morning not quite twenty years ago, a focus of my finely wired, exquisitely alert nervous system. With vibrissae twinging, antennae perked, eyes fast-forwarding and fly-faceted, I took in everything around me. I was on the run. Not just on the run but fast and furious on the run, darting back and forth across the hospital parking lot trying to use up a boundless, restless, manic energy. I was running fast, but slowly going mad.

The man I was with, a colleague from the medical school, had stopped running an hour earlier and was, he said impatiently, exhausted. This, to a saner mind, would not have been surprising: the usual distinction between day and night had long since disappeared for the two of us, and the endless hours of scotch, brawling, and fallings about in laughter had taken an obvious, if not final, toll. We should have been sleeping or working, publishing not perishing, reading journals, writing in charts, or drawing tedious scientific graphs that no one would read.

Suddenly a police car pulled up. Even in my less than totally lucid state of mind I could see that the officer had his hand on his gun as he got out of the car. “What in the hell are you doing running around the parking lot at this hour?” he asked. A not unreasonable question. My few remaining islets of judgment reached out to one another and linked up long enough to conclude that this particular situation was going to be hard to explain. My colleague, fortunately, was thinking far better than I was and managed to reach down into some deeply intuitive part of his own and the world’s collective unconscious and said, “We’re both on the faculty in the psychiatry department.” The policeman looked at us, smiled, went back to his squad car, and drove away. Being professors of psychiatry explained everything.

Within a month of signing my appointment papers to become an assistant professor of psychiatry at the University of California, Los Angeles, I was well on my way to madness; it was 1974, and I was twenty-eight years old. Within three months I was manic beyond recognition and just beginning a long, costly personal war against a medication that I would, in a few years’ time, be strongly encouraging others to take. My illness, and my struggles against the drug that ultimately saved my life and restored my sanity, had been years in the making.

For as long as I can remember I was frighteningly, although often wonderfully, beholden to moods. Intensely emotional as a child, mercurial as a young girl, first severely depressed as an adolescent, and then unrelentingly caught up in the cycles of manic-depressive illness by the time I began my professional life, I became, both by necessity and intellectual inclination, a student of moods. It has been the only way I know to understand, indeed to accept, the illness I have; it also has been the only way I know to try and make a difference in the lives of others who also suffer from mood disorders.

The disease that has, on several occasions, nearly killed me does kill tens of thousands of people every year: most are young, most die unnecessarily, and many are among the most imaginative and gifted that we as a society have.

The Chinese believe that before you can conquer a beast you first must make it beautiful. In some strange way, I have tried to do that with manic-depressive illness. It has been a fascinating, albeit deadly, enemy and companion; I have found it to be seductively complicated, a distillation both of what is finest in our natures, and of what is most dangerous. In order to contend with it, I first had to know it in all of its moods and infinite disguises, understand its real and imagined powers. Because my illness seemed at first simply to be an extension of myself, that is to say, of my ordinarily changeable moods, energies, and enthusiasms, I perhaps gave it at times too much quarter. And, because I thought I ought to be able to handle my increasingly violent mood swings by myself, for the first ten years I did not seek any kind of treatment. Even after my condition became a medical emergency, I still intermittently resisted the medications that both my training and clinical research expertise told me were the only sensible way to deal with the illness I had.

My manias, at least in their early and mild forms, were absolutely intoxicating states that gave rise to great personal pleasure, an incomparable flow of thoughts, and a ceaseless energy that allowed the translation of new ideas into papers and projects. Medications not only cut into these fast-flowing, high-flying times, they also brought with them seemingly intolerable side effects. It took me far too long to realize that lost years and relationships cannot be recovered, that damage done to oneself and others cannot always be put right again, and that freedom from the control imposed by medication loses its meaning when the only alternatives are death and insanity.

The war that I waged against myself is not an uncommon one. The major clinical problem in treating manic-depressive illness is not that there are not effective medications, there are, but that patients so often refuse to take them. Worse yet, because of a lack of information, poor medical advice, stigma, or fear of personal and professional reprisals, they do not seek treatment at all.

Manic-depression distorts moods and thoughts, incites dreadful behaviors, destroys the basis of rational thought, and too often erodes the desire and will to live. It is an illness that is biological in its origins, yet one that feels psychological in the experience of it; an illness that is unique in conferring advantage and pleasure, yet one that brings in its wake almost unendurable suffering and, not infrequently, suicide.

I am fortunate that I have not died from my illness, fortunate in having received the best medical care available, and fortunate in having the friends, colleagues, and family that I do. Because of this, I have in turn tried, as best I could, to use my own experiences of the disease to inform my research, teaching, clinical practice, and advocacy work.

Through writing and teaching I have hoped to persuade my colleagues of the paradoxical core of this quicksilver illness that can both kill and create; and, along with many others, have tried to change public attitudes about psychiatric illnesses in general and manic-depressive illness in particular. It has been difficult at times to weave together the scientific discipline of my intellectual field with the more compelling realities of my own emotional experiences. And yet it has been from this binding of raw emotion to the more distanced eye of clinical science that I feel I have obtained the freedom to live the kind of life I want, and the human experiences necessary to try and make a difference in public awareness and clinical practice.

I have had many concerns about writing a book that so explicitly describes my own attacks of mania, depression, and psychosis, as well as my problems acknowledging the need for ongoing medication. Clinicians have been, for obvious reasons of licensing and hospital privileges, reluctant to make their psychiatric problems known to others. These concerns are often well warranted.

I have no idea what the long-term effects of discussing such issues so openly will be on my personal and professional life, but, whatever the consequences, they are bound to be better than continuing to be silent. I am tired of hiding, tired of misspent and knotted energies, tired of the hypocrisy, and tired of acting as though I have something to hide.

One is what one is, and the dishonesty of hiding behind a degree, or a title, or any manner and collection of words, is still exactly that: dishonest. Necessary, perhaps, but dishonest. I continue to have concerns about my decision to be public about my illness, but one of the advantages of having had manic-depressive illness for more than thirty years is that very little seems insurmountably difficult. Much like crossing the Bay Bridge when there is a storm over the Chesapeake, one may be terrified to go forward, but there is no question of going back. I find myself somewhat inevitably taking a certain solace in Robert Lowell’s essential question, Yet why not say what happened?

Part One

THE WILD BLUE YONDER

Into the Sun

I was standing with my head back, one pigtail caught between my teeth, listening to the jet overhead. The noise was loud, unusually so, which meant that it was close. My elementary school was near Andrews Air Force Base, just outside Washington; many of us were pilots’ kids, so the sound was a matter of routine. Being routine, however, didn’t take away from the magic, and I instinctively looked up from the playground to wave. I knew, of course, that the pilot couldn’t see me, I always knew that, just as I knew that even if he could see me the odds were that it wasn’t actually my father. But it was one of those things one did, and anyway I loved any and all excuses just to stare up into the skies. My father, a career Air Force officer, was first and foremost a scientist and only secondarily a pilot. But he loved to fly, and, because he was a meteorologist, both his mind and his soul ended up being in the skies. Like my father, I looked up rather more than I looked out.

When I would say to him that the Navy and the Army were so much older than the Air Force, had so much more tradition and legend, he would say, Yes, that’s true, but the Air Force is the future. Then he would always add: And we can fly. This statement of creed would occasionally be followed by an enthusiastic rendering of the Air Force song, fragments of which remain with me to this day, nested together, somewhat improbably, with phrases from Christmas carols, early poems, and bits and pieces of the Book of Common Prayer: all having great mood and meaning from childhood, and all still retaining the power to quicken the pulses.

So I would listen and believe and, when I would hear the words “Off we go into the wild blue yonder,” I would think that “wild” and “yonder” were among the most wonderful words I had ever heard; likewise, I would feel the total exhilaration of the phrase “Climbing high, into the sun” and know instinctively that I was a part of those who loved the vastness of the sky.

The noise of the jet had become louder, and I saw the other children in my second grade class suddenly dart their heads upward. The plane was coming in very low, then it streaked past us, scarcely missing the playground. As we stood there clumped together and absolutely terrified, it flew into the trees, exploding directly in front of us. The ferocity of the crash could be felt and heard in the plane’s awful impact; it also could be seen in the frightening yet terrible lingering loveliness of the flames that followed. Within minutes, it seemed, mothers were pouring onto the playground to reassure children that it was not their fathers; fortunately for my brother and sister and myself, it was not ours either. Over the next few days it became clear, from the release of the young pilot’s final message to the control tower before he died, that he knew he could save his own life by bailing out. He also knew, however, that by doing so he risked that his unaccompanied plane would fall onto the playground and kill those of us who were there.

The dead pilot became a hero, transformed into a scorchingly vivid, completely impossible ideal for what was meant by the concept of duty. It was an impossible ideal, but all the more compelling and haunting because of its very unobtainability. The memory of the crash came back to me many times over the years, as a reminder both of how one aspires after and needs such ideals, and of how killingly difficult it is to achieve them. I never again looked at the sky and saw only vastness and beauty. From that afternoon on I saw that death was also and always there.

Although, like all military families, we moved a lot (by the fifth grade my older brother, sister, and I had attended four different elementary schools, and we had lived in Florida, Puerto Rico, California, Tokyo, and Washington, twice), our parents, especially my mother, kept life as secure, warm, and constant as possible. My brother was the eldest and the steadiest of the three of us children and my staunch ally, despite the three-year difference in our ages. I idolized him growing up and often trailed along after him, trying very hard to be inconspicuous, when he and his friends would wander off to play baseball or cruise the neighborhood. He was smart, fair, and self-confident, and I always felt that there was a bit of extra protection coming my way whenever he was around. My relationship with my sister, who was only thirteen months older than me, was more complicated. She was the truly beautiful one in the family, with dark hair and wonderful eyes, who from the earliest times was almost painfully aware of everything around her. She had a charismatic way, a fierce temper, very black and passing moods, and little tolerance for the conservative military lifestyle that she felt imprisoned us all. She led her own life, defiant, and broke out with abandon whenever and wherever she could. She hated high school and, when we were living in Washington, frequently skipped classes to go to the Smithsonian or the Army Medical Museum or just to smoke and drink beer with her friends.

She resented me, feeling that I was, as she mockingly put it, “the fair-haired one”, a sister, she thought, to whom friends and schoolwork came too easily, passing far too effortlessly through life, protected from reality by an absurdly optimistic view of people and life. Sandwiched between my brother, who was a natural athlete and who never seemed to see less-than-perfect marks on his college and graduate admission examinations, and me, who basically loved school and was vigorously involved in sports and friends and class activities, she stood out as the member of the family who fought back and rebelled against what she saw as a harsh and difficult world. She hated military life, hated the constant upheaval and the need to make new friends, and felt the family politeness was hypocrisy.

Perhaps because my own violent struggles with black moods did not occur until I was older, I was given a longer time to inhabit a more benign, less threatening, and, indeed to me, a quite wonderful world of high adventure. This world, I think, was one my sister had never known. The long and important years of childhood and early adolescence were, for the most part, very happy ones for me, and they afforded me a solid base of warmth, friendship, and confidence. They were to be an extremely powerful amulet, a potent and positive countervailing force against future unhappiness. My sister had no such years, no such amulets. Not surprisingly, perhaps, when both she and I had to deal with our respective demons, my sister saw the darkness as being within and part of herself, the family, and the world. I, instead, saw it as a stranger; however lodged within my mind and soul the darkness became, it almost always seemed an outside force that was at war with my natural self.

My sister, like my father, could be vastly charming: fresh, original, and devastatingly witty, she also was blessed with an extraordinary sense of aesthetic design. She was not an easy or untroubled person, and as she grew older her troubles grew with her, but she had an enormous artistic imagination and soul. She also could break your heart and then provoke your temper beyond any reasonable level of endurance. Still, I always felt a bit like pieces of earth to my sister’s fire and flames.

For his part, my father, when involved, was often magically involved: ebullient, funny, curious about almost everything, and able to describe with delight and originality the beauties and phenomena of the natural world. A snowflake was never just a snowflake, nor a cloud just a cloud. They became events and characters, and part of a lively and oddly ordered universe. When times were good and his moods were at high tide, his infectious enthusiasm would touch everything. Music would fill the house, wonderful new pieces of jewelry would appear, a moonstone ring, a delicate bracelet of cabochon rubies, a pendant fashioned from a moody sea-green stone set in a swirl of gold, and we’d all settle into our listening mode, for we knew that soon we would be hearing a very great deal about whatever new enthusiasm had taken him over. Sometimes it would be a discourse based on a passionate conviction that the future and salvation of the world was to be found in windmills; sometimes it was that the three of us children simply had to take Russian lessons because Russian poetry was so inexpressibly beautiful in the original.

*

from

An Unquiet Mind. A Memoir of Moods and Madness

by Kay Redfield Jamison

get it at Amazon.com

The Great God of Depression. How mental illness stopped being a terrible dark secret – Pagan Kennedy * DARKNESS VISIBLE. A MEMOIR of MADNESS – William Styron.

The pain of severe depression is quite unimaginable to those who have not suffered it, and it kills in many instances because its anguish can no longer be borne.

The most honest authorities face up squarely to the fact that serious depression is not readily treatable. Failure of alleviation is one of the most distressing factors of the disorder as it reveals itself to the victim, and one that helps situate it squarely in the category of grave diseases.

One by one, the normal brain circuits begin to drown, causing some of the functions of the body and nearly all of those of instinct and intellect to slowly disconnect.

Inadvertently I had helped unlock a closet from which many souls were eager to come out. It is possible to emerge from even the deepest abyss of despair and “once again behold the stars.”

Nearly 30 years ago, the author William Styron outed himself as mentally ill. “My days were pervaded by a gray drizzle of unrelenting horror,” he wrote in a New York Times op-ed article, describing the deep depression that had landed him in the psych ward. He compared the agony of mental illness to that of a heart attack. Pain is pain, whether it’s in the mind or the body. So why, he asked, were depressed people treated as pariahs?

A confession of mental illness might not seem like a big deal now, but it was back then. In the 1980s, “if you were depressed, it was a terrible dark secret that you hid from the world,” according to Andrew Solomon, a historian of mental illness and author of “The Noonday Demon.” “People with depression were seen as pathetic and even dangerous. You didn’t let them near your kids.”

From William Styron’s Op-Ed on Depression. “In the popular mind, suicide is usually the work of a coward or sometimes, paradoxically, a deed of great courage, but it is neither; the torment that precipitates the act makes it often one of blind necessity.”

The response to Mr. Styron’s op-ed was immediate. Letters flooded into The New York Times. The readers thanked him, blurted out their stories and begged him for more. “Inadvertently I had helped unlock a closet from which many souls were eager to come out,” Mr. Styron wrote later.

“It was like the #MeToo movement,” Alexandra Styron, the author’s daughter, told me. “Somebody comes out and says: ‘This happened. This is real. This is what it feels like.’ And it just unleashed the floodgates.”

Readers were electrified by Mr. Styron’s confession in part because he inhabited a storybook world of glamour. After his novel “Sophie’s Choice” was adapted into a blockbuster movie in 1982, Mr. Styron rocketed from mere literary success to Hollywood fame. Meryl Streep, who won an Oscar for playing Sophie, became a lifelong friend, adding to Mr. Styron’s roster of illustrious buddies, from “Jimmy” Baldwin to Arthur Miller. He appeared at gala events with his silver hair upswept in a genius-y pompadour and his face ruddy from summers on Martha’s Vineyard. And yet he had been so depressed that he had eyed the knives in his kitchen with suicide-lust.

William Styron

James L.W. West, Mr. Styron’s friend and biographer, told me that Mr. Styron had never wanted to become “the guru of depression.” But after his article, he felt he had a duty to take on that role.

His famous memoir of depression, “Darkness Visible,” came out in October 1990. It was Mr. Styron’s curiosity about his own mind, and his determination to use himself as a case study to understand a mysterious disease, that gave the book its political power. “Darkness Visible” demonstrated that patients could be the owners and describers of their mental disorders, upending centuries of medical tradition in which the mentally ill were discredited and shamed. The brain scientist Alice Flaherty, who was Mr. Styron’s close friend and doctor, has called him “the great god of depression” because his influence on her field was so profound. His book became required reading in some medical schools, where physicians were finally being trained to listen to their patients.

Mr. Styron also helped to popularize a new way of looking at the brain. In his telling, suicidal depression is a physical ailment, as unconnected to the patient’s moral character as cancer. The book includes a cursory discussion of the chemistry of the brain neurotransmitters, serotonin and so forth. For many readers, it was a first introduction to scientific ideas that are now widely accepted.

For people with severe mood disorders, “Darkness Visible” became a guidebook. “I got depressed and everyone said to me: ‘You have to read the Bill Styron book. You have to read the Bill Styron book. Have you read the Bill Styron book? Let me give you a copy of the Bill Styron book,”’ Mr. Solomon told me. “On the one hand an absolutely harrowing read, and on the other hand one very much rooted in hope.”

The book benefited from perfect timing. It appeared contemporaneously with the introduction of Prozac and other mood disorder medications with fewer side effects than older psychiatric drugs. Relentlessly advertised on TV and in magazines, they seemed to promise protection. And though Mr. Styron himself probably did not take Prozac and was rather skeptical about drugs, his book became the bible of that era.

He also inspired dozens of writers including Mr. Solomon and Dr. Flaherty to chronicle their own struggles. In the 1990s, bookstores were crowded with mental-illness memoirs, Kay Redfield Jamison’s “An Unquiet Mind,” Susanna Kaysen’s “Girl, Interrupted” and Elizabeth Wurtzel’s “Prozac Nation,” to name a few. You read; you wrote; you survived.

It was an optimistic time. In 1999, with “Darkness Visible” in its 25th printing, Mr. Styron told Diane Rehm in an NPR interview: “I’m in very good shape, if I may be so bold as to say that.” He continued, “It’s as if I had purged myself of this pack of demons.”

It wouldn’t last. In the summer of 2000, he crashed again. In the last six years of his life, he would check into mental hospitals and endure two rounds of electroshock therapy.

Mr. Styron’s story mirrors the larger trends in American mental health over the past few decades. During the exuberance of the 1990s, it seemed possible that drugs would one day wipe out depression, making suicide a rare occurrence. But that turned out to be an illusion. In fact, the American suicide rate has continued to climb since the beginning of the 21st century.

We don’t know why this is happening, though we do have a few clues. Easy access to guns is probably contributing to the epidemic: Studies show that when people are able to reach for a firearm, a momentary urge to self-destruct is more likely to turn fatal. Oddly enough, climate change may also be to blame: A new study shows that rising temperatures can make people more prone to suicide.

With suicidal depression so widespread, we find ourselves needing new ways to talk about it, name its depredations and help families cope with it. Mr. Styron’s mission was to invent this new language of survival, but he did so at high cost to his own mental health.

When he revealed his history of depression, he inadvertently set a trap for himself. He became an icon of recovery. His widow, Rose Styron, told me that readers would call the house at all hours when they felt suicidal, and Mr. Styron would counsel them. He always took those calls, even when they woke him at 3 in the morning.

When he plunged into depression again in 2000, Mr. Styron worried about disappointing his fans. “When he crashed, he felt so guilty because he thought he’d let down all the people he had encouraged in ‘Darkness Visible,’” Ms. Styron told me. And he became painfully aware that if he ever did commit suicide, that private act would ripple out all over the world. The consequences would be devastating for his readers, some of whom might even decide to imitate him.

And so, one dark day in the summer of 2000, he wrote up a statement to be released in the event of his suicide. “I hope that readers of ‘Darkness Visible’ past, present and future will not be discouraged by the manner of my dying,” his message began. It was an attempt to inoculate his fans from the downstream effects of his own self-destruction.

Mr. Styron’s family described this sense of his that succumbing to depression a second time made him a fraud.

DARKNESS VISIBLE.

A MEMOIR of MADNESS

William Styron

For the thing which I greatly feared is come upon me, and that which I was afraid of is come unto me. I was not in safety, neither had I rest, neither was I quiet; yet trouble came. -Job

One

IN PARIS ON A CHILLY EVENING LATE IN OCTOBER OF 1985 I first became fully aware that the struggle with the disorder in my mind, a struggle which had engaged me for several months, might have a fatal outcome. The moment of revelation came as the car in which I was riding moved down a rain-slick street not far from the Champs-Élysées and slid past a dully glowing neon sign that read HOTEL WASHINGTON. I had not seen that hotel in nearly thirty-five years, since the spring of 1952, when for several nights it had become my initial Parisian roosting place.

In the first few months of my Wanderjahr, I had come down to Paris by train from Copenhagen, and landed at the Hotel Washington through the whimsical determination of a New York travel agent. In those days the hotel was one of the many damp, plain hostelries made for tourists, chiefly American, of very modest means who, if they were like me, colliding nervously for the first time with the French and their droll kinks, would always remember how the exotic bidet, positioned solidly in the drab bedroom, along with the toilet far down the ill-lit hallway, virtually defined the chasm between Gallic and Anglo-Saxon cultures.

But I stayed at the Washington for only a short time. Within days I had been urged out of the place by some newly found young American friends who got me installed in an even seedier but more colorful hotel in Montparnasse, hard by Le Dome and other suitably literary hangouts. (In my mid-twenties, I had just published a first novel and was a celebrity, though one of very low rank since few of the Americans in Paris had heard of my book, let alone read it.) And over the years the Hotel Washington gradually disappeared from my consciousness.

It reappeared, however, that October night when I passed the gray stone facade in a drizzle, and the recollection of my arrival so many years before started flooding back, causing me to feel that I had come fatally full circle. I recall saying to myself that when I left Paris for New York the next morning it would be a matter of forever. I was shaken by the certainty with which I accepted the idea that I would never see France again, just as I would never recapture a lucidity that was slipping away from me with terrifying speed.

Only days before I had concluded that I was suffering from a serious depressive illness, and was floundering helplessly in my efforts to deal with it. I wasn’t cheered by the festive occasion that had brought me to France. Of the many dreadful manifestations of the disease, both physical and psychological, a sense of self-hatred, or, put less categorically, a failure of self-esteem, is one of the most universally experienced symptoms, and I had suffered more and more from a general feeling of worthlessness as the malady had progressed.

My dank joylessness was therefore all the more ironic because I had flown on a rushed four day trip to Paris in order to accept an award which should have sparklingly restored my ego. Earlier that summer I received word that I had been chosen to receive the Prix Mondial Cino del Duca, given annually to an artist or scientist whose work reflects themes or principles of a certain “humanism.” The prize was established in memory of Cino del Duca, an immigrant from Italy who amassed a fortune just before and after World War II by printing and distributing cheap magazines, principally comic books, though later branching out into publications of quality; he became proprietor of the newspaper Paris-Jour.

He also produced movies and was a prominent racehorse owner, enjoying the pleasure of having many winners in France and abroad. Aiming for nobler cultural satisfactions, he evolved into a renowned philanthropist and along the way established a book publishing firm that began to produce works of literary merit (by chance, my first novel, Lie Down in Darkness, was one of del Duca’s offerings, in a translation entitled Un Lit de Ténébres); by the time of his death in 1967 this house, Editions Mondiales, became an important entity of a multifold empire that was rich yet prestigious enough for there to be scant memory of its comic book origins when del Duca’s widow, Simone, created a foundation whose chief function was the annual bestowal of the eponymous award.

The Prix Mondial Cino del Duca has become greatly respected in France, a nation pleasantly besotted with cultural prize giving, not only for its eclecticism and the distinction shown in the choice of its recipients but for the openhandedness of the prize itself, which that year amounted to approximately $25,000. Among the winners during the past twenty years have been Konrad Lorenz, Alejo Carpentier, Jean Anouilh, Ignazio Silone, Andrei Sakharov, Jorge Luis Borges and one American, Lewis Mumford. (No women as yet, feminists take note.)

As an American, I found it especially hard not to feel honored by inclusion in their company. While the giving and receiving of prizes usually induce from all sources an unhealthy uprising of false modesty, backbiting, self-torture and envy, my own view is that certain awards, though not necessary, can be very nice to receive. The Prix del Duca was to me so straightforwardly nice that any extensive self-examination seemed silly, and so I accepted gratefully, writing in reply that I would honor the reasonable requirement that I be present for the ceremony. At that time I looked forward to a leisurely trip, not a hasty turnaround. Had I been able to foresee my state of mind as the date of the award approached, I would not have accepted at all.

Depression is a disorder of mood, so mysteriously painful and elusive in the way it becomes known to the self, to the mediating intellect, as to verge close to being beyond description.

It thus remains nearly incomprehensible to those who have not experienced it in its extreme mode, although the gloom, “the blues” which people go through occasionally and associate with the general hassle of everyday existence are of such prevalence that they do give many individuals a hint of the illness in its catastrophic form. But at the time of which I write I had descended far past those familiar, manageable doldrums. In Paris, I am able to see now, I was at a critical stage in the development of the disease, situated at an ominous way station between its unfocused stirrings earlier that summer and the near violent denouement of December, which sent me into the hospital. I will later attempt to describe the evolution of this malady, from its earliest origins to my eventual hospitalization and recovery, but the Paris trip has retained a notable meaning for me.

On the day of the award ceremony, which was to take place at noon and be followed by a formal luncheon, I woke up at midmorning in my room at the Hôtel Pont Royal commenting to myself that I felt reasonably sound, and I passed the good word along to my wife, Rose. Aided by the minor tranquilizer Halcion, I had managed to defeat my insomnia and get a few hours’ sleep. Thus I was in fair spirits.

But such wan cheer was an habitual pretense which I knew meant very little, for I was certain to feel ghastly before nightfall. I had come to a point where I was carefully monitoring each phase of my deteriorating condition. My acceptance of the illness followed several months of denial during which, at first, I had ascribed the malaise and restlessness and sudden fits of anxiety to withdrawal from alcohol; I had abruptly abandoned whiskey and all other intoxicants that June.

During the course of my worsening emotional climate I had read a certain amount on the subject of depression, both in books tailored for the layman and in weightier professional works including the psychiatrists’ bible, DSM (The Diagnostic and Statistical Manual of the American Psychiatric Association). Throughout much of my life I have been compelled, perhaps unwisely, to become an autodidact in medicine, and have accumulated a better than average amateur’s knowledge about medical matters (to which many of my friends, surely unwisely, have often deferred), and so it came as an astonishment to me that I was close to a total ignoramus about depression, which can be as serious a medical affair as diabetes or cancer. Most likely, as an incipient depressive, I had always subconsciously rejected or ignored the proper knowledge; it cut too close to the psychic bone, and I shoved it aside as an unwelcome addition to my store of information.

At any rate, during the few hours when the depressive state itself eased off long enough to permit the luxury of concentration, I had recently filled this vacuum with fairly extensive reading and I had absorbed many fascinating and troubling facts, which, however, I could not put to practical use.

The most honest authorities face up squarely to the fact that serious depression is not readily treatable. Unlike, let us say, diabetes, where immediate measures taken to rearrange the body’s adaptation to glucose can dramatically reverse a dangerous process and bring it under control, depression in its major stages possesses no quickly available remedy: failure of alleviation is one of the most distressing factors of the disorder as it reveals itself to the victim, and one that helps situate it squarely in the category of grave diseases.

Except in those maladies strictly designated as malignant or degenerative, we expect some kind of treatment and eventual amelioration, by pills or physical therapy or diet or surgery, with a logical progression from the initial relief of symptoms to final cure. Frighteningly, the layman sufferer from major depression, taking a peek into some of the many books currently on the market, will find much in the way of theory and symptomatology and very little that legitimately suggests the possibility of quick rescue. Those that do claim an easy way out are glib and most likely fraudulent. There are decent popular works which intelligently point the way toward treatment and cure, demonstrating how certain therapies, psychotherapy or pharmacology, or a combination of these, can indeed restore people to health in all but the most persistent and devastating cases; but the wisest books among them underscore the hard truth that serious depressions do not disappear overnight.

All of this emphasizes an essential though difficult reality which I think needs stating at the outset of my own chronicle: the disease of depression remains a great mystery. It has yielded its secrets to science far more reluctantly than many of the other major ills besetting us. The intense and sometimes comically strident factionalism that exists in present day psychiatry, the schism between the believers in psychotherapy and the adherents of pharmacology, resembles the medical quarrels of the eighteenth century (to bleed or not to bleed) and almost defines in itself the inexplicable nature of depression and the difficulty of its treatment. As a clinician in the field told me honestly and, I think, with a striking deftness of analogy: “If you compare our knowledge with Columbus’s discovery of America, America is yet unknown; we are still down on that little island in the Bahamas.”

In my reading I had learned, for example, that in at least one interesting respect my own case was atypical. Most people who begin to suffer from the illness are laid low in the morning, with such malefic effect that they are unable to get out of bed. They feel better only as the day wears on. But my situation was just the reverse. While I was able to rise and function almost normally during the earlier part of the day, I began to sense the onset of the symptoms at midafternoon or a little later, gloom crowding in on me, a sense of dread and alienation and, above all, stifling anxiety. I suspect that it is basically a matter of indifference whether one suffers the most in the morning or the evening: if these states of excruciating near paralysis are similar, as they probably are, the question of timing would seem to be academic. But it was no doubt the turnabout of the usual daily onset of symptoms that allowed me that morning in Paris to proceed without mishap, feeling more or less self-possessed, to the gloriously ornate palace on the Right Bank that houses the Fondation Cino del Duca. There, in a rococo salon, I was presented with the award before a small crowd of French cultural figures, and made my speech of acceptance with what I felt was passable aplomb, stating that while I was donating the bulk of my prize money to various organizations fostering French-American goodwill, including the American Hospital in Neuilly, there was a limit to altruism (this spoken jokingly) and so I hoped it would not be taken amiss if I held back a small portion for myself.

What I did not say, and which was no joke, was that the amount I was withholding was to pay for two tickets the next day on the Concorde, so that I might return speedily with Rose to the United States, where just a few days before I had made an appointment to see a psychiatrist. For reasons that I’m sure had to do with a reluctance to accept the reality that my mind was dissolving, I had avoided seeking psychiatric aid during the past weeks, as my distress intensified. But I knew I couldn’t delay the confrontation indefinitely, and when I did finally make contact by telephone with a highly recommended therapist, he encouraged me to make the Paris trip, telling me that he would see me as soon as I returned. I very much needed to get back, and fast.

Despite the evidence that I was in serious difficulty, I wanted to maintain the rosy view. A lot of the literature available concerning depression is, as I say, breezily optimistic, spreading assurances that nearly all depressive states will be stabilized or reversed if only the suitable antidepressant can be found; the reader is of course easily swayed by promises of quick remedy. In Paris, even as I delivered my remarks, I had a need for the day to be over, felt a consuming urgency to fly to America and the office of the doctor, who would whisk my malaise away with his miraculous medications. I recollect that moment clearly now, and am hardly able to believe that I possessed such ingenuous hope, or that I could have been so unaware of the trouble and peril that lay ahead.

Simone del Duca, a large dark-haired woman of queenly manner, was understandably incredulous at first, and then enraged, when after the presentation ceremony I told her that I could not join her at lunch upstairs in the great mansion, along with a dozen or so members of the Académie Française, who had chosen me for the prize. My refusal was both emphatic and simpleminded; I told her point-blank that I had arranged instead to have lunch at a restaurant with my French publisher, Françoise Gallimard. Of course this decision on my part was outrageous; it had been announced months before to me and everyone else concerned that a luncheon, moreover, a luncheon in my honor, was part of the day’s pageantry. But my behavior was really the result of the illness, which had progressed far enough to produce some of its most famous and sinister hallmarks: confusion, failure of mental focus and lapse of memory. At a later stage my entire mind would be dominated by anarchic disconnections; as I have said, there was now something that resembled bifurcation of mood: lucidity of sorts in the early hours of the day, gathering murk in the afternoon and evening. It must have been during the previous evening’s murky distractedness that I made the luncheon date with Françoise Gallimard, forgetting my del Duca obligations. That decision continued to completely master my thinking, creating in me such obstinate determination that now I was able to blandly insult the worthy Simone del Duca. “Alors!” she exclaimed to me, and her face flushed angrily as she whirled in a stately volte-face, “au revoir!”

Suddenly I was flabbergasted, stunned with horror at what I had done. I fantasized a table at which sat the hostess and the Académie Française, the guest of honor at La Coupole. I implored Madame’s assistant, a bespectacled woman with a clipboard and an ashen, mortified expression, to try to reinstate me: it was all a terrible mistake, a mixup, a malentendu. And then I blurted some words that a lifetime of general equilibrium, and a smug belief in the impregnability of my psychic health, had prevented me from believing I could ever utter; I was chilled as I heard myself speak them to this perfect stranger.

“I’m sick,” I said, “un problème psychiatrique.”

Madame del Duca was magnanimous in accepting my apology and the lunch went off without further strain, although I couldn’t completely rid myself of the suspicion, as we chatted somewhat stiffly, that my benefactress was still disturbed by my conduct and thought me a weird number. The lunch was a long one, and when it was over I felt myself entering the afternoon shadows with their encroaching anxiety and dread. A television crew from one of the national channels was waiting (I had forgotten about them, too), ready to take me to the newly opened Picasso Museum, where I was supposed to be filmed looking at the exhibits and exchanging comments with Rose.

This turned out to be, as I knew it would, not a captivating promenade but a demanding struggle, a major ordeal. By the time we arrived at the museum, having dealt with heavy traffic, it was past four o’clock and my brain had begun to endure its familiar siege: panic and dislocation, and a sense that my thought processes were being engulfed by a toxic and unnameable tide that obliterated any enjoyable response to the living world. This is to say more specifically that instead of pleasure, certainly instead of the pleasure I should be having in this sumptuous showcase of bright genius, I was feeling in my mind a sensation close to, but indescribably different from, actual pain.

This leads me to touch again on the elusive nature of such distress. That the word “indescribable” should present itself is not fortuitous, since it has to be emphasized that if the pain were readily describable most of the countless sufferers from this ancient affliction would have been able to confidently depict for their friends and loved ones (even their physicians) some of the actual dimensions of their torment, and perhaps elicit a comprehension that has been generally lacking; such incomprehension has usually been due not to a failure of sympathy but to the basic inability of healthy people to imagine a form of torment so alien to everyday experience.

For myself, the pain is most closely connected to drowning or suffocation, but even these images are off the mark. William James, who battled depression for many years, gave up the search for an adequate portrayal, implying its near-impossibility when he wrote in The Varieties of Religious Experience:

“It is a positive and active anguish, a sort of psychical neuralgia wholly unknown to normal life.”

The pain persisted during my museum tour and reached a crescendo in the next few hours when, back at the hotel, I fell onto the bed and lay gazing at the ceiling, nearly immobilized and in a trance of supreme discomfort. Rational thought was usually absent from my mind at such times, hence trance.

I can think of no more apposite word for this state of being, a condition of helpless stupor in which cognition was replaced by that “positive and active anguish.”

And one of the most unendurable aspects of such an interlude was the inability to sleep. It had been my custom of a near lifetime, like that of vast numbers of people, to settle myself into a soothing nap in the late afternoon, but the disruption of normal sleep patterns is a notoriously devastating feature of depression; to the injurious sleeplessness with which I had been afflicted each night was added the insult of this afternoon insomnia, diminutive by comparison but all the more horrendous because it struck during the hours of the most intense misery. It had become clear that I would never be granted even a few minutes’ relief from my full-time exhaustion. I clearly recall thinking, as I lay there while Rose sat nearby reading, that my afternoons and evenings were becoming almost measurably worse, and that this episode was the worst to date. But I somehow managed to reassemble myself for dinner with, who else? Françoise Gallimard, co-victim along with Simone del Duca of the frightful lunchtime contretemps.

The night was blustery and raw, with a chill wet wind blowing down the avenues, and when Rose and I met Françoise and her son and a friend at La Lorraine, a glittering brasserie not far from L’Étoile, rain was descending from the heavens in torrents. Someone in the group, sensing my state of mind, apologized for the evil night, but I recall thinking that even if this were one of those warmly scented and passionate evenings for which Paris is celebrated I would respond like the zombie I had become. The weather of depression is unmodulated, its light a brownout.

And zombielike, halfway through the dinner, I lost the del Duca prize check for $25,000. Having tucked the check in the inside breast pocket of my jacket, I let my hand stray idly to that place and realized that it was gone. Did I “intend” to lose the money? Recently I had been deeply bothered that I was not deserving of the prize. I believe in the reality of the accidents we subconsciously perpetrate on ourselves, and so how easy it was for this loss to be not loss but a form of repudiation, offshoot of that self-loathing (depression’s premier badge) by which I was persuaded that I could not be worthy of the prize, that I was in fact not worthy of any of the recognition that had come my way in the past few years. Whatever the reason for its disappearance, the check was gone, and its loss dovetailed well with the other failures of the dinner: my failure to have an appetite for the grand plateau de fruits de mer placed before me, failure of even forced laughter and, at last, virtually total failure of speech.

At this point the ferocious inwardness of the pain produced an immense distraction that prevented my articulating words beyond a hoarse murmur; I sensed myself turning walleyed, monosyllabic, and also I sensed my French friends becoming uneasily aware of my predicament. It was a scene from a bad operetta by now: all of us near the floor, searching for the vanished money. Just as I signaled that it was time to go, Françoise’s son discovered the check, which had somehow slipped out of my pocket and fluttered under an adjoining table, and we went forth into the rainy night. Then, while I was riding in the car, I thought of Albert Camus and Romain Gary.

Two

WHEN I WAS A YOUNG WRITER THERE HAD BEEN A stage where Camus, almost more than any other contemporary literary figure, radically set the tone for my own view of life and history. I read his novel The Stranger somewhat later than I should have, I was in my early thirties, but after finishing it I received the stab of recognition that proceeds from reading the work of a writer who has wedded moral passion to a style of great beauty and whose unblinking vision is capable of frightening the soul to its marrow.

The cosmic loneliness of Meursault, the hero of that novel, so haunted me that when I set out to write The Confessions of Nat Turner I was impelled to use Camus’s device of having the story flow from the point of view of a narrator isolated in his jail cell during the hours before his execution. For me there was a spiritual connection between Meursault’s frigid solitude and the plight of Nat Turner, his rebel predecessor in history by a hundred years, likewise condemned and abandoned by man and God.

Camus’s essay “Reflections on the Guillotine” is a virtually unique document, freighted with terrible and fiery logic; it is difficult to conceive of the most vengeful supporter of the death penalty retaining the same attitude after exposure to scathing truths expressed with such ardor and precision. I know my thinking was forever altered by that work, not only turning me around completely, convincing me of the essential barbarism of capital punishment, but establishing substantial claims on my conscience in regard to matters of responsibility at large. Camus was a great cleanser of my intellect, ridding me of countless sluggish ideas, and through some of the most unsettling pessimism I had ever encountered causing me to be aroused anew by life’s enigmatic promise.

The disappointment I always felt at never meeting Camus was compounded by that failure having been such a near miss. I had planned to see him in 1960, when I was traveling to France and had been told in a letter by the writer Romain Gary that he was going to arrange a dinner in Paris where I would meet Camus. The enormously gifted Gary, whom I knew slightly at the time and who later became a cherished friend, had informed me that Camus, whom he saw frequently, had read my Un Lit de Ténèbres and had admired it; I was of course greatly flattered and felt that a get-together would be a splendid happening. But before I arrived in France there came the appalling news: Camus had been in an automobile crash, and was dead at the cruelly young age of forty-six. I have almost never felt so intensely the loss of someone I didn’t know. I pondered his death endlessly. Although Camus had not been driving he supposedly knew the driver, who was the son of his publisher, to be a speed demon; so there was an element of recklessness in the accident that bore overtones of the near-suicidal, at least of a death flirtation, and it was inevitable that conjectures concerning the event should revert back to the theme of suicide in the writer’s work.

One of the century’s most famous intellectual pronouncements comes at the beginning of The Myth of Sisyphus: “There is but one truly serious philosophical problem, and that is suicide. Judging whether life is or is not worth living amounts to answering the fundamental question of philosophy.” Reading this for the first time I was puzzled and continued to be throughout much of the essay, since despite the work’s persuasive logic and eloquence there was a lot that eluded me, and I always came back to grapple vainly with the initial hypothesis, unable to deal with the premise that anyone should come close to wishing to kill himself in the first place.

A later short novel, The Fall, I admired with reservations; the guilt and self-condemnation of the lawyer-narrator, gloomily spinning out his monologue in an Amsterdam bar, seemed a touch clamorous and excessive, but at the time of my reading I was unable to perceive that the lawyer was behaving very much like a man in the throes of clinical depression. Such was my innocence of the very existence of this disease. Camus, Romain told me, occasionally hinted at his own deep despondency and had spoken of suicide. Sometimes he spoke in jest, but the jest had the quality of sour wine, upsetting Romain. Yet apparently he made no attempts and so perhaps it was not coincidental that, despite its abiding tone of melancholy, a sense of the triumph of life over death is at the core of The Myth of Sisyphus with its austere message: in the absence of hope we must still struggle to survive, and so we do, by the skin of our teeth.

It was only after the passing of some years that it seemed credible to me that Camus’s statement about suicide, and his general preoccupation with the subject, might have sprung at least as strongly from some persistent disturbance of mood as from his concerns with ethics and epistemology. Gary again discussed at length his assumptions about Camus’s depression during August of 1978, when I had lent him my guest cottage in Connecticut, and I came down from my summer home on Martha’s Vineyard to pay him a weekend visit. As we talked I felt that some of Romain’s suppositions about the seriousness of Camus’s recurring despair gained weight from the fact that he, too, had begun to suffer from depression, and he freely admitted as much. It was not incapacitating, he insisted, and he had it under control, but he felt it from time to time, this leaden and poisonous mood the color of verdigris, so incongruous in the midst of the lush New England summer. A Russian Jew born in Lithuania, Romain had always seemed possessed of an Eastern European melancholy, so it was hard to tell the difference. Nonetheless, he was hurting. He said that he was able to perceive a flicker of the desperate state of mind which had been described to him by Camus.

Gary’s situation was hardly lightened by the presence of Jean Seberg, his Iowa-born actress wife, from whom he had been divorced and, I thought, long estranged. I learned that she was there because their son, Diego, was at a nearby tennis camp. Their presumed estrangement made me surprised to see her living with Romain, surprised too, no, shocked and saddened, by her appearance: all her once fragile and luminous blond beauty had disappeared into a puffy mask. She moved like a sleepwalker, said little, and had the blank gaze of someone tranquilized (or drugged, or both) nearly to the point of catalepsy. I understood how devoted they still were, and was touched by his solicitude, both tender and paternal. Romain told me that Jean was being treated for the disorder that afflicted him, and mentioned something about antidepressant medications, but none of this registered very strongly, and also meant little.

This memory of my relative indifference is important because such indifference demonstrates powerfully the outsider’s inability to grasp the essence of the illness. Camus’s depression and now Romain Gary’s, and certainly Jean’s, were abstract ailments to me, in spite of my sympathy, and I hadn’t an inkling of its true contours or the nature of the pain so many victims experience as the mind continues in its insidious meltdown.

In Paris that October night I knew that I, too, was in the process of meltdown. And on the way to the hotel in the car I had a clear revelation. A disruption of the circadian cycle, the metabolic and glandular rhythms that are central to our workaday life, seems to be involved in many, if not most, cases of depression; this is why brutal insomnia so often occurs and is most likely why each day’s pattern of distress exhibits fairly predictable alternating periods of intensity and relief. The evening’s relief for me, an incomplete but noticeable letup, like the change from a torrential downpour to a steady shower, came in the hours after dinner time and before midnight, when the pain lifted a little and my mind would become lucid enough to focus on matters beyond the immediate upheaval convulsing my system. Naturally I looked forward to this period, for sometimes I felt close to being reasonably sane, and that night in the car I was aware of a semblance of clarity returning, along with the ability to think rational thoughts. Having been able to reminisce about Camus and Romain Gary, however, I found that my continuing thoughts were not very consoling.

The memory of Jean Seberg gripped me with sadness. A little over a year after our encounter in Connecticut she took an overdose of pills and was found dead in a car parked in a cul-de-sac off a Paris avenue, where her body had lain for many days. The following year I sat with Romain at the Brasserie Lipp during a long lunch while he told me that, despite their difficulties, his loss of Jean had so deepened his depression that from time to time he had been rendered nearly helpless. But even then I was unable to comprehend the nature of his anguish. I remembered that his hands trembled and, though he could hardly be called superannuated, he was in his mid-sixties, his voice had the wheezy sound of very old age that I now realized was, or could be, the voice of depression; in the vortex of my severest pain I had begun to develop that ancient voice myself. I never saw Romain again. Claude Gallimard, Françoise’s father, had recollected to me how, in 1980, only a few hours after another lunch where the talk between the two old friends had been composed and casual, even lighthearted, certainly anything but somber, Romain Gary, twice winner of the Prix Goncourt (one of these awards pseudonymous, the result of his having gleefully tricked the critics), hero of the Republic, valorous recipient of the Croix de Guerre, diplomat, bon vivant, womanizer par excellence, went home to his apartment on the rue du Bac and put a bullet through his brain.

It was at some point during the course of these musings that the sign HOTEL WASHINGTON swam across my vision, bringing back memories of my long ago arrival in the city, along with the fierce and sudden realization that I would never see Paris again. This certitude astonished me and filled me with a new fright, for while thoughts of death had long been common during my siege, blowing through my mind like icy gusts of wind, they were the formless shapes of doom that I suppose are dreamed of by people in the grip of any severe affliction. The difference now was in the sure understanding that tomorrow, when the pain descended once more, or the tomorrow after that, certainly on some not too distant tomorrow, I would be forced to judge that life was not worth living and thereby answer, for myself at least, the fundamental question of philosophy.

Three

TO MANY OF US WHO KNEW ABBIE HOFFMAN EVEN slightly, as I did, his death in the spring of 1989 was a sorrowful happening. Just past the age of fifty, he had been too young and apparently too vital for such an ending; a feeling of chagrin and dreadfulness attends the news of nearly anyone’s suicide, and Abbie’s death seemed to me especially cruel.

I had first met him during the wild days and nights of the 1968 Democratic Convention in Chicago, where I had gone to write a piece for The New York Review of Books, and I later was one of those who testified on behalf of him and his fellow defendants at the trial, also in Chicago, in 1970. Amid the pious follies and morbid perversions of American life, his antic style was exhilarating, and it was hard not to admire the hellraising and the brio, the anarchic individualism.

I wish I had seen more of him in recent years; his sudden death left me with a particular emptiness, as suicides usually do to everyone. But the event was given a further dimension of poignancy by what one must begin to regard as a predictable reaction from many: the denial, the refusal to accept the fact of the suicide itself, as if the voluntary act, as opposed to an accident, or death from natural causes, were tinged with a delinquency that somehow lessened the man and his character.

Abbie’s brother appeared on television, grief-ravaged and distraught; one could not help feeling compassion as he sought to deflect the idea of suicide, insisting that Abbie, after all, had always been careless with pills and would never have left his family bereft. However, the coroner confirmed that Hoffman had taken the equivalent of 150 phenobarbitals.

It’s quite natural that the people closest to suicide victims so frequently and feverishly hasten to disclaim the truth; the sense of implication, of personal guilt, the idea that one might have prevented the act if one had taken certain precautions, had somehow behaved differently, is perhaps inevitable. Even so, the sufferer, whether he has actually killed himself or attempted to do so, or merely expressed threats, is often, through denial on the part of others, unjustly made to appear a wrongdoer.

A similar case is that of Randall Jarrell, one of the fine poets and critics of his generation, who on a night in 1965, near Chapel Hill, North Carolina, was struck by a car and killed. Jarrell’s presence on that particular stretch of road, at an odd hour of the evening, was puzzling, and since some of the indications were that he had deliberately let the car strike him, the early conclusion was that his death was suicide. Newsweek, among other publications, said as much, but Jarrell’s widow protested in a letter to that magazine; there was a hue and cry from many of his friends and supporters, and a coroner’s jury eventually ruled the death to be accidental. Jarrell had been suffering from extreme depression and had been hospitalized; only a few months before his misadventure on the highway and while in the hospital, he had slashed his wrists.

Anyone who is acquainted with some of the jagged contours of Jarrell’s life, including his violent fluctuations of mood, his fits of black despondency, and who, in addition, has acquired a basic knowledge of the danger signals of depression, would seriously question the verdict of the coroner’s jury. But the stigma of self-inflicted death is for some people a hateful blot that demands erasure at all costs. (More than two decades after his death, in the Summer 1986 issue of The American Scholar, a onetime student of Jarrell’s, reviewing a collection of the poet’s letters, made the review less a literary or biographical appraisal than an occasion for continuing to try to exorcise the vile phantom of suicide.)

Randall Jarrell almost certainly killed himself. He did so not because he was a coward, nor out of any moral feebleness, but because he was afflicted with a depression that was so devastating that he could no longer endure the pain of it.

This general unawareness of what depression is really like was apparent most recently in the matter of Primo Levi, the remarkable Italian writer and survivor of Auschwitz who, at the age of sixty-seven, hurled himself down a stairwell in Turin in 1987. Since my own involvement with the illness, I had been more than ordinarily interested in Levi’s death, and so, late in 1988, when I read an account in The New York Times about a symposium on the writer and his work held at New York University, was fascinated but, finally, appalled. For, according to the article, many of the participants, worldly writers and scholars, seemed mystified by Levi’s suicide, mystified and disappointed. It was as if this man whom they had all so greatly admired, and who had endured so much at the hands of the Nazis, a man of exemplary resilience and courage, had by his suicide demonstrated a frailty, a crumbling of character they were loath to accept. In the face of a terrible absolute self-destruction, their reaction was helplessness and (the reader could not avoid it) a touch of shame.

My annoyance over all this was so intense that I was prompted to write a short piece for the op-ed page of the Times. The argument I put forth was fairly straightforward:

The pain of severe depression is quite unimaginable to those who have not suffered it, and it kills in many instances because its anguish can no longer be borne.

The prevention of many suicides will continue to be hindered until there is a general awareness of the nature of this pain. Through the healing process of time, and through medical intervention or hospitalization in many cases, most people survive depression, which may be its only blessing; but to the tragic legion who are compelled to destroy themselves there should be no more reproof attached than to the victims of terminal cancer.

I had set down my thoughts in this Times piece rather hurriedly and spontaneously, but the response was equally spontaneous, and enormous. It had taken, I speculated, no particular originality or boldness on my part to speak out frankly about suicide and the impulse toward it, but I had apparently underestimated the number of people for whom the subject had been taboo, a matter of secrecy and shame. The overwhelming reaction made me feel that inadvertently I had helped unlock a closet from which many souls were eager to come out and proclaim that they, too, had experienced the feelings I had described. It is the only time in my life I have felt it worthwhile to have invaded my own privacy, and to make that privacy public. And I thought that, given such momentum, and with my experience in Paris as a detailed example of what occurs during depression, it would be useful to try to chronicle some of my own experiences with the illness and in the process perhaps establish a frame of reference out of which one or more valuable conclusions might be drawn.

Such conclusions, it has to be emphasized, must still be based on the events that happened to one man. In setting these reflections down I don’t intend my ordeal to stand as a representation of what happens, or might happen, to others. Depression is much too complex in its cause, its symptoms and its treatment for unqualified conclusions to be drawn from the experience of a single individual. Although as an illness depression manifests certain unvarying characteristics, it also allows for many idiosyncrasies; I’ve been amazed at some of the freakish phenomena, not reported by other patients, that it has wrought amid the twistings of my mind’s labyrinth.

Depression afflicts millions directly, and millions more who are relatives or friends of victims. It has been estimated that as many as one in ten Americans will suffer from the illness. As assertively democratic as a Norman Rockwell poster, it strikes indiscriminately at all ages, races, creeds and classes, though women are at considerably higher risk than men. The occupational list (dressmakers, barge captains, sushi chefs, cabinet members) of its patients is too long and tedious to give here; it is enough to say that very few people escape being a potential victim of the disease, at least in its milder form. Despite depression’s eclectic reach, it has been demonstrated with fair convincingness that artistic types (especially poets) are particularly vulnerable to the disorder, which, in its graver, clinical manifestation takes upward of twenty percent of its victims by way of suicide.

Just a few of these fallen artists, all modern, make up a sad but scintillant roll call: Hart Crane, Vincent van Gogh, Virginia Woolf, Arshile Gorky, Cesare Pavese, Romain Gary, Vachel Lindsay, Sylvia Plath, Henry de Montherlant, Mark Rothko, John Berryman, Jack London, Ernest Hemingway, William Inge, Diane Arbus, Tadeusz Borowski, Paul Celan, Anne Sexton, Sergei Esenin, Vladimir Mayakovsky, the list goes on. (The Russian poet Mayakovsky was harshly critical of his great contemporary Esenin’s suicide a few years before, which should stand as a caveat for all who are judgmental about self-destruction.)

When one thinks of these doomed and splendidly creative men and women, one is drawn to contemplate their childhoods, where, to the best of anyone’s knowledge, the seeds of the illness take strong root; could any of them have had a hint, then, of the psyche’s perishability, its exquisite fragility? And why were they destroyed, while others, similarly stricken, struggled through?

Four

WHEN I WAS FIRST AWARE THAT I HAD BEEN LAID low by the disease, I felt a need, among other things, to register a strong protest against the word “depression.” Depression, most people know, used to be termed “melancholia,” a word which appears in English as early as the year 1303 and crops up more than once in Chaucer, who in his usage seemed to be aware of its pathological nuances. “Melancholia” would still appear to be a far more apt and evocative word for the blacker forms of the disorder, but it was usurped by a noun with a bland tonality and lacking any magisterial presence, used indifferently to describe an economic decline or a rut in the ground, a true wimp of a word for such a major illness. It may be that the scientist generally held responsible for its currency in modern times, a Johns Hopkins Medical School faculty member justly venerated, the Swiss-born psychiatrist Adolf Meyer, had a tin ear for the finer rhythms of English and therefore was unaware of the semantic damage he had inflicted by offering “depression” as a descriptive noun for such a dreadful and raging disease. Nonetheless, for over seventy-five years the word has slithered innocuously through the language like a slug, leaving little trace of its intrinsic malevolence and preventing, by its very insipidity, a general awareness of the horrible intensity of the disease when out of control.

As one who has suffered from the malady in extremis yet returned to tell the tale, I would lobby for a truly arresting designation. “Brainstorm,” for instance, has unfortunately been preempted to describe, somewhat jocularly, intellectual inspiration. But something along these lines is needed. Told that someone’s mood disorder has evolved into a storm, a veritable howling tempest in the brain, which is indeed what a clinical depression resembles like nothing else, even the uninformed layman might display sympathy rather than the standard reaction that “depression” evokes, something akin to “So what?” or “You’ll pull out of it” or “We all have bad days.” The phrase “nervous breakdown” seems to be on its way out, certainly deservedly so, owing to its insinuation of a vague spinelessness, but we still seem destined to be saddled with “depression” until a better, sturdier name is created.

The depression that engulfed me was not of the manic type, the one accompanied by euphoric highs, which would have most probably presented itself earlier in my life. I was sixty when the illness struck for the first time, in the “unipolar” form, which leads straight down. I shall never learn what “caused” my depression, as no one will ever learn about their own. To be able to do so will likely forever prove to be an impossibility, so complex are the intermingled factors of abnormal chemistry, behavior and genetics. Plainly, multiple components are involved, perhaps three or four, most probably more, in fathomless permutations.

That is why the greatest fallacy about suicide lies in the belief that there is a single immediate answer, or perhaps combined answers, as to why the deed was done.

The inevitable question “Why did he, or she do it?” usually leads to odd speculations, for the most part fallacies themselves. Reasons were quickly advanced for Abbie Hoffman’s death: his reaction to an auto accident he had suffered, the failure of his most recent book, his mother’s serious illness. With Randall Jarrell it was a declining career cruelly epitomized by a vicious book review and his consequent anguish. Primo Levi, it was rumored, had been burdened by caring for his paralytic mother, which was more onerous to his spirit than even his experience at Auschwitz.

Any one of these factors may have lodged like a thorn in the sides of the three men, and been a torment. Such aggravations may be crucial and cannot be ignored. But most people quietly endure the equivalent of injuries, declining careers, nasty book reviews, family illnesses. A vast majority of the survivors of Auschwitz have borne up fairly well. Bloody and bowed by the outrages of life, most human beings still stagger on down the road, unscathed by real depression.

To discover why some people plunge into the downward spiral of depression, one must search beyond the manifest crisis, and then still fail to come up with anything beyond wise conjecture.

The storm which swept me into a hospital in December began as a cloud no bigger than a wine goblet the previous June. And the cloud, the manifest crisis, involved alcohol, a substance I had been abusing for forty years. Like a great many American writers, whose sometimes lethal addiction to alcohol has become so legendary as to provide in itself a stream of studies and books, I used alcohol as the magical conduit to fantasy and euphoria, and to the enhancement of the imagination. There is no need to either rue or apologize for my use of this soothing, often sublime agent, which had contributed greatly to my writing; although I never set down a line while under its influence, I did use it, often in conjunction with music, as a means to let my mind conceive visions that the unaltered, sober brain has no access to. Alcohol was an invaluable senior partner of my intellect, besides being a friend whose ministrations I sought daily, sought also, I now see, as a means to calm the anxiety and incipient dread that I had hidden away for so long somewhere in the dungeons of my spirit.

The trouble was, at the beginning of this particular summer, that I was betrayed. It struck me quite suddenly, almost overnight: I could no longer drink. It was as if my body had risen up in protest, along with my mind, and had conspired to reject this daily mood bath which it had so long welcomed and, who knows? perhaps even come to need. Many drinkers have experienced this intolerance as they have grown older. I suspect that the crisis was at least partly metabolic, the liver rebelling, as if to say, “No more, no more”, but at any rate I discovered that alcohol in minuscule amounts, even a mouthful of wine, caused me nausea, a desperate and unpleasant wooziness, a sinking sensation and ultimately a distinct revulsion. The comforting friend had abandoned me not gradually and reluctantly, as a true friend might do, but like a shot, and I was left high and certainly dry, and unhelmed.

Neither by will nor by choice had I become an abstainer; the situation was puzzling to me, but it was also traumatic, and I date the onset of my depressive mood from the beginning of this deprivation. Logically, one would be overjoyed that the body had so summarily dismissed a substance that was undermining its health; it was as if my system had generated a form of Antabuse, which should have allowed me to happily go my way, satisfied that a trick of nature had shut me off from a harmful dependence. But, instead, I began to experience a vaguely troubling malaise, a sense of something having gone cockeyed in the domestic universe I’d dwelt in so long, so comfortably. While depression is by no means unknown when people stop drinking, it is usually on a scale that is not menacing. But it should be kept in mind how idiosyncratic the faces of depression can be.

It was not really alarming at first, since the change was subtle, but I did notice that my surroundings took on a different tone at certain times: the shadows of nightfall seemed more somber, my mornings were less buoyant, walks in the woods became less zestful, and there was a moment during my working hours in the late afternoon when a kind of panic and anxiety overtook me, just for a few minutes, accompanied by a visceral queasiness; such a seizure was at least slightly alarming, after all. As I set down these recollections, I realize that it should have been plain to me that I was already in the grip of the beginning of a mood disorder, but I was ignorant of such a condition at that time.

When I reflected on this curious alteration of my consciousness, and I was baffled enough from time to time to do so, I assumed that it all had to do somehow with my enforced withdrawal from alcohol. And, of course, to a certain extent this was true. But it is my conviction now that alcohol played a perverse trick on me when we said farewell to each other: although, as everyone should know, it is a major depressant, it had never truly depressed me during my drinking career, acting instead as a shield against anxiety.

Suddenly vanished, the great ally which for so long had kept my demons at bay was no longer there to prevent those demons from beginning to swarm through the subconscious, and I was emotionally naked, vulnerable as I had never been before.

Doubtless depression had hovered near me for years, waiting to swoop down. Now I was in the first stage, premonitory, like a flicker of sheet lightning barely perceived, of depression’s black tempest.

I was on Martha’s Vineyard, where I’ve spent a good part of each year since the 1960s, during that exceptionally beautiful summer. But I had begun to respond indifferently to the island’s pleasures. I felt a kind of numbness, an enervation, but more particularly an odd fragility, as if my body had actually become frail, hypersensitive and somehow disjointed and clumsy, lacking normal coordination. And soon I was in the throes of a pervasive hypochondria. Nothing felt quite right with my corporeal self; there were twitches and pains, sometimes intermittent, often seemingly constant, that seemed to presage all sorts of dire infirmities. (Given these signs, one can understand how, as far back as the seventeenth century, in the notes of contemporary physicians, and in the perceptions of John Dryden and others, a connection is made between melancholia and hypochondria; the words are often interchangeable, and were so used until the nineteenth century by writers as various as Sir Walter Scott and the Brontës, who also linked melancholy to a preoccupation with bodily ills.) It is easy to see how this condition is part of the psyche’s apparatus of defense: unwilling to accept its own gathering deterioration, the mind announces to its indwelling consciousness that it is the body with its perhaps correctable defects, not the precious and irreplaceable mind, that is going haywire.

In my case, the overall effect was immensely disturbing, augmenting the anxiety that was by now never quite absent from my waking hours and fueling still another strange behavior pattern, a fidgety restlessness that kept me on the move, somewhat to the perplexity of my family and friends. Once, in late summer, on an airplane trip to New York, I made the reckless mistake of downing a scotch and soda, my first alcohol in months, which promptly sent me into a tailspin, causing me such a horrified sense of disease and interior doom that the very next day I rushed to a Manhattan internist, who inaugurated a long series of tests. Normally I would have been satisfied, indeed elated, when, after three weeks of high-tech and extremely expensive evaluation, the doctor pronounced me totally fit; and I was happy, for a day or two, until there once again began the rhythmic daily erosion of my mood, anxiety, agitation, unfocused dread.

By now I had moved back to my house in Connecticut. It was October, and one of the unforgettable features of this stage of my disorder was the way in which my old farmhouse, my beloved home for thirty years, took on for me at that point when my spirits regularly sank to their nadir an almost palpable quality of ominousness. The fading evening light, akin to that famous “slant of light” of Emily Dickinson’s, which spoke to her of death, of chill extinction, had none of its familiar autumnal loveliness, but ensnared me in a suffocating gloom. I wondered how this friendly place, teeming with such memories of (again in her words) “Lads and Girls,” of “laughter and ability and sighing, and Frocks and Curls,” could almost perceptibly seem so hostile and forbidding. Physically, I was not alone. As always Rose was present and listened with unflagging patience to my complaints. But I felt an immense and aching solitude. I could no longer concentrate during those afternoon hours, which for years had been my working time, and the act of writing itself, becoming more and more difficult and exhausting, stalled, then finally ceased.

There were also dreadful, pouncing seizures of anxiety. One bright day on a walk through the woods with my dog I heard a flock of Canada geese honking high above trees ablaze with foliage; ordinarily a sight and sound that would have exhilarated me, the flight of birds caused me to stop, riveted with fear, and I stood stranded there, helpless, shivering, aware for the first time that I had been stricken by no mere pangs of withdrawal but by a serious illness whose name and actuality I was able finally to acknowledge. Going home, I couldn’t rid my mind of the line of Baudelaire’s, dredged up from the distant past, that for several days had been skittering around at the edge of my consciousness: “I have felt the wind of the wing of madness.”

Our perhaps understandable modern need to dull the sawtooth edges of so many of the afflictions we are heir to has led us to banish the harsh old-fashioned words: madhouse, asylum, insanity, melancholia, lunatic, madness.

But never let it be doubted that depression, in its extreme form, is madness. The madness results from an aberrant biochemical process. It has been established with reasonable certainty (after strong resistance from many psychiatrists, and not all that long ago) that such madness is chemically induced amid the neurotransmitters of the brain, probably as the result of systemic stress, which for unknown reasons causes a depletion of the chemicals norepinephrine and serotonin, and the increase of a hormone, cortisol.

With all of this upheaval in the brain tissues, the alternate drenching and deprivation, it is no wonder that the mind begins to feel aggrieved, stricken, and the muddied thought processes register the distress of an organ in convulsion. Sometimes, though not very often, such a disturbed mind will turn to violent thoughts regarding others. But with their minds turned agonizingly inward, people with depression are usually dangerous only to themselves. The madness of depression is, generally speaking, the antithesis of violence. It is a storm indeed, but a storm of murk. Soon evident are the slowed-down responses, near paralysis, psychic energy throttled back close to zero. Ultimately, the body is affected and feels sapped, drained.

That fall, as the disorder gradually took full possession of my system, I began to conceive that my mind itself was like one of those outmoded small town telephone exchanges, being gradually inundated by flood waters: one by one, the normal circuits began to drown, causing some of the functions of the body and nearly all of those of instinct and intellect to slowly disconnect.

There is a well-known checklist of some of these functions and their failures. Mine conked out fairly close to schedule, many of them following the pattern of depressive seizures. I particularly remember the lamentable near disappearance of my voice. It underwent a strange transformation, becoming at times quite faint, wheezy and spasmodic; a friend observed later that it was the voice of a ninety-year-old. The libido also made an early exit, as it does in most major illnesses; it is the superfluous need of a body in beleaguered emergency. Many people lose all appetite; mine was relatively normal, but I found myself eating only for subsistence: food, like everything else within the scope of sensation, was utterly without savor. Most distressing of all the instinctual disruptions was that of sleep, along with a complete absence of dreams.

Exhaustion combined with sleeplessness is a rare torture. The two or three hours of sleep I was able to get at night were always at the behest of Halcion, a matter which deserves particular notice. For some time now many experts in psychopharmacology have warned that the benzodiazepine family of tranquilizers, of which Halcion is one (Valium and Ativan are others), is capable of depressing mood and even precipitating a major depression. Over two years before my siege, an insouciant doctor had prescribed Ativan as a bedtime aid, telling me airily that I could take it as casually as aspirin. The Physicians’ Desk Reference, the pharmacological bible, reveals that the medicine I had been ingesting was (a) three times the normally prescribed strength, (b) not advisable as a medication for more than a month or so, and (c) to be used with special caution by people of my age. At the time of which I am speaking I was no longer taking Ativan but had become addicted to Halcion and was consuming large doses. It seems reasonable to think that this was still another contributory factor to the trouble that had come upon me. Certainly, it should be a caution to others.

At any rate, my few hours of sleep were usually terminated at three or four in the morning, when I stared up into yawning darkness, wondering and writhing at the devastation taking place in my mind, and awaiting the dawn, which usually permitted me a feverish, dreamless nap. I’m fairly certain that it was during one of these insomniac trances that there came over me the knowledge, a weird and shocking revelation, like that of some long beshrouded metaphysical truth, that this condition would cost me my life if it continued on such a course. This must have been just before my trip to Paris.

Death, as I have said, was now a daily presence, blowing over me in cold gusts. I had not conceived precisely how my end would come. In short, I was still keeping the idea of suicide at bay. But plainly the possibility was around the corner, and I would soon meet it face to face.

What I had begun to discover is that, mysteriously and in ways that are totally remote from normal experience, the gray drizzle of horror induced by depression takes on the quality of physical pain. But it is not an immediately identifiable pain, like that of a broken limb. It may be more accurate to say that despair, owing to some evil trick played upon the sick brain by the inhabiting psyche, comes to resemble the diabolical discomfort of being imprisoned in a fiercely overheated room. And because no breeze stirs this caldron, because there is no escape from this smothering confinement, it is entirely natural that the victim begins to think ceaselessly of oblivion.

*

Five

. . .

*

from

DARKNESS VISIBLE. A MEMOIR of MADNESS

by William Styron

get it at Amazon.com

Depressive Realism. Interdisciplinary perspectives – Colin Feltham.

Depressive Realism argues that people with mild-to-moderate depression have a more accurate perception of reality than nondepressives.

This book challenges the tacit hegemony of contemporary positive thinking, as well as the standard assumption in cognitive behavioural therapy that depressed individuals must have cognitive distortions.

The kind of world we live in, and that we are, cyclically determines how we feel and think. Some of us perceive and construe the world in dismal terms and believe our construal to be truer than competing accounts. Depending on what the glass is half-full of, the Depressive Realist may regard it as worthless, tasteless, poisonous or ultimately futile to drink.

I do not mean to say that people who experience clinical depression should not have therapy if they wish to, nor even that it does not sometimes help. Rather, I believe the assumption should not be made that depressive or negative views about life and experience necessarily correlate with psychological illness.

Depressive Realism seriously questions the standard assumption in cognitive behaviour therapy that depressed individuals must have cognitive distortions, and indeed reverses this to ask whether DRs might have a more objective grasp of reality than others, and a stubborn refusal to embrace illusion.

I argue that human life contains many glaringly tragic and depressing components and the denial or minimisation of these adds yet another level of depression.

Depressive realism is a worldview of human existence that is essentially negative, and which challenges assumptions about the value of life and the institutions claiming to answer life’s problems. Drawing from central observations from various disciplines, this book argues that a radical honesty about human suffering might initiate wholly new ways of thinking, in everyday life and in clinical practice for mental health, as well as in academia.

Divided into sections that reflect depressive realism as a worldview spanning all academic disciplines, chapters provide examples from psychology, psychotherapy, philosophy and more to suggest ways in which depressive realism can critique each discipline and academia overall. This book challenges the tacit hegemony of contemporary positive thinking, as well as the standard assumption in cognitive behavioural therapy that depressed individuals must have cognitive distortions. It also appeals to the utility of depressive realism for its insights, its pursuit of truth, as well as its emphasis on the importance of learning from negativity and failure. Arguments against depressive realism are also explored.

This book makes an important contribution to our understanding of depressive realism within an interdisciplinary context. It will be of key interest to academics, researchers and postgraduates in the fields of psychology, mental health, psychotherapy, history and philosophy. It will also be of great interest to psychologists, psychotherapists and counsellors.

Colin Feltham is Emeritus Professor of Critical Counselling Studies at Sheffield Hallam University. He is also External Associate Professor of Humanistic Psychology at the University of Southern Denmark.

Introduction

One could declare this to be simply a book about pessimism but that term would be inaccurate and insufficient. A non-verbal shortcut into the subject could be had by listening to Tears for Fears’ Mad World or Dinah Washington’s This Bitter Earth, or perhaps just by reading today’s newspaper. Depressive realism is the term used throughout this book but it will often be abbreviated to DR for ease of reading, referring to the negative worldview and also to anyone subscribing to this worldview (a DR, or DRs). DRs themselves may regard the ‘depressive’ part of the label as gratuitous, thinking their worldview to be simply realism just as Buddhism holds dukkha to be a fact of life.

Initially, it may seem that this book has a traditional mental health or psychological focus, but it draws from a range of interdisciplinary sources, is pertinent to diverse contexts and hopefully of interest to readers in the fields of philosophical anthropology, philosophy of mental health and existentialism and psychotherapy. I imagine it may be of negative, argumentative interest to some theologians, anthropologists, psychologists, social scientists and related lay readers.

Although more implicitly than explicitly, the message running throughout the book is that the kind of world we live in, and that we are, cyclically determines how we feel and think. We will disagree about what kind of world it is, but I hope we might agree that the totality of our history and surroundings has much more impact on us than simply what goes round in our heads.

Depressive realism can be defined, described and contextualised in several ways. Its first use appears to have been by Alloy and Abramson (1979) in a paper describing a psychology experiment comparing the judgements of mildly depressed and non-depressed people. It is necessary to make some clarification at the outset about ‘clinical depression’. I do not believe that depression is a desirable state, or that those who are severely depressed are more accurate in their evaluations of life than others (Carson et al., 2010). This is not a book advocating suicide as a solution to life’s difficulties, nor am I advocating voluntary human extinction, nor is the text intended to promote hatred of humanity. The DR discussed here should not be mistaken for a consensual, life-hating suicide cult even if it includes respect for the challenging views of Benatar (2006) and Perry (2014). Nor can one assume that all ‘depressives’ necessarily have permanently and identically pessimistic worldviews, nor indeed that the lines drawn by the psychological professionals between all such mood states are accurate. But one can ask that the majority worldview that ‘life is alright’ be set against the DR view that life contains arrestingly negative features (Ligotti, 2010).

The strictly psychological use of DR has now expanded into the world of literary criticism, for example, in Jeffery’s (2011) text on Michel Houellebecq. It is this second, less technical sense of DR on which I focus mainly in this book, that is, on the way in which some of us perceive and construe the world in dismal terms and believe our construal to be truer than competing accounts. Inevitably, within this topic we find ourselves involved in rather tedious realism wars or epistemological battles between yea-sayers, nay-sayers and those who fantasise that objective evidence exists that can end the wars.

Insofar as any term includes ‘realism’, we can say it has a philosophical identity. In the case of DR, the philosophical pessimism most closely associated with Schopenhauer may be its natural home. Existentialism is often considered a negative philosophy, and sometimes wholly nihilistic, but in fact it includes or allows for several varieties of worldview. DR receives the same kind of criticism as existentialism often has, which is that it is less an explicit philosophy than a mood, or a rather vague expression of the personalities, projections and opinions of certain writers or artists.

Depressive realism as it is translated from psychology to philosophy can be said to refer to the belief that phenomena are accurately perceived as having negative weighting. Put differently, we can say that ‘the truth about life’ always turns out to be more negative than positive, and hence any sustained truth-seeking must eventually find itself mired in unpleasant discoveries.

We then come to synonyms or closely related terms and ideas. These include, in alphabetical order, absurdism, anthropathology, antihumanism, cynicism, depressionism, disenchantment, emptiness, existential anxiety and depression, futilitarianism, meaninglessness, melancholia, misanthropy, miserabilism, nihilism, pessimism, radical scepticism, rejectionism, tedium vitae, tragedy, tragicomedy or Weltschmerz. We could add saturninity, melancology and other terms if we wanted to risk babellian excess, or flag up James Joyce’s ‘unhappitants of the earth’ as a suitable descriptor for DRs. We could stray into Buddhist territory and call up the concepts of samsara and dukkha. I do not claim that such terms are synonymous or that those who would sign up to DR espouse them all but they are closely associated, unless you are a semantically obsessive philosopher.

Dienstag (2006) denies any necessary commonality between different intellectual expressions of pessimism, and Weller (2011) demonstrates a connoisseurship of nuances of nihilism. Kushlev et al. (2015) point out that sadness and unhappiness are not identical. But Daniel (2013) stresses the assemblage of melancholy, and Bowring (2008) provides a very useful concise history, geography and semantics of melancholy.

Here is one simple illustration of how the shades of DR blend into one, not in any linear progression but pseudo-cyclically. The DR often experiences the weariness of one who has seen it all before, is bored and has had enough; the melancholy of the one who feels acutely the elusiveness and illusion of happiness, the impermanence of life and always smells death in the air; the pessimism of one whose prophetic intuition knows that all proposed quasi-novel solutions must eventually fade to zero; the nihilism of one whose folly-spotting and illusion-sensing radar never rests; the depression of one whose black dog was always there, returns from time to time and may grow a little blacker in old age; the sorrowful incredulity at the gullible credulity of hope-addicts and faith-dealers; the deep sadness of one who travels extensively and meets many people whose national and personal suffering is written all over their faces; and the bleakly aloof fundamentalism of one who believes his epistemology to be superior to other, always shallower accounts. In some cases an extreme form of DR may tip into contemptuous or active nihilism, for example, DeCasseres’s (2013) ‘baleful vision’.

But DR need not be, and seldom is, a state of maximum or unchanging bleakness or sheer unhappiness; many DRs like Cioran, Beckett and Zapffe could be very funny, as is Woody Allen. But grey-skies thinking is the DR’s natural default position and ambivalence his highest potential.

A broad, working definition of depressive realism runs as follows: depressive realism is a worldview of human existence as essentially negative. To qualify this, we have to say that some DRs regard the ‘world’ (everything from the cosmos to everyday living) as wholly negative, as a burdensome absurdity, while some limit its negativity to human experience, or to certain aspects or eras of humanity or to sensate life. ‘Existence is no good at all’ probably covers the first outlook (see Ligotti, 2010), and ‘existence contains much more bad than good’ the second (Benatar, 2006). We might also speak of dogmatic DR and a looser, attitudinal DR that seeks dialogue.

Critics of DR, of whom there are many as we shall see, often joke lamely about the perceived glass-half-empty mentality underlying this view, and tirelessly point out the cliché that a glass half empty is also half full. DR may not deny that life includes or seems to include some positive values, sometimes, but it is founded on the belief, the assertion, that it is overall more negative than positive. And, depending on what the glass is half-full of, the DR may regard it as worthless, tasteless, poisonous or ultimately futile to drink.

The succinct ingredients of DR are perhaps as follows. The human species is overdeveloped into two strands, the clever and inventive, and the destructive and distressing, all stemming from evolutionarily accidental surplus consciousness. We have developed to the point of outgrowing the once necessary God myth, confronting the accidental origins of everything and realising that our individual lives end completely at death. We have to live and grow old with these sad and stubborn facts. We must sometimes look at the vast night sky and see our diminutive place reflected in it, and we realise that our species’ existence itself is freakishly limited and all our earthly purposes are ultimately for nought.

We can never organise optimal living conditions for ourselves, and we realise that our complex societies contain abundant absurdities. World population increases, information overload increases and new burdens outweigh any benefits of material progress however clever and inventive we are. We claim to value truth but banish these facts from our consciousness by all manner of mendacious, tortuous mental and behavioural devices. The majority somehow either denies all of the above or manages not to think about it. But it unconsciously nags at even the most religious and optimistic, and the compulsion to deny it drives fundamentalist religious revival, capitalist growth, war and mental illness.

Depressive realism may generate a range of attitudes from decisive suicidality or leaden apathy through to cheerful cynicism, eloquent disenchantment and compassionate or violent nihilism. We can argue that everyone has a worldview whether implicit or explicit, unconscious or conscious, inarticulate or eloquent. Wilhelm von Humboldt is credited with the origins of the concept, using the term Weltansicht (world view), with Weltanschauung arriving a little later with Kant and Hegel.

DR may contain idiosyncratic affects, perceptions and an overall worldview, the scale of negativity of which fluctuates. It may be embodied at an early age or emerge later with ageing and upon reflection, or after suffering a so-called ‘nadir experience’, and may even be overturned, although this event is probably rare. Often, we cannot help but see the world in the way we happen to see it, whether pessimistically or optimistically, even if our moods sometimes fluctuate upwards or downwards. Typically, no matter how broadminded or open to argument we consider ourselves to be, we all feel that we are right. The DR certainly fits this position, often regarding himself as a relentlessly sceptical truth-seeker where others buy into complacent thought and standard social illusions.

The person who has no particular take on existence, who genuinely takes each day or moment as it comes, is arguably rare.

We should ask what it is that is depressed in DR and what it is to which the realism points. Melancholy was once the more common term, depression simply meaning something being pushed downwards, as in dejected spirits. This downwardness places depression in line etymologically with the downwardness of pessimism, not to mention countless metaphors such as Bunyan’s Slough of Despond.

From the 17th century depression gained its clinical identity, but the roots lie in much earlier humoral theory. Whichever metaphor is employed, however, we might ask why ‘upwards’ is implied to be the norm, and in what sense ‘downwards’ should be applied to ‘unhappy consciousness’. Heaven has always been located upwards and hell downwards. More accurate metaphors for depression might involve inward or horizontal states. But this would still leave the question of why outwardness and verticality should be regarded as more normal, or the view of the depressed, melancholic, downward, inward or horizontal human being as less acceptable or normal than its opposite, unless on purely statistical grounds.

I think it is fair and proper to make my own position as transparently clear as possible. In spite of critiques of writing from ‘the view from nowhere’, most academic writing persists in a quasi-objective style resting on the suspiciously erased person of the author. Like most DRs, my personality and outlook have always included a significantly depressive or negative component. I was once diagnosed in my early 30s in a private psychotherapy clinic as having chronic mild depression. I have often been the butt of teasing and called an Eeyore or cynic. I am an atheist.

I have had a fair amount of therapy during my life but in looking back I have to say that:

a. none of that therapy has fundamentally changed the way I experience life, and

b. my mature belief is that I was always this way, that is, someone with a ‘depressive outlook’.

Only quite recently have I come to regard this as similar to the claim made by most gay people that they were born gay, or have been gay for as long as they remember, that they do not think of themselves in pathological terms and they do not believe homosexuality to be a legitimate object for therapeutic change.

I do not mean to say that people who experience clinical depression should not have therapy if they wish to, nor even that it does not sometimes help. Rather, I believe the assumption should not be made that depressive or negative views about life and experience necessarily correlate with psychological illness.

Since I have worked in the counselling and psychotherapy field for about 35 years, I have some explaining to do, which appears mainly in Chapter 6.

Appearing in the series Explorations in Mental Health as this book does, I should like to give a brief sense of location here. In truth this is an interdisciplinary subject that by its nature has no exclusive home. On the other hand, given my academic background, there are some clear links with psychology, psychotherapy and counselling. On the question of mental health, the contribution of DR is to re-examine assumptions about ‘good’ mental health and in particular to challenge the standard pathological view of depression as sickness, and the assumption that therapists have a clinical mandate to pronounce on everything with depressive or gloomy connotations.

The line between so-called existential anxiety and so-called death and health anxiety can be a fine one, and we should question the agonised revisions and diagnostic hyperinflation by the contributors to the DSM over such matters (APA, 2013; Frances, 2014).

DR seriously questions the standard assumption in cognitive behaviour therapy that depressed individuals must have cognitive distortions, and indeed reverses this to ask whether DRs might have a more objective grasp of reality than others, and a stubborn refusal to embrace illusion.

In conducting this challenge we are taken well beyond psychology into ontology, history, the philosophy of mental health and other disciplines. The mission of this book is hardly to revolutionise the field of mental health, but it is in part to reassess the link between perceived depression, pessimism and negative worldviews.

But a book of this kind emerges not only from a personal position and beliefs. I may experience my share of low mood, insomnia, conflict and death anxiety, but my views are also informed abundantly by wide reading, observations of everyday life and friends. Mirroring the ‘blind, pitiless indifference and cruelty of nature’ (Dawkins, 2001), I see around me a man in his 80s passing his days in the fog of Alzheimer’s, another in his 70s with Parkinson’s disease, a woman suffering from many sad medical after-effects of leg amputation, another woman suffering from menopausal mood swings, couples revealing the cracks in their allegedly smooth relationships, several young men struggling gloomily to find any fit between their personalities and the workplace, colleagues putting a brave face on amid insane institutional pressures and the list of merely first world suffering could go on and on.

The sources of this common brutalism are biological and social. The examples of suffering easily outnumber any clear examples of the standard optimistic depiction of happy humans, yet this latter narrative continues to assert itself, backed up by cheerful statistics and examples marshalled to counter miserabilism.

I argue that human life contains many glaringly tragic and depressing components and the denial or minimisation of these adds yet another level of depression.

The lead characters in DR will emerge during the book. It may be useful here, however, to mention those who feature prominently in the DR gallery. These include Gautama Buddha, Arthur Schopenhauer, Giacomo Leopardi, Philipp Mainlander, Thomas Hardy, Edgar Saltus, Sigmund Freud, Samuel Beckett, E.M. Cioran, Peter Wessel Zapffe, Thomas Ligotti, John Gray, David Benatar and Michel Houellebecq.

One of the admitted difficulties in such billing is that those still alive might well disown membership of this or any group. Another problem is who can really be excluded: for example, why not include Kierkegaard, Nietzsche, Dostoevsky, Kafka, Camus? As well as the so-called greats, we should pause to remember more minor writers, for example, the Scottish poet James Thomson (1834-82) whose The City of Dreadful Night captures perfectly many DR themes (see Chapter 4). Sloterdijk (1987) included in his similar ‘cabinet of cynics’ an idiosyncratic trawl from Diogenes to Heidegger; Feld’s (2011) ‘children of Saturn’ features Dante and Ficino.

In truth DRs may be scattered both interdisciplinarily and transhistorically (Breeze, 2014). To some extent questions of DR membership are addressed in the text, but it is true to say such discriminations are not my main focus.

This book is structured loosely by disciplines in order to demonstrate the many sources and themes involved. My treatment of these disciplines will not satisfy experts in those disciplines and must appear at times naïve, imprecise or inaccurate, but these fields impinge on us, claim to define how we live and suffer and what remedies might exist. In another kind of civilisation we might have no such epistemological divisions. I look at how these disciplines inform DR but also use DR as critical leverage to examine their shortcomings.

Hence, Chapter 1 excavates some of the relevant evolutionary and common historical themes.

Chapter 2 looks at some religious themes and the theologies explicating these, as well as the contemporary fascination with spirituality and its downsides.

In Chapter 3 I examine a number of philosophical themes connecting with DR.

Some examples in literature and film are analysed in Chapter 4.

Psychology comes into focus in Chapter 5, to be complemented and contrasted with psychotherapy and the psychological therapies in Chapter 6.

In Chapter 7 socio-political themes are scrutinised insofar as they illustrate DR.

I then move on to science, technology and the future in Chapter 8, again in order to depict the dialectic between these and DR.

The ‘lifespan and everyday life’ is the focus of Chapter 9, which takes a partial turn away from academic disciplines to the more experiential.

Arguments against DR, as comprehensive as I can make them in a concise form, comprise Chapter 10, while the final chapter envisages the possible utility of DR.

One of the many things DRs find depressing about the societies we live in is that those of us shaped ironically by twisted educational systems to think and write about such matters, and lucky to find a half-accommodating employment niche, are likely to be in or associated with academia. This institution has survived for many centuries and in spite of its elitist niche remains somewhat influential, although far less influential than its personnel imagine.

In its current form it is being ravaged by the so-called new public management but at the same time in its social science, arts and humanities departments is defiantly dominated by left-wing academics whose writing style is often highly symbolic, obfuscatory, arguably often meaningless (Sokal, 2009) and designed for coded communication with a tiny fraction of the general population, that is, academic peers.

On the other hand, academia can also suffer from a kind of censorship-by-demand-for-evidence, meaning that common observation, subjectivity and anecdote are erased or downgraded and a statistics-inebriated tyranny reigns supreme. Once when presenting some of the themes in this book to an academic ‘research group’, I was told I had cherry-picked too many bad examples, as if my colleagues were all paragons of balanced argument and nothing short of watertight pseudo-objectivity could be tolerated: in my view this itself is an example of silencing the DR nihilism that threatens an uncritically ‘life is good’ assumption.

A dilemma facing anyone who hopes to capture the essence of depressive realism and the parrhesia within it concerns the style in which to write and the assumptions and allusions to make. Universities seem barely fit for purpose any longer, or their purpose is unclear, and some have predicted their demise (Readings, 1997; Evans, 2005). This should not surprise us; on the contrary, we should learn to expect such decline as an inescapable part of the entropy of human institutions, but it is a current aspect of our depressing social landscape.

I have only partly followed the academic convention of obsessively citing evidence and precise sources of evidence. In some cases, where no references are given, my figures and examples derive from unattributed multiple internet sources; I do not necessarily make any claims to authority or accuracy, and the reader should check on sources if he or she has such a need. In many instances I use terms such as ‘many people believe’, which might irritate conventional social scientists. I also use anecdote, opinion and naturalistic observations fairly freely. Academic discourse is, I think, very similar to the ‘rhetoric’ exposed by Michelstaedter (2004), in contrast with the persuasion of personally earned insights and authentic observation, as Kierkegaard too would have recommended.

A confession. What appears above is what is expected of a writer, a logical outline, a promise of reading pleasures to come and of finding and offering meaning even in the teeth of meaninglessness (a trick accomplished by the sophistry of Critchley [1997], among other academic prestidigitators). As I moved from the publisher’s acceptance of my proposal to the task of actual composition I began to wonder if I could in fact do it. ‘Let’s do this thing’ is a common American expression of committed and energetic project initiation. As befits a text on depressive realism, the author is bedevilled by doubt: more of a Beckettian ‘is this thing even worth beginning?’ The topic is so massive that one is suffocated on all sides by the weight of precedents and related information, the beckoning nuances, the normative opposition to it and the hubris of attempting it. I anguished over the possibility of a subtitle, something like ‘perspectives on pointlessness’, that might convey a mixture of nihilism and humour. Such are our needs for and struggles with sublimation, and our neophilia, that it is tempting not to bother. However, here it is.

Chapter 1

Big history, anthropathology and depressive realism

Can we say there is something intrinsically fantastic (unlikely), admirable (beautifully complex) and simultaneously tragic (entropically doomed from the outset) about the universe? And about ourselves, the only self-conscious part of the universe as far as we know, struggling to make sense of our own existence, busily constructing and hoping for explanations even as we sail individually and collectively into oblivion? Was the being or something that came out of nothing ever a good thing (a random assertion of will in Schopenhauerian terms), a good thing for a while that then deteriorated, a good thing that has its ups and downs but will endure or a good thing that must sooner or later end? Or perhaps neither good nor bad?

Depressive realism looks not only to the distant future but into the deepest past, interpreting it as ultimately negatively toned.

It is quite possible and indeed common practice for depressive realists and others to explicate their accounts without recourse to history. It appears that much contemporary academic discourse, certainly in the social sciences, is tacitly structured abiologically and ahistorically, as if in spite of scientific accounts we have not yet accepted any more than creationists that we are blindly evolved and evolving beings. In other words, in spite of much hand-wringing, many maintain a resignedly agnogenic position as regards the origins of the human malaise: we do not and may never know the causes.

But we have not appeared from nowhere, we are not self-creating or God-created, we were not born as a species a few hundred or a few thousand years ago, we are not in any deep sense merely Plato’s heirs. Neither Marxist dialectical materialism nor Engels’ dialectics of nature capture the sheer temporal depth of evolution and its ultimate cosmogony (Shubin, 2014). Existence, beyond the animal drive to survive, is ateleological and unpromising. Religious and romantic teleologies largely avoid examination of our material roots and probable limits. From a certain DR perspective it is not only the future that has a dismal hue; an analysis of the deep past also yields much sorry material.

My preference is to begin with certain historical and materialist questions. The reasoning behind this is that (a) we have accounts of and claims to explain the existence of life as once benign but having become at some stage corrupt; (b) we might find new, compelling explanations for the negative pathways taken by humanity; (c) recorded observations of human tragedy that can be loosely called depressive realism are found in some of the earliest literature; (d) this procedure helps us to compare large scale and long-term DR propositions with relevant microphenomena and transient patterns. This anchorage in deep history does not necessarily imply a materialist reductionism to follow but it tends, I believe, to show a ceaselessly adaptive, evolutionarily iterative process and entropic trajectory via complexity.

The emerging disciplines of deep and big history challenge the arbitrary starting points, divisions and events of traditional history by going back to the earliest known of cosmic and non-human events, charting any discernible patterns and drawing tentative conclusions. Spier (2011) offers an excellent condensed account of this kind, but we probably need to add as a reinforcer the argument from Krauss (2012) that something from nothing is not only possible but inevitable and explicable by scientific laws. Indeed, it is necessary to begin here as a way of further eroding theistic claims that want to start with God and thereby insist on God’s (illusory) continuing sustenance and guiding purpose.

It is no longer the creation ex nihilo of the mythological, pre-scientific God, the omnipotent being who brought forth the universe from chaos, that helps us to understand our world, but modern science.

We do not know definitively how we evolved, but we have convincing enough causal threads at our disposal. Here I intend to sift through those of most interest in exploring the question of why our world has become such a depressing place.

We are animals but apparently higher animals, so far evolved beyond even our nearest relatives that some regard human beings as of another order of nature altogether. Given the millennia of religious belief that shaped our picture of ourselves, the Darwinian revolution even today is not accepted by all. Even some scientists who purport to accept the standard evolutionary account do not seem to accept our residual animal nature emotionally (Tallis, 2011).

But it is important to begin by asking about the life of wild animals. They must defend themselves against predators by hiding or fighting, and they must eat by grazing, scavenging or predation; they must reproduce and where necessary protect their young. Many animals spend a great deal of their time asleep, and some play. Social animals cultivate their groups by hunting together, communicating or grooming. Some animals protect their territory, build nests or rudimentary homes and a few make primitive tools; some migrate, and some maintain hierarchical structures. Most animals live relatively short lives, live with constant risk and are vigilant.

However it happened, human beings differ from animals in having developed a consciousness linked with tool-making, language and massive, highly structured societies that have taken us within millennia into today’s complex, earth-spanning and nature-dominating civilisation. Wild animals certainly suffer, contrary to idyllic fantasies of a harmonious nature, but their suffering is mostly acute, resulting from injury, hunger and predation, and their lives are not extended beyond their natural ability to survive.

Our ingenuity and suffering are two sides of the same coin.

Weapon making and cooperation allowed us to rise above constant vulnerability to predators, but our lives are now often too safe, bland and boring, since we have forfeited the purpose of day-to-day survival. We have also benefited from becoming cleverer at the cost of loss of sensory acuity. Accordingly, and with painful paradox, we are driven to seek ‘meaning’ and we are gratuitously violent (Glover, 2001; White, 2012). Animals have no such problems.

*

from

Depressive Realism. Interdisciplinary perspectives

by Colin Feltham

get it at Amazon.com

Breaking Down Is Waking Up. Can psychological suffering be a spiritual gateway? – Dr Russell Razzaque.

There are as many types of mental illness as there are people who suffer them.

The World Health Organization estimates that approximately 450 million people worldwide have a mental health problem.

None of us is immune from the existential worry that nags away in the back of our mind. We are all vulnerable to emotional and psychological turmoil in our lives and there is something fundamental about the human condition that makes it so.

There is something at the core of the experience of mental illness that draws sufferers towards the spiritual. Their suffering is an echo of the suffering we all contain within us.

EVERYONE NEEDS A BANISTER: a fixed point of reference from which we understand and engage with life. We need something to hold on to, so that when we’re hit by life’s inevitable disappointments, pain or traumas, we won’t fall too far into confusion, despair or hopelessness. With a weak banister we risk getting knocked off course, losing our bearings and falling prey to stress, psychological turmoil and mental illness. A strong banister will stand the test of time in an ever changing world, giving us more confidence to face the knocks and hardships of life more readily.

Understanding who we are and how we fit into the world is a quest we start at birth and continue through the whole of our lives. Sometimes these questions come to the fore, but usually they bubble away somewhere beneath the surface: ‘Who am I?’ ‘Am I normal?’ ‘Why am I here?’ ‘Is there any real point to life?’ Deep down inside we know that nothing lasts, the trees, landscapes and life around us will all one day perish, just as surely as we ourselves will, and everyone we know too. But we have evolved ways to hold this reality, and the questions it throws up, at bay.

We construct banisters to help us navigate our way round this maze of pain and insecurity: a set of beliefs and lifestyles that help us form a concrete context to make sense of things and, as the saying goes, ‘keep calm and carry on’. But, for most of us, the core beliefs and lifestyles that hold us together still leave us vulnerable to instability. The sense of identity we evolve is so precarious that we’re often buffeted by life onto shaky ground. And, as a consequence, we become prone to various forms of psychological distress; indeed, for vast swathes of society this proceeds all the way to mental illness, whether that be labelled as anxiety, depression, bipolar disorder or the most severe form of mental illness, psychosis.

There are as many types of mental illness as there are people who suffer them. One of the reasons I decided to specialize in psychiatry, shortly after qualifying from medical school, was that, unlike any other branch of medicine, no two people I saw ever came to me with the same issues. Although different presentations might loosely fit into different categories, there appeared to me to be as many ways of becoming mentally unwell as there were ways of being human. I have since specialized in the more severe and acute end of psychiatry. I currently work in a secure, intensive care facility but to this day, in 16 years of practice, I have never seen two cases that were exactly the same.

And the numbers just seem to be going up. In the UK today, one in four adults experiences at least one diagnosable mental health problem in any one year. In the USA, the figure is the same and this equates to just over 20 million people experiencing depression and 2.4 million diagnosed with schizophrenia, a severe form of mental illness where the individual experiences major disturbances in thoughts and perceptions. The World Health Organization estimates that approximately 450 million people worldwide have a mental health problem.

Beyond these figures, however, are all the people who struggle with various levels of stress throughout life and, all the while, carry a fear at the back of their minds, that they too may one day slide into mental illness. In my experience, this is a fear that pervades virtually every stratum of society. Rarely am I introduced as a psychiatrist to new people in a social gathering without at least some of them quietly feeling, or even explicitly reporting, that they worry that one day they are going to need my help. Such comments are often made in jest, but the genuine anxiety that underlies them is rarely far beneath the surface. There is a niggling worry at the back of many people’s minds that something might be wrong with them; that something isn’t quite right. What they don’t realize, however, in their own private suffering, is just how much company they have in this fear. Indeed, I include myself and my colleagues among them, too. None of us is immune from the existential worry that nags away in the back of our mind.

But, if we look closely, there is also another process that can be discerned underneath all of this. Deep down inside every bubbling cauldron of insecurity, we can also find the seeds of a kind of liberation. Something is just waiting to burst forth. This something is hard to define or describe in language, but it is often in our darkest hours that we can feel it the most. And the further we fall the closer to it we get. This is why, I believe, mental illness can be so powerful, not just because of the deep distress that it contains, but also because of the authentic potential that it represents.

Mental illness, however, is just one aspect of a continuum we are all on. All of us have different ways of reacting emotionally to the experiences we encounter in life and the ones that involve a high level of distress either for oneself or for others are the ones we choose to label as mental illness. And it is this end of the spectrum that I will focus on most in this book, as it is these most stark forms of distress that present us with the greatest opportunity to observe the seeds within, and thus, ultimately, learn what is in all of us too.

There may be a variety of factors that contribute to the various forms of mental illness, of course, from childhood traumas to one’s genetic make-up, but, as the cut-off point always centres around distress, which is grounded in subjective experience, the definition itself will always remain somewhat arbitrary. That’s not to say that such definitions have no utility. By helping us communicate with each other about these complex shapes of suffering, they will also help us communicate our ideas with one another about how to help reduce the suffering encountered.

That is why I use these terms in this book, but it should be noted that I attach this large caveat from the outset. Ultimately, the only person who can really describe a person’s suffering is the sufferer himself; outside that individual, the rest of us are always necessarily off the mark. What must invariably be remembered, however, is that there is no ‘them’ and ‘us’. We are all vulnerable to emotional and psychological turmoil in our lives and there is something fundamental about the human condition that makes it so.

That is why I believe, as a psychiatrist, that the best research I ever engage in is when I explore my own vulnerabilities. That is when I start to connect with threads of the suffering that my patients are undergoing too. And what I find particularly fascinating about this process is that the deeper I descend into my own world of emotional insecurity, the more I grow to appreciate an indescribable dimension to reality that so many of my patients talk about in spiritual terms, engage with, and indeed rely upon so much of the time.

In a survey of just under 7,500 people, published in early 2013, researchers from University College London found a strong correlation between people suffering mental illness and those with a spiritual perspective on life. Though the results confused many, to me they made perfect sense.

There is something at the core of the experience of mental illness that draws sufferers towards the spiritual. Their suffering is an echo of the suffering we all contain within us.

That is why I can say from the outset, and without reticence, that my insights are based largely on a subjective pathway to our shared inner world. And it is through this perspective that I have evolved what I believe is a new banister: a new way of seeing the world and being within it. It is, however, not just that my introspection has taught me about my patients, but that my patients have also taught me about myself. Indeed I can safely say that I have gleaned just as much from the individuals I have cared for as I have from the professionals and teachers I have learnt from.

I consider myself hugely lucky to work in a profession in which looking into myself and learning about my own inner world has been, and continues to be, a vital requirement of my work (though, it has to be said that, sadly, many within my profession do not recognize this). It has propelled me into a journey of limitless exploration of both myself and the people I care for and this has led me to ever deeper understandings of the nature of mental illness, the mind and reality itself. I have drawn upon a diverse array of wisdom along the way, and my journey has ultimately led me to construct a synthesis of modern psychiatry and ancient philosophy; of new scientific findings and old spiritual practices.

But this banister comes with a health warning, as indeed all should. Just as a set of perspectives and insights can be a useful support in times of instability, so too can overreliance on them become counterproductive. That is why a banister needs to be held lightly. Gripping too tightly to anything in life is a recipe for exhaustion and, consequently, even greater instability.

What we need is a banister that, when held lightly, can allow us to move forward, rather than hold us back. I believe that such an understanding of reality and our place within it actually exists; it is also imperative to our survival as a species. I believe that life’s potential is far greater than most of us are ever aware of, and that our limitations are a lot more illusory than we know. In a sense I feel we are all suffering from a form of mental illness: a resistance to the realization of our true nature, and to that end I humbly offer this book as a guiding rail out of the turmoil.

My Journey. An Exploration of Inner and Outer Worlds

Chapter 1

Wisdom in Bedlam

‘One must still have chaos in oneself to be able to give birth to a dancing star.’ Friedrich Nietzsche

MENTAL ILLNESS IS SOMETHING that most of us shy away from. Someone who exhibits behaviour or feelings that are considered out of the ordinary will, sooner or later, experience a fairly broad radius of avoidance around them. Even in psychiatric hospitals this is evident, where the less ill patients will veer away from those who are more unwell. The staff themselves are often prone to such avoidance, too. But contrary to this natural reflex that exists within all of us, moving closer to, and spending time with, someone suffering mental illness can often be quite an enlightening experience. It took me many years to realize this myself, but through the cloud of symptoms, a fascinating display of insight and depth can often be found in even the most acutely unwell. And this turned out to be true whatever the type of mental illness. The problem might be mood related (for example, depression or bipolar disorder), what we term neurotic (anxiety, panic or post-traumatic stress disorder), or all the way up to the paranoia or hearing of voices that we see at the most severe stage of mental illness, termed psychosis. Indeed, the more severe the symptoms, the deeper the wisdom that appeared to be contained (though often hidden) within it.

A frequent observation of mine, for example, is just how perceptive the people I treat can be, regardless of the very evident turbulence that is going on inside. It is not uncommon for those who are newly admitted to share with me their impressions of the nursing and other staff on the ward with an uncanny degree of accuracy within only a few days of arrival. They’ll sometimes rapidly intuit the diverse array of temperaments, perspectives and personality traits among staff members and so have a feel for who is best to approach, avoid, or even wind up, depending on their mental state and needs at the time. It is likely that this acute sensitivity is one of the initial causes of their mental illness in the first place, but the flip side is that they have also managed to glean a lot about life from their experiences to date. This wisdom is often hidden by the symptoms of their illness, but it lurks there under the surface, often ready to flow out after a little gentle probing. I am frequently struck by the profundity of what I hear from my patients during our sessions and I often find myself feeding this same wisdom back to them even when, at the same time, they are undoubtedly experiencing and manifesting a degree of almost indescribable psychological pain.

Most of us spend our lives going to work, earning a salary, feeding our families and perhaps indulging in sport or entertainment at the weekends. Rarely are we able to step back from it all and wonder what the purpose of all this is, or whether or not we have our perspectives right. During the football World Cup one year, a patient told me that he felt such events served a deeper purpose for society, ‘It stops us thinking about the plight of the poor around the world.’ Events such as this kept us anaesthetized, he believed, so we could avoid confronting the depths of inequality and injustice around the globe, and that would ultimately enable the system that propped up the very corporations who were sponsoring these events to keep going. I had to admit that I had never thought of it that way before.

Compassion is a frequent theme I observe in those suffering mental illness, even though they are usually receiving treatment in a hospital setting because, on some level, they are failing to demonstrate compassion towards either themselves or others. I have often been moved by hearing of an older patient with a more chronic history of mental ill-health, perhaps due to repeated long-term drug use, or failure to engage with therapy, taking the time to approach a younger man, maybe admitted to hospital for the first time, and in effect tell him, ‘Don’t do what I did, son. Please learn from my mistakes.’ There are few moments, I believe, that are more powerfully therapeutic than that.

It is only in the last few years that we have discovered, after trialling a variety of treatments, that one of the most powerful interventions for what are known as the ‘negative symptoms’ of schizophrenia is exercise. These negative features relate to a lack of energy, drive, motivation and, often, basic functional activity. Whatever the diagnostic label you choose to put on it, this can often be the most disabling part of such illnesses, and there are hardly any known treatments for it, although an evidence base has recently evolved around the practice of regular exercise. I never quite understood why this could be until a patient one day put forward a hypothesis to me. It takes you out of your mind, he explained: ‘You see doc, you can’t really describe a press-up. You just do it.’ The whirlwinds within could be overcome for a few moments at least, while attention is paid, instead, to the body. Suddenly I realized why going to the gym was the highlight of his week.

A rarely described but key feature of mental illness, therefore, is just how paradoxical it can be, with the same person who is plagued by negative, obsessional or irrational thoughts also able to demonstrate an acute and perceptive understanding of the people and world around him. It is as if one mental faculty deteriorates, only for another one to branch out somewhere else; or rather, consciousness constricts in one area only to expand in another. There is actually some quite startling experimental evidence to back this up. An interesting study was conducted by neuroscientists at Hannover Medical School in Germany and University College London’s Institute of Cognitive Neuroscience. It involved a hollow-mask experiment. Essentially, when we are shown a two-dimensional photograph of a white face mask, it will look exactly the same whether it is pointing outwards, with the convex face towards the camera, or inwards, with the concave inside of the face towards the camera. This is known as the hollow-mask illusion.

Such photographs were shown to a sample of control volunteers.

Sometimes the face pointed outwards, and sometimes inwards. Almost every time the hollow, inward-pointing concave face was shown to them, they misinterpreted it and reported that they were seeing the outward-pointing face of the mask instead. This miscategorization of the illusion occurred 99% of the time. The same experiment was then performed on a sample of individuals with a diagnosis of schizophrenia. They did not fall for the illusion: 93% of the time, this group correctly identified that the photo placed before them was, in fact, an inward-pointing concave mask.

Clearly what we see here is an expansion in perceptual ability compared to normal controls. Data like this has begun to pierce the notion that mental illness is purely a negative or pathological experience. In fact, in this study, it was the normal controls who were less in touch with reality than those with a psychotic illness!

The most interesting aspect of this is that, whether they be suffering neurosis, depression, bipolar or even psychotic disorders, many people actually have some awareness of the fact that they are also somehow connecting, through this process, to a more profound reality that they were, like the rest of us, hitherto ignorant of. The experience might be disconcerting, even acutely frightening, but there is a sense that there is also something restorative about it too; they are rediscovering some roots they, perhaps along with the rest of us, had long forgotten about. One patient put it to me this way, ‘I feel like I am waking up. But it’s very scary because I feel like I have been regressing at the same time. It’s almost as if I needed to go through this in order to wake up.’

This sense of a wider meaning and purpose behind a breakdown is not an uncommon theme among the people I see but it is, nevertheless, so counterintuitive that it continues to halt me in my tracks whenever I encounter it. In psychiatry, for genuinely caring reasons, we are striving to reduce the distress that the people we see are experiencing. That, after all, is the reason we became health-care professionals in the first place: to heal the sick. So our reflex, whenever we see people in any kind of pain, is to remove it. But when one senses that the sufferer himself/herself sees value in the experience, then we need to stop and think. So long as they are not a risk to themselves or others, perhaps our usual reflex to extinguish such an experience might lead to the suppression of something that could otherwise have been valuable or even potentially transformative.

I have had many experiences of treating people who, even after a terrible episode of psychotic breakdown, came out the other end saying that this was good for them and that the experience, despite being horrendous, was something they needed to go through. This has sometimes been attributed to an expansion of awareness that they felt they needed, and that they believed the illness brought to them. A patient once talked with me about a profound, almost overwhelming, sense of gentleness and warmth he felt when listening to music one evening, just hours before his relapse into psychosis, and as we were talking in the session, he suddenly looked up at me and said, with a mixture of awe and joy on his face, and tears in his eyes, ‘Sometimes I feel that there is something out there so beautiful and so much bigger than me, but I just can’t handle it.’

Though we will be exploring the whole gamut of psychological distress and mental illness in this book, it is the psychotic experience that usually invokes the greatest stereotype and stigma, and so merits extra attention in this opening chapter. Psychosis is when someone is said to have lost touch with reality, and this may involve hearing voices, seeing things or holding some delusional ideas. The idea that someone suffering psychosis can also be the conduit of genuinely deep wisdom and insight, therefore, surprises most people, even mental-health professionals who might not be familiar with this client group. First-person accounts of this are not easy to find in the academic literature, but one particularly good case study was published by David Lukoff in the Journal of Transpersonal Psychology. He wrote it in conjunction with a gentleman who had himself suffered a psychotic breakdown and went by the pseudonym of Howard Everest. Howard was able, in a very articulate way, to describe his own breakdown, which he referred to as a form of personal odyssey, both during and after it actually happened.

. . .

*

Dr Russell Razzaque is a London-based psychiatrist with sixteen years’ experience in adult mental health. He has worked for a number of national and international organizations during his career, including the University of Cambridge, the UK Home Office and the Ministry of Justice, and he currently works in acute mental health services in the NHS in east London. He is also a published author in human psychology with several books on the subject, and he writes columns for a number of publications including Psychology Today, The Independent, The Guardian and USA Today.

*

from

Breaking Down Is Waking Up. Can psychological suffering be a spiritual gateway?

by Dr Russell Razzaque

get it at Amazon.com

Inequality breeds stress and anxiety. No wonder so many Britons are suffering – Richard Wilkinson and Kate Pickett.

Studies of people who are most into our consumerist culture have found that they are the least happy, the most insecure and often suffer poor mental health.

Understanding inequality means recognising that it increases school shootings, bullying, anxiety levels, mental illness and consumerism because it threatens feelings of self-worth.

In equal societies, citizens trust each other and contribute to their community. This goes into reverse in countries like ours.

The gap between image and reality yawns ever wider. Our rich society is full of people presenting happy smiling faces both in person and online, but when the Mental Health Foundation commissioned a large survey last year, it found that 74% of adults were so stressed they felt overwhelmed or unable to cope. Almost a third had had suicidal thoughts and 16% had self-harmed at some time in their lives. The figures were higher for women than men, and substantially higher for young adults than for older age groups. And rather than getting better, the long-term trends in anxiety and mental illness are upwards.

For a society that believes happiness is a product of high incomes and consumption, these figures are baffling. However, studies of people who are most into our consumerist culture have found that they are the least happy, the most insecure and often suffer poor mental health.

An important part of the explanation involves the psychological effects of inequality. The greater the material differences between us, the more important status and money become. They are increasingly seen as if they were a measure of a person’s inner worth. And, as research shows, the result is that the more unequal the society, the more people feel anxiety about status and how they are seen and judged. These effects are seen across all income groups from the poorest to the richest tenth of the population.

Inequality increases our insecurities about self-worth because it emphasises status and strengthens the idea that some people are worth much more than others. People at the top appear supremely important, almost as superior beings, while others are made to feel as if they are of little or no value. A study of how people experience low social status in different countries found, predictably, that people felt they were failures. They felt a strong sense of shame and despised themselves for failing. Whether they lived in countries as rich as the UK and Norway, or as poor as Uganda and Pakistan, made very little difference to what it felt like to be near the bottom of the social ladder.

Studies have shown that conspicuous consumption is intensified by inequality. If you live in a more unequal area, you are more likely to spend money on a flashy car and shop for status goods. The strength of this effect on consumption can be seen in the tendency for inequality to drive up levels of personal debt as people try to enhance their status.

But it is not just that inequality increases status anxiety. For many, it would be nearer to the truth to say that it is an assault on their feeling of self-worth. It increases what psychologists have called the “social evaluative threat”, where social contact becomes increasingly stressful. The result for some is low self-esteem and a collapse of self-confidence. For them, social gatherings become an ordeal to be avoided. As they withdraw from social life they suffer higher levels of anxiety and depression.

Others react quite differently to the greater ego threat of invidious social comparisons. They react by trying to boost the impression they give to others. Instead of being modest about achievements and abilities, they flaunt them.

Rising narcissism is part of the increased concern with impression management. A study of what has been called “self-enhancement” asked people in different countries how they rated themselves relative to others. Rather like the tell-tale finding that 90% of the population think they are better drivers than average, more people in more unequal countries rated themselves above average on a number of different dimensions. They claimed, for example, that they were cleverer and more attractive than most people.

Nor does the damage stop there. Psychological research has shown that a number of mental illnesses and personality disorders are linked to issues of dominance and subordination exacerbated by inequality. Some, like depression, are related to an acceptance of inferiority; others relate to an endless attempt to defend yourself from being looked down on and disrespected. Still others are born of the assumption of superiority, or of an endless struggle for it. Confirming the picture, the international data shows not only that mental illness as a whole is more common in more unequal societies, but specifically that depression, schizophrenia and psychoses are all more common in those societies.

What is perhaps saddest about this picture is that good social relationships and involvement in community life have been shown repeatedly to be powerful determinants of health and happiness. But it is exactly here that great inequality throws another spanner in the works. By making class and status divisions more powerful, it leads to a decline in community life, a reduction in social mobility, an increase in residential segregation and fewer inter-class marriages.

More equal societies are marked by strong community life, high levels of trust, a greater willingness to help others, and low levels of violence. As inequality rises, all this goes into reverse. Community life atrophies, people cease to trust each other, and homicide rates are higher.

In the most unequal societies, like Mexico and South Africa, the damage has gone further: citizens have become afraid of each other. Houses are barricaded with bars on windows and doors, razor wire atop walls and fences.

And as inequality increases, a higher proportion of a country’s labour force is employed in what has been called “guard labour”: the security staff, prison officers and police we use to protect ourselves from each other.

Richard Wilkinson and Kate Pickett are the authors of The Inner Level: How More Equal Societies Reduce Stress, Restore Sanity and Improve Everyone’s Wellbeing

Life after Severe Childhood Trauma. I Think I’ll Make It. A True Story of Lost and Found – Kat Hurley.

Had I known I should have been squirreling away memories as precious keepsakes, I would have scavenged for more smiles, clung to each note of contagious laughter and lingered steadfast in every embrace.

Memory is funny like that: futile facts and infinitesimal details are fixed in time, yet things you miss, things you wish you paid fuller attention to, you may never see again.

“I learned this, at least, by my experiment: that if one advances confidently in the direction of his dreams, and endeavors to live the life which he has imagined, he will meet with a success unexpected in common hours.”

Henry David Thoreau, Walden: Or, Life in the Woods

To write this book, I relied heavily on archived emails and journals, researched facts when I thought necessary, consulted with some of the people who appear in the book, and called upon my own memory, which has a habitual tendency to embellish, but as it turns out, there wasn’t much need for that here. Events in this book may be out of sequence, a handful of locations were changed to protect privacy, many conversations and emails were re-created, and a few names and identifying characteristics have been changed.

It was hardly a secret growing up that psychologists predicted I would never lead a truly happy and normal life. Whether those words were intended for my ears or not seemed of little concern, given the lack of disclaimer to follow. There was no telling what exceedingly honest bits of information would slip through the cracks of our family’s filtration system of poor Roman Catholic communication. I mean, we spoke all the time but rarely talked. On the issues at least, silence seemed to suit us best, yet surprising morsels of un-sugarcoated facts would either fly straight out of the horse’s mouth or trickle their way down through the boys until they hit me, the baby.

I was five when I went to therapy. Twice. On the second visit, the dumb lady asked me to draw what I felt on a piece of plain construction paper. I stared at the few crayons next to the page when I told her politely that I’d rather not. We made small talk instead, until the end of the hour when she finally stood up, walked to the door and invited my grandma in. They whispered some before she smiled at me and waved. I smiled back, even if she was still dumb. I’m sure it had been suggested that I go see her anyway, because truth be known, psychologists were a “bunch of quacks,” according to my grandma. When I said I didn’t want to go back, nobody so much as batted an eye.

And that was the end of that.

When I draw up some of my earliest most vivid memories, what I see reminds me of an old slide projector, screening crooked, fuzzy images at random. In the earliest scenes, I am lopsidedly pigtailed, grass stained, clothes painfully clashing. In one frame I am ready for my first day of preschool in my bright red, pill-bottomed bathing suit, standing at the bottom of the stairs where my mom has met me to explain, through her contained laughter, that a carpool isn’t anything near as fun as it sounds. In another, I am in the living room, turning down the volume on my mom’s Richard Simmons tape so I can show her that, all on my own yet only with a side-puckered face, I’d learned how to snap. In one scene, I’m crouched down in the closet playing hide-and-seek, recycling my own hot Cheerio breath, patiently waiting to be found, picking my toes. Soon Mom would come home and together we’d realize that the boys weren’t seeking (babysitting) me at all; they’d simply gone down the street to play with friends.

I replay footage of the boys, Ben and Jack, pushing me in the driveway, albeit unintentionally, toward the busy road on my first day with no training wheels, and (don’t worry, I tattled) intentionally using me as the crash-test dummy when they sent me flying down the stairs in a laundry basket. I have the scene of us playing ice hockey in the driveway after a big ice storm hit, me proudly dropping the puck while my brothers, Stanley Cup serious, faced off.

I call up the image of me cross-legged on my parents’ bed, and my mom’s horrified face when she found me, scissors in hand, thrilled with what she referred to as my new “hacked” do. That same bed, in another scene, gets hauled into my room when it was no longer my parents’, and my mom, I presume, couldn’t stand to look at it any longer. I can still see the worry on her face in those days and the disgust on his. I see the aftermaths of the few fights they couldn’t help but have us witness.

Most of the scenes are of our house at the top of the hill on McClintock Drive, but a few are of Dad’s townhouse in Rockville, near the roller rink. I remember his girlfriend, Amy, and how stupid I thought she was. I remember our Atari set and all our cool new stuff over there. And, of course, I remember Dad’s really annoying crack-of-dawn routine of “Rise and Shine!”

I was my daddy’s darling, and my mommy’s little angel.

Then without warning I wasn’t.

Had I known I should have been squirreling away memories as precious keepsakes, I would have scavenged for more smiles, clung to each note of contagious laughter and lingered steadfast in every embrace. Memory is funny like that: futile facts and infinitesimal details are fixed in time, yet things you miss, things you wish you paid fuller attention to, you may never see again.

I was just a regular kid before I was ever really asked to “remember.” Up until then, I’d been safe in my own little world: every boo-boo kissed, every bogeyman chased away. And for a small voice that had never been cool enough, clever enough, or captivating enough, it was finally my turn. There was no other choice; I was the only witness.

“Tell us everything you know, Katie. It is very important that you try to remember everything you saw.”

August 11, 1983

I am five. I’ll be in kindergarten this year, Ben is going to third grade, Jack will be in seventh. I’m not sure where the boys are today; all I know is that I’m glad it’s just me and Mom. We’re in the car, driving in our Ford wagon, me bouncing unbuckled in the way back. We sing over the radio like we always do. We’re on our way to my dad’s office, for the five-hundredth time. Not sure why, again, except that “they have to talk.” They always have to talk. Ever since Dad left and got his new townhouse with his new girlfriend, all they do is talk.

Mom pulls into a space in front of the office. The parking lot for some reason is practically empty. His cleaning business is all the way in the back of this long, lonely stretch of warehouse offices, all boring beige and ugly brown, with big garage doors and small window fronts.

“You can stay here, sweetie pie. I won’t be long.”

I have some of my favorite coloring books and a giant box of crayons; I’ll be fine.

Time passes in terms of works of art. Goofy, Mickey, and Donald are all colored to perfection before I even think to look up. I am very fond of my artistic abilities; my paint-by-numbers are exquisite, and my papier-mâché, as far as I’m concerned, has real promise for five. All of my works are fridge-worthy; even my mom thinks so. My special notes and handmade cards litter her nightstand, dresser, and bathroom counter.

I hear a scream. Like one I’d never heard before, except on TV. Was that her? I sit still for a second, wait for another clue. That wasn’t her. But something tells me to check anyway just in case.

I scramble out from the way back, over the seat, and try to open the door, but I’m locked in. Why would she lock me in? I tug at the lock and let myself out. With the car door still open, I scurry to the front window of my dad’s shop, and on my tiptoes, ten fingers to the ledge, I can see inside. The cage with the snakes is there, the desk and chairs are there, the cabinets and files are there, everything looks normal like the last time I was inside. Where are they?

Then through the window, I see my mom. At the end of the hall, I can see her through the doorway. But just her feet. Well, her feet and part of her legs. They are there, on the floor, her sandals still on. I can make out the tip of his shoe too, at her thigh, like he’s sitting on top of her. She is still. I don’t get it. Why are they on the floor? I try to open the door, but it’s locked. I don’t recall knocking; maybe I did. I do know that I didn’t yell to be let in, call for help, or demand to know what was going on.

It wasn’t her. It sounded like it came from down the street, I tell myself. Maybe it wasn’t a scream scream, anyway. Someone was probably just playing, I convince myself. I get back in the car. I close the door behind me and color some more.

Only two pages are colored in this time. Not Mickey and friends, Snow White now. Fairy tales. My dad knocks on the window, startling me, smiling. “Hey, princess. Your mom is on the phone with Aunt Jeannie, so you’ll just see her Monday. You’re coming with me, kiddo. We have to go get your brother.”

Everything I’ve seen is forgotten. My dad’s convincing smile, tender voice, and earnest eyes make all my fright disappear. He told me she was on the phone, and I believed him. How was I supposed to know that dads could lie?

Two days later, my brothers and I were at the beach on a job with Dad when our grandparents surprised us with the news. “Your mother is missing.” And it was only then, when I sensed the fear they tried so intently to wash from their faces, that the realization struck me as stark panic, that I was brought back to the scene for the first time and heard the scream I understood was really her.

My testimony would later become the turning point in the case, reason enough to convict my father, who in his cowardice had covered all his traces. Even after his conviction, it would be three more years until he fully confessed to the crime. I was eight when I stood, uncomfortable, in a stiff dress at her grave for the second time more flowers, same priest, same prayers.

To say I grew up quickly, though, as people have always suspected, would be a stretch. Certainly, I was more aware, but the shades of darkness were graced with laughter and lullabies and being a kid and building forts, and later, learning about my period from my crazy grandma.

I honestly don’t remember being treated any differently, from Grandma Kate at least. If I got any special attention, I didn’t know it. Life went on. Time was supposed to heal all wounds. My few memories of mom, despite my every attempt, faded with each passing holiday.

I was in Mrs. Dunne’s third grade class when my dad finally confessed. We faced a whole ’nother wave of reporters, news crews, and commotion. They replayed the footage on every channel: me, five years old again, clad in overalls, with my Care Bear, walking into the courtroom. And just like before, my grandpa taped all the news reels. “So we never forget,” he said.

For our final TV interview, my grandparents, the boys, and I sat in our church clothes in the front room to answer the reporter’s questions. I shifted around on Grandma Kate’s lap in my neatly pressed striped Easter dress. Everybody had a turn to talk. I was last. “Katie, now that the case is closed, do you think you will be able to move on?”

I’m not sure how I knew it then, especially when so many years of uncertainty were still to come, but I was confident: “Yeah.” I grinned. “I think I’ll make it.”

Chapter One

TEACHING MOMENT

“Well, I just called to tell you I’ve made up my mind.” Silence. “I will not be returning to school next year.”

Silence. “I don’t know where I’m going or what I’m going to do. I just know I cannot come back.”

Barbara, my faculty chair, on the other end of the line, fumed. I could hear it in each syllable of Catholic guilt she spat back at me. We’d ended a face-to-face meeting the day before with, “I’ll call you tomorrow with my decision,” as we agreed to disagree on whether the students were more important than my mental health and well-being.

“What will they do without you? You know how much they love you. We created this new position for you, and now you’re just going to leave? Who will teach the class? It’s August!” she agonized.

God, she was good. She had this guilt thing down pat. An ex-nun, obviously an expert, and this was the first time I’d been on her bad side; a whole year’s worth of smiles, waves, and high-fives in the hallways seemed to get clapped out with the erasers.

It was true; I loved the kids and didn’t want to leave like this, so abruptly, in August. This was not my idea of a resume builder. Nevertheless, as each bit of honesty rose from my lips, I felt freer and freer and more true to myself than I’d felt in, well, a long frickin’ time. A sense of relief washed through me in a kind of cathartic baptism, cleansing me of the guilt. I stopped pacing. A warm breeze swept over the grass on the hill in front of our condo, then over me. I stood on the sidewalk, still nervous, sweating, smiling, teary-eyed. I can’t believe I just did that.

St. Anne’s was a very liberal Catholic school, which, ironically, had given me a new faith in the closed-minded. The building housed a great energy of love and family. I felt right at home walking through its doors even at new-teacher orientation, despite it having been a while since I needed to be shown the ropes. I’d already been teaching for six years in a position where I’d been mentoring, writing curriculum, and leading administrative teams. I normally didn’t do very well at the bottom of the totem pole, but more pay with less responsibility had its merit.

It was definitely different, but a good different. I felt newly challenged in a bigger school, looked forward to the many programs already in place and the diversity of the staff and student body. The ceremonies performed in the religion-based setting seemed foreign at first, yet witnessing the conviction of our resident nuns and tenured faculty restored a respect I had lost over the years. They were the hymns that I recognized, the verses I used to recite, the prayers I was surprised I still remembered, the responses I thought I’d never say again.

The first time we had Mass together as an entire school, I was nearly brought to tears. I got goose bumps when the notes from the piano reverberated off the backboards on the court; the gym-turned-place-of-worship hardly seemed the place to recommit. Yet, hearing the harmony of our award-winning gospel choir and witnessing the level of participation from the students, faculty, and administration, I was taken aback. The maturity of devotion in the room was something I had never experienced in any of my churches growing up. Students, lip-synching their words, distracted and bored, still displayed more enthusiasm than the lumps hunched at my old parishes.

It was during that first Mass that I realized there was only one person who could have gotten me there, to a place she would have been so proud to tell her bridge club I was working. She would have been thrilled for me to find God here. The God she knew, her Catholic God: the one who had listened to her rosary, day after day, her pleas for her family’s health and well-being, her pleas for her own peace and forgiveness. Gma had orchestrated it all. I was certain.

As that realization unfolded, I saw a glimpse of her endearing eyes, her tender smile before me, and with that my body got hot, my lashes heavy, soaked with a teary mist. Although it would be months till I stumbled upon a glimpse of what some might call God, it was here, at St. Anne’s, where I gained a tradition I had lost, a perspective I had thought impossible, a familiarity that let me feel a part of something, and a trust that may have ultimately led me straight out the door.

Our kitchen, growing up, reeked of canned beans and burnt edges. Grandma Kate knew of only one way to cook meat: crispy. On most nights, the fire alarm let us know that dinner was ready. The table was always set before I’d come running in, at the sound of her call, breathless from playing, to scrub the dirt from my fingernails. She was a diligent housewife, though at times she played the part of something far more independent. The matriarch, we called her; the gel to the whole damn bunch of us: her six, or five rather, and us three.

She responded to Grandma Kate, or just Kate, or Kitty, as her friends from St. Cecelia’s called her, or Catherine, as she generally introduced herself, or Gma, as I later deemed her: all names necessary to do and be everything that she was to all of us.

She and I had our moments through my adolescence where the chasm of generations between us was more evident than we’d bother to address. They’d sold their five-bedroom home in Manor Club when it was just she and Grandpa left alone inside the walls bearing all their memories. The house had character worn into its beams by years of raising six children and consequently taking the abuse of the (then) eleven grandchildren like a docile Golden Retriever.

It wasn’t long after my grandpa died that I moved back in with Gma. At fifteen, it was just she and I in their new two-bedroom condo, like college roommates, bickering at each other’s annoying habits, ridiculing each other’s guests, and sharing intimate details about each other’s lives when all guards were off and each other was all we had.

Despite our differences, her narratives always fascinated me. I had grown up on Gma’s tales and adventures of her youth. In most of her stories, she depicted the trials of the Depression and conversely, the joys of simplicity. She encouraged any craft that didn’t involve sitting in front of the television. She believed in hard work, and despite her dyslexia, was the first woman to graduate from Catholic University’s Architectural School in the mid-1940s. “Of course,” she said. “There was no such thing as dyslexia in my day. Those nuns damn near had me convinced I was just plain dumb.”

She was a trained painter and teacher, a fine quilter, gardener, and proud lefty. She had more sides to her than a rainbow-scattering prism. When we were young and curious, flooding her with questions, we’d “look it up” together. When we had ideas, no matter how silly, she’d figure out a plan to somehow help us make it happen. All of us grandkids had ongoing special projects at any given time: whether it was building in the garage, sewing in the living room, painting in the basement, or taking long, often lost, “adventures” that brought us closer to her past.

She was from Washington DC, so subway rides from Silver Spring into the city were a regular episode. We spent so many hours in the Museum of Natural History I might attribute one of my cavities to its famous astronaut ice cream. We also went to see the cherry blossoms when they were in bloom each year, visited the National Zoo and toured the Washington Monument as well as several of the surviving parks and canal trails from her childhood.

It was on these journeys that she and I would discuss life, politics, war, religion, and whatever else came to mind. She was a woman of many words, so silences were few and far between. I got to know her opinion on just about everything because nothing was typically left unsaid, nothing.

By the time I was in high school and college, the only music we could agree on in the car was the Sister Act soundtrack. On our longer jaunts, when conversation dropped to a minimum, I would toss in the tape before the banter went sour, which was a given with our opposite views on nearly everything. I’d slide back the sunroof, and we’d sing to our hearts’ content.

“Hail mother of mercy and of love. Oh, Maria!”

She played the grouchy old nun, while I was Whoopi, trying to change her stubborn ways.

Gma and I both loved musicals, but while I was off scalping tickets to see Rent on Broadway, which she would have found too loud and too crude (God knows she would have had a thing or two to say about the “fairy” drag queen), she was content with her video of Fiddler on the Roof.

As I sat in the theater recently for the Broadway performance of The Lion King, I couldn’t help but picture her sitting there beside me, her big, brown eyes shifted right with her good ear turned to the stage; it was a show we would have both agreed on.

For the theater, she would have wetted down her short, wispy gray hair and parted it to the side and then patted it down just so with both hands. A blouse and a skirt would have already been picked out, lying on the bed. The blouse would get tucked in and the belt fastened not too far below her bra line. Then she’d unroll her knee-highs from the toes and slip on some open-toe sandals, depending on the season; she didn’t mind if the hose showed. Some clip-on earrings might have made their way to her virgin lobes, if she remembered, and she would have puckered up in the hallway mirror with a tube of Clairol’s light pink lipstick from her pocketbook before announcing that she was ready.

Gma would have loved the costumes, the music, the precision in each detail. And in the car ride home, I can hear her now, yelling over the drone of the car’s engine because her hearing aid had remained in the dresser drawer since the day she brought it home. “There wasn’t but one white fella’ in the whole gosh dern show. Every last one of ’em was black as the day is long, but boy could they sing. God, what beautiful voices they had, and even as deaf as I am I could understand what they were saying. They were all so well spoken.”

Rarely does a day go by that I don’t smile at one of her idioms or imagine one of her crazy shenanigans, her backward lessons, or silly songs. I used to feel guilty about how much more time I spent missing her than I did my own mother. I guess it makes sense, though, to miss what I knew for far longer, and I suppose I had been swimming laps in the gaping void I housed for my mom.

Over the years, I often thought if I truly searched for my mom she would give me a sign, but where would I even look? Or would I even dare? Gma believed in those kinds of things, and despite having long lost my religion, she made me believe.

She told me a story once, without even looking up from the quilt she mended, about a dark angel who sat in a chair by the window in the corner of the room, accompanying her in the hospital as her mother lay on her deathbed gripped by cancer. She said the angel’s presence alone had been enough to give her peace. I had watched her get misty-eyed while she brought herself back to the scene, still pushing the thimble to the fabric. Another time, she continued, she sat on the front step of their first house on Pine Hill in hysterics, as she’d just gotten word of her three-year-old daughter’s cancer diagnosis; she’d felt a hand on her shoulder, enough to calm her. She knew then she wasn’t alone.

These conversations became typical when it was just us. When she cried, so did I. We wore each other’s pain like thick costume makeup, nothing a good cry and some heavy cold cream couldn’t take off. She shared with me her brinks of meltdowns after losing my mother, and I grew up knowing that she had far more depth than her overt simplicity echoed.

It wasn’t until my later college years, though, that we became so close we were able to overlook most of our differences. By then, I wanted all the time I spent running away back; I wanted my high school bad attitude and disrespect erased; I wanted the smell of my cigarette smoke in her station wagon to finally go away. She was my history. She was my companion. She was home to me.

In the last few years, we shared our haunts, our fears, our regrets. Yet, we laughed a lot. She never minded being the butt of any good joke. She got crazier and goofier in her old age, shedding more of her crossbred New England proper and Southern Belle style. One of my favorite memories was of the time my college roommate, Kathleen, and I taught her how to play “Asshole” at our Bethany, Delaware, beach house.

Gma had said, “The kids were all down here whoopin’ it up the other night playing a game, havin’ a good ol’ time, hootin’ and hollerin’. I would like to learn that game. They kept shouting some curse word. What’s it called again?”

“Asshole?” I had said.

“Yup, that must be it. Asshole sounds right. Think you can teach this old bird?”

Kathleen and I nearly fell over at the request but were obliged to widen Gma’s eyes to the awesome college beer-drinking game full of presidents, assholes, and beer bitches. And she loved it, quite possibly a little tipsy after a few rounds. We didn’t typically play Asshole with Jacob’s Creek chardonnay.

Throughout the course of several conversations, Gma assured me that she’d had a good life and when the time came, she’d be ready. In those last few years, if I stood in her condo and so much as mentioned the slightest gesture of admiration toward anything she owned, she’d say, “Write your name on the back.” She’d have the Scotch tape and a Sharpie out before I could even reconsider.

It was 2003, a year into my teaching career, when Gma finally expressed how proud she was of me. She said that my mom had always wanted to be a teacher, that she was surely proud of me, too. I’ll never forget waking up to my brother’s phone call, his voice solemn. I was devastated.

It was my mom and Gma who helped Brooke and me get our house, I always said. I had signed a contract to start at St. Anne’s in the fall, so we needed a home outside the city that would make my new commute toward DC more bearable. Three years after Gma died, since I wasn’t speaking to God much in those days, I asked Mom and Gma to help us out if they could. Brooke, who only knew Gma through my incessant stories, was just as kooky as I was when it came to talking to the dead, so she never batted an eye at the references I made to the china cabinet.

Gma’s old antique china cabinet, green until she stripped, sanded, and painted it maroon the year she moved to her condo, sat in the dining room of our rented row house in Baltimore. (The smell of turpentine will always remind me of her leathered hands.) Sometimes, for no good reason, the door would fall slightly ajar, and each time it did, I swore she was trying to tell me something. While dating a girl I imagine Gma was not particularly fond of, I eventually had to put a matchbook in the door just to keep the damn thing closed; it creeped me out in the mornings when I’d wake up to the glass door gaping.

The exact night Brooke and I put the contract in on our house, we mentioned something to Gma before going to bed, kissing our hands and casually patting the side of the paint-chipped cabinet. The next morning: wide open. Two days later: contract accepted. I was elated; I’d never had such a good feeling about anything.

I felt so close to my team of guardian angels then. Everything seemed to be in its delightfully divine order, and I thanked them immensely from the moment we began the purchasing process until the time we moved in, displaying my gratitude thereafter with each stroke of my paintbrush and each rock pulled from the garden. I adored the home we were blessed with, our cute little cobblestone-accented condo, our very first house. Even though we knew it wasn’t a forever home, it was ours to make our own for now. And we did, or we started to.

So when the fairy tale began to fall apart, just a little over a year later, I couldn’t help but question everything: intentions, meaning. There was no sign from the china cabinet. None of it made sense, the reason behind it all, I mean. Sure, I had always known growing up that everything has its reason. I have lived by that motto, but I could make no sense of this. It’s one thing for a relationship to fall apart, but to have gone all this way, with the house to tie us even further? I was beside myself.

Needless to say, my bits of gratitude tapered off as I felt like I had less and less to be thankful for. I still talked to Mom and Gma, but not without first asking, “Why?” And something, quite possibly the silence that made the question seem rhetorical, told me I was going to have to get through this on my own. Perhaps it was a test of independence or a sudden stroke of bad karma for all the years spent being an obnoxious teenager, ungrateful, untrustworthy. Either way I was screwed; of that much I was certain.

I had always wanted to leave. To go away, I mean. Study abroad or go live in another state and explore. I had traveled a little in college but nowhere extensively. So, as all the boxes moved into our brand new house were unpacked and making their way into storage, the reality of being bound started creeping into my dreams through suffocation. I was faintly torn. Not enough to dampen the mood, because I imagined that somehow all that other stuff, my writing, my passions, would come later. It would all fall into place somehow. I guess I trusted even in the slightest possibility, although I knew that with each year of teaching, the job that was supposed to give me time off to be creative, I grew more and more comfortable and lackadaisical about pursuing my dreams.

I took a writing course online that drove in some discipline, only to drop it midway when things got complicated. Brooke often entertained the idea of moving to California, which kept me content, although I knew with the look of things that was only getting further and further from practical. But since being honest with myself wasn’t my strong suit, I ignored my intuition, and looking back, ignored a lot of signs that might have politely escorted me out the door rather than having it slammed in my face.

Chapter Two

SEX IN A PLURALISTIC SOCIETY

I took a course, Sex in a Pluralistic Society, in my last semester of college. Somehow I thought it was going to be a lecture on the sociology of gender. Keep in mind this was the same semester I tried to cram in all my last requirements, registering for other such gems as Plagues and People; Death, Dying and Bereavement; and History of Theology.

Yes, the sex class was the lighter side to my schedule, but my prude Catholic upbringing made a sex journal, “Or, if you don’t have a partner, make it a self-love journal,” a really difficult assignment. Plus, the guy who taught the class just creeped me out. The videos he made us watch: I’m still traumatized. A classmate and I thought to complain, on several occasions, but it was the last semester for both of us, so it’s fair to say that, like me, she left that sort of tenacity to the underclassmen.

Despite the dildos, the pornography, and the daylong discussion on G-spots, I did take away one valuable lesson from that loony old perv. It was toward the end of the semester when the concept of love was finally introduced. By then, I had done my fair share of heart breaking and had tasted the bitter side of breakup a few times myself. I was sure I knew everything he had to say.

Instead, I was surprised to find myself taking notes when he broke down the Greeks’ take on the four different kinds of love: agape, eros, philia, storge. We discussed unconditional love versus conditional love. Yeah, yeah; I knew all that. He went on to describe eros as manic love, obsessive love, desperate love.

“This is the kind of love movies are based on. It’s high energy, high drama, requires no sleep, is built on attraction, jealousy runs rampant; it comes in like a storm and subsides often as quickly as it came in.” I cringed when he said, “It’s immature love.”

And here, I thought, this is what it was all about. All lesbian love, at least all those wonderful, electrifying things! Eros: it even sounds erroneous.

It was when I was dating the most confident and beautiful, twinkly-eyed woman I’d ever laid my hands on, some four years later, that I was brought back to that lecture. Despite our good intentions and valiant attempts at maturity, Brooke and I had a relationship built on many of those very erroneous virtues. It was movie-worthy high passion infused with depths that felt like coming down from a rock star kind of party.

Perhaps it’s because it began all wrong. She was fresh out of college. I was already teaching, working weekends at a chick bar in Baltimore at the time, Coconuts, our very own Coyote Ugly. One night, a friend of hers (she admitted later) noticed me, in my finest wicker cowboy hat and cut-off shirt, slinging beers and lining up shots between stolen, flirtatious moments on the dance floor. A week later, Brooke and I were fixed up at a party. We were both in other relationships that we needed excuses to get out of, so why not? She was beautiful (did I say that already?), tall, caramel skin and hazel eyes, tomboy cute when she was feeling sporty, simply stunning when dressed to the nines. She even fell into her dad’s Brazilian accent after a few cocktails, which sealed it for me; I was enamored. Plus, she was a bona fide lesbian (a first for me), and we wore the same size shoe. What more do you need?

We did everything together: tennis, basketball, squash. She’d patiently sit on the beach while I surfed. I always said yes to her shopping trips. We even peed with the door open so as to not interrupt conversation. And I’m almost certain I slept right on top of her for at least a solid year; I’d never been considered a “peanut” before. In fact, I don’t think we separated at all for the first couple of years we dated, now that I think about it. Maybe for an odd trip, but it didn’t go without feeling like we’d lost a limb, I swear. We’d always say, “No more than five days,” as if we wouldn’t have been able to breathe on day six.

When we first started dating, I went to Japan for nine days to visit my brother Jack who had been stationed at Atsugi. I was pretty pathetic. It was my first time traveling alone, so when I stepped off the plane on foreign soil and my family wasn’t at the gate ready to collect me, I quickly reverted to my inner child, the sweeping panic stretched from my tippy toes to my fingertips.

It was the same feeling I used to get in Kmart when I’d look up from the shelf to tug at the skirt of the lady standing beside me, only to be both mortified and petrified when I realized that face and body didn’t belong to my Gma. I’m not sure who was supposed to be keeping track of whom, but whoever it was did so poorly. Hence the system I developed: I’d go sit in the back of our station wagon, where I knew it would be impossible for me to be forgotten among the dusty racks of stiff clothes. The first time I put this system into place, unbeknownst to anyone, I resurfaced from the car when the two police cars arrived, to see what all the hubbub was about. Boy, were they glad to see me when I strolled back through the automatic sliding doors, unaware of all the excitement I had started.

Thank God my sister-in-law found me in Tokyo after I’d already figured out how to work the phones and had dialed home. Brooke had calmed me down by talking me through the basics: I wasn’t lost; I was just on the brink of being found, she assured me. I’d hung up and collected myself by the time Jill and the kids arrived.

Every evening in Japan, I slid away from the family and hid in my room where I clumsily punched hundreds of calling card numbers into the phone just so I could hear Brooke’s voice before bed. And like me, she was dying inside at the distance between us.

Sure, there were some caution signs, some red flags being waved, but all the good seemed to outweigh the bad, and who’s perfect, really? I thought some of my ideologies about love were too lofty and maybe, just maybe, I had to accept that I would never have all that I desired from a relationship, like say, trust. Plus, people grow, they mature, relationships mature; surely we’d be the growing kind. We liked self-help books. We had a shelf where they sat, most of them at least half-read.

Her family loved me, and I adored them. Yes, it took a while for them to get used to the idea of me being more than just Brooke’s “roommate.” Thankfully, the week Brooke came out to her family, a close family friend, battling breast cancer, took a turn for the worse. Brooke came back at her parents’ retorts with, “Well, at least I don’t have cancer.” And to that, well, they had to agree.

Brooke and I traveled together. We loved the beach. We loved food and cute, quaint little restaurants. We loved playing house and raising a puppy. We loved talking about our future and a big fat gay wedding, and most of all we loved being loved. We bought each other flowers and little presents and surprised each other with dinner and trips and concert tickets. I’ll never forget the anniversary when she had me get all dressed up just to trick me into a beautiful candlelit dinner at home. I could have sat at that table forever, staring into her shimmering, smiling eyes, or let her hold me for just as long as we danced among the rose petals she’d scattered at our feet.

It was for all those reasons that the darkness never outweighed the light: not the screaming matches, the silent treatments, the distrust, or the jealousy. All those things seemed part of our short past when we began shopping for our first home. It was a blank slate, a new beginning: signing the paperwork, picking out furniture, remodeling our kitchen.

God, we danced so much in that kitchen.

We laughed at our goofy dog, Porter. We cried on our couch, watching movies. We supported each other in our few separate endeavors. We shared chores and “mom” duty and bills and credit cards. And I think it was under the weight of all the things of which we were once so proud that it all began to crumble. “Do you have to slam the cabinets like that?” as if I were picking up new habits to purposefully push her away. “I hate fighting like this in front of him!” she’d say, pointing at Porter. “Look, we’re making him nervous.”

She sobbed and sobbed, and her big beautiful eyes remained bloodshot for at least six months as I watched her slip away from me. I begged her to tell me what she needed, and even that she couldn’t do.

Brooke finally had a social life that I supported wholeheartedly, but that social life seemed to echo more and more of what was wrong with us. During the day things appeared fine and good and normal, but at night her cold shoulder sent me shivering further and further to the opposite side of the bed until I eventually moved into the spare bedroom.

I didn’t get it. I said that I did, that I understood, but I didn’t.

She spent an awful lot of time with a “friend.” Julie, a mutual friend, or so I thought. We all hung out together, so I didn’t think to question anything until it became more and more blatant. I would beg, “Just tell me what’s going on with the two of you. I’m a big girl. I’ll just walk away. But I can’t just sit around here feeling batty while you deny what I can see with my own two eyes!”

She wouldn’t admit to it. “Nothing is going on.” She said she just needed time to figure herself out.

In the meantime, I was still her home. I was still her best friend and even at the furthest distance she’d pushed me to, I was the one who calmed her when the weight of it all made her come unhinged.

I was the one who rubbed her back and kissed her forehead.

She wanted me to be an asshole, so she’d have an excuse. She wanted me to get pissed to lessen her compounding guilt. I’m not sure if it was that I couldn’t or that I wouldn’t do either of those things. I still hung onto what I’d promised with that sparkly little ring I’d given her, not the real thing, but a big promise. I had taken it all very seriously. “In sickness and in health.” And here she was before me, as far as I could see it, sick.

Well, sick was the only diagnosis that wouldn’t allow me to hate her as she inhabited our home with me, a platonic roommate, sometimes cold and aloof and other times recognizable and warm. I felt like we had somehow been dragged into the drama of a bad after-school special without the happy commercials of sugary cereals and toys that will never break or end up like the Velveteen Rabbit, who, ironically, I was really starting to resemble in the confines of our condo with its walls caving in.

While the final days of summer strode past in their lengthy hour, the honest words, “I want to take a break,” were inescapably spoken. I felt sick, stunned by the syllables as they fell from her lips. We’d been at the beach for the weekend where I naively thought we might be able to spend some time all to ourselves, mending the stacks of broken things between us. I knew this had to do with Julie, but still nobody had the guts to admit it. I was infuriated. So much so, that I reduced myself to checking cell phone logs and sleuthing around my own home. I hated myself for the lengths I allowed her to push me.

There was no way I could return to school as my signed contract promised. I couldn’t imagine focusing on my students while I was so busy focusing on my failing relationship. Although the last thing I wanted to do was uproot myself, I had finally begun to gather the pebbles of self-respect that would eventually become my new foundation. I had to go.

And with the phone call to my faculty chair, I did exactly what I never imagined I would do. I resigned. I had never been so excited to throw in the towel. Well, except for that one awful restaurant where I was too much of a coward to quit, so I faxed in my resignation an hour before my shift; that time felt good too. But this was different. I didn’t chicken out. I stood up to Barbara’s crucifix-firing cannon and prevailed.

When Brooke and I weren’t fighting or walking on eggshells around each other, she dove into my arms expressing her undying love for me, and I held the stranger I no longer connected with, consoling her. I didn’t know what to make of all the mixed emotions. I had taken my accusations to Julie herself to try to get some answers, but she laughed at my arguments, claiming Brooke was “too confused” to be dating anyone right now. Julie was older, with graying wisps, loafers and pleated pants. To look at her anymore made me sick. And, after all, Brooke still wore the ring I’d given her. Still, after nine months, none of it made any sense.

The night she woke me, cross-legged on the floor at my bed because she couldn’t sleep and it was driving her crazy, she looked desperate. I held her and stroked her hair, calming her with my patient voice, exuding every ounce of love that could look past my own pain to reduce hers. Healthy? Probably not. But that was the only way I knew how to love her. To put everything of me aside. Everything.

I have always wanted a family. From the time I was little I knew I would be a mom. At eight, I thought marrying a rich man and becoming a housewife was the golden ticket to true happiness, along with becoming the president, a monkey trainer, and a marine biologist. My pending future changed with the weather, but rich was almost always a constant. A valid measure of success at eight, I suppose. A family, and its entire construct, was very important to me: the house, the dog, the hus- (or now the wife), all of it.

And that’s what Brooke and I had, or we talked like we did. Raising our puppy from ten weeks to his “man”hood and buying household goods on joint credit cards. We were all grown up like a real family. With our names linked on more than just the dog’s birth certificate, “taking a break” was really a separation and anything beyond that was really a divorce. I hadn’t reached that logic in my head, perhaps because I still refused to believe that all I imagined was disintegrating before me, where I stood, clenching fistfuls of hopeless dust.

Chapter Three

ON THE GOOD FOOT

I toyed with the idea of California, as I had always talked about. No reason to stay here. Seriously, with no excuses holding me back, I searched tirelessly for jobs on craigslist day in and day out. And there was an edge of excitement in taking control, or that’s what I convinced myself was going on. I applied for a few teaching jobs in California, Colorado, British Columbia, and even New York. I was intrigued by the schools that touted their outdoor education programs and offered classes like rock climbing and snowboarding. I reasoned with myself: teaching can’t be all that bad with a mountain backdrop and class cancelations for white-water rafting.

*

from

I Think I’ll Make It. A True Story of Lost and Found

by Kat Hurley

get it at Amazon.com

Mental Illness, Why Some and not Others? Gene-Environment Interaction and Differential Susceptibility – Scott Barry Kaufman * Gene-Environment Interaction in Psychological Traits and Disorders – Danielle M. Dick * Differential Susceptibility to Environmental Influences – Jay Belsky.

“Whether your story is about having met with emotional pain or physical pain, the important thing is to take the lid off of those feelings. When you keep your emotions repressed, that’s when the body starts to try to get your attention. Because you aren’t paying attention. Our childhood is stored up in our bodies, and one day, the body will present its bill.”

Bernie Siegel MD


In recent years numerous studies have shown the importance of gene-environment interactions in psychological development, but here’s the thing: we’re not just finding that the environment matters in determining whether mental illness exists. What we’re discovering is far more interesting and nuanced: Some of the very same genes that under certain environmental conditions are associated with some of the lowest lows of humanity, under supportive conditions are associated with the highest highs of human flourishing.

Evidence that adverse rearing environments exert negative effects particularly on children and adults presumed “vulnerable” for temperamental or genetic reasons may actually reflect something else: heightened susceptibility to the negative effects of risky environments and to the beneficial effects of supportive environments. Putatively vulnerable children and adults are especially susceptible to both positive and negative environmental effects.

Children rated highest on externalizing behavior problems by teachers across the primary school years were those who experienced the most harsh discipline prior to kindergarten entry and who were characterized by mothers at age 5 as being negatively reactive infants.

Susceptibility factors are the moderators of the relation between the environment and developmental outcome. Is it that negativity actually reflects a highly sensitive nervous system on which experience registers powerfully negatively when not regulated by the caregiver, but positively when coregulation occurs?
Referred to by some scientists as the “differential susceptibility hypothesis”, these findings shouldn’t be understated. They are revolutionary, and suggest a serious rethinking of the role of genes in the manifestation of our psychological traits and mental “illness”. Instead of all of our genes coding for particular psychological traits, it appears we have a variety of genetic mutations that are associated with sensitivity to the environment, for better and worse.

The epigenetic modifications first characterized (cell specialization, X inactivation, genomic imprinting) all occur early in development and are stable. The discovery that epigenetic modifications continue to occur across development, and can be reversible and more dynamic, has represented a major paradigm shift in our understanding of environmental regulation of gene expression.

Glossary
Gene: Unit of heredity; a stretch of DNA that codes for a protein.
GxE: Gene-environment Interaction.
Epigenetics: Modifications to the genome that do not involve a change in nucleotide sequence.
Heritability: The proportion of total phenotypic variance that can be accounted for by genetic factors.
Logistic Regression: A statistical method for analyzing a dataset in which one or more independent variables determine an outcome. The outcome (dependent variable) is dichotomous, i.e. it contains only data coded as 1 (TRUE, success, pregnant, etc.) or 0 (FALSE, failure, non-pregnant, etc.).
Transcription Factor: In molecular biology, a transcription factor (TF) (or sequence-specific DNA-binding factor) is a protein that controls the rate of transcription of genetic information from DNA to messenger RNA, by binding to a specific DNA sequence. The function of TFs is to regulate – turn on and off – genes in order to make sure that they are expressed in the right cell at the right time and in the right amount throughout the life of the cell and the organism.
Nucleotide: Organic molecules that are the building blocks of DNA and RNA. They also have functions related to cell signaling, metabolism, and enzyme reactions.
MZ: Monozygotic. Of twins derived from a single ovum (egg), and so identical.
DZ: Dizygotic. Of twins derived from two separate ova (eggs). Fraternal twin or nonidentical twin.
DNA: Deoxyribonucleic Acid.
RNA: Ribonucleic acid is a polymeric molecule essential in various biological roles in coding, decoding, regulation, and expression of genes. RNA and DNA are nucleic acids, and, along with lipids, proteins and carbohydrates, constitute the four major macromolecules essential for all known forms of life.
Polymorphism: A location in a gene that comes in multiple forms.
Allele: Natural variation in the genetic sequence; can be a change in a single nucleotide or longer stretches of DNA.
GWAS: Genome-wide Association Study.
ORs: Odds Ratios.
Phenotype: The observed outcome under study; can be the manifestation of both genetic and/or environmental factors.
Dichotomy: A division or contrast between two things that are or are represented as being opposed or entirely different.
Chromosome: A single piece of coiled DNA containing many genes, regulatory elements, and other nucleotide sequences.
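The heritability, MZ, and DZ entries above connect through Falconer's classic formula: since MZ twins share essentially all their segregating genes and DZ twins about half on average, doubling the gap between the two twin correlations gives a rough estimate of heritability. A minimal sketch in Python (the correlation values are hypothetical, purely for illustration):

```python
def falconer_heritability(r_mz, r_dz):
    """Rough heritability estimate: h^2 = 2 * (rMZ - rDZ).

    MZ twins share ~100% of their segregating genes, DZ twins ~50% on
    average, so twice the correlation gap approximates the proportion of
    phenotypic variance attributable to additive genetic factors."""
    return 2 * (r_mz - r_dz)


def shared_environment(r_mz, r_dz):
    """c^2 = rMZ - h^2: variance due to the environment twins share."""
    return r_mz - falconer_heritability(r_mz, r_dz)


# Hypothetical twin correlations for some phenotype:
h2 = falconer_heritability(0.70, 0.45)  # ~0.50
c2 = shared_environment(0.70, 0.45)     # ~0.20
```

This back-of-the-envelope estimator assumes purely additive genetic effects and equal environments for MZ and DZ pairs; modern twin analyses fit full variance-component models instead, but the intuition is the same.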

Gene-Environment Interaction and Differential Susceptibility

Scott Barry Kaufman

Only a few genetic mutations have been discovered so far that demonstrate differential susceptibility effects. Most of the genes that have been discovered contribute to the production of the neurotransmitters dopamine and serotonin. Both of these biological systems contribute heavily to many aspects of engagement with the world, positive emotions, anxiety, depression, and mood fluctuations. So far, the evidence suggests (but is still tentative) that certain genetic variants under harsh and abusive conditions are associated with anxiety and depression, but that the very same genetic variants are associated with the lowest levels of anxiety, depression, and fear under supportive, nurturing conditions. There hasn’t been too much research looking at differential susceptibility effects on other systems that involve learning and exploration, however.

Enter a brand new study

Rising superstar Rachael Grazioplene and colleagues focused on the cholinergic system, a biological system crucially involved in neural plasticity and learning. Situations that activate the cholinergic system involve “expected uncertainty”, such as going to a new country you’ve never been to before and knowing that you’re going to face things you’ve never faced before. This stands in contrast to “unexpected uncertainty”, which occurs when your expectations are violated, such as thinking you’re going to a family-friendly Cirque du Soleil show only to realize you’ve actually gotten a ticket to an all-male dance revue called “Thunder from Down Under” (I have no idea where that example came from). Those sorts of experiences are more strongly related to the neurotransmitter norepinephrine.

Since the cholinergic system is most active in situations when a person can predict that learning is possible, this makes the system a prime candidate for the differential susceptibility effect. As the researchers note, unpredictable and novel environments could function as either threats or incentive rewards. When the significance of the environment is uncertain, both caution and exploration are adaptive. Therefore, traits relating to anxiety or curiosity should be influenced by cholinergic genetic variants, with developmental experiences determining whether individuals find expected uncertainty either more threatening or more promising.

To test their hypothesis, they focused on a polymorphism in the CHRNA4 gene, which builds a certain kind of neural receptor that the neurotransmitter binds to. These acetylcholine receptors are distributed throughout the brain, and are especially involved in the functioning of dopamine in the striatum. Genetic differences in the CHRNA4 gene seem to change the sensitivity of the brain’s acetylcholine system because small structural changes in these receptors make acetylcholine binding more or less likely. Previous studies have shown associations between variation in the CHRNA4 gene and neuroticism as well as laboratory tests of attention and working memory.

The researchers looked at the functioning of this gene among a group of 614 children aged 8-13 enrolled in a week-long day camp. Half of the children in the day camp were selected because they had been maltreated (sexual maltreatment), whereas the other half were carefully selected to come from the same socioeconomic status but to have experienced no maltreatment. This study provides the ideal experimental design and environmental conditions to test the differential susceptibility effect. Not only were the backgrounds of the children clearly defined, they were also dramatically different from each other. Additionally, all children engaged in the same novel learning environment, an environment well suited for cholinergic functioning. What did they find?

Individuals with the T/T variation of the CHRNA4 gene who were maltreated showed higher levels of anxiety (Neuroticism) compared to those with the C allele of this gene. They appeared more likely to learn fearful responses under higher levels of uncertainty. In contrast, those with the T/T allele who were not maltreated were low in anxiety (Neuroticism) and high in curiosity (Openness to Experience). What’s more, this effect was independent of age, race, and sex.

In supportive environments, the T/T allele (which is much rarer in the general population than the C allele) may be beneficial, bringing out lower levels of anxiety and increased curiosity in response to situations containing expected uncertainty.

These results are certainly exciting, but a few important caveats are in order. For one thing, the T/T genotype is very rare in the general population, which makes it all the more important for future studies to attempt to replicate these findings. Also, we’re talking vanishingly small effects here. The CHRNA4 variant only explained at most 1% of the variation in neuroticism and openness to experience. So we shouldn’t go around trying to predict individual people’s futures based on knowledge of a single gene and a single environment.

Scientifically speaking though, this level of prediction is expected based on the fact that all of our psychological dispositions are massively polymorphic (consisting of many interacting genes). Both gene-gene and gene-environment interactions must be taken into account.

Indeed, recent research found that the more sensitivity (“plasticity”) genes relating to the dopamine and serotonin systems adolescent males carried, the less self-regulation they displayed under unsupportive parenting conditions. In line with the differential susceptibility effect, the reverse was also found: higher levels of self-regulation were displayed by the adolescent males carrying more sensitivity genes when they were reared under supportive parenting conditions.

The findings by Grazioplene and colleagues add to a growing literature on acetylcholine’s role in the emergence of schizophrenia and mood disorders. As the researchers note, these findings, while small in effect, may have clinical relevance: maltreatment is a known risk factor for many psychiatric disorders. Children with the T/T genotype of CHRNA4 rs1044396 may be more likely to learn fearful responses in harsh and abusive environments, but children with the very same genotype may be more likely to display curiosity and engagement in response to uncertainty under normal or supportive conditions.

While it’s profoundly difficult to predict the developmental trajectory of any single individual, this research suggests we can influence the odds that people will retreat within themselves or unleash the fundamentally human drive to explore and create.

Gene-Environment Interaction in Psychological Traits and Disorders

Danielle M. Dick

There has been an explosion of interest in studying gene-environment interactions (GxE) as they relate to the development of psychopathology. In this article, I review different methodologies to study gene-environment interaction, providing an overview of methods from animal and human studies and illustrations of gene-environment interactions detected using these various methodologies. Gene-environment interaction studies that examine genetic influences as modeled latently (e.g., from family, twin, and adoption studies) are covered, as well as studies of measured genotypes.

Importantly, the explosion of interest in gene-environment interactions has raised a number of challenges, including difficulties with differentiating various types of interactions, power, and the scaling of environmental measures, which have profound implications for detecting gene-environment interactions. Taking research on gene-environment interactions to the next level will necessitate close collaborations between psychologists and geneticists so that each field can take advantage of the knowledge base of the other.

INTRODUCTION

Gene-environment interaction (GxE) has become a hot topic of research, with an exponential increase in interest in this area in the past decade. Consider that PubMed lists only 24 citations for “gene environment interaction” prior to the year 2000, but nearly four times that many in the first half of the year 2010 alone! The projected publications on gene-environment interaction for 2008–2010 are on track to constitute more than 40% of the total number of publications on gene-environment interaction indexed in PubMed.

Where does all this interest stem from? It may, in part, reflect a merging of interests from fields that were traditionally at odds with one another. Historically, there was a perception that behavior geneticists focused on genetic influences on behavior at the expense of studying environmental influences and that developmental psychologists focused on environmental influences and largely ignored genetic factors. Although this criticism is not entirely founded on the part of either field, methodological and ideological differences between these respective fields meant that genetic and environmental influences were traditionally studied in isolation.

More recently, there has been recognition on the part of both of these fields that both genetic and environmental influences are critical components to developmental outcome and that it is far more fruitful to attempt to understand how these factors come together to impact psychological outcomes than to argue about which one is more important. As Kendler and Eaves argued in their article on the joint effect of genes and environments, published more than two decades ago:

It is our conviction that a complete understanding of the etiology of most psychiatric disorders will require an understanding of the relevant genetic risk factors, the relevant environmental risk factors, and the ways in which these two risk factors interact. Such understanding will only arise from research in which the important environmental variables are measured in a genetically informative design. Such research will require a synthesis of research traditions within psychiatry that have often been at odds with one another in the past. This interaction between the research tradition that has focused on the genetic etiology of psychiatric illness and that which has emphasized environmental causation will undoubtedly be to the benefit of both. (Kendler & Eaves 1986, p. 288)

The PubMed data showing an exponential increase in published papers on gene-environment interaction suggest that that day has arrived. This has been facilitated by the rapid advances that have taken place in the field of genetics, making the incorporation of genetic components into traditional psychological studies a relatively easy and inexpensive endeavor. But with this surge of interest in gene-environment interaction, a number of new complications have emerged, and the study of gene-environment interaction faces new challenges, including a recent backlash against studying gene-environment interaction (Risch et al. 2009). Addressing these challenges will be critical to moving research on gene-environment interaction forward in a productive way.

In this article, I first review different study designs for detecting gene-environment interaction, providing an overview of methods from animal and human studies. I cover gene-environment interaction studies that examine genetic influences as modeled latently as well as studies of measured genotypes. In the study of latent gene-environment interaction, specific genotypes are not measured, but rather genetic influence is inferred based on observed correlations between people who have different degrees of genetic and environmental sharing. Thus, latent gene-environment interaction studies examine the aggregate effects of genes rather than any one specific gene.

Molecular genetic studies, in contrast, have generally focused on one specific gene of interest at a time. Relevant examples of gene-environment interaction across these different methodologies are provided, though these are meant to be more illustrative than exhaustive, intended to introduce the reader to relevant studies and findings generated across these various designs.

Subsequently I review more conceptual issues surrounding the study of gene-environment interaction, covering the nature of gene-environment interaction effects as well as the challenges facing the study of gene-environment interaction, such as difficulties with differentiating various types of interactions, and how issues such as the scaling of environmental measures can have profound implications for studying gene-environment interaction.

I include an overview of epigenetics, a relatively new area of study that provides a potential biological mechanism by which the environment can moderate gene expression and affect behavior.

Finally, I conclude with recommendations for future directions and how we can take research on gene-environment interaction to the next level.

DEFINING GENE-ENVIRONMENT INTERACTION AND DIFFERENTIATING GENE-ENVIRONMENT CORRELATION

It is important to first address some aspects of terminology surrounding the study of gene-environment interaction. In lay terms, the phrase gene-environment interaction is often used to mean that both genes and environments are important. In statistical terms, this does not necessarily indicate an interaction but could be consistent with an additive model, in which there are main effects of the environment and main effects of genes.

But in a statistical sense an interaction is a very specific thing, referring to a situation in which the effect of one variable cannot be understood without taking into account the other variable. Their effects are not independent. When we refer to gene-environment interaction in a statistical sense, we are referring to a situation in which the effect of genes depends on the environment and/or the effect of the environment depends on genotype. We note that these two alternative conceptualizations of gene-environment interaction are indistinguishable statistically. It is this statistical definition of gene-environment interaction that is the primary focus of this review (except where otherwise noted).
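In regression terms, that statistical definition corresponds to a product term: the outcome depends not just on genotype and environment separately but on genotype × environment, so the environmental effect differs by genotype. A minimal sketch in Python, using a toy logistic model with made-up coefficients (none of these numbers come from any real study):

```python
import math

def p_disorder(g, e, b0=-2.0, bg=0.3, be=0.8, bge=1.2):
    """Probability of an outcome under a toy logistic G x E model.

    g: genotype (0 = no risk allele, 1 = risk-allele carrier)
    e: environment (0 = supportive, 1 = adverse)
    All coefficients are illustrative assumptions, not estimates.
    The bge * g * e term is the statistical interaction: if bge were 0,
    gene and environment effects would simply add on the logit scale."""
    logit = b0 + bg * g + be * e + bge * g * e
    return 1.0 / (1.0 + math.exp(-logit))

# Because bge != 0, the effect of an adverse environment depends on genotype:
effect_noncarrier = p_disorder(0, 1) - p_disorder(0, 0)  # ~0.11
effect_carrier    = p_disorder(1, 1) - p_disorder(1, 0)  # ~0.42
```

With these assumed coefficients the adverse environment raises risk far more for carriers than for noncarriers; a differential-susceptibility (crossover) pattern would additionally make carriers do *better* than noncarriers in the supportive environment, which amounts to choosing coefficients whose gene effect changes sign across environments.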

It is also important to note that genetic and environmental influences are not necessarily independent factors. That is to say that although some environmental influences may be largely random, such as experiencing a natural disaster, many environmental influences are not entirely random (Kendler et al. 1993).

This phenomenon is called gene-environment correlation.

Three specific ways by which genes may exert an effect on the environment have been delineated (Plomin et al. 1977, Scarr & McCartney 1983):

(a) Passive gene-environment correlation refers to the fact that among biologically related relatives (i.e., nonadoptive families), parents provide not only their children’s genotypes but also their rearing environment. Therefore, the child’s genotype and home environment are correlated.

(b) Evocative gene-environment correlation refers to the idea that individuals’ genotypes influence the responses they receive from others. For example, a child who is predisposed to having an outgoing, cheerful disposition might be more likely to receive positive attention from others than a child who is predisposed to timidity and tears. A person with a grumpy, abrasive temperament is more likely to evoke unpleasant responses from coworkers and others with whom he/she interacts than is a cheerful, friendly person. Thus, evocative gene-environment correlation can influence the way an individual experiences the world.

(c) Active gene-environment correlation refers to the fact that an individual actively selects certain environments and takes away different things from his/her environment, and these processes are influenced by an individual’s genotype. Therefore, an individual predisposed to high sensation seeking may be more prone to attend parties and meet new people, thereby actively influencing the environments he/she experiences.

Evidence exists in the literature for each of these processes. The important point is that many sources of behavioral influence that we might consider “environmental” are actually under a degree of genetic influence (Kendler & Baker 2007), so often genetic and environmental influences do not represent independent sources of influence. This also makes it difficult to determine whether the genes or the environment is the causal agent. If, for example, individuals are genetically predisposed toward sensation seeking, and this makes them more likely to spend time in bars (a gene-environment correlation), and this increases their risk for alcohol problems, are the predisposing sensation-seeking genes or the bar environment the causal agent?

In actuality, the question is moot, they both played a role; it is much more informative to try to understand the pathways of risk than to ask whether the genes or the environment was the critical factor. Though this review focuses on gene-environment interaction, it is important for the reader to be aware that this is but one process by which genetic and environmental influences are intertwined. Additionally, gene-environment correlation must be taken into account when studying gene-environment interaction, a point that is mentioned again later in this review. Excellent reviews covering the nature and importance of gene-environment correlation also exist (Kendler 2011).

METHODS FOR STUDYING GENE-ENVIRONMENT INTERACTION

Animal Research

Perhaps the most straightforward method for detecting gene-environment interaction is found in animal experimentation: Different genetic strains of animals can be subjected to different environments to directly test for gene-environment interaction. The key advantage of animal studies is that environmental exposure can be made random to genotype, eliminating gene-environment correlation and associated problems with interpretation.

The most widely cited example of this line of research is Cooper and Zubek’s 1958 experiment, in which rats were selectively bred to perform differently in a maze-running experiment (Cooper & Zubek 1958). Under standard environmental conditions, one group of rats consistently performed with few errors (“maze bright”), while a second group committed many errors (“maze dull”). These selectively bred rats were then exposed to various environmental conditions: an enriched condition, in which rats were reared in brightly colored cages with many moveable objects, or a restricted condition, in which there were no colors or toys. The enriched condition had no effect on the maze bright rats, although it substantially improved the performance of the maze dull rats, such that there was no difference between the groups.

Conversely, the restrictive environment did not affect the performance of the maze dull rats, but it substantially diminished the performance of the maze bright rats, again yielding no difference between the groups and demonstrating a powerful gene-environment interaction.

A series of experiments conducted by Henderson on inbred strains of mice, in which environmental enrichment was manipulated, also provides evidence for gene-environment interaction on several behavioral tasks (Henderson 1970, 1972). These studies laid the foundation for many future studies, which collectively demonstrate that environmental variation can have considerable differential impact on outcome depending on the genetic make-up of the animal (Wahlsten et al. 2003).

However, animal studies are not without their limitations. Gene-environment interaction effects detected in animal studies are still subject to the problem of scale (Mather & Jinks 1982), as discussed in greater detail later in this review.

Human Research

Traditional behavior genetic designs

Demonstrating gene-environment interaction in humans has been considerably more difficult, because ethical constraints require researchers to make use of natural experiments in which environmental exposures are not random. Three traditional study designs have been used to demonstrate genetic influence on behavior: family studies, adoption studies, and twin studies. These designs have also been used to detect gene-environment interaction, and each is discussed in turn.

Family studies

Demonstration that a behavior aggregates in families is the first step in establishing a genetic basis for a disorder (Hewitt & Turner 1995). Decreasing similarity with decreasing degrees of relatedness lends support to genetic influence on a behavior (Gottesman 1991). This is a necessary, but not sufficient, condition for heritability. Similarity among family members is due both to shared genes and shared environment; family studies cannot tease apart these two sources of variance to determine whether familiality is due to genetic or common environmental causes (Sherman et al. 1997).

However, family studies provide a powerful method for identifying gene-environment interaction. By comparing high-risk children, identified as such by the presence of psychopathology in their parents, with a control group of low-risk individuals, it is possible to test the effects of environmental characteristics on individuals varying in genetic risk (Cannon et al. 1990).

In a high-risk study of Danish children with schizophrenic mothers and matched controls, institutional rearing was associated with an elevated risk of schizophrenia only among those children with a genetic predisposition (Cannon et al. 1990). When these subjects were further classified on genetic risk as having one or two affected parents, a significant interaction emerged between degree of genetic risk and birth complications in predicting ventricle enlargement: The relationship between obstetric complications and ventricular enlargement was greater in the group of individuals with one affected parent as compared to controls, and greater still in the group of individuals with two affected parents (Cannon et al. 1993). Another study also found that among individuals at high risk for schizophrenia, experiencing obstetric complications was related to an earlier hospitalization (Malaspina et al. 1999).

Another creative method has made use of the natural experiment of family migration to demonstrate gene-environment interaction: The high rate of schizophrenia among African-Caribbean individuals who emigrated to the United Kingdom is presumed to result from gene-environment interaction. Parents and siblings of first-generation African-Caribbean probands have risks of schizophrenia similar to those for white individuals in the area. However, the siblings of second-generation African-Caribbean probands have markedly elevated rates of schizophrenia, suggesting that the increase in schizophrenia rates is due to an interaction between genetic predispositions and stressful environmental factors encountered by this population (Malaspina et al. 1999, Moldin & Gottesman 1997).

Although family studies provide a powerful design for demonstrating gene-environment interaction, there are limitations to their utility. High-risk studies are very expensive to conduct because they require the examination of individuals over a long period of time. Additionally, a large number of high-risk individuals must be studied in order to obtain a sufficient number of individuals who eventually become affected, due to the low base rate of most mental disorders. Because of these limitations, few examples of high-risk studies exist.

Adoption studies

Adoption and twin studies are able to clarify the extent to which similarity among family members is due to shared genes versus shared environment. In their simplest form, adoption studies involve comparing the extent to which adoptees resemble their biological relatives, with whom they share genes but not family environment, with the extent to which adoptees resemble their adoptive relatives, with whom they share family environment but not genes.

Adoption studies have been pivotal in advancing our understanding of the etiology of many disorders and drawing attention to the importance of genetic factors. For example, Heston’s historic adoption study was critical in dispelling the myth of schizophrenogenic mothers in favor of a genetic transmission explaining the familiality of schizophrenia (Heston & Denney 1967).

Furthermore, adoption studies provide a powerful method of detecting gene-environment interactions and have been called the human analogue of strain-by-treatment animal studies (Plomin & Hershberger 1991). The genotype of adopted children is inferred from their biological parents, and the environment is measured in the adoptive home. Individuals thought to be at genetic risk for a disorder, but reared in adoptive homes with different environments, are compared to each other and to control adoptees.

This methodology has been employed by a number of research groups to document gene-environment interactions in a variety of clinical disorders: In a series of Iowa adoption studies, Cadoret and colleagues demonstrated that a genetic predisposition to alcohol abuse predicted major depression in females only among adoptees who also experienced a disturbed environment, as defined by psychopathology, divorce, or legal problems among the adoptive parents (Cadoret et al. 1996).

In another study, depression scores and manic symptoms were found to be higher among individuals with a genetic predisposition and a later age of adoption (suggesting a more transient and stressful childhood) than among those with only a genetic predisposition (Cadoret et al. 1990).

In an adoption study of Swedish men, mild and severe alcohol abuse were more prevalent only among men who had both a genetic predisposition and more disadvantaged adoptive environments (Cloninger et al. 1981).

The Finnish Adoptive Family Study of Schizophrenia found that high genetic risk was associated with increased risk of schizophrenic thought disorder only when combined with communication deviance in the adoptive family (Wahlberg et al. 1997).

Additionally, the adoptees had a greater risk of psychological disturbance, defined as neuroticism, personality disorders, and psychoticism, when the adoptive family environment was disturbed (Tienari et al. 1990).

These studies have demonstrated that genetic predispositions for a number of psychiatric disorders interact with environmental influences to manifest disorder.

However, adoption studies suffer from a number of methodological limitations. Adoptive parents and biological parents of adoptees are often not representative of the general population. Adoptive parents tend to be socioeconomically advantaged and to have lower rates of mental health problems, owing to the extensive screening procedures conducted by adoption agencies (Kendler 1993). Biological parents of adoptees also tend to be atypical, but in the opposite direction. Additionally, selective placement by adoption agencies confounds the clear-cut separation between genetic and environmental effects by matching adoptees and adoptive parents on demographics such as race and religion. An increasing number of adoptions also allow contact between biological parents and adopted children, further eroding the traditional separation of genetic and environmental influences that made adoption studies useful for genetically informative research.

Finally, greater contraceptive use is making adoption increasingly rare (Martin et al. 1997). Accordingly, this research strategy has become increasingly challenging, though a number of current adoption studies continue to make important contributions to the field (Leve et al. 2010; McGue et al. 1995, 1996).

Twin studies

Twins provide a number of ways to study gene-environment interaction. One such method is to study monozygotic twins reared apart (MZA). MZAs provide a unique opportunity to study the influence of different environments on identical genotypes. In the Swedish Adoption/Twin Study of Aging, data from 99 pairs of MZAs were tested for interactions between childhood rearing and adult personality (Bergeman et al. 1988).

Several significant interactions emerged. In some cases, the environment had a stronger impact on individuals genetically predisposed to be low on a given trait (based on the cotwin’s score). For example, individuals high in extraversion expressed the trait regardless of the environment; however, individuals predisposed to low extraversion had even lower scores in the presence of a controlling family.

In other traits, the environment had a greater impact on individuals genetically predisposed to be high on the trait: Individuals predisposed to impulsivity were even more impulsive in a conflictual family environment; individuals low on impulsivity were not affected.

Finally, some environments influenced both individuals who were high and low on a given trait, but in opposite directions: Families that were more involved masked genetic differences between individuals predisposed toward high or low neuroticism, but greater genetic variation emerged in less controlling families.

The implementation of population-based twin studies, inclusion of measured environments into twin studies, and advances in biometrical modeling techniques for twin data made it possible to study gene-environment interaction within the framework of the classic twin study. Traditional twin studies involve comparisons of monozygotic (MZ) and dizygotic (DZ) twins reared together. MZ twins share all of their genetic variation, whereas DZ twins share on average 50% of their genetic make-up; however, both types of twins are age-matched siblings sharing their family environments. This allows heritability, or the proportion of variance attributed to additive genetic effects, to be estimated by (a) doubling the difference between the correlation found between MZ twins and the correlation found between DZ twins, for quantitative traits, or (b) comparing concordance rates between MZs and DZs, for qualitative disorders (McGue & Bouchard 1998).
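The arithmetic behind method (a) is simple enough to sketch directly. The snippet below applies the classic Falconer decomposition to a pair of illustrative twin correlations; the function name and the example values are invented for illustration, not taken from any study discussed here.

```python
def falconer_heritability(r_mz: float, r_dz: float) -> dict:
    """ACE decomposition from twin correlations (Falconer's formulas)."""
    h2 = 2 * (r_mz - r_dz)   # additive genetic: double the MZ-DZ difference
    c2 = 2 * r_dz - r_mz     # shared (common) environment
    e2 = 1 - r_mz            # nonshared environment plus measurement error
    return {"h2": h2, "c2": c2, "e2": e2}

# Illustrative, invented correlations: r_MZ = 0.60, r_DZ = 0.35
est = falconer_heritability(r_mz=0.60, r_dz=0.35)  # h2 = 0.50, c2 = 0.10, e2 = 0.40
```

Applied separately to twins measured in two different environments, the same arithmetic is the intuition behind the multiple-group interaction models described below.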

Biometrical model-fitting made it possible for researchers to address increasingly sophisticated research questions by allowing one to statistically specify predictions made by various hypotheses and to compare models testing competing hypotheses. By modeling data from subjects who vary on exposure to a specified environment, one could test whether there is differential expression of genetic influences in different environments.

Early examples of gene-environment interaction in twin models necessitated “grouping” environments to fit multiple group models. The basic idea was simple: Fit models to data for people in environment 1 and environment 2 separately and then test whether there were significant differences in the importance of genetic and environmental factors across the groups using basic structural equation modeling techniques. In an early example of gene-environment interaction, data from the Australian twin register were used to test whether the relative importance of genetic effects on alcohol consumption varied as a function of marital status, and in fact they did (Heath et al. 1989).

Having a marriage-like relationship reduced the impact of genetic influences on drinking: Among the younger sample of twins, genetic liability accounted for but half as much variance in drinking among married women (31%) as among unmarried women (60%). A parallel effect was found among the adult twins: Genetic effects accounted for less than 60% of the variance in married respondents but more than 76% in unmarried respondents (Heath et al. 1989).

In an independent sample of Dutch twins, religiosity was also shown to moderate genetic and environmental influences on alcohol use initiation in females (with nonsignificant trends in the same direction for males): In females without a religious upbringing, genetic influences accounted for 40% of the variance in alcohol use initiation compared to 0% in religiously raised females. Shared environmental influences were far more important in the religious females (Koopmans et al. 1999).

In data from our population-based Finnish twin sample, we also found that regional residency moderates the impact of genetic and environmental influences on alcohol use. Genetic effects played a larger role in longitudinal drinking patterns from late adolescence to early adulthood among individuals residing in urban settings, whereas common environmental effects exerted a greater influence across this age range among individuals in rural settings (Rose et al. 2001).

When one has pairs discordant for exposure, it is also possible to ask about genetic correlation between traits displayed in different environments.

One obvious limitation of modeling gene-environment interaction in this way was that it constrained investigation to environments that fell into natural groupings (e.g., married/unmarried; urban/rural) or it forced investigators to create groups based on environments that may actually be more continuous in nature (e.g., religiosity). In the first extension of this work to quasi-continuous environmental moderation, we developed a model that allowed genetic and environmental influences to vary as a function of a continuous environmental moderator and used this model to follow up on the urban/rural interaction reported previously (Dick et al. 2001).
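Under a continuous-moderation model of this kind, each variance-component path is allowed to vary linearly with the measured environment, so heritability itself becomes a function of the moderator. The sketch below is a minimal illustration with invented path coefficients, not a reproduction of any fitted model from these studies.

```python
def moderated_heritability(m: float, a0: float, a1: float,
                           c0: float, c1: float, e0: float, e1: float) -> float:
    """Moderation in the style of Purcell (2002): each path (additive
    genetic a, shared environment c, nonshared environment e) varies
    linearly with the measured moderator m; heritability is a^2 over
    total variance at that value of m."""
    a, c, e = a0 + a1 * m, c0 + c1 * m, e0 + e1 * m
    return a * a / (a * a + c * c + e * e)

# Invented path coefficients: the genetic path grows with the moderator,
# while the shared-environment path shrinks
PATHS = dict(a0=0.4, a1=0.3, c0=0.5, c1=-0.2, e0=0.6, e1=0.0)
h2_low = moderated_heritability(0.0, **PATHS)   # ~0.21 at the low end
h2_high = moderated_heritability(1.0, **PATHS)  # ~0.52 at the high end
```

Even modest linear changes in the paths produce large swings in heritability across the environmental range, which is consistent with the several-fold differences between environmental extremes reported above.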

We believed it likely that the urban/rural moderation effect reflected a composite of different processes at work. Accordingly, we expanded the analyses to incorporate more specific information about neighborhood environments, using government-collected information about the specific municipalities in which the twins resided (Dick et al. 2001). We found that genetic influences were stronger in environments characterized by higher rates of migration in and out of the municipality; conversely, shared environmental influences predominated in local communities characterized by little migration.

We also found that genetic predispositions were stronger in communities composed of a higher percentage of young adults slightly older than our age-18 Finnish twins and in regions where there were higher alcohol sales.

Further, the magnitude of genetic moderation observed in these models that allowed for variation as a function of a quasi-continuous environmental moderator was striking, with nearly a fivefold difference in the magnitude of genetic effects between environmental extremes in some cases.

The publication of a paper the following year (Purcell 2002) that provided straightforward scripts for continuous gene-environment interaction models using the most widely used program for twin analyses, Mx (Neale 2000), led to a surge of papers studying gene-environment interaction in the twin literature. These scripts also offered the advantage of being able to take into account gene-environment correlation in the context of gene-environment interaction. This was an important advance because previous examples of gene-environment interaction in twin models had been limited to environments that showed no evidence of genetic effects so as to avoid the confounding of gene-environment interaction with gene-environment correlation.

Using these models, we have demonstrated that genetic influences on adolescent substance use are enhanced in environments with lower parental monitoring (Dick et al. 2007c) and in the presence of substance-using friends (Dick et al. 2007b). Similar effects have been demonstrated for more general externalizing behavior: Genetic influences on antisocial behavior were higher in the presence of delinquent peers (Button et al. 2007) and in environments characterized by high parental negativity (Feinberg et al. 2007), low parental warmth (Feinberg et al. 2007), and high paternal punitive discipline (Button et al. 2008).

Further, in an extension of the socioregional-moderating effects observed on age-18 alcohol use, we found a parallel moderating role of these socioregional variables on age-14 behavior problems in girls in a younger Finnish twin sample. Genetic influences assumed greater importance in urban settings, communities with greater migration, and communities with a higher percentage of slightly older adolescents.

Other psychological outcomes have also yielded significant evidence of gene-environment interaction effects in the twin literature. For example, a moderating effect, parallel to that reported for alcohol consumption above, has been reported for depression symptoms (Heath et al. 1998) in females. A marriage-like relationship reduced the influence of genetic liability to depression symptoms, paralleling the effect found for alcohol consumption: Genetic factors accounted for 29% of the variance in depression scores among married women, but for 42% of the variance in young unmarried females and 51% of the variance in older unmarried females (Heath et al. 1998).

Life events were also found to moderate the impact of factors influencing depression in females (Kendler et al. 1991). Genetic and/or shared environmental influences were significantly more important in influencing depression in high-stress than in low-stress environments, as defined by a median split on a life-event inventory, although there was insufficient power to determine whether the moderating influence was on genetic or environmental effects.

More than simply accumulating examples of moderation of genetic influence by environmental factors, efforts have been made to integrate this work into theoretical frameworks surrounding the etiology of different clinical conditions. This is critical if science is to advance beyond individual observations to testable broad theories.

A 2005 review paper by Shanahan and Hofer suggested four processes by which social context may moderate the relative importance of genetic effects (Shanahan & Hofer 2005).

The environment may (a) trigger or (b) compensate for a genetic predisposition, (c) control the expression of a genetic predisposition, or (d) enhance a genetic predisposition (referring to the accentuation of “positive” genetic predispositions).

These processes are not mutually exclusive and can represent different ends of a continuum. For example, the interaction between genetic susceptibility and life events may represent a situation whereby the experience of life events triggers a genetic susceptibility to depression. Conversely, “protective” environments, such as marriage-like relationships and low stress levels, can buffer against or reduce the impact of genetic predispositions to depressive problems.

Many different processes are likely involved in the gene-environment interactions observed for substance use and antisocial behavior. For example, family environment and peer substance use/delinquency likely constitute a spectrum of risk or protection, and family/friend environments that are at the “poor” extreme may trigger genetic predispositions toward substance use and antisocial behavior, whereas positive family and friend relationships may compensate for genetic predispositions toward substance use and antisocial behavior.

Social control also appears to be a particularly relevant process in substance use, as it is likely that being in a marriage-like relationship and/or being raised with a religious upbringing exerts social norms that constrain behavior and thereby reduce the expression of genetic predispositions toward substance use.

Further, the availability of the substance also serves as a level of control over the ability to express genetic predispositions, and accordingly, the degree to which genetic influences will be apparent on an outcome at the population level. In a compelling illustration of this effect, Boardman and colleagues used twin data from the National Survey of Midlife Development in the United States and found a significant reduction in the importance of genetic influences on regular smoking following legislation prohibiting smoking in public places (Boardman et al. 2010).

Molecular analyses

All of the analyses discussed thus far use latent, unmeasured indices of genetic influence to detect the possible presence of gene-environment interaction. This is largely because it was possible to test for the presence of latent genetic influence in humans (via comparisons of correlations between relatives with different degrees of genetic sharing) long before molecular genetics yielded the techniques necessary to identify specific genes influencing complex psychological disorders.

However, recent advances have made the collection of deoxyribonucleic acid (DNA) and resultant genotyping relatively cheap and straightforward. Additionally, the publication of high-profile papers brought gene-environment interaction to the forefront of mainstream psychology. In a pair of papers published in Science in 2002 and 2003, Caspi and colleagues analyzed data from a prospective, longitudinal birth cohort from New Zealand, followed from birth through adulthood.

In the 2002 paper, they reported that a functional polymorphism in the gene encoding the neurotransmitter-metabolizing enzyme monoamine oxidase A (MAOA) moderated the effect of maltreatment: Males who carried the genotype conferring high levels of MAOA expression were less likely to develop antisocial problems when exposed to maltreatment (Caspi et al. 2002). In the 2003 paper, they reported that a functional polymorphism in the promoter region of the serotonin transporter gene (5-HTT) was found to moderate the influence of stressful life events on depression. Individuals carrying the short allele of the 5-HTT promoter polymorphism exhibited more depressive symptoms, diagnosable depression, and suicidality in relation to stressful life events than did individuals homozygous for the long allele (Caspi et al. 2003).

Both studies were significant in demonstrating that genetic variation can moderate individuals’ sensitivity to environmental events.

These studies sparked a multitude of reports that aimed to replicate, or to further extend and explore, the findings of the original papers, resulting in huge literatures surrounding each reported gene-environment interaction in the years since the original publications (e.g., Edwards et al. 2009, Enoch et al. 2010, Frazzetto et al. 2007, Kim-Cohen et al. 2006, McDermott et al. 2009, Prom-Wormley et al. 2009, Vanyukov et al. 2007, Weder et al. 2009). It is beyond the scope of this review to detail these studies; however, of note was the publication in 2009 of a highly publicized meta-analysis of the interaction between 5-HTT, stressful life events, and risk of depression that concluded there was “no evidence that the serotonin transporter genotype alone or in interaction with stressful life events is associated with an elevated risk of depression in men alone, women alone, or in both sexes combined” (Risch et al. 2009). Further, the authors were critical of the rapid embracing of gene-environment interaction and the substantial resources that have been devoted to this research.

The paper stimulated considerable backlash against the study of gene-environment interactions, and the pendulum appeared to be swinging back in the other direction. However, a recent review by Caspi and colleagues, entitled “Genetic Sensitivity to the Environment: The Case of the Serotonin Transporter Gene and Its Implications for Studying Complex Diseases and Traits,” highlighted the fact that evidence for involvement of 5-HTT in stress sensitivity comes from at least four different types of studies, including observational studies in humans, experimental neuroscience studies, studies in nonhuman primates, and studies of 5-HTT mutations in rodents (Caspi et al. 2010).

Further, the authors made the distinction between different cultures of evaluating gene-environment interactions: a purely statistical (theory-free) approach that relies wholly on meta-analysis (e.g., such as that taken by Risch et al. 2009) versus a construct-validity (theory-guided) approach that looks for a nomological network of convergent evidence, such as the approach that they took.

It is likely that this distinction also reflects differences in training and emphasis across different fields. The most cutting-edge genetic strategies at any given point, though they have changed drastically and rapidly over the past several decades, have generally involved atheoretical methods for gene identification (Neale et al. 2008). This was true of early linkage analyses, where ~400 to 1,000 markers were scanned across the genome to search for chromosomal regions shared by affected family members, suggesting that a gene in that region might harbor risk for the particular outcome under study. This allowed geneticists to search for genes without prior knowledge of the underlying biology, with the idea that identifying risk genes would itself be informative about etiological processes; this was an advantage, given that our understanding of the biology of most psychiatric conditions is limited.

Although it is now recognized that linkage studies were underpowered to detect genes of small effect, such as those now thought to be operating in psychiatric conditions, this atheoretical approach was retained in the next generation of gene-finding methods that replaced linkage, the implementation of genome-wide association studies (GWAS) (Cardon 2006). GWAS also have the general framework of scanning markers located across the entire genome in an effort to detect association between genetic markers and disease status; however, in GWAS over a million markers (or more, on the newest genetic platforms) are analyzed.

The next technique on the horizon is sequencing, in which entire stretches of DNA are sequenced to know the exact base pair sequence for a given region (McKenna et al. 2010).

From linkage to sequencing, common across all these techniques is an atheoretical framework for finding genes that necessarily involves conducting very large numbers of tests. Accordingly, there has been great emphasis in the field of genetics on correction for multiple testing (van den Oord 2007). In addition, the estimated magnitude of effect size of genetic variants thought to influence complex behavioral outcomes has been continually shifted downward as studies that were sufficiently powered to detect effect sizes previously thought to be reasonable have failed to generate positive findings (Manolio et al. 2009). GWAS have led the field to believe that genes influencing complex behavioral outcomes likely have odds ratios (ORs) on the order of magnitude of 1.1. This has led to a need for incredibly large sample sizes, requiring meta-analytic GWAS efforts with several tens of thousands of subjects (Landi et al. 2009, Lindgren et al. 2009).
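The multiple-testing burden described above is why GWAS adopted very stringent per-test thresholds. As a simple sketch, a Bonferroni correction across roughly one million markers yields the conventional genome-wide significance level of 5 × 10⁻⁸:

```python
def bonferroni_threshold(alpha: float, n_tests: int) -> float:
    """Per-test p-value threshold after Bonferroni correction."""
    return alpha / n_tests

# One million markers at a familywise alpha of 0.05 gives the
# conventional genome-wide significance threshold of 5e-8
thr = bonferroni_threshold(0.05, 1_000_000)
```

Holding the familywise error rate fixed while the number of tests grows is one reason detectable per-variant effect sizes, and hence required sample sizes, have been pushed so hard in the directions described above.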

It is important to note there has been increasing attention to the topic of gene-environment interaction from geneticists (Engelman et al. 2009). This likely reflects, in part, frustration and difficulty with identifying genes that impact complex psychiatric outcomes. Several hypotheses have been put forth as possible explanations for the failure to robustly detect genes involved in psychiatric outcomes, including a genetic model involving far more genes, each of very small effect, than was previously recognized, and failure to pay adequate attention to rare variants, copy number variants, and gene-environment interaction (Manolio et al. 2009).

Accordingly, gene-environment interaction is being discussed far more in the area of gene finding than in years past; however, these discussions often involve atheoretical approaches and center on methods to adequately detect gene-environment interaction in the presence of extensive multiple testing (Gauderman 2002, Gauderman et al. 2010). The papers by Risch et al. (2009) and Caspi et al. (2010) on the interaction between 5-HTT, life stress, and depression highlight the conceptual, theoretical, and practical differences that continue to exist between the fields of genetics and psychology surrounding the identification of gene-environment interaction effects.

THE NATURE OF GENE-ENVIRONMENT INTERACTION

An important consideration in the study of gene-environment interaction is the nature, or shape, of the interaction that one hypothesizes. There are two primary types: fan-shaped interactions and crossover interactions.

The first is the fan-shaped interaction, in which the influence of genotype is greater in one environmental context than in another. This is the kind of interaction hypothesized by a diathesis-stress framework, whereby genetic influences become more apparent, i.e., are more strongly related to outcome, in the presence of negative environmental conditions. There is a reduced (or no) association of genotype with outcome in the absence of exposure to particular environmental conditions.

The literature surrounding depression and life events would be an example of a hypothesized fan-shaped interaction: When life stressors are encountered, genetically vulnerable individuals are more prone to developing depression, whereas in the absence of life stressors, these individuals may be no more likely to develop depression. In essence, it is only when adverse environmental conditions are experienced that the genes “come on-line.”

Gene-environment interactions in the area of adolescent substance use are also hypothesized to be fan-shaped, where some environmental conditions will allow greater opportunity to express genetic predispositions, allowing for more variation by genotype, and other environments will exert social control in such a way as to curb genetic expression (Shanahan & Hofer 2005), leading to reduced genetic variance.
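In regression terms, a fan-shaped interaction is one in which the genotype slope itself depends on the environment. The sketch below uses a linear G×E model with invented coefficients (nothing here is from the studies cited) to show how the genotypic difference widens across the environmental range:

```python
def predicted_phenotype(g: float, e: float,
                        b0: float, bg: float, be: float, bge: float) -> float:
    """Linear G x E model: y = b0 + bg*g + be*e + bge*g*e.
    The effective genotype slope at environment e is bg + bge*e."""
    return b0 + bg * g + be * e + bge * g * e

# Hypothetical coefficients; genotype coded 0/1, environment scored 0-4
COEFS = dict(b0=0.0, bg=0.1, be=0.5, bge=0.4)

# Genotypic difference at the low vs. high end of the environment
slope_low = predicted_phenotype(1, 0, **COEFS) - predicted_phenotype(0, 0, **COEFS)
slope_high = predicted_phenotype(1, 4, **COEFS) - predicted_phenotype(0, 4, **COEFS)
```

With these coefficients the genotypic difference grows from 0.1 at the low end to 1.7 at the high end: the lines "fan out," which is the signature of genetic influences coming online under adverse or permissive conditions.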

Twin analyses yielding evidence of genetic influences being more or less important in different environmental contexts are generally suggestive of fan-shaped interactions. Changes in the overall heritability do not necessarily dictate that any one specific susceptibility gene will operate in a parallel manner; however, a change in heritability suggests that at least a good portion of the involved genes (assuming many genes of approximately equal and small effect) must be operating in that manner for a difference in heritability by environment to be detectable.

The diathesis-stress model has largely been the dominant model in psychiatry. Gene-finding efforts have focused on the search for vulnerability genes, and gene-environment interaction has been discussed in the context of these genetic effects becoming more or less important under particular environmental conditions.

Different types of gene-environment interactions.

More recently, an alternative framework has been proposed by Belsky and colleagues, the differential susceptibility hypothesis, in which the same individuals who are most adversely affected by negative environments may also be those who are most likely to benefit from positive environments. Rather than searching for “vulnerability genes” influencing psychiatric and behavioral outcomes, they propose the idea of “plasticity genes,” or genes involved in responsivity to environmental conditions (Belsky et al. 2009).

Belsky and colleagues reviewed the literatures surrounding gene-environment interactions associated with three widely studied candidate genes, MAOA, 5-HTT, and DRD4, and suggested that the results provide evidence for differential susceptibility associated with these genes (Belsky et al. 2009).

Their hypothesis is closely related to the concept of biological sensitivity to context (Ellis & Boyce 2008). The idea of biological sensitivity to context has its roots in evolutionary developmental biology, whereby selection pressures should favor genotypes that support a range of phenotypes in response to environmental conditions because this flexibility would be beneficial from the perspective of survival of the species. However, biological sensitivity to context has the potential for both positive effects under more highly supportive environmental conditions and negative effects in the presence of more negative environmental conditions. This theory has been most fully developed and discussed in the context of stress reactivity (Boyce & Ellis 2005), where it has been demonstrated that highly reactive children show disproportionate rates of morbidity when raised in adverse environments, but particularly low rates when raised in low-stress, highly supportive environments (Ellis et al. 2005). In these studies, high reactivity was defined by response to different laboratory challenges, and the authors noted that the underlying cellular mechanisms that would produce such responses are currently unknown, though genetic factors are likely to play a role (Ellis & Boyce 2008).

Although fan-shaped and crossover interactions are theoretically different, in practice they can be quite difficult to differentiate. One can imagine several “variations on the theme” for both fan-shaped and crossover interactions. In general, for a fan-shaped interaction, a main effect of genotype will be present as well as a main effect of the environment. There is a main effect of genotype at both environmental extremes; it is simply far stronger in environment 5 (far right side of the graph) than in environment 1 (far left side). But one could imagine a fan-shaped interaction with no genotypic effect at one extreme (e.g., the lines converge to the same phenotypic mean at one environmental extreme).

Further, fan-shaped interactions can differ in the slope of the lines for each genotype, which indicate how much the environment is modifying genetic effects. In the crossover interaction shown above, the lines cross at environment 3 (i.e., in the middle). But crossover interactions can vary in the location of the crossover. It is possible that crossing over only occurs at the environmental extreme.
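For the linear G×E model, the crossover location can be computed directly: the two genotype lines cross where the genotype effect, bg + bge·e, equals zero. The coefficients below are invented so that the crossover falls at environment 3, matching the schematic described above.

```python
def crossover_point(bg: float, bge: float) -> float:
    """Environment value where two genotype lines cross in a linear
    G x E model: solve bg + bge * e = 0 for e."""
    if bge == 0:
        raise ValueError("no interaction term: the lines are parallel")
    return -bg / bge

# Invented coefficients chosen so the lines cross in the middle of a
# 0-5 environmental range (at environment 3)
e_cross = crossover_point(bg=-0.6, bge=0.2)
```

Shifting bg relative to bge moves the crossover toward one environmental extreme, which is exactly the case where a crossover interaction becomes empirically hard to distinguish from a fan-shaped one.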

As previously noted, the crossing over of the genotypic groups in the Caspi et al. publications of the interactions between the 5-HTT gene, life events, and depression (Caspi et al. 2003) and between MAOA, maltreatment, and antisocial behavior (Caspi et al. 2002) occurred at the extreme low ends of the environmental measures, and the degree of crossing over was quite modest. Rather, the shape of the interactions (and the way the interactions were conceptualized in the papers) was largely fan-shaped, whereby certain genotypic groups showed stronger associations with outcome as a function of the environmental stressor.

Also, in both cases, the genetic variance was far greater under one environmental extreme than the other, rather than being approximately equivalent at both ends of the distribution, but with genotypic effects in opposite directions. In general, it is assumed that main effects of genotype will not be detected in crossover interactions, but this will actually depend on the frequency of the different levels of the environment. This is also true of fan-shaped interactions, but to a lesser degree.

Evaluating the relative importance, or frequency of existence, of each type of interaction is complicated by the fact that there is far more power to detect crossover interactions than fan-shaped interactions. Knowing that most of our genetic studies are likely underpowered, we would expect a preponderance of crossover effects to be detected as compared to fan-shaped effects purely as a statistical artifact. Further, even when a crossover effect is observed, power considerations can make it difficult to determine if it is “real.” For example, an interaction observed in our data between the gene CHRM2, parental monitoring, and adolescent externalizing behavior yielded consistent evidence for a gene-environment interaction, with a crossing of the observed regression lines. However, the mean differences by genotype were not significant at either end of the environmental continuum, so it is unclear whether the crossover reflected true differential susceptibility or simply overfitting of the data across the environmental levels containing the majority of the observations, which contributed to a crossing over of the regression lines at one environmental extreme (Dick et al. 2011).

Larger studies would have greater power to make these differentiations; however, there is the unfortunate paradox that the samples with the greatest depth of phenotypic information, allowing for more complex tests about risk associated with particular genes, usually have much smaller sample sizes due to the trade-off necessary to collect the rich phenotypic information. This is an important issue for gene-environment interaction studies in general: Most have been underpowered, and this raises concerns about the likelihood that detected effects are true positives. There are several freely available programs to estimate power (Gauderman 2002, Purcell et al. 2003), and it is critical that papers reporting gene-environment interaction effects (or a lack thereof) include information about the power of their sample in order to interpret the results.

Another widely contested issue is whether gene-environment interactions should be examined only when main effects of genotype are detected. Perhaps not surprisingly, this is the approach most commonly advocated by statistical geneticists (Risch et al. 2009) and the one recommended by the Psychiatric GWAS Consortium (Psychiatr. GWAS Consort. Steer. Comm. 2008). However, this strategy could preclude the detection of crossover interaction effects as well as gene-environment interactions that occur in the presence of relatively low-frequency environments. In addition, if genetic effects are conditional on environmental exposure, main effects of genotype could vary across samples; that is, a genetic effect could be detected in one sample and fail to replicate in another if the samples differ on environmental exposure.

Another issue with the detection and interpretation of gene-environment interaction effects involves the range of environments being studied. For example, if we assume that the five levels of the environment shown above represent the true full range of environments that exist, a study that included only individuals from environments 3–5 would conclude that there is a fan-shaped gene-environment interaction. Belsky and colleagues (2009) have suggested this may be particularly problematic in the psychiatric literature because only in rare exceptions (Bakermans-Kranenburg & van IJzendoorn 2006, Taylor et al. 2006) has the environment included both positive and negative ends of the spectrum. Rather, the absence of environmental stressors has usually constituted the “low” end of the environment, e.g., the absence of life stressors (Caspi et al. 2003) or the absence of maltreatment (Caspi et al. 2002). This could lead researchers to conclude there is a fan-shaped interaction because they are essentially failing to measure, with reference to the figure above, environments 0–3, which represent the positive end of the environmental continuum.

One can imagine a number of other incorrect conclusions that could be drawn about the nature of gene-environment interaction effects as a result of a restricted range of environmental measures. For example, in Figure B, measurement of individuals from environments 0–3 would lead one to conclude that genetic effects play a stronger role at lower levels of environmental exposure, whereas measurement of individuals from environments 3–5 would lead one to conclude that genetic effects play a stronger role at higher levels of exposure to the same environmental variable. In Figure A, if measurement were limited to environments 0–3, depending on sample size, there might be inadequate power to detect deviation from a purely additive genetic model, e.g., the slopes of the genotypic lines may not differ significantly.
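The restricted-range problem is easy to demonstrate numerically. The sketch below, using made-up slopes, constructs a true crossover interaction that would be read as fan-shaped by a study sampling only the upper part of the environmental continuum:

```python
import numpy as np

# Hypothetical crossover interaction: a "susceptible" genotype fares worst
# in the poorest environments and best in the richest ones, while a
# non-susceptible genotype is unaffected by the environment.
# All numbers are invented for illustration.
env = np.linspace(0, 5, 101)               # full environmental continuum
susceptible = 1.0 * env                    # steep slope; crosses at env = 2.5
nonsusceptible = np.full_like(env, 2.5)    # flat line

gap_full = susceptible - nonsusceptible
# Over the full range the genotype difference changes sign: a true crossover.
has_crossover = gap_full.min() < 0 < gap_full.max()

# A study sampling only environments 3-5 sees a gap that is always positive
# and simply widens with the environment, i.e., an apparently fan-shaped
# interaction with no visible crossing.
restricted = env >= 3
gap_restricted = gap_full[restricted]
looks_fan_shaped = (gap_restricted > 0).all() and np.all(np.diff(gap_restricted) > 0)

print(has_crossover, looks_fan_shaped)
```

The same genotypic lines thus support opposite conclusions depending solely on which segment of the environmental continuum was measured.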

It is also important to note that not only are there several scenarios that would lead one to make incorrect conclusions about the nature of a gene-environment interaction effect, there are also scenarios that would lead one to conclude that a gene-environment interaction exists when it actually does not. Several of these are detailed in a sobering paper by my colleague Lindon Eaves, in which significant evidence for gene-environment interaction was detected quite frequently using standard regression methods, when the simulated data reflected strictly additive models (Eaves 2006). This was particularly problematic when using logistic regression where a dichotomous diagnosis was the outcome. The problem was further exaggerated when selected samples were analyzed.

An additional complication in evaluating gene-environment interactions in psychology is that our environmental measures often do not have absolute scales of measurement. For example, what is the “real” metric for measuring a construct like parent-child bonding, maltreatment, or stress? This becomes critical because fan-shaped interactions are very sensitive to scaling: often a transformation of the scale scores will make the interaction disappear. What does it mean if the raw variable shows an interaction but the log transformation of the scale scores does not? Is the interaction real? Is one metric for measuring the environment a better reflection of the “real” nature of the environment than another?

Many of the environments of interest to psychologists do not have true metrics, such as those that exist for measures such as height, weight, or other physiological variables. This is an issue for the study of gene-environment interaction. It becomes even more problematic when you consider that logistic regression is the method commonly used to test for gene-environment interactions with dichotomous disease status outcomes. Logistic regression involves a logarithmic transformation of the probability of being affected. By definition, this changes the nature of the relationship between the variables being modeled. This compounds problems associated with gene-environment interactions being scale dependent.

EPIGENETICS: A POTENTIAL BIOLOGICAL MECHANISM FOR GENE-ENVIRONMENT INTERACTION

An enduring question remains in the study of gene-environment interaction: how does the environment “get under the skin”? Stated in another way:

What are the biological processes by which exposure to environmental events could affect outcome?

Epigenetics is one candidate mechanism. Excellent recent reviews on this topic exist (Meaney 2010, Zhang & Meaney 2010), and I provide a brief overview here.

It is important to note, however, that although epigenetics is increasingly discussed in the context of gene-environment interaction, it does not relate directly to gene-environment interaction in the statistical sense, as differentiated previously in this review. That is to say that epigenetic processes likely tell us something about the biological mechanisms by which the environment can affect gene expression and impact behavior, but they are not informative in terms of distinguishing between additive versus interactive environmental effects.

Although variability exists in defining the term, epigenetics generally refers to modifications to the genome that do not involve a change in nucleotide sequence. To understand this concept, let us review a bit about basic genetics.

The expression of a gene is influenced by transcription factors (proteins), which bind to specific sequences of DNA. It is through the binding of transcription factors that genes can be turned on or off. Epigenetic mechanisms involve changes to how readily transcription factors can access the DNA. Several different types of epigenetic changes are known to exist that involve different types of chemical changes that can regulate DNA transcription.

One epigenetic process that affects transcription binding is DNA methylation. DNA methylation involves the addition of a methyl group (CH3) onto a cytosine (one of the four base pairs that make up DNA). This leads to gene silencing because methylated DNA hinders the binding of transcription factors.

A second major regulatory mechanism is related to the configuration of DNA. DNA is wrapped around clusters of histone proteins to form nucleosomes. Together the nucleosomes of DNA and histone are organized into chromatin. When the chromatin is tightly condensed, it is difficult for transcription factors to reach the DNA, and the gene is silenced. In contrast, when the chromatin is opened, the gene can be activated and expressed. Accordingly, modifications to the histone proteins that form the core of the nucleosome can affect the initiation of transcription by affecting how readily transcription factors can access the DNA and bind to their appropriate sequence.

Epigenetic modifications of the genome have long been known to exist. For example, all cells in the body share the same DNA; accordingly, there must be a mechanism whereby different genes are active in liver cells than, for example, brain cells. The process of cell specialization involves silencing certain portions of the genome in a manner specific to each cell. DNA methylation is a mechanism known to be involved in cell specialization.

Another well-known example of DNA methylation involves X-inactivation in females. Because females carry two copies of the X chromosome, one must be inactivated. The silencing of one copy of the X chromosome involves DNA methylation.

Genomic imprinting is another long-established principle known to involve DNA methylation. In genomic imprinting the expression of specific genes is determined by the parent of origin. For example, the copy of the gene inherited from the mother is silenced, while the copy inherited from the father is active (or vice versa). The silent copy is inactivated through processes involving DNA methylation. These changes all involve epigenetic processes parallel to those currently attracting so much attention.

However, the difference is that these known epigenetic modifications (cell specialization, X inactivation, genomic imprinting) all occur early in development and are stable.

The discovery that epigenetic modifications continue to occur across development, and can be reversible and more dynamic, has represented a major paradigm shift in our understanding of environmental regulation of gene expression.

Animal studies have yielded compelling evidence that early environmental manipulations can be associated with long-term effects that persist into adulthood. For example, maternal licking and grooming in rats is known to have long-term influences on stress response and cognitive performance in their offspring (Champagne et al. 2008, Meaney 2010). Further, a series of studies conducted in macaque monkeys demonstrates that early rearing conditions can result in long-term increased aggression, more reactive stress response, altered neurotransmitter functioning, and structural brain changes (Stevens et al. 2009). These findings parallel research in humans that suggests that early life experiences can have long-term effects on child development (Loman & Gunnar 2010). Elegant work in animal models suggests that epigenetic changes may be involved in these associations (Meaney 2010, Zhang & Meaney 2010).

Evaluating epigenetic changes in humans is more difficult because epigenetic marks can be tissue specific. Access to human brain tissue is limited to postmortem studies of donated brains, which are generally unique and unrepresentative samples and must be interpreted in the context of those limitations. Nonetheless, a recent study of human brain samples from the Quebec Suicide Brain Bank found evidence of increased DNA methylation of the glucocorticoid receptor (NR3C1) exon 1F promoter in hippocampal samples from suicide victims compared with controls, but only if suicide was accompanied by a history of childhood maltreatment (McGowan et al. 2009). Importantly, this paralleled epigenetic changes originally observed in rat brain at the ortholog of this locus.

Another line of evidence suggesting epigenetic changes that may be relevant in humans is the observation of increasing discordance in epigenetic marks in MZ twins across time. This is significant because MZ twins have identical genotypes, and therefore, differences between them are attributed to environmental influences. In a study by Fraga and colleagues (2005), MZ twins were found to be epigenetically indistinguishable during the early years of life, but older MZ twins exhibited remarkable differences in their epigenetic profiles. These findings suggest that epigenetic changes may be a mechanism by which environmental influences contribute to the differences in outcome observed for a variety of psychological traits of interest between genetically identical individuals.

The above studies complement a growing literature demonstrating differences in gene expression in humans as a function of environmental experience. One of the first studies to analyze the relationship between social factors and human gene expression compared healthy older adults who differed in the extent to which they felt socially connected to others (Cole et al. 2007). Using expression profiles obtained from blood cells, a number of genes were identified that showed systematically different levels of expression in people who reported feeling lonely and distant from others.

Interestingly, these effects were concentrated among genes that are involved in immune response.

The results provide a biological mechanism that could explain why socially isolated individuals show heightened vulnerability to diseases and illnesses related to immune function.

Importantly, they demonstrate that our social worlds can exert biologically significant effects on gene expression in humans (for a more extensive review, see Cole 2009).

CONCLUSIONS

This review has attempted to provide an overview of the study of gene-environment interaction, starting with early animal studies documenting gene-environment interaction, to demonstrations of similar effects in family, adoption, and twin studies.

Advances in twin modeling and the relative ease with which gene-environment interaction can now be modeled have led to a significant increase in the number of twin studies documenting changing importance of genetic influence across environmental contexts. There is now widespread documentation of gene-environment interaction effects across many clinical disorders (Thapar et al. 2007).

These findings have led to more integrated etiological models of the development of clinical outcomes. Further, since it is now relatively straightforward and inexpensive to collect DNA and conduct genotyping, there has been a surge of studies testing for gene-environment interaction with specific candidate genes.

Psychologists have embraced the incorporation of genetic components into their studies, and geneticists who focus on gene finding are now paying attention to the environment in an unprecedented way. However, now that the initial excitement surrounding gene-environment interaction has begun to wear off, a number of challenges involved in the study of gene-environment interaction are being recognized.

These include difficulties with interpreting interaction effects (or the lack thereof), due to issues surrounding the measurement and scaling of the environment, and statistical concerns surrounding modeling gene-environment interactions and the nature of their effects.

So where do we go from here? Individuals who jumped on the gene-environment interaction bandwagon are now discovering that studying this process is harder than it first appeared. But there is good reason to believe that gene-environment interaction is a very important process in the development of clinical disorders. So rather than abandon ship, I would suggest that as a field, we just need to proceed with more caution.

SUMMARY POINTS

– Gene-environment interaction refers to the phenomenon whereby the effect of genes depends on the environment, or the effect of the environment depends on genotype. There is now widespread documentation of gene-environment interaction effects across many clinical disorders, leading to more integrated etiological models of the development of clinical outcomes.

– Twin, family, and adoption studies provide methods to study gene-environment interaction with genetic effects modeled latently, meaning that genes are not directly measured, but rather genetic influence is inferred based on correlations across relatives. Advances in genotyping technology have contributed to a proliferation of studies testing for gene-environment interaction with specific measured genes. Each of these designs has its own strengths and limitations.

– Two types of gene-environment interaction have been discussed in greatest detail in the literature: fan-shaped interactions, in which the influence of genotype is greater in one environmental context than in another; and crossover interactions, in which the same individuals who are most adversely affected by negative environments may also be those who are most likely to benefit from positive environments. Distinguishing between these types of interactions poses a number of challenges.

– The range of environments studied and the lack of a true metric for many environmental measures of interest create difficulties for studying gene-environment interactions. Issues surrounding power, and the use of logistic regression and selected samples, further compound the difficulty of studying gene-environment interactions. These issues have not received adequate attention by many researchers in this field.

– Epigenetic processes may tell us something about the biological mechanisms by which the environment can affect gene expression and impact behavior. The growing literature demonstrating differences in gene expression in humans as a function of environmental experience demonstrates that our social worlds can exert biologically significant effects on gene expression in humans.

– Much of the current work on gene-environment interactions does not take advantage of the state of the science in genetics or psychology; advancing this area of study will require close collaborations between psychologists and geneticists.

Differential Susceptibility to Environmental Influences

Jay Belsky

Evidence that adverse rearing environments exert negative effects particularly on children and adults presumed “vulnerable” for temperamental or genetic reasons may actually reflect something else: heightened susceptibility to the negative effects of risky environments and to the beneficial effects of supportive environments.

Building on Belsky’s (Belsky & Pluess) evolutionarily inspired differential susceptibility hypothesis, which stipulates that some individuals, including children, are more affected, both for better and for worse, by their environmental exposures and developmental experiences, recent research consistent with this claim is reviewed. It reveals that in many cases, including both observational field studies and experimental intervention studies, putatively vulnerable children and adults are especially susceptible to both positive and negative environmental effects. In addition to reviewing relevant evidence, unknowns in the differential susceptibility equation are highlighted.

Introduction

Most students of child development probably do not presume that all children are equally susceptible to rearing (or other environmental) effects; a long history of research on parenting-by-temperament interactions clearly suggests otherwise. Nevertheless, most work still focuses on effects of environmental exposures and developmental experiences presumed to apply equally to all children (so-called main effects of parenting, of poverty, or of being reared by a depressed mother), thus failing to consider interaction effects, which reflect the fact that whether, how, and how much these contextual conditions influence the child may depend on the child’s temperament or some other characteristic of individuality.

Research on parenting-by-temperament interactions is based on the premise that what proves effective for some individuals in fostering the development of some valued outcome, or preventing some problematic one, may simply not do so for others. Commonly tested are diathesis-stress hypotheses derived from multiple-risk/transactional frameworks, in which individual characteristics that make children “vulnerable” to adverse experiences, placing them “at risk” of developing poorly, are mainly influential when there is at the same time some contributing risk from the environmental context (Zuckerman, 1999).

Diathesis refers to the latent weakness or vulnerability that a child or adult may carry (e.g., difficult temperament, particular gene), but which does not manifest itself, thereby undermining well-being, unless the individual is exposed to conditions of risk or stress.

After highlighting some research consistent with a diathesis-stress or dual-risk perspective, I raise questions on the basis of other findings about how the first set of data has been interpreted, advancing the evolutionary inspired proposition that some children, for temperamental or genetic reasons, are actually more susceptible to both (a) the adverse effects of unsupportive parenting and (b) the beneficial effects of supportive rearing.

Finally, I draw conclusions and highlight some “unknowns in the differential-susceptibility equation.”

Diathesis-Stress, Dual-Risk and Vulnerability

The view that infants and toddlers manifesting high levels of negative emotion are at special risk of problematic development when they experience poor-quality rearing is widespread.

Evidence consistent with this view can be found in the work of Morrell and Murray, who showed that only highly distressed and irritable 4-month-old boys who experienced coercive and rejecting mothering at this age continued to show evidence, 5 months later, of emotional and behavioral dysregulation. Relatedly, Belsky, Hsieh, and Crnic observed that infants who scored high in negative emotionality at 12 months of age and who experienced the least supportive mothering and fathering across their second and third years of life scored highest on externalizing problems at 36 months of age. And Deater-Deckard and Dodge reported that:

Children rated highest on externalizing behavior problems by teachers across the primary school years were those who experienced the most harsh discipline prior to kindergarten entry and who were characterized by mothers at age 5 as being negatively reactive infants.

The adverse consequences of the co-occurrence of a child risk factor (i.e., a diathesis, such as negative emotionality) and problematic parenting are also evident in Caspi and Moffitt’s groundbreaking research on gene-by-environment (GXE) interaction. Young men followed from early childhood were most likely to manifest high levels of antisocial behavior when they had both (a) a history of child maltreatment and (b) a particular variant of the MAO-A gene, a gene previously linked to aggressive behavior. Such results led Rutter, like others, to speak of “vulnerable individuals,” a concept that also applies to children putatively at risk for compromised development due to their behavioral attributes. But is “vulnerability” the best way to conceptualize the kind of person-environment interactions under consideration?

Beyond Diathesis-Stress, Dual-Risk and Vulnerability

Working from an evolutionary perspective, Belsky (Belsky & Pluess) theorized that children, especially within a family, should vary in their susceptibility to both the adverse and the beneficial effects of rearing influence. Because the future is uncertain, in ancestral times just like today, parents could not know for certain (consciously or unconsciously) what rearing strategies would maximize reproductive fitness, that is, the dispersion of genes in future generations, the ultimate goal of Darwinian evolution.

To protect against all children being steered, inadvertently, in a parental direction that proved disastrous at some later point in time, developmental processes were selected to vary children’s susceptibility to rearing (and other environmental influences).

In what follows, I review evidence consistent with this claim, which highlights early negative emotionality and particular candidate genes as “plasticity factors” making individuals more susceptible to both supportive and unsupportive environments, that is, “for better and for worse”.
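The contrast between the diathesis-stress and differential susceptibility frameworks can be stated as a toy numerical model (the slopes below are arbitrary illustrative values): diathesis-stress predicts that the “vulnerable” group diverges from others only under adversity, whereas differential susceptibility predicts a steeper slope that crosses the comparison group’s line.

```python
import numpy as np

# Environment coded from unsupportive (-1) through neutral (0) to
# supportive (+1) rearing. All slopes are invented for illustration.
env = np.array([-1.0, 0.0, 1.0])

# Non-susceptible comparison group: modestly affected by rearing quality.
comparison = 0.5 * env

# Diathesis-stress: the "vulnerable" group does worse under adversity
# but gains nothing extra from supportive rearing.
diathesis_stress = 0.5 * env - 1.0 * np.clip(-env, 0.0, None)

# Differential susceptibility: the "plastic" group is affected for better
# AND for worse -- a steeper slope that crosses the comparison line.
differential = 1.5 * env

# Under diathesis-stress the groups converge in supportive environments,
# whereas under differential susceptibility the susceptible group does
# worst in adversity yet best in supportive contexts.
print(diathesis_stress[2] == comparison[2])
print(differential[0] < comparison[0] and differential[2] > comparison[2])
```

The empirical question reviewed below is which of these two patterns, convergence at the supportive end or a full crossover, better describes the data.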

Negative Emotionality as Plasticity Factor

The first evidence Belsky could point to in support of his differential susceptibility hypothesis concerned early negative emotionality. Children scoring high on this supposed “risk factor”, particularly in the early years, appeared to benefit disproportionately from supportive rearing environments.

Feldman, Greenbaum, and Yirmiya found, for example, that 9-month-olds scoring high on negativity who experienced low levels of synchrony in mother-infant interaction manifested more noncompliance during clean-up at age two than other children did. When such infants experienced mutually synchronous mother-infant interaction, however, they displayed greater self-control than did children manifesting much less negativity as infants. Subsequently, Kochanska, Aksan, and Joy observed that highly fearful 15-month-olds experiencing high levels of power-assertive paternal discipline were most likely to cheat in a game at 38 months, yet when cared for in a supportive manner such negatively emotional, fearful toddlers manifested the most rule-compatible conduct.

In the time since Belsky and Pluess reviewed evidence like that just cited, highlighting the role of negative emotionality as a “plasticity factor”, even more evidence to this effect has emerged in the case of children. Consider in this regard work linking (1) maternal empathy and anger with externalizing problems; (2) mutual responsiveness observed in the mother-child dyad with effortful control; (3) intrusive maternal behavior and poverty with executive functioning; and (4) sensitive parenting with social, emotional and cognitive-academic development.

Experimental studies designed to test Belsky’s differential susceptibility hypothesis are even more suggestive than the longitudinal correlational evidence just cited. Blair discovered that it was highly negative infants who benefited most, in terms of both reduced levels of externalizing behavior problems and enhanced cognitive functioning, from a multi-faceted infant-toddler intervention program whose data he reanalyzed. Thereafter, Klein Velderman, Bakermans-Kranenburg, Juffer, and van IJzendoorn found that experimentally induced changes in maternal sensitivity exerted a greater impact on the attachment security of highly negatively reactive infants than on other infants. In both experiments, environmental influences on “vulnerable” children were for better instead of for worse.

As it turns out, there is ever growing experimental evidence that early negative emotionality is a plasticity factor. Consider findings showing that infants who score relatively low on irritability as newborns fail to benefit from an otherwise security-promoting intervention, and that infants showing few, if any, mild perinatal adversities (known to be related to limited negative emotionality) fail to benefit from computer-based instruction otherwise found to promote preschoolers’ phonemic awareness and early literacy.

In other words, only the putatively “vulnerable”, those manifesting or likely to manifest high levels of negativity, experienced developmental enhancement as a function of the interventions cited. Similar results emerge among older children: Scott and O’Connor’s parenting intervention resulted in the most positive change in conduct among emotionally dysregulated children (i.e., those who lose their temper and are angry and touchy).

Genes as Plasticity Factors

Perhaps nowhere has the diathesis-stress framework informed person-by-environment interaction research more than in the study of GXE interaction. Recent studies involving measured genes and measured environments also document both for-better and for-worse environmental effects, in the case of susceptible individuals, as it turns out. Here I consider evidence pertaining to two specific candidate genes before turning attention to research examining multiple genes at the same time.

DRD4

One of the most widely studied genetic polymorphisms in research involving measured genes and measured environments is a particular allele (or variant) of the dopamine receptor gene DRD4. Because the dopaminergic system is engaged in attentional, motivational, and reward mechanisms, and because one variant of this polymorphism, the 7-repeat allele, has been linked to lower dopamine reception efficiency, van IJzendoorn and Bakermans-Kranenburg predicted this allele would moderate the association between maternal unresolved loss or trauma and infant attachment disorganization. Having the 7-repeat DRD4 allele substantially increased risk for disorganization in children exposed to maternal unresolved loss/trauma, as expected, consistent with the diathesis-stress framework; yet when children with this supposed “vulnerability gene” were raised by mothers who had no unresolved loss, they displayed significantly less disorganization than agemates without the allele, regardless of mothers’ unresolved loss status.

Similar results emerged when the interplay between DRD4 and observed parental insensitivity in predicting externalizing problems was studied in a group of 47 twins. Children carrying the 7-repeat DRD4 allele raised by insensitive mothers displayed more externalizing behaviors than children without the DRD4 7-repeat (irrespective of maternal sensitivity), whereas children with the 7-repeat allele raised by sensitive mothers showed the lowest levels of externalizing problem behavior.

Such results suggest that conceptualizing the 7-repeat DRD4 allele exclusively in risk-factor terms is misguided, as this variant of the gene seems to heighten susceptibility to a wide variety of environments, with supportive and risky contexts promoting, respectively, positive and negative functioning.

In the time since I last reviewed such differential susceptibility related evidence, ever more GXE findings pertaining to DRD4 (and other polymorphisms) have appeared consistent with the notion that there are individual differences in developmental plasticity. Consider in this regard recent differential susceptibility related evidence showing heightened or exclusive susceptibility of individuals carrying the 7-repeat allele when the environmental predictor and developmental outcome were, respectively, (a) maternal positivity and prosocial behavior; (b) early nonfamilial childcare and social competence; (c) contextual stress and support and adolescent negative arousal; (d) childhood adversity and young adult persistent alcohol dependence; and (e) newborn risk status (i.e., gestational age, birth weight for gestational age, length of stay in NICU) and observed maternal sensitivity.

Especially noteworthy, perhaps, are the results of a meta-analysis of GXE research involving dopamine-related genes showing that children eight and younger respond to positive and negative developmental experiences and environmental exposures in a manner consistent with differential susceptibility.

As in the case of negative emotionality, intervention research also underscores the tendency of 7-repeat carriers of the DRD4 allele to benefit disproportionately from supportive environments. Kegel, Bus, and van IJzendoorn tested and found support for the hypothesis that it would be DRD4-7R carriers who would benefit from specially designed computer games promoting phonemic awareness and, thereby, early literacy in their randomized controlled trial (RCT). Other such RCT results point in the same direction with regard to DRD4-7R, including research on African American teenagers in which substance use was the outcome examined.

5-HTTLPR

Perhaps the most studied polymorphism in research on GXE interactions is that of the serotonin transporter gene, 5-HTTLPR. Most research distinguishes those who carry one or two short alleles (s/s, s/l) from those homozygous for the long allele (l/l). The short allele has generally been associated with reduced expression of the serotonin transporter molecule, which is involved in the reuptake of serotonin from the synaptic cleft and is thus considered to be related to depression, either directly or in the face of adversity. Indeed, the short allele has often been conceptualized as a “depression gene”.

Caspi and associates were the first to show that 5-HTTLPR moderates the effects of stressful life events during early adulthood on depressive symptoms, as well as on the probability of suicide ideation/attempts and of a major depression episode at age 26 years. Individuals with two short (s) alleles proved most adversely affected, whereas effects on l/l genotypes were weaker or entirely absent. Of special significance, however, is that individuals with the s/s genotype scored best on the outcomes just mentioned when stressful life events were absent, though not by very much.

Multiple research groups have attempted to replicate Caspi et al.’s findings of increased vulnerability to depression in response to stressful life events for individuals with one or more copies of the short (s) allele, with many succeeding, but certainly not all. The data presented in quite a number of studies indicate, however, that individuals carrying short alleles (s/s, s/l) did not just function most poorly when exposed to many stressors, but also best, showing the fewest problems, when encountering few or none. Calling explicit attention to such a pattern of results, Taylor and associates reported that young adults homozygous for short alleles (s/s) manifested greater depressive symptomatology than individuals with other allelic variants when exposed to early adversity (i.e., a problematic child-rearing history), as well as to many recent negative life events, yet the fewest symptoms when they experienced a supportive early environment or recent positive experiences. The same for-better-and-for-worse pattern of results concerning depression is evident in Eley et al.’s research on adolescent girls who were and were not exposed to risky family environments.

The effect of 5-HTTLPR in moderating environmental influences in a manner consistent with differential susceptibility is not restricted to depression and its symptoms. It also emerges in studies of anxiety and ADHD, particularly ADHD that persists into adulthood. In all these cases, whether the predictor was emotional abuse in childhood or a generally adverse childrearing environment, it proved to be individuals carrying short alleles who responded to developmental or concurrent experiences in a for-better-and-for-worse manner, depending on the nature of the experience in question.

Since such 5-HTTLPR-related GxE research consistent with differential susceptibility was last reviewed, ever more evidence in line with the work just cited has emerged. Consider in this regard evidence showing for-better-and-for-worse results in the case of those carrying one or more short alleles of 5-HTTLPR when the rearing predictor and child outcome were, respectively, (a) maternal responsiveness and child moral internalization, (b) child maltreatment and children’s antisocial behavior, and (c) supportive parenting and children’s positive affect.

Differential-susceptibility-related findings also emerged (among male African-American adolescents) when (d) perceived racial discrimination was used to predict conduct problems; (e) when life events were used to predict neuroticism and (f) the life satisfaction of young adults; and (g) when retrospectively reported childhood adversity was used to explain aspects of impulsivity among college students (e.g., pervasive influence of feelings, feelings trigger action). Especially noteworthy are the results of a recent meta-analysis of GxE findings pertaining to children under 18 years of age, showing that short-allele carriers are more susceptible to the effects of both positive and negative developmental experiences and environmental exposures, at least in the case of Caucasians.

As was the case with DRD4, there is also evidence from intervention studies documenting differential susceptibility. Consider in this regard Drury and associates data showing that it was only children growing up in Romanian orphanages who carried 5-HTTLPR short alleles who benefited from being randomly assigned to high quality foster care in terms of reductions in the display of indiscriminant friendliness. Eley and associates also documented intervention benefits restricted to short allele carriers in their study of cognitive behavior therapy for children suffering from severe anxiety, but their design included only treated children (i.e., did not involve a randomly assigned control group).

Polygenetic Plasticity

Most GxE research, like that just considered, has focused on one or another polymorphism, like DRD4 or 5-HTTLPR. In recent years, however, work has emerged focusing on multiple polymorphisms and thus reflecting the operation of epistatic (i.e., GxG) interactions, as well as GxGxE ones.

One can distinguish polygenetic GxE research in terms of the basis used for creating multigene composites. One strategy involves identifying genes which show main effects and then compositing only these to test an interaction with some environmental parameter. Another approach is to composite, for a secondary follow-up analysis, those genes found in a first round of inquiry to generate significant GxE interactions.

When Cicchetti and Rogosch applied this approach using four different polymorphisms, they found that as the number of sensitivity-to-the-environment alleles increased, so did the degree to which maltreated and non-maltreated low-income children differed on a composite measure of resilient functioning in a for-better-and-for-worse manner.
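As a sketch of how such a composite GxE test might look statistically, one can sum plasticity alleles into a single score and fit a regression that includes a gene-by-environment interaction term. The simulated data and effect sizes below are entirely hypothetical; this is not a reanalysis of any cited study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical data: number of "plasticity" alleles carried across four
# polymorphisms (0-4), and an environment score (negative = maltreatment,
# positive = support). The generative model builds in a pure GxE interaction.
plasticity = rng.integers(0, 5, size=n)        # composite allele count
env = rng.normal(0.0, 1.0, size=n)             # environmental quality
outcome = 50 + 3.0 * plasticity * env + rng.normal(0, 1, size=n)

# Fit outcome = b0 + b1*env + b2*plasticity + b3*(env*plasticity) by OLS.
X = np.column_stack([np.ones(n), env, plasticity, env * plasticity])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

# A substantial interaction coefficient (beta[3]) is the GxE signature: the
# slope of environment on outcome grows with the number of plasticity alleles,
# producing the "plasticity gradient" discussed in the text.
```

With real data, covariates and corrections for multiple testing would of course be required; the point is only that the interaction term, not the genetic main effect, carries the differential-susceptibility signal.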

A third approach, which has now been used successfully a number of times to chronicle differential susceptibility, involves compositing a set of genes selected on an a priori basis before evaluating GxE. Consider in this regard evidence indicating that 2-gene composites moderate links (a) between sexual abuse and adolescent depression/anxiety and somatic symptoms, (b) between perceived racial discrimination and risk-related cognitions reflecting a fast versus slow life-history strategy, (c) between contextual stress/support and aggression in young adulthood, and (d) between social class and post-partum depression.

Of note, too, is evidence that a 3-gene composite moderates the relation between a hostile, demoralizing community and family environment and aggression in early adulthood, and that a 5-gene composite moderates the relation between parenting and adolescent self-control.

Given research already reviewed, it is probably not surprising that there is also work examining genetically moderated intervention effects focusing on multi-gene composites rather than singular candidate genes. Consider in this regard Drury et al.’s findings showing that even though the brain-derived neurotrophic factor (BDNF) polymorphism did not by itself operate as a plasticity factor when it came to distinguishing those who did and did not benefit from the aforementioned foster-care intervention implemented with institutionalized children in Romania, the already-noted moderating effect of 5-HTTLPR was amplified if a child carried Met rather than Val alleles of BDNF along with short 5-HTTLPR alleles. In other words, the more plasticity alleles children carried, the more their indiscriminate friendliness declined over time when assigned to foster care, and the more it increased if they remained institutionalized.

Consider next Brody, Chen, and Beach’s confirmed prediction that the more GABAergic and dopaminergic genes African American teens carried, the more protected they were from increasing their alcohol use over time when enrolled in a whole-family prevention program. Such results once again call attention to the benefits of moving beyond single polymorphisms when it comes to operationalizing the plasticity phenotype. They also indicate that even if a single gene may not by itself moderate an intervention (or other environmental) effect, it could still play a role in determining the degree to which an individual benefits. These are insights future investigators and interventionists should keep in mind when seeking to illuminate “what works for whom?”

Unknowns in the Differential Susceptibility Equation

The notion of differential susceptibility, derived as it is from evolutionary theorizing, has gained great attention in recent years, including a special section in the journal Development and Psychopathology.

Although research summarized here suggests that the concept has utility, there are many “unknowns,” several of which are highlighted in this concluding section.

Domain General or Domain Specific?

Is it the case that some children, perhaps those who begin life as highly negatively emotional, are more susceptible both to a wide variety of rearing influences and with respect to a wide variety of developmental outcomes, as is presumed in the use of concepts like “fixed” and “plastic” strategists, with the latter being highly malleable and the former hardly at all? Boyce and Ellis contend that a general psychobiological reactivity makes some children especially vulnerable to stress and thus to general health problems. Or is it the case, as Belsky wonders and Kochanska, Aksan, and Joy argue, that different children are susceptible to different environmental influences (e.g., nurturance, hostility) and with respect to different outcomes? Pertinent to this idea are findings of Caspi and Moffitt indicating that different genes differentially moderated the effect of child maltreatment on antisocial behavior (MAO-A) and on depression (5-HTT).

Continuous Versus Discrete Plasticity?

The central argument that children vary in their susceptibility to rearing influences raises the question of how to conceptualize differential susceptibility: categorically (some children highly plastic and others not at all) or continuously (some children simply more malleable than others)? It may even be that plasticity is discrete for some environment-outcome relations, with some individuals affected and others not at all (e.g., gender-specific effects), but more continuous for other susceptibility factors (e.g., the increasing vulnerability to stress of parents with decreasing dopaminergic efficiency). Certainly the work which composites multiple genotypes implies that there is a “plasticity gradient”, with some children higher and some lower in plasticity.

Mechanisms

Susceptibility factors are the moderators of the relation between the environment and developmental outcome, but they do not elucidate the mechanism of differential influence.

Several (non-mutually exclusive) explanations have been advanced for the heightened susceptibility of negatively emotional infants. Suomi posits that the timidity of “uptight” infants affords them extensive opportunity to learn by watching, a view perhaps consistent with Bakermans-Kranenburg and van IJzendoorn’s aforementioned findings pertaining to DRD4, given the link between the dopamine system and attention. Kochanska et al. contend that the ease with which anxiety is induced in fearful children makes them highly responsive to parental demands.

And Belsky speculates that negativity actually reflects a highly sensitive nervous system on which experience registers powerfully negatively when not regulated by the caregiver but positively when coregulation occurs, a point of view somewhat related to Boyce and Ellis’ proposal that susceptibility may reflect prenatally programmed hyper-reactivity to stress.

*

Childhood Adversity Can Change Your Brain. How People Recover From Post Childhood Adversity Syndrome – Donna Jackson Nakazawa * Future Directions in Childhood Adversity and Youth Psychopathology – Katie A. McLaughlin.

Childhood Adversity: exposure during childhood or adolescence to environmental circumstances that are likely to require significant psychological, social, or neurobiological adaptation by an average child and that represent a deviation from the expectable environment.

Early emotional trauma changes who we are, but we can do something about it.

The brain and body are never static; they are always in the process of becoming and changing.

Findings from epidemiological studies indicate clearly that exposure to childhood adversity powerfully shapes risk for psychopathology.

This research tells us that what doesn’t kill you doesn’t necessarily make you stronger; far more often, the opposite is true.

Donna Jackson Nakazawa

If you’ve ever wondered why you’ve been struggling a little too hard for a little too long with chronic emotional and physical health conditions that just won’t abate, feeling as if you’ve been swimming against some invisible current that never ceases, a new field of scientific research may offer hope, answers, and healing insights.

In 1995, physicians Vincent Felitti and Robert Anda launched a large scale epidemiological study that probed the child and adolescent histories of 17,000 subjects, comparing their childhood experiences to their later adult health records. The results were shocking: Nearly two thirds of individuals had encountered one or more Adverse Childhood Experiences (ACEs), a term Felitti and Anda coined to encompass the chronic, unpredictable, and stress inducing events that some children face. These included growing up with a depressed or alcoholic parent; losing a parent to divorce or other causes; or enduring chronic humiliation, emotional neglect, or sexual or physical abuse. These forms of emotional trauma went beyond the typical, everyday challenges of growing up.

The number of Adverse Childhood Experiences an individual had had predicted the amount of medical care she’d require as an adult with surprising accuracy:

– Individuals who had faced 4 or more categories of ACEs were twice as likely to be diagnosed with cancer as individuals who hadn’t experienced childhood adversity.

– For each ACE Score a woman had, her risk of being hospitalized with an autoimmune disease rose by 20 percent.

– Someone with an ACE Score of 4 was 460 percent more likely to suffer from depression than someone with an ACE Score of 0.

– An ACE Score greater than or equal to 6 shortened an individual’s lifespan by almost 20 years.
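The dose-response figures listed above can be encoded as a small lookup, purely as an illustration of how the published associations scale with ACE Score. These are population-level ACE Study statistics, not individual predictions, and treating the autoimmune figure as a flat 20 percent added per ACE point is an assumption about how the reported rate compounds.

```python
# Purely illustrative: encodes the dose-response associations above as a
# lookup. Population-level statistics, not individual predictions; the
# additive +20%-per-point autoimmune reading is an assumption.

def ace_risk_summary(ace_score: int) -> dict:
    """Map an ACE Score (0-10) to the associations reported in the text."""
    return {
        "cancer_risk_doubled": ace_score >= 4,
        "autoimmune_relative_risk": round(1 + 0.20 * ace_score, 2),
        "depression_460pct_more_likely": ace_score >= 4,
        "lifespan_reduced_~20_years": ace_score >= 6,
    }
```

For example, a score of 4 yields a doubled cancer risk flag and an autoimmune relative risk of 1.8 under the additive reading.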

The ACE Study tells us that experiencing chronic, unpredictable toxic stress in childhood predisposes us to a constellation of chronic conditions in adulthood. But why? Today, in labs across the country, neuroscientists are peering into the once inscrutable brain-body connection, and breaking down, on a biochemical level, exactly how the stress we face when we’re young catches up with us when we’re adults, altering our bodies, our cells, and even our DNA. What they’ve found may surprise you.

Some of these scientific findings can be a little overwhelming to contemplate. They compel us to take a new look at how emotional and physical pain are intertwined.

1. Epigenetic Shifts

When we’re thrust over and over again into stress inducing situations during childhood or adolescence, our physiological stress response shifts into overdrive, and we lose the ability to respond appropriately and effectively to future stressors 10, 20, even 30 years later. This happens due to a process known as gene methylation, in which small chemical markers, or methyl groups, adhere to the genes involved in regulating the stress response, and prevent these genes from doing their jobs.

As the function of these genes is altered, the stress response becomes reset to “high” for life, promoting inflammation and disease.

This can make us more likely to overreact to the everyday stressors we meet in our adult life (an unexpected bill, a disagreement with a spouse, a car that swerves in front of us on the highway), creating more inflammation. This, in turn, predisposes us to a host of chronic conditions, including autoimmune disease, heart disease, cancer, and depression.

Indeed, Yale researchers recently found that children who’d faced chronic, toxic stress showed changes “across the entire genome,” in genes that not only oversee the stress response, but also in genes implicated in a wide array of adult diseases. This new research on early emotional trauma, epigenetic changes, and adult physical disease breaks down longstanding delineations between what the medical community has long seen as “physical” disease versus what is “mental” or “emotional.”

2. Size and Shape of the Brain

Scientists have found that when the developing brain is chronically stressed, it releases a hormone that actually shrinks the size of the hippocampus, an area of the brain responsible for processing emotion and memory and managing stress. Recent magnetic resonance imaging (MRI) studies suggest that the higher an individual’s ACE Score, the less gray matter he or she has in other key areas of the brain, including the prefrontal cortex, an area related to decision making and self-regulatory skills, and the amygdala, or fear-processing center. Kids whose brains have been changed by their Adverse Childhood Experiences are more likely to become adults who find themselves overreacting to even minor stressors.

3. Neural Pruning

Children have an overabundance of neurons and synaptic connections; their brains are hard at work, trying to make sense of the world around them. Until recently, scientists believed that the pruning of excess neurons and connections was achieved solely in a “use-it-or-lose-it” manner, but a surprising new player in brain development has appeared on the scene: non-neuronal brain cells known as microglia. These cells, which make up one-tenth of all the cells in the brain and are actually part of the immune system, participate in the pruning process. They prune synapses like a gardener prunes a hedge, and they also engulf and digest entire cells and cellular debris, thereby playing an essential housekeeping role.

But when a child faces the unpredictable, chronic stress of Adverse Childhood Experiences, microglial cells “can get really worked up and crank out neurochemicals that lead to neuroinflammation,” says Margaret McCarthy, PhD, whose research team at the University of Maryland Medical Center studies the developing brain. “This below-the-radar state of chronic neuroinflammation can lead to changes that reset the tone of the brain for life.”

That means that kids who come into adolescence with a history of adversity and lack the presence of a consistent, loving adult to help them through it may become more likely to develop mood disorders or have poor executive functioning and decision-making skills.

4. Telomeres

Early trauma can make children seem “older,” emotionally speaking, than their peers. Now, scientists at Duke University; the University of California, San Francisco; and Brown University have discovered that Adverse Childhood Experiences may prematurely age children on a cellular level as well. Adults who’d faced early trauma show greater erosion in what are known as telomeres, the protective caps that sit on the ends of DNA strands, like the caps on shoelaces, to keep the genome healthy and intact. As our telomeres erode, we’re more likely to develop disease, and our cells age faster.

5. Default Mode Network

Inside each of our brains, a network of neurocircuitry known as the “default mode network” quietly hums along, like a car idling in a driveway. It unites areas of the brain associated with memory and thought integration, and it’s always on standby, ready to help us figure out what we need to do next. “The dense connectivity in these areas of the brain helps us to determine what’s relevant or not relevant, so that we can be ready for whatever our environment is going to ask of us,” explains Ruth Lanius, neuroscientist, professor of psychiatry, and director of the Post-Traumatic Stress Disorder (PTSD) Research Unit at the University of Western Ontario.

But when children face early adversity and are routinely thrust into a state of fight-or-flight, the default mode network starts to go offline; it’s no longer helping them to figure out what’s relevant, or what they need to do next.

According to Lanius, kids who’ve faced early trauma have less connectivity in the default mode network, even decades after the trauma occurred. Their brains don’t seem to enter that healthy idling position, and so they may have trouble reacting appropriately to the world around them.

6. Brain-Body Pathway

Until recently, it’s been scientifically accepted that the brain is ”immune-privileged,” or cut off from the body’s immune system. But that turns out not to be the case, according to a groundbreaking study conducted by researchers at the University of Virginia School of Medicine. Researchers found that an elusive pathway travels between the brain and the immune system via lymphatic vessels. The lymphatic system, which is part of the circulatory system, carries lymph, a liquid that helps to eliminate toxins, and moves immune cells from one part of the body to another. Now we know that the immune system pathway includes the brain.

The results of this study have profound implications for ACE research. For a child who’s experienced adversity, the relationship between mental and physical suffering is strong: the inflammatory chemicals that flood a child’s brain when she’s chronically stressed aren’t confined to the brain alone; they’re shuttled from head to toe.

7. Brain Connectivity

Ryan Herringa, neuropsychiatrist and assistant professor of child and adolescent psychiatry at the University of Wisconsin, found that children and teens who’d experienced chronic childhood adversity showed weaker neural connections between the prefrontal cortex and the hippocampus. Girls also displayed weaker connections between the prefrontal cortex and the amygdala. The prefrontal cortex-amygdala relationship plays an essential role in determining how emotionally reactive we’re likely to be to the things that happen to us in our day-to-day life, and how likely we are to perceive these events as stressful or dangerous.

According to Herringa:

“If you are a girl who has had Adverse Childhood Experiences and these brain connections are weaker, you might expect that in just about any stressful situation you encounter as life goes on, you may experience a greater level of fear and anxiety.”

Girls with these weakened neural connections, Herringa found, stood at a higher risk for developing anxiety and depression by the time they reached late adolescence. This may, in part, explain why females are nearly twice as likely as males to suffer from later mood disorders.

This science can be overwhelming, especially to those of us who are parents. So, what can you do if you or a child you love has been affected by early adversity?

The good news is that, just as our scientific understanding of how adversity affects the developing brain is growing, so is our scientific insight into how we can offer the children we love resilient parenting, and how we can all take small steps to heal body and brain. Just as physical wounds and bruises heal, just as we can regain our muscle tone, we can recover function in under-connected areas of the brain. The brain and body are never static; they are always in the process of becoming and changing.

Donna Jackson Nakazawa

8 Ways People Recover From Post Childhood Adversity Syndrome

New research leads to new approaches with wide benefits.

In this infographic, I show the link between Adverse Childhood Experiences, later physical adult disease, and what we can do to heal.

Cutting edge research tells us that experiencing childhood emotional trauma can play a large role in whether we develop physical disease in adulthood. In Part 1 of this series we looked at the growing scientific link between childhood adversity and adult physical disease. This research tells us that what doesn’t kill you doesn’t necessarily make you stronger; far more often, the opposite is true.

Adverse Childhood Experiences (ACEs), which include emotional or physical neglect, harm developing brains, predisposing them to autoimmune disease, heart disease, cancer, depression, and a number of other chronic conditions decades after the trauma took place.

Recognizing that chronic childhood stress can play a role, along with genetics and other factors, in developing adult illnesses and relationship challenges can be enormously freeing. If you have been wondering why you’ve been struggling a little too hard for a little too long with your emotional and physical wellbeing, feeling as if you’ve been swimming against some invisible current that never ceases, this “aha” can come as a welcome relief. Finally, you can begin to see the current and understand how it’s been working steadily against you all of your life.

Once we understand how the past can spill into the present, and how a tough childhood can become a tumultuous, challenging adulthood, we have a new possibility of healing. As one interviewee in my new book, Childhood Disrupted: How Your Biography Becomes Your Biology, and How You Can Heal, said, when she learned about Adverse Childhood Experiences for the first time, “Now I understand why I’ve felt all my life as if I’ve been trying to dance without hearing any music.” Suddenly, she felt the possibility that by taking steps to heal from the emotional wounds of the past she might find a new layer of healing in the present.

There is truth to the old saying that knowledge is power. Once you understand that your body and brain have been harmed by the biological impact of early emotional trauma, you can at last take the necessary, science based steps to remove the fingerprints that early adversity left on your neurobiology. You can begin a journey to healing, to reduce your proclivity to inflammation, depression, addiction, physical pain, and disease.

Science tells us that biology does not have to be destiny. ACEs can last a lifetime but they don’t have to. We can reboot our brains. Even if we have been set on high reactive mode for decades or a lifetime, we can still dial it down. We can respond to life’s inevitable stressors more appropriately and shift away from an overactive inflammatory response. We can become neurobiologically resilient. We can turn bad epigenetics into good epigenetics and rescue ourselves.

Today, researchers recognize a range of promising approaches to help create new neurons (known as neurogenesis), make new synaptic connections between those neurons (known as synaptogenesis), promote new patterns of thoughts and reactions, bring underconnected areas of the brain back online, and reset our stress response so that we decrease the inflammation that makes us ill.

We have the capacity, within ourselves, to create better health. We might call this brave undertaking “the neurobiology of awakening.”

There can be no better time than now to begin your own awakening, to proactively help yourself and those you love, embrace resilience, and move forward toward growth, even transformation.

Here are 8 steps to try:

1. Take the ACE Questionnaire

The single most important step you can take toward healing and transformation is to fill out the ACE Questionnaire for yourself and share your results with your healthcare practitioner. For many people, taking the 10-question survey “helps to normalize the conversation about Adverse Childhood Experiences and their impact on our lives,” says Vincent Felitti, co-founder of the ACE Study. “When we make it okay to talk about what happened, it removes the power that secrecy so often has.”

You’re not asking your healthcare practitioner to act as your therapist, or to change your prescriptions; you’re simply acknowledging that there might be a link between your past and your present. Ideally, given the recent discoveries in the field of ACE research, your doctor will also acknowledge that this link is plausible, and add some of the following modalities to your healing protocol.

2. Begin Writing to Heal.

Think about writing down your story of childhood adversity, using a technique psychologists call “writing to heal.” James Pennebaker, professor of psychology at the University of Texas, Austin, developed this assignment, which demonstrates the effects of writing as a healing modality. He suggests: “Over the next four days, write down your deepest emotions and thoughts about the emotional upheaval that has been influencing your life the most. In your writing, really let go and explore the event and how it has affected you. You might tie this experience to your childhood, your relationship with your parents, people you have loved or love now…Write continuously for twenty minutes a day.”

When Pennebaker had students complete this assignment, their grades went up. When adults wrote to heal, they made fewer doctors’ visits and demonstrated changes in their immune function. The exercise of writing about your secrets, even if you destroy what you’ve written afterward, has been shown to have positive health effects.

3. Practice Mindfulness Meditation.

A growing body of research indicates that individuals who’ve practiced mindfulness meditation and Mindfulness Based Stress Reduction (MBSR) show an increase in gray matter in the same parts of the brain that are damaged by Adverse Childhood Experiences and shifts in genes that regulate their physiological stress response.

According to Trish Magyari, LCPC, a mindfulness-based psychotherapist and researcher who specializes in trauma and illness, adults with a history of abuse who took part in a “trauma-sensitive” MBSR program had less anxiety and depression, and demonstrated fewer PTSD symptoms, even two years after taking the course.

Many meditation centers offer MBSR classes and retreats, but you can practice anytime in your own home. Choose a time and place to focus on your breath as it enters and leaves your nostrils; the rise and fall of your chest; the sensations in your hands or through the whole body; or sounds within or around you. If you get distracted, just come back to your anchor.

There are many medications you can take that dampen the sympathetic nervous system (which ramps up your stress response when you come into contact with a stressor), but there aren’t any medications that boost the parasympathetic nervous system (which helps to calm your body down after the stressor has passed).

Your breath is the best natural calming treatment, and it has no side effects.

4. Yoga

When children face ACEs, they often store decades of physical tension from a fight, flight, or freeze state in their bodies. PET scans show that yoga decreases blood flow to the amygdala, the brain’s alarm center, and increases blood flow to the frontal lobe and prefrontal cortex, which help us to react to stressors with a greater sense of equanimity.

Yoga has also been found to increase levels of GABA, or gamma-aminobutyric acid, a chemical that improves brain function, promotes calm, and helps to protect us against depression and anxiety.

5. Therapy

Sometimes, the long lasting effects of childhood trauma are just too great to tackle on our own. In these cases, says Jack Kornfield, psychologist and meditation teacher, “meditation is not always enough.” We need to bring unresolved issues into a therapeutic relationship, and get backup in unpacking the past.

When we partner with a skilled therapist to address the adversity we may have faced decades ago, those negative memories become paired with the positive experience of being seen by someone who accepts us as we are, and a new window to healing opens.

Part of the power of therapy lies in the presence of a safe, accepting person. A therapist’s unconditional acceptance helps us to modify the circuits in our brain that tell us that we can’t trust anyone, and to grow new, healthier neural connections.

It can also help us to heal the underlying, cellular damage of traumatic stress, down to our DNA. In one study, patients who underwent therapy showed changes in the integrity of their genome, even a year after their regular sessions ended.

6. EEG Neurofeedback

Electroencephalographic (EEG) Neurofeedback is a clinical approach to healing childhood trauma in which patients learn to influence their thoughts and feelings by watching their brain’s electrical activity in real-time, on a laptop screen. Someone hooked up to the computer via electrodes on his scalp might see an image of a field; when his brain is under-activated in a key area, the field, which changes in response to neural activity, may appear to be muddy and gray, the flowers wilted; but when that area of the brain reactivates, it triggers the flowers to burst into color and birds to sing. With practice, the patient learns to initiate certain thought patterns that lead to neural activity associated with pleasant images and sounds.

You might think of a licensed EEG Neurofeedback therapist as a musical conductor, who’s trying to get different parts of the orchestra to play a little more softly in some cases, and a little louder in others, in order to achieve harmony. After just one EEG Neurofeedback session, patients showed greater neural connectivity and improved emotional resilience, making it a compelling option for those who’ve suffered the long lasting effects of chronic, unpredictable stress in childhood.

7. EMDR Therapy

Eye Movement Desensitization and Reprocessing (EMDR) is a potent form of psychotherapy that helps individuals to remember difficult experiences safely and relate those memories in ways that no longer cause pain in the present.

Here’s how it works:

EMDR-certified therapists help patients to trigger painful emotions. As these emotions lead the patients to recall specific difficult experiences, they are asked to shift their gaze back and forth rapidly, often by following a pattern of lights or a wand that moves from right to left, right to left, in a movement that simulates the healing action of REM sleep.

The repetitive directing of attention in EMDR induces a neurobiological state that helps the brain to re-integrate neural connections that have been dysregulated by chronic, unpredictable stress and past experiences. This re-integration can, in turn, reduce the episodic, traumatic memories we store in the hippocampus and downshift the amygdala’s activity. Other studies have shown that EMDR increases the volume of the hippocampus.

EMDR therapy has been endorsed by the World Health Organization as one of only two forms of psychotherapy recommended for children and adults in natural disaster and war settings.

8. Rally Community Healing

Often, ACEs stem from bad relationships: neglectful relatives, schoolyard bullies, abusive partners. But the right kinds of relationships can help to make us whole again. When we find people who support us, when we feel “tended and befriended,” our bodies and brains have a better shot at healing. Research has found that having strong social ties improves outcomes for women with breast cancer, multiple sclerosis, and other diseases. In part, that’s because positive interactions with others boost our production of oxytocin, a feel-good hormone that dials down the inflammatory stress response.

If you’re at a loss for ways to connect, try a mindfulness meditation community or an MBSR class, or pass along the ACE Questionnaire or even my newest book, Childhood Disrupted: How Your Biography Becomes Your Biology, and How You Can Heal, to family and friends to spark important, meaningful conversations.

You’re Not Alone

Whichever modalities you and your physician choose to implement, it’s important to keep in mind that you’re not alone. When you begin to understand that your feelings of loss, shame, guilt, anxiety, or grief are shared by so many others, you can lend support and swap ideas for healing.

When you embrace the process of healing despite your Adverse Childhood Experiences, you don’t just become who you might have been if you hadn’t encountered childhood suffering in the first place. You gain something better: the hard-earned gift of life wisdom, which you bring forward into every arena of your life. The recognition that you have lived through hard times drives you to develop deeper empathy, seek more intimacy, value life’s sweeter moments, and treasure your connectedness to others and to the world at large. This is the hard-won benefit of having known suffering.

Best of all, you can find ways to start right where you are, no matter where you find yourself.

Future Directions in Childhood Adversity and Youth Psychopathology

Katie A. McLaughlin, Department of Psychology, University of Washington

Abstract

Despite long-standing interest in the influence of adverse early experiences on mental health, systematic scientific inquiry into childhood adversity and developmental outcomes has emerged only recently. Existing research has amply demonstrated that exposure to childhood adversity is associated with elevated risk for multiple forms of youth psychopathology.

In contrast, knowledge of the developmental mechanisms linking childhood adversity to the onset of psychopathology, and of whether those mechanisms are general or specific to particular kinds of adversity, remains cursory.

Greater understanding of these pathways, and identification of protective factors that buffer children from developmental disruptions following exposure to adversity, are essential to guide the development of interventions to prevent the onset of psychopathology following adverse childhood experiences.

This article provides recommendations for future research in this area. In particular, use of a consistent definition of childhood adversity, integration of studies of typical development with those focused on childhood adversity, and identification of distinct dimensions of environmental experience that differentially influence development are required to uncover mechanisms that explain how childhood adversity is associated with numerous psychopathology outcomes (i.e., multifinality) and identify moderators that shape divergent trajectories following adverse childhood experiences.

A transdiagnostic model that highlights disruptions in emotional processing and poor executive functioning as key mechanisms linking childhood adversity with multiple forms of psychopathology is presented as a starting point in this endeavour. Distinguishing between general and specific mechanisms linking childhood adversity with psychopathology is needed to generate empirically informed interventions to prevent the long term consequences of adverse early environments on children’s development.

The lasting influence of early experience on mental health across the lifespan has been emphasized in theories of the etiology of psychopathology since the earliest formulations of mental illness. In particular, the roots of mental disorder have often been argued to be a consequence of adverse environmental experiences occurring in childhood. Despite this long-standing interest, systematic scientific inquiry into the effects of childhood adversity on health and development has emerged only recently.

Prior work on childhood adversity focused largely on individual types of adverse experiences, such as death of a parent, divorce, sexual abuse, or poverty, and research on these topics evolved as relatively independent lines of inquiry. The transition to considering these types of adversities as indicators of the same underlying construct was prompted, in part, by the findings of a seminal study examining childhood adversity as a determinant of adult physical and mental health and by advances in theoretical conceptualizations of stress. Specifically, findings from the Adverse Childhood Experiences (ACE) Study documented high levels of co-occurrence of multiple forms of childhood adversity and strong associations of exposure to adverse childhood experiences with a wide range of adult health outcomes (Dong et al., 2004; Edwards, Holden, Felitti, & Anda, 2003; Felitti et al., 1998).

Around the same time, the concept of allostatic load was introduced as a comprehensive neurobiological model of the effects of stress (McEwen, 1998, 2000). Allostatic load provided a framework for explaining the neurobiological mechanisms linking a variety of adverse social experiences to health. Together, these discoveries sparked renewed interest in the childhood determinants of physical and mental health. Since that time there has been a veritable explosion of research into the impact of childhood adversity on developmental outcomes, including psychopathology.

CHILDHOOD ADVERSITY AND PSYCHOPATHOLOGY

Over the past two decades, hundreds of studies have examined the associations between exposure to childhood adversity and risk for psychopathology (Evans, Li, & Whipple, 2013). Here, I briefly review this evidence, focusing specifically on findings from epidemiological studies designed to allow inferences to be drawn at the population level. These studies have documented five general patterns with regard to childhood adversity and the distribution of mental disorders in the population.

First, despite differences across studies in the prevalence of specific types of adversity, all population-based studies indicate that exposure to childhood adversity is common. The prevalence of exposure to childhood adversity is estimated at about 50% in the U.S. population across numerous epidemiological surveys (Green et al., 2010; Kessler, Davis, & Kendler, 1997; McLaughlin, Conron, Koenen, & Gilman, 2010; McLaughlin, Green, et al., 2012). Remarkably similar prevalence estimates have been documented in other high-income countries, as well as in low- and middle-income countries worldwide (Kessler et al., 2010).

Second, individuals who have experienced childhood adversity are at elevated risk for developing a lifetime mental disorder compared to individuals without such exposure, and the odds of developing a lifetime mental disorder increase as exposure to adversity increases (Edwards et al., 2003; Green et al., 2010; Kessler et al., 1997; Kessler et al., 2010; McLaughlin, Conron, et al., 2010; McLaughlin, Green, et al., 2012).

Third, exposure to childhood adversity confers vulnerability to psychopathology that persists across the life course. Childhood adversity exposure is associated not only with risk of mental disorder onset in childhood and adolescence (McLaughlin, Green, et al., 2012) but also with elevated odds of developing a first onset mental disorder in adulthood, which persists after adjustment for mental disorders beginning at earlier stages of development (Green et al., 2010; Kessler et al., 1997; Kessler et al., 2010).

Fourth, the associations of childhood adversity with different types of commonly occurring mental disorders are largely nonspecific. Individuals who have experienced childhood adversity experience greater odds of developing mood, anxiety, substance use, and disruptive behavior disorders, with little meaningful variation in the strength of associations across disorder classes (Green et al., 2010; Kessler et al., 1997; Kessler et al., 2010; McLaughlin, Green, et al., 2012).

Recent epidemiological findings suggest that the associations of child maltreatment, a commonly measured form of adversity, with lifetime mental disorders operate entirely through a latent liability to experience internalizing and externalizing psychopathology with no direct effects on specific mental disorders that are not explained by this latent vulnerability (Caspi et al., 2014; Keyes et al., 2012).

Finally, exposure to childhood adversity explains a substantial proportion of mental disorder onsets in the population, both in the United States and cross-nationally (Afifi et al., 2008; Green et al., 2010; Kessler et al., 2010; McLaughlin, Green, et al., 2012). This reflects both the high prevalence of exposure to childhood adversity and the strong association of childhood adversity with the onset of psychopathology.

Together, findings from epidemiological studies indicate clearly that exposure to childhood adversity powerfully shapes risk for psychopathology in the population.

As such, it is time for the field to move beyond these types of basic descriptive studies to research designs aimed at identifying the underlying developmental mechanisms linking childhood adversity to psychopathology. Although ample research has been conducted examining mechanisms linking individual types of adversity to psychopathology (e.g., sexual abuse; Trickett, Noll, & Putnam, 2011), far less is known about which of these mechanisms are common across different types of adversity versus specific to particular types of experiences. Greater understanding of these pathways, as well as the identification of protective factors that buffer children from disruptions in emotional, cognitive, social, and neurobiological development following exposure to adversity, is essential to guide the development of interventions to prevent the onset of psychopathology in children exposed to adversity, a critical next step for the field.

However, persistent issues regarding the definition and measurement of childhood adversity must be addressed before meaningful progress on mechanisms, protective factors, and prevention of psychopathology following childhood adversity will be possible.

FUTURE DIRECTIONS IN CHILDHOOD ADVERSITY AND YOUTH PSYCHOPATHOLOGY

This article has two primary goals. The first is to provide recommendations for future research on childhood adversity and youth psychopathology. These recommendations relate to the definition and measurement of childhood adversity, the integration of studies of typical development with those on childhood adversity, and the importance of distinguishing between general and specific mechanisms linking childhood adversity to psychopathology.

The second goal is to provide a transdiagnostic model of mechanisms linking childhood adversity and youth psychopathology that incorporates each of these recommendations.

Defining Childhood Adversity

Childhood adversity is a construct in search of a definition. Despite the burgeoning interest and research attention devoted to childhood adversity, there is a surprising lack of consistency with regard to the definition and measurement of the construct. Key issues remain unaddressed in the literature regarding the definition of childhood adversity and the boundary conditions of the construct. To what does the construct of childhood adversity refer? What types of experiences qualify as childhood adversity and what types do not?

Where do we draw the line between normative experiences of stress and those that qualify as an adverse childhood experience? How does the construct of childhood adversity differ from other constructs that have been linked to psychopathology risk, including stress, toxic stress, and trauma? It will be critical to gain clarity on these definitional issues before more complex questions regarding mechanisms and protective factors can be systematically examined.

Even in the seminal ACE Study that spurred much of the recent research into childhood adversity, a concrete definition of adverse childhood experience is not provided. The original article from the study argues for the importance of understanding the lasting health effects of child abuse and “household dysfunction,” the latter of which is never defined specifically (Felitti et al., 1998). The CDC website for the ACE Study indicates that the ACE score, a count of the total number of adversities experienced, is designed to assess “the total amount of stress experienced during childhood.”
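The ACE score described by the CDC is, as stated, a simple tally of adversity categories endorsed. A minimal sketch of that counting logic, using illustrative placeholder category names rather than the official ACE Study items:

```python
# Hedged sketch: an ACE-style score is just a count of the adversity
# categories a respondent endorses. The category names below are
# illustrative placeholders, not the official ACE Study items.

def ace_score(responses: dict) -> int:
    """Return the number of adversity categories endorsed (True)."""
    return sum(1 for endorsed in responses.values() if endorsed)

example = {
    "emotional_abuse": True,
    "physical_abuse": False,
    "household_substance_use": True,
    "parental_separation": False,
}

print(ace_score(example))  # -> 2
```

A flat count like this weights every category equally, which fits the CDC’s framing of the score as a measure of the total amount of stress, and is part of what motivates the definitional questions raised below.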

Why has a concrete definition of childhood adversity remained elusive? As I see it, there is a relatively simple explanation for this notable gap in the literature. Childhood adversity is difficult to define but fairly obvious to most observers, making the construct an example of the classic standard of “you know it when you see it.” Although this has allowed a significant scientific knowledge base on childhood adversity to emerge within a relatively short period, the lack of an agreed-upon definition of the construct represents a significant impediment to future progress in the field.

How can we begin to build scientific consensus on the definition of childhood adversity? Critically, we must come to an agreement about what childhood adversity is and what it is not. Adversity is defined as “a state or instance of serious or continued difficulty or misfortune; a difficult situation or condition; misfortune or tragedy” (“Adversity,” 2015).

This provides a reasonable starting point. Adversity is an environmental event that must be serious (i.e., severe) or a series of events that continues over time (i.e., chronic).

Building on Scott Monroe’s (2008) definition of life stress and models of experience-expectant brain development (Baumrind, 1993; Fox, Levitt, & Nelson, 2010), I propose that childhood adversity should be defined as experiences that are likely to require significant adaptation by an average child and that represent a deviation from the expectable environment. The expectable environment refers to a wide range of species-typical environmental inputs that the human brain requires to develop normally. These include sensory inputs (e.g., variation in patterned light information that is required for normal development of the visual system), exposure to language, and the presence of a sensitive and responsive caregiver (Fox et al., 2010).

As I have argued elsewhere (McLaughlin, Sheridan, & Lambert, 2014; Sheridan & McLaughlin, 2014), deviations from the expectable environment often take two primary forms: an absence of expected inputs (e.g., limited exposure to language or the absence of a primary caregiver), or the presence of unexpected inputs that represent significant threats to the physical integrity or well-being of the child (e.g., exposure to violence).

A similar approach to classifying key forms of child adversity has been articulated by others as well (Farah et al., 2008; Humphreys & Zeanah, 2015). These experiences can either be chronic (e.g., prolonged neglect) or involve single events that are severe enough to represent a deviation from the expectable environment (e.g., sexual abuse).

Together, this provides a working definition of childhood adversity: exposure during childhood or adolescence to environmental circumstances that are likely to require significant psychological, social, or neurobiological adaptation by an average child and that represent a deviation from the expectable environment.

This definition provides some clarity about what childhood adversity is not. The clearest boundary condition involves the developmental timing of exposure; experiences classified as childhood adversity must occur prior to adulthood, either during childhood or adolescence. Most research on childhood adversity has taken a broad definition of childhood, including events occurring during either childhood or adolescence. Although the demarcation between adolescence and adulthood is itself a point of debate, relative consensus exists regarding the onset of adult roles as the end of adolescence (Steinberg, 2014).

Second, childhood adversity refers to an event or ongoing events in the environment. Childhood adversity thus refers only to specific environmental circumstances or events and not to an individual child’s response to those circumstances.

Third, childhood adversity refers to environmental conditions that are likely to require significant psychological, social, or neurobiological adaptation by an average child; therefore, events that represent transient or minor hassles should not qualify.

What types of events should be considered severe enough to warrant classification as adversity? Although there is no absolute rule or formula that can be used to distinguish circumstances or events requiring significant adaptation from those that are less severe or impactful, childhood adversity should include conditions or events that are likely to have a meaningful and lasting impact on developmental processes for most children who experience them. In other words, experiences that could alter fundamental aspects of development in emotional, cognitive, social, or neurobiological domains are the types of experiences that should qualify as adversity.

Studies of childhood adversity should clearly define the study-specific decision rules used to distinguish between adversity and more normative stressors.

Finally, environmental circumstances or stressors that do not represent deviations from the expectable environment should not be classified as childhood adversity. In other words, childhood adversity should not include any and all stressors that occur during childhood or adolescence. Two examples of childhood stressors that would likely not qualify as childhood adversity based on this definition, because they do not meet the condition of representing a deviation from the expectable environment, are moving to a new school and the death of an elderly grandparent. Each of these childhood stressors would require adaptation by an average child and could influence mental health and development. However, neither represents a deviation from the expectable childhood environment, and therefore neither meets the proposed definition of childhood adversity.

A key question for the field is whether the definition of childhood adversity should be narrow or broad. This question will determine whether other common forms of adversity or stress should be considered as indicators of childhood adversity. For example, many population-based studies have included parental psychopathology and divorce as forms of adversity (Felitti et al., 1998; Green et al., 2010). Given the high prevalence of psychopathology and divorce in the population, consideration of any form of parental psychopathology or any type of divorce as a form of adversity results in a fairly broad definition of adversity; certainly, not all cases of parental psychopathology or all divorces result in significant adversity for children. A more useful approach might be to consider only those cases of parental psychopathology or divorce that result in parenting behavior that deviates from the expectable environment (i.e., consistent unavailability, unresponsiveness, or insensitive care) or that generate other types of significant adversity for children (e.g., economic adversity, emotional abuse, etc.) as meeting the threshold for childhood adversity. Providing these types of boundary conditions is important to prevent the construct of childhood adversity from meaning everything and nothing at the same time.

Finally, how does childhood adversity differ from related constructs, including stress, toxic stress, and trauma that can also occur during childhood? What is unique about the construct of childhood adversity that is not captured in definitions of these similar constructs?

First, how is childhood adversity different from stress? The prevailing conceptualization of life stress defines the construct as the adaptation of an organism to specific circumstances that change over time (Monroe, 2008). This definition includes three primary components that interact with one another: environment (the circumstance or event that requires adaptation by the organism), organism (the response to the environmental stimulus), and time (the interactions between the organism and the environment over time; Monroe, 2008). In contrast, childhood adversity refers only to the first of these three components, the environmental aspect of stress.

Second, how is adversity different from toxic stress, a construct recently developed by Jack Shonkoff and colleagues (Shonkoff & Garner, 2012)? Toxic stress refers to the second component of stress just described, the response of the organism. Specifically, toxic stress refers to exaggerated, frequent, or prolonged activation of physiological stress response systems in response to an accumulation of multiple adversities over time in the absence of protection from a supportive caregiver (Shonkoff & Garner, 2012). The concept of toxic stress is conceptually similar to the construct of allostatic load as defined by McEwen (2000) and focuses on a different aspect of stress than childhood adversity.

Finally, how is childhood adversity distinct from trauma? Trauma is defined as exposure to actual or threatened death, serious injury, or sexual violence, either by directly experiencing or witnessing such events or by learning of such events occurring to a close relative or friend (American Psychiatric Association, 2013). Traumatic events occurring in childhood represent one potential form of childhood adversity, but not all types of childhood adversity are traumatic. Examples of adverse childhood experiences that would not be considered traumatic are neglect; poverty; and the absence of a stable, supportive caregiver.

The first concrete recommendation for future research is that the field must utilize a consistent definition of childhood adversity. A useful definition must have clarity about what childhood adversity is and what it is not, provide guidance about decision rules for applying the definition in specific contexts, and increase consistency in the measurement and application of childhood adversity across studies. The definition proposed here, that childhood adversity involves experiences that are likely to require significant adaptation by an average child and that represent a deviation from the expectable environment, represents a starting point in this endeavor, although consideration of alternative definitions and scholarly debate about the relative merits of different definitions is encouraged.

Integrating Studies of Typical and Atypical Development

A developmental psychopathology perspective emphasizes the reciprocal and integrated nature of our understanding of normal and abnormal development (Cicchetti, 1996; Cicchetti & Lynch, 1993; Lynch & Cicchetti, 1998). Normal developmental patterns must be characterized to identify developmental deviations, and abnormal developmental outcomes shed light on the normal developmental processes that lead to maladaptation when disrupted (Cicchetti, 1993; Sroufe, 1990). Maladaptive outcomes, including psychopathology, are considered to be the product of developmental processes (Sroufe, 1997, 2009). This implies that in order to uncover mechanisms linking childhood adversity to psychopathology, the developmental trajectory of the candidate emotional, cognitive, social, or neurobiological process under typical circumstances must first be characterized before examining how exposure to an adverse environment alters that trajectory. This approach has been utilized less frequently than would be expected in the literature on childhood adversity.

Recent work from Nim Tottenham’s lab on functional connectivity between the amygdala and medial prefrontal cortex (mPFC) highlights the utility of this strategy. In an initial study, Gee, Humphreys, et al. (2013) demonstrated age-related changes in amygdala-mPFC functional connectivity in a typically developing sample of children during a task involving passive viewing of fearful and neutral faces. Specifically, they observed a developmental shift from a pattern of positive amygdala-mPFC functional connectivity during early and middle childhood to a pattern of negative connectivity (i.e., higher mPFC activity, lower amygdala activity) beginning in the prepubertal period and continuing throughout adolescence (Gee, Humphreys, et al., 2013). Next, they examined how exposure to institutional rearing in infancy influenced these age-related changes, documenting a more mature pattern of negative functional connectivity among young children with a history of institutionalization (Gee, Gabard-Durnam, et al., 2013).

Utilizing this type of approach is important not only to advance knowledge of developmental mechanisms underlying childhood adversity-psychopathology associations but also to leverage research on adverse environmental experiences to inform our understanding of typical development. Specifically, as frequently argued by Cicchetti (Cicchetti & Toth, 2009), research on atypical or aberrant developmental processes can provide a window into typical development not available through other means. This is particularly relevant in studies of some forms of childhood adversity that involve an absence of expected inputs from the environment, such as institutional rearing and child neglect (McLaughlin, Sheridan, & Lambert, 2014; Sheridan & McLaughlin, 2014). Examining the developmental consequences associated with deprivation in a particular type of input from the environment (e.g., the presence of an attachment figure, exposure to complex language) can provide insights into the types of environmental inputs that are required for a system or set of competencies to develop normally.

Evidence on the developmental trajectories of children raised in institutional settings provides an illustrative example. Institutions for abandoned and orphaned children vary widely, but a common feature across them is the absence of an attachment figure who provides sensitive and responsive care for each child (Smyke et al., 2007; Tottenham, 2012; Zeanah et al., 2003). Developmental research on children raised in institutional settings has provided ample evidence about the importance of the attachment relationship in early development for shaping numerous aspects of development. Unsurprisingly, most children raised in institutions fail to develop a secure attachment relationship to a caregiver; this is particularly true if children remain in institutional care past the age of 2 years (Smyke, Zeanah, Fox, Nelson, & Guthrie, 2010; Zeanah, Smyke, Koga, Carlson, & The Bucharest Early Intervention Project Core Group, 2005).

Children reared in institutional settings also exhibit social skills deficits, delays in language development, lasting disruptions in executive functioning skills, decrements in IQ, and atypical patterns of emotional processing (Almas et al., 2012; Bos, Fox, Zeanah, & Nelson, 2009; Nelson et al., 2007; Tibu et al., 2016; Tottenham et al., 2011; Windsor et al., 2011). Institutional rearing also has wide-ranging impacts on patterns of brain development, including neural structure and function (Gee et al., 2013; McLaughlin, Fox, Zeanah, & Nelson, 2011; McLaughlin, Sheridan, Winter, et al., 2014; Sheridan, Fox, Zeanah, McLaughlin, & Nelson, 2012; Tottenham et al., 2011).

Although children raised in institutional settings often experience deprivation in environmental inputs of many kinds, it is likely that the absence of a primary attachment figure in early development explains many of the downstream consequences of institutionalization on developmental outcomes. Indeed, recent evidence suggests that disruptions in attachment may be a causal mechanism linking institutional rearing with the onset of anxiety and depression in children. Specifically, in a randomized controlled trial of foster care as an intervention for orphaned children in Romania, improvements in attachment security were a mechanism underlying the preventive effects of the intervention on the onset of anxiety and depression in children (McLaughlin, Zeanah, Fox, & Nelson, 2012). By examining the developmental consequences of the absence of an expected input from the environment, namely, the presence of a primary attachment figure, studies of institutional rearing provide strong evidence for the centrality of the early attachment relationship in shaping numerous aspects of development.

Sensitive Periods

The integration of studies on typical and atypical development may be particularly useful in the identification of sensitive periods. Developmental psychopathology emphasizes the cumulative and hierarchical nature of development (Gottlieb, 1991a, 1991b; Sroufe, 2009; Sroufe, Egeland, & Kreutzer, 1990; Werner & Kaplan, 1963). Learning and acquisition of competencies at one point in development provide the scaffolding upon which subsequent skills and competencies are built, such that capabilities from previous periods are consolidated and reorganized in a dynamic, unfolding process across time. The primary developmental tasks occurring at the time of exposure to a risk factor are thought to be the most likely to be interrupted or disrupted by the experience. Developmental deviations from earlier periods are then carried forward and have consequences for children’s ability to successfully accomplish developmental tasks in a later period (Cicchetti & Toth, 1998; Sroufe, 1997). In other words, early experiences constrain future learning of patterns or associations that represent departures from those that were previously learned (Kuhl, 2004).

This concept points to a critical area for future research on childhood adversity involving the identification of sensitive periods of emotional, cognitive, social, and neurobiological development, when inputs from the environment are particularly influential. Sensitive periods have been identified both in sensory development and in the development of complex social-cognitive skills, including language (Hensch, 2005; Kuhl, 2004).

Emerging evidence from cognitive neuroscience also suggests the presence of developmental periods when specific regions of the brain are most sensitive to the effects of stress and adversity (Andersen et al., 2008).

However, identification of sensitive periods has remained elusive in other domains of emotional and social development, potentially reflecting the fact that sensitive periods exist for fewer processes in these domains. Nevertheless, determining how anomalous or atypical environmental inputs influence developmental processes differently based on the timing of exposure provides a unique opportunity to identify sensitive periods in development; in this way, research on adverse environments can inform our understanding of typical development by highlighting the environmental inputs that are necessary to foster adaptive development.

Identifying sensitive periods of emotional and social development requires detailed information on the timing of exposure to atypical or adverse environments, which is challenging to measure. To date, studies of institutional rearing have provided the best opportunity for studying sensitive periods in human emotional and social development, as it is straightforward to determine the precise period during which the child lived in the institutional setting.

Studies of institutional rearing have identified a sensitive period for the development of a secure attachment relationship at around 2 years of age; the majority of children placed into stable family care before that time ultimately develop secure attachments to a caregiver, whereas the majority of children placed after 2 years fail to develop secure attachments (Smyke et al., 2010).

Of interest, a sensitive period occurring around 2 years of age has also been identified for other domains, including reactivity of the autonomic nervous system and hypothalamic pituitary adrenal (HPA) axis to the environment and a neural marker of affective style (i.e., frontal electroencephalogram asymmetry; McLaughlin et al., 2011; McLaughlin, Sheridan, et al., 2015), suggesting the importance of the early attachment relationship in shaping downstream aspects of emotional and neurobiological development.

The second concrete recommendation for future research is to integrate studies of typical development with those focused on understanding the impact of childhood adversity. In particular, research that can shed light on sensitive periods in emotional, social, cognitive, and neurobiological development is needed. Identifying the developmental processes that are disrupted by exposure to particular types of adverse environments will be facilitated by first characterizing the typical developmental trajectories of the processes in question. In turn, studies of atypical or adverse environments should be leveraged to inform our understanding of the types of environmental inputs that are required, and when, for particular systems to develop normally.

Given the inherent problems in retrospective assessment of timing of exposure to particular environmental experiences, longitudinal studies with repeated measurements of environmental experience and acquisition of developmental competencies are likely to be most informative. Alternatively, the occurrence of exogenous events like natural disasters, terrorist attacks, and changes in policies or the availability of resources (e.g., the opening of the casino on a Native American reservation; Costello, Compton, Keeler, & Angold, 2003) provides additional opportunities to study sensitive periods of development. Identifying sensitive periods is likely to yield critical insights into the points in development when particular capabilities are most likely to be influenced by environmental experience, an issue of central importance for understanding both typical and atypical development. Such information can be leveraged to inform decisions about the points in time when psychosocial interventions for children exposed to adversity are likely to be maximally efficacious.

Explaining Multifinality

The principle of multifinality is central to developmental psychopathology (Cicchetti, 1993). Multifinality refers to the process by which the same risk and/or protective factors may ultimately lead to different developmental outcomes (Cicchetti & Rogosch, 1996).

It has been repeatedly demonstrated that most forms of childhood adversity are associated with elevated risk for the onset of virtually all commonly occurring mental disorders (Green et al., 2010; McLaughlin, Green, et al., 2012). As noted earlier, recent evidence suggests that child maltreatment is associated with a latent liability for psychopathology that explains entirely the associations of maltreatment with specific mental disorders (Caspi et al., 2014; Keyes et al., 2012). However, the mechanisms that explain how child maltreatment, or other forms of adversity, influence a generalized liability to psychopathology have not been specified. To date, there have been few attempts to articulate a model explaining how childhood adversity leads to the diversity of mental disorders with which it is associated (i.e., multifinality). What are the mechanisms that explain this generalized vulnerability to psychopathology arising from adverse early experiences? Are these mechanisms shared across multiple forms of childhood adversity, or are they specific to particular types of adverse experience?

Identifying general versus specific mechanisms will require changes in the way we conceptualize and measure childhood adversity. Prior research has followed one of two strategies. The first involves studying individual types of childhood adversity, such as parental death, physical abuse, neglect, or poverty (Chase Lansdale, Cherlin, & Kiernan, 1995; Dubowitz, Papas, Black, & Starr, 2002; Fristad, Jedel, Weller, & Weller, 1993; Mullen, Martin, Anderson, Romans, & Herbison, 1993; Noble, McCandliss, & Farah, 2007; Wolfe, Sas, & Wekerle, 1994). However, most individuals exposed to childhood adversity have experienced multiple adverse experiences (Dong et al., 2004; Finkelhor, Ormrod, & Turner, 2007; Green et al., 2010; McLaughlin, Green, et al., 2012). This presents challenges for studies focusing on a single type of adversity, as it is unclear if any observed associations represent the downstream effects of the focal adversity in question (e.g., poverty) or the consequences of other co-occurring experiences (e.g., exposure to violence) that might have different developmental consequences.

Increasing recognition of the co-occurring nature of adverse childhood experiences has resulted in a shift from focusing on single types of adversity to examining the associations between a number of adverse childhood experiences and developmental outcomes, the core strategy of the ACE approach (Arata, Langhinrichsen Rohling, Bowers, & O’Brien, 2007; Dube et al., 2003; Edwards et al., 2003; Evans et al., 2013). There has been a proliferation of research utilizing this approach in recent years, and it has proved useful in documenting the importance of childhood adversity as a risk factor for a wide range of negative mental health outcomes. However, this approach implicitly assumes that very different kinds of experiences, ranging from violence exposure to material deprivation (e.g., food insecurity) to parental loss, influence psychopathology through similar mechanisms. Although there is likely to be some overlap in the mechanisms linking different forms of adversity to psychopathology, the count approach oversimplifies the boundaries between distinct types of environmental experience that may have unique developmental consequences.

An alternative approach that is likely to meet with more success involves identifying dimensions of environmental experience that underlie multiple forms of adversity and are likely to influence development in similar ways. In recent work, my colleague Margaret Sheridan and I have proposed two such dimensions that cut across multiple forms of adversity: threat and deprivation (McLaughlin, Sheridan, & Lambert, 2014; Sheridan & McLaughlin, 2014).

Threat involves exposure to events involving harm or threat of harm, consistent with the definition of trauma in the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; American Psychiatric Association, 2013). Threat is a central dimension underlying multiple commonly studied forms of adversity, including physical abuse, sexual abuse, some forms of emotional abuse (i.e., that involve threats of physical violence and coercion), exposure to domestic violence, and other forms of violent victimization in home, school, or community settings.

Deprivation, in contrast, involves the absence of expected cognitive and social inputs from the environment, resulting in reduced opportunities for learning. Deprivation in expected environmental inputs is common to multiple forms of adversity, including emotional and physical neglect, institutional rearing, and poverty. Critically, we do not propose that exposure to deprivation and threat occurs independently for children, as these experiences are highly co-occurring, or that these are the only important dimensions of experience involved in childhood adversity.

Instead we propose, first, that these are two important dimensions that can be measured separately and, second, that the mechanisms linking these experiences to the onset of psychopathology are likely to be at least partially distinct (McLaughlin, Sheridan, & Lambert, 2014; Sheridan & McLaughlin, 2014). I describe some of these key mechanisms in the transdiagnostic model presented later. Recently, others have argued for the importance of taking this type of dimensional approach as well (Hamby & Grych, 2013; Humphreys & Zeanah, 2015).

Specific recommendations are for future research to (a) identify key dimensions of environmental experience that might differentially influence developmental outcomes and (b) measure multiple such dimensions in studies of childhood adversity to distinguish between general and specific underlying mechanisms linking different forms of adversity to psychopathology. Fine grained measurement of the dimensions of threat and deprivation has often not been conducted within the same study.

Studies focusing on specific types of exposure (e.g., abuse) without measuring or adjusting for co-occurring exposures (e.g., neglect) are unable to distinguish between common and specific mechanisms linking different dimensions of adverse experiences to psychopathology. The only way to determine whether such specificity exists is to measure and model these dimensions of experience together in future studies.

Characterizing the Interplay of Risk and Protective Factors

Although psychopathology is common among children exposed to a wide range of adverse environments, many children exhibit adaptation and resilience following adversity (Masten, 2001; Masten, Best, & Garmezy, 1990). For example, studies of resilience suggest that children who have a positive relationship with a caring and competent adult; are good at learning, problem solving, and self regulation; are socially engaging; and have a positive self image are more likely to exhibit positive adaptation after exposure to adversity than children without these characteristics (Luthar, Cicchetti, & Becker, 2000; Masten, 2001; Masten et al., 1990).

However, in contrast to the consistent pattern of associations between childhood adversity and psychopathology, evidence for protective factors varies widely across studies, and in most cases children exposed to adversity exhibit adaptive functioning in some domains but not others; even within a single domain, children may be functioning well at one point in time but not at others (Luthar et al., 2000). This is not surprising given that the degree to which a particular factor is protective depends heavily upon context, including the specific risk factors with which it is interacting (Cicchetti & Lynch, 1993; Sameroff, Gutman, & Peck, 2003).

For example, authoritative parenting has been shown to be associated with adaptive outcomes for children raised in stable contexts that are largely free of significant adversity (Steinberg, Elmen, & Mounts, 1989; Steinberg, Lamborn, Dornbusch, & Darling, 1992; Steinberg, Mounts, Lamborn, & Dornbusch, 1991); in contrast, authoritarian parenting appears to be protective for children being raised in environments characterized by low resources and/or high degrees of violence and other threats (Flouri, 2007; Gonzales, Cauce, Friedman, & Mason, 1996).

The degree to which variation in specific genetic polymorphisms moderates the impact of childhood adversity on developmental outcomes is also highly variable across studies; although genetic variation clearly contributes to developmental trajectories of adaptation and maladaptation following childhood adversity, this topic has been reviewed extensively elsewhere (Heim & Binder, 2012; McCrory, De Brito, & Viding, 2010; Uher & McGuffin, 2010) and is not discussed further. This complexity has contributed to the widely variable findings regarding protective factors and resilience.

Progress in identifying protective factors that buffer children from maladaptive outcomes following childhood adversity might be achieved by shifting the focus from downstream outcomes to more proximal mechanisms known to underlie the relationship between adverse childhood experiences and psychopathology. Research on resiliency has often focused on distal outcomes, such as the absence of psychopathology, the presence of high quality peer relationships, or good academic performance as markers of adaptive functioning in children with exposure to adversity (Bolger, Patterson, & Kupersmidt, 1999; Collishaw et al., 2007; Fergusson & Lynskey, 1996; Luthar, 1991).

Just as there are numerous mechanisms through which exposure to adverse environments leads to psychopathology and other downstream outcomes, there are likely to be a wide range of mechanisms through which protective factors buffer children from maladaptation following childhood adversity. Indeed, modern conceptualizations of resilience describe it as a developmental process that unfolds over time as an ongoing transaction between a child and the multiple contexts in which he or she is embedded (Luthar et al., 2000).

Rather than examining protective factors that buffer children from developing psychopathology following adverse childhood experiences, an alternative approach is to focus on factors that moderate the association of childhood adversity with the developmental processes that serve as mechanisms linking adversity with psychopathology (e.g., emotion regulation, executive functioning) or that moderate the link between these developmental processes and the onset of psychopathology. Deconstructing the pathways linking childhood adversity to psychopathology allows moderators to be examined separately at different stages of these pathways and may yield greater information about how protective factors ultimately exert their effects on downstream outcomes, including psychopathology.

Accordingly, a fourth recommendation is that future research should focus on identifying protective factors that buffer children from the negative consequences of adversity at two levels: (a) factors that modify the association between childhood adversity and the maladaptive patterns of emotional, cognitive, social, and neurobiological development that serve as intermediate phenotypes linking adversity with psychopathology, and (b) factors that moderate the influence of intermediate phenotypes on the emergence of psychopathology, leading to divergent trajectories of adaptation across children.

To understand resilience, we first need to understand the developmental processes that are disrupted following exposure to adversity and how certain characteristics either prevent or compensate for those developmental disruptions or reduce their impact on risk for psychopathology.

A TRANSDIAGNOSTIC MODEL OF CHILDHOOD ADVERSITY AND PSYCHOPATHOLOGY

The remainder of the article outlines a transdiagnostic model of mechanisms linking childhood adversity with youth psychopathology. Two core developmental mechanisms are proposed that, in part, explain patterns of multifinality: emotional processing and executive functioning.

The model builds on a framework described by Nolen Hoeksema and Watkins (2011) for identifying transdiagnostic processes. Of importance, the model is not intended to be comprehensive in delineating all mechanisms linking childhood adversity with psychopathology but rather focuses on two candidate mechanisms linking childhood adversity to multiple forms of psychopathology. At the same time, these mechanisms are also specific in that each is most likely to emerge following exposure to specific dimensions of adverse early experience.

The model is specific with regard to the underlying dimensions of adverse experience considered and identifies several key moderators that might explain divergent developmental trajectories among children following exposure to adversity. Future research is needed to expand this framework to incorporate other key dimensions of the adverse environmental experience, developmental mechanisms linking those dimensions of adversity with psychopathology, and moderators of those associations.

Distal Risk Factors

Within the proposed model, core dimensions of environmental experience that underlie multiple forms of adversity are conceptualized as distal risk factors for psychopathology. Specifically, experiences of threat and deprivation constitute the first component of the proposed transdiagnostic model of childhood adversity and psychopathology.

Experiences of threat and deprivation meet each of Nolen Hoeksema and Watkins’s (2011) criteria for a distal risk factor. They represent environmental conditions largely outside the control of the child that are linked to the onset of psychopathology only through intervening causal mechanisms that represent more proximal risk factors. Although probabilistically related to psychopathology, exposure to threat and deprivation does not invariably lead to mental disorders. These experiences influence proximal risk factors primarily through learning mechanisms that ultimately shape patterns of information processing, emotional responses to the environment, and higher order control processes that influence both cognitive and emotional processing.

Proximal Risk Factors

The developmental processes that are altered following exposure to adverse environmental experiences represent proximal risk factors, or intermediate phenotypes, linking them to the onset of psychopathology. These proximal risk factors represent the second component of the proposed transdiagnostic model. Nolen Hoeksema and Watkins (2011) argued that proximal risk factors are within person factors that mediate the relationship between distal risk factors, including aspects of environmental context that are difficult to modify, such as childhood adversity, and the emergence of psychopathology. Proximal risk factors directly influence symptoms and are temporally closer to symptom onset and often easier to modify than distal risk factors (Nolen Hoeksema & Watkins, 2011).

Identifying modifiable within person factors that link adverse environmental experiences with the onset of symptoms is the key to developing interventions to prevent the onset of psychopathology in children who have experienced adversity.

The model includes two primary domains of proximal risk factors: emotional processing and executive functioning.

Emotional processing refers to information processing of emotional stimuli (e.g., attention, memory), emotional reactivity, and both automatic (e.g., habituation, fear extinction) and effortful (e.g., cognitive reappraisal) forms of emotion regulation. These processes all represent responses to emotional stimuli, and many involve interactions of cognition with emotion.

Executive functions comprise a set of cognitive processes that support the ability to learn new knowledge and skills; hold in mind goals and information; and create and execute complex, future oriented plans. Executive functioning comprises the ability to hold information in mind and focus on currently relevant information (working memory), inhibit actions and information not currently relevant (inhibition), and switch flexibly between representations or goals (cognitive flexibility; Miyake & Friedman, 2012; Miyake, Friedman, Rettinger, Shah, & Hegarty, 2001).

Together these skills allow the creation and execution of future oriented plans and the inhibition of behaviors that do not serve these plans, providing the foundation for healthy decision making and self regulation. Many of the diverse mechanisms linking childhood adversity to psychopathology are subsumed within these two broad domains.

Emotional processing

Stable patterns of emotional processing, emotional responding to the environment, and emotion regulation represent the first core domain of proximal risk factors. Experiences of uncontrollable threat are associated with strong learning of specific contingencies and overgeneralization of that learning to novel contexts, which facilitates the processing of salient emotional cues in the environment (e.g., biased attention to threat). Given the importance of quickly identifying potential threats in the environment for children growing up in environments characterized by legitimate danger, these learning processes should produce information processing biases that promote rapid identification of potential threats. Indeed, evidence suggests that children with abuse histories, an environment characterized by high levels of threat, exhibit attention biases toward facial displays of anger, identify anger with little perceptual information, have difficulty disengaging from angry faces, and display anticipatory monitoring of the environment following interpersonal displays of anger (Pollak, Cicchetti, Hornung, & Reed, 2000; Pollak & Sinha, 2002; Pollak & Tolley Schell, 2003; Pollak, Vardi, Putzer Bechner, & Curtin, 2005; Shackman, Shackman, & Pollak, 2007).

Given the relevance of anger as a signal of potential threat, these findings suggest that exposure to threatening environments results in stable patterns of information processing that facilitate threat identification and maintenance of attention to threat cues. These attention biases are specific to children who have experienced violence; for example, children who have been neglected (i.e., an environment characterized by deprivation in social and cognitive inputs) experience difficulty discriminating facial expressions of emotion but do not exhibit attention biases toward threat (Pollak, Klorman, Thatcher, & Cicchetti, 2001; Pollak et al., 2005).

In addition to attention biases, children who have been the victims of violence are also more likely to generate attributions of hostility to others in socially ambiguous situations (Dodge, Bates, & Pettit, 1990; Dodge, Pettit, Bates, & Valente, 1995; Weiss, Dodge, Bates, & Pettit, 1992), a pattern of social information processing tuned to be overly sensitive to potential threats in the environment. Finally, some evidence suggests that exposure to threatening environments is associated with memory biases for overgeneral autobiographical memories in both children and adults (Crane et al., 2014; Williams et al., 2007).

Children with trauma histories also exhibit meaningful differences in patterns of emotional responding that are consistent with these patterns of information processing. For example, children who have experienced interpersonal violence exhibit greater activation in the amygdala and other nodes of the salience network (e.g., anterior insula, putamen, thalamus) to a wide range of negative emotional stimuli (McCrory et al., 2013; McCrory et al., 2011; McLaughlin, Peverill, Gold, Alves, & Sheridan, 2015), suggesting heightened salience of information that could predict threat.

These findings build on earlier work using evoked response potentials documenting amplified neural response to angry faces in children who were physically abused (Pollak, Cicchetti, Klorman, & Brumaghim, 1997; Pollak et al., 2001) and suggest that exposure to threatening experiences heightens the salience of negative emotional information, due to the potential relevance for detecting novel threats.

Heightened amygdala response to negative emotional cues could also reflect fear learning processes, whereby previously neutral stimuli that have become associated with traumatic events begin to elicit conditioned fear responses, or the result of deficits in automatic emotion regulation processes like fear extinction and habituation, which are mediated through connections between the ventromedial prefrontal cortex and amygdala. Recent findings of poor resting state functional connectivity between the ventromedial prefrontal cortex and amygdala among female adolescents with abuse histories provide some evidence for this latter pathway (Herringa et al., 2013).

In addition to heightened neural responses in regions involved in salience processing, consistent associations between exposure to threatening environments and elevations in self reported emotional reactivity to the environment have been observed in our lab and elsewhere (Glaser, Van Os, Portegijs, & Myin Germeys, 2006; Heleniak, Jenness, Van Der Stoep, McCauley, & McLaughlin, in press; McLaughlin, Kubzansky, et al., 2010).

Atypical physiological responses to emotional cues have also been documented consistently among children who have experienced trauma, although the specific pattern of findings has varied across studies depending on the specific physiological measures and emotion eliciting paradigms employed. We recently applied a theoretical model drawn from social psychology on adaptive and maladaptive responses to stress to examine physiological responses to stress among maltreated youths. We observed a pattern of increased vascular resistance and blunted cardiac output reactivity among youths who had been physically or sexually abused relative to participants with no history of violence exposure (McLaughlin, Sheridan, Alves, & Mendes, 2014). This pattern of autonomic nervous system reactivity reflects an inefficient cardiovascular response to stress that has been shown in numerous studies to occur when individuals are in a state of heightened threat and is associated with threat appraisals and maladaptive cognitive and behavioral responses to stress (Jamieson, Mendes, Blackstock, & Schmader, 2010; Jamieson, Nock, & Mendes, 2012; Mendes, Blascovich, Major, & Seery, 2001; Mendes, Major, McCoy, & Blascovich, 2008). Using data from a large population based cohort of adolescents, we recently replicated the association between childhood trauma exposure and blunted cardiac output reactivity during acute stress (Heleniak, Riese, Ormel, & McLaughlin, 2016).

Together, converging evidence across multiple levels of analysis indicates that exposure to trauma is associated with a persistent pattern of information processing involving biased attention toward potential threats in the environment, heightened neural and subjective responses to negative emotional cues, and a pattern of autonomic nervous system reactivity consistent with heightened threat perception. This heightened reactivity to negative emotional cues may make it more difficult for children who have been exposed to threatening environments to regulate emotional responses. Indeed, a recent study from my lab found that when trying to regulate emotional responses using cognitive reappraisal, children who had been abused recruited regions of the prefrontal cortex involved in effortful control to a greater degree than children who had never experienced violence (McLaughlin, Peverill, et al., 2015). This pattern suggests that attempts to modulate emotional responses to negative cues require more cognitive resources for children with abuse histories, meaning that effective regulation may break down more easily in the face of stress. Evidence that the negative emotional effects of stressful events are heightened among those with maltreatment histories is consistent with this possibility (Glaser et al., 2006; McLaughlin, Conron, et al., 2010).

In addition to alterations in patterns of emotional reactivity to environmental cues, child trauma has been associated with maladaptive patterns of responding to distress. For example, exposure to threatening environments early in development is associated with habitual engagement in rumination, a response style characterized by passive focus on feelings of distress along with their causes and consequences without attempts to actively resolve the causes of distress (Nolen Hoeksema, Wisco, & Lyubomirsky, 2008). High reliance on rumination as a strategy for responding to distress has been observed in adolescents and adults who were abused as children (Conway, Mendelson, Giannopoulos, Csank, & Holm, 2005; Heleniak et al., in press; Sarin & Nolen Hoeksema, 2010), in adolescents who experienced victimization by peers (McLaughlin, Hatzenbuehler, & Hilt, 2009), and in both adolescents and adults exposed to a wide range of negative life events (McLaughlin & Hatzenbuehler, 2009; Michl, McLaughlin, Shepherd, & Nolen Hoeksema, 2013), although the latter findings are not specific to threat per se.

Although evidence for disruptions in emotional processing come primarily from studies examining children exposed to environments characterized by high degrees of threat, deprived environments are also likely to have downstream effects on emotional development that are at least partially unique from those associated with threat. As noted previously, children who have been neglected experience difficulties discriminating facial displays of emotion (Pollak et al., 2001; Pollak et al., 2005), although some studies of neglected children have found few differences in neural responses to facial emotion in early childhood (Moulson, Fox, Zeanah, & Nelson, 2009; Slopen, McLaughlin, Fox, Zeanah, & Nelson, 2012). However, recent work suggests that children raised in deprived early environments exhibit elevated amygdala response to facial emotion and a mature pattern of functional connectivity between the amygdala and mPFC during emotional processing tasks (Gee et al., 2013; Tottenham et al., 2011). Finally, children who were neglected or raised in deprived institutions tend to exhibit blunted physiological responses to stress, including in the autonomic nervous system and HPA axis (Gunnar, Frenn, Wewerka, & Van Ryzin, 2009; McLaughlin, Sheridan, et al., 2015).

Much of the existing work on childhood adversity and emotional responding has focused on responses to negative emotional cues. However, a growing body of evidence also suggests that responses to appetitive and rewarding cues are disrupted in children exposed to adversity. For example, children raised in deprived early environments exhibit blunted ventral striatal response to the anticipation of reward (Mehta et al., 2010), and a similar pattern has been observed in a sample of adults exposed to abuse during childhood (Dillon et al., 2009). In a recent study, an increase in ventral striatum response to happy emotional faces occurred from childhood to adolescence in typically developing children but not in children reared in deprived institutions (Goff et al., 2013). In recent work in our lab, we have also observed blunted reward learning among children exposed to institutional rearing (Sheridan, McLaughlin, et al., 2016).

Although the mechanisms underlying the link between diverse forms of childhood adversity and responsiveness to reward have yet to be clearly identified, it has been suggested that repeated activation of the HPA axis in early childhood can attenuate expression of brain derived neurotrophic factor, which in turn regulates the mesolimbic dopamine system that underlies reward learning (Goff & Tottenham, 2014). These reductions in brain derived neurotrophic factor expression may contribute to a pattern of blunted ventral striatum response to reward anticipation or receipt.

Alternatively, given the central role of the mesolimbic dopamine system in attachment related behavior (Strathearn, 2011), the absence or unpredictability of an attachment figure in early development may reduce opportunities for learning about the rewarding nature of affiliative interactions and social bonds; the absence of this type of stimulus reward learning early in development, when sensitive and responsive caregiving from a primary attachment figure is an expected environmental input, may ultimately contribute to biased processing of rewarding stimuli later in development. If social interactions in early life are either absent or unrewarding, expectations about the hedonic value of social relationships and other types of rewards might be altered in the long term, culminating in attenuated responsiveness to anticipation of reward. Future research is needed to identify the precise mechanisms through which adverse early environments ultimately shape reward learning and responses to rewarding stimuli.

Links between emotional processing and psychopathology

An extensive and growing body of work suggests that disruptions in emotional processing, emotional responding, and emotion regulation represent transdiagnostic factors associated with virtually all commonly occurring forms of psychopathology (Aldao, Nolen Hoeksema, & Schweizer, 2010). Specifically, attention biases to threat and overgeneral autobiographical memory biases have been linked to anxiety and depression, respectively, in numerous studies (Bar Haim, Lamy, Bakermans Kranenburg, Pergamin, & Van Ijzendoorn, 2007; Williams et al., 2007), and attributions of hostility and other social information processing biases associated with trauma exposure are associated with risk for the onset of conduct problems and aggression (Dodge et al., 1990; Dodge et al., 1995; Weiss et al., 1992).

Heightened emotional responses to negative environmental cues are associated with both internalizing and externalizing psychopathology in laboratory based paradigms examining self reported emotional and physiological responses to emotional stimuli (Boyce et al., 2001; Carthy, Horesh, Apter, Edge, & Gross, 2010; Hankin, Badanes, Abela, & Watamura, 2010; McLaughlin, Kubzansky, et al., 2010; McLaughlin, Sheridan, Alves, et al., 2014; Rao, Hammen, Ortiz, Chen, & Poland, 2008), MRI studies examining neural response to facial emotion (Sebastian et al., 2012; Siegle, Thompson, Carter, Steinhauer, & Thase, 2007; Stein, Simmons, Feinstein, & Paulus, 2007; Suslow et al., 2010; Thomas et al., 2001), and experience sampling studies that measure emotional responses in real world situations (Myin Germeys et al., 2003; Silk, Steinberg, & Morris, 2003).

Habitual engagement in rumination has also been linked to heightened risk for anxiety, depression, eating disorders, and problematic substance use (McLaughlin & Nolen Hoeksema, 2011; Nolen Hoeksema, 2000; Nolen Hoeksema, Stice, Wade, & Bohon, 2007). Together, evidence from numerous studies examining emotional processing at multiple levels of analysis suggests that disruptions in emotional processing are a key transdiagnostic factor in psychopathology that may explain patterns of multifinality following exposure to threatening early environments.

Executive functioning

Disruptions in executive functioning represent the second key proximal risk factor in the model. A growing body of evidence suggests that environmental deprivation is associated with lasting alterations in executive functioning skills. Poor executive functioning, including problems with working memory, inhibitory control, planning ability, and cognitive flexibility, has consistently been documented among children raised in deprived environments ranging from institutional settings to low socioeconomic status (SES) families.

Children raised in institutional settings exhibit a range of deficits in cognitive functions including general intellectual ability (Nelson et al., 2007; O’Connor, Rutter, Beckett, Keaveney, & Kreppner, 2000), expressive and receptive language (Albers, Johnson, Hostetter, Iverson, & Miller, 1997; Windsor et al., 2011), and executive function skills (Bos et al., 2009; Tibu et al., 2016). In contrast to other domains of cognitive ability, however, deficits in executive functioning and marked elevations in the prevalence of attention deficit hyperactivity disorder (ADHD), which is characterized by executive functioning problems, are persistent over time even after placement into a stable family environment (Bos et al., 2009; Tibu et al., 2016; Zeanah et al., 2009).

Similar patterns of executive functioning deficits have also been observed among children raised in low SES families, including problems with working memory, inhibitory control, and cognitive flexibility (Blair, 2002; Farah et al., 2006; Noble et al., 2007; Noble, Norman, & Farah, 2005; Raver, Blair, Willoughby, & The Family Life Project Key Investigators, 2013), as well as deficits in language abilities (Fernald, Marchman, & Weisleder, 2013; Weisleder & Fernald, 2013). Poor cognitive flexibility among children raised in low SES environments has been observed as early as infancy (Clearfield & Niman, 2012). Relative to children who have been abused, children exposed to neglect are at greater risk for cognitive deficits (Hildyard & Wolfe, 2002) similar to those observed in poverty and institutionalization (Dubowitz et al., 2002; Spratt et al., 2012).

The lateral PFC is recruited during a wide variety of executive functioning tasks, including working memory (Wager & Smith, 2003), inhibition (Aron, Robbins, & Poldrack, 2004), and cognitive flexibility (Rougier, Noelle, Braver, Cohen, & O’Reilly, 2005), and is one of the brain regions most centrally involved in executive functioning. In addition to exhibiting poor performance on executive functioning tasks, children from low SES families also have different patterns of lateral PFC recruitment during these tasks as compared to children from middle class families (Kishiyama, Boyce, Jimenez, Perry, & Knight, 2009; Sheridan, Sarsour, Jutte, D’Esposito, & Boyce, 2012). A similar pattern of poor inhibitory control and altered lateral PFC recruitment during an inhibition task has also been observed in children raised in institutional settings (Mueller et al., 2010).

These studies provide some clues about where to look with regards to the types of environmental inputs that might be necessary for the development of adaptive executive functions. In particular, environmental inputs that are absent or atypical among children raised in institutional settings, as well as among children raised in poverty, are promising candidates. Institutional rearing is associated with an absence of environmental inputs of numerous kinds, including the presence of an attachment figure, variation in daily routines and activities, access to age appropriate enriching cognitive stimulation from books, toys, and interactions with adults, and complex language exposure (Smyke et al., 2007; Zeanah et al., 2003).

Some of these dimensions of environmental experience have also been shown to be deprived among children raised in poverty, including access to cognitively enriching activities such as books, toys, and puzzles; learning opportunities outside the home (e.g., museums) and within the context of the parent-child relationship (e.g., parental encouragement of learning colors, words, and numbers, reading to the child); and variation in environmental complexity and stimulation as well as the amount and complexity of language input (Bradley, Corwyn, Burchinal, McAdoo, & Coll, 2001; Bradley, Corwyn, McAdoo, & Coll, 2001; Dubowitz et al., 2002; Garrett, Ng’andu, & Ferron, 1994; Hart & Risley, 1995; Hoff, 2003; Linver, Brooks Gunn, & Kohen, 2002).

Together, these distinct lines of research suggest that enriching cognitive activities and exposure to complex language might provide the scaffolding that children require to develop executive functions. Some indirect evidence supports this notion. For example, degree of environmental stimulation in the home and amount and quality of maternal language each predict the development of language skills in early childhood (Farah et al., 2008; Hoff, 2003), and children raised in both institutional settings and low SES families exhibit deficits in expressive and receptive language (Albers et al., 1997; Hoff, 2003; Noble et al., 2007; Noble et al., 2005; Windsor et al., 2011), in addition to problems with executive functioning skills. Moreover, a recent study found that atypical patterns of PFC activation during executive function tasks among children from low SES families are explained by degree of complex language exposure in the home (Sheridan et al., 2012). Finally, children raised in bilingual environments appear to have improved performance on executive function tasks (Carlson & Meltzoff, 2008).

These findings suggest that the environmental inputs that are required for language development (i.e., complex language directed at the child) may also be critical for the development of executive function skills. Language provides an opportunity to develop multiple such skills, from working memory (e.g., holding in mind the first part of a sentence as you wait for the speaker to finish) and inhibitory control (e.g., waiting your turn in a conversation) to cognitive flexibility (e.g., switching between grammatical and syntactic rules).

Lack of consistent rules, routines, structure, and parental scaffolding behaviors may be another mechanism explaining deficits in executive functioning among children from low SES families. This lack of environmental predictability is more common among low SES than middle class families (Deater Deckard, Chen, Wang, & Bell, 2012; Evans, Gonnella, Marcynyszyn, Gentile, & Salpekar, 2005; Evans & Wachs, 2009). The absence of consistent rules, routines, and contingencies in the environment may interfere with children’s ability to learn abstract rules and to develop the capacity for self regulation. Indeed, higher levels of parental scaffolding, or provision of support to allow the child to solve problems autonomously, have been prospectively linked with the development of better executive function skills in early childhood (Bernier, Carlson, & Whipple, 2010; Hammond, Muller, Carpendale, Bibok, & Liebermann Finestone, 2012; Landry, Miller Loncar, Smith, & Swank, 2002).

These findings suggest that environmental unpredictability is an additional mechanism linking low SES environments to poor executive functioning in children. However, given the highly structured and routinized nature of most institutional settings, environmental unpredictability is an unlikely explanation for executive functioning deficits among institutionally reared children.

Deficits in executive functioning skills have sometimes been observed in children with exposure to trauma (DePrince, Weinzierl, & Combs, 2009; Mezzacappa, Kindlon, & Earls, 2001) as well as children with high levels of exposure to stressful life events (Hanson et al., 2012), although some studies have found associations between trauma exposure and working memory but not inhibition or cognitive flexibility (Augusti & Melinder, 2013).

There are two possible explanations for these findings.

First, for children exposed to threat, it may be that deficits in executive functions emerge primarily in emotional contexts, such that the heightened perceptual sensitivity and reactivity to emotional stimuli in children exposed to threat draws attention to emotional stimuli (Shackman et al., 2007), making it more difficult to hold other stimuli in mind, effectively inhibit responses to emotional stimuli, or flexibly allocate attention to nonemotional stimuli. Indeed, in a recent study in my lab, we observed that exposure to trauma (both maltreatment and community violence) was associated with deficits in inhibitory control only in the context of emotional stimuli (i.e., a Stroop task involving emotional faces) and not when stimuli were neutral (i.e., shapes), and had no association with cognitive flexibility (Lambert, King, Monahan, & McLaughlin, 2016). In contrast, deprivation exposure was associated with deficits in inhibition to both neutral and emotional stimuli and poor cognitive flexibility. Although this suggests there may be specificity in the association of trauma exposure with executive functions, further research is needed to understand these links.

Second, studies examining exposure to trauma seldom measure indices of deprivation, nor do they adjust for deprivation exposure (just as studies of deprivation rarely assess or control for trauma exposure). Disentangling the specific effects of these two types of experiences on executive functioning processes is a critical goal for future research.

Links between executive functioning and psychopathology

Executive functioning deficits are a central feature of ADHD (Martinussen, Hayden, Hogg Johnson, & Tannock, 2005; Sergeant, Geurts, & Oosterlaan, 2002; Willcutt, Doyle, Nigg, Faraone, & Pennington, 2005). Problems with executive functions have also been observed in children with externalizing psychopathology, including conduct disorder and oppositional defiant disorder, even after accounting for comorbid ADHD (Hobson, Scott, & Rubia, 2011). They are also associated with elevated risk for the onset of substance use problems and other types of risky behavior (Crews & Boettiger, 2009; Patrick, Blair, & Maggs, 2008), including criminal behavior (Moffitt et al., 2011) and the likelihood of becoming incarcerated (Yechiam et al., 2008).

Although executive functioning deficits figure less prominently in theoretical models of the etiology of internalizing psychopathology, when these deficits emerge in the context of emotional processing (e.g., poor inhibition of negative emotional information) they are more strongly linked to internalizing problems, including depression (Goeleven, De Raedt, Baert, & Koster, 2006; Joormann & Gotlib, 2010). Executive functioning deficits also contribute to other proximal risk factors, such as rumination (Joormann, 2006), that are well established risk factors for depression and anxiety disorders. Patterns of executive functioning in childhood have lasting implications for health and development beyond effects on psychopathology. Recent work suggests that executive functioning measured in early childhood predicts a wide range of outcomes in adulthood in the domains of health, SES, and criminal behavior, over and above the effects of IQ (Moffitt et al., 2011).

Mechanisms Linking Distal Risk Factors to Proximal Risk Factors

How do experiences of threat and deprivation come to influence proximal risk factors? Learning mechanisms are the most obvious pathways linking these experiences with changes in emotional processing and executive functioning, although other mechanisms (e.g., the development of stable beliefs and schemas) are also likely to play an important role. Specifically, the impact of threatening and deprived early environments on the development of patterns of emotional processing and emotional responding may be mediated, at least in part, through emotional learning pathways. The associative learning mechanisms and neural circuitry underlying fear learning and reward learning have been well characterized in both animals and humans and reviewed elsewhere (Delgado, Olsson, & Phelps, 2006; Flagel et al., 2011; Johansen, Cain, Ostroff, & LeDoux, 2011; O’Doherty, 2004).

Exposure to threatening or deprived environments early in development results in the presence (i.e., in the case of threats) or absence (i.e., in the case of deprivation) of opportunities for emotional learning; these learning experiences, in turn, have lasting downstream effects on emotional processing. Specifically, early learning histories can influence the salience of environmental stimuli as either potential threats or incentives, shape the magnitude of emotional responses to environmental stimuli, particularly those that represent either threat or reward, and alter motivation to avoid threats or pursue rewards. Thus, fear learning mechanisms and their downstream consequences explain, in part, the association of threatening environments with alterations in emotional processing (McLaughlin et al., 2014; Sheridan & McLaughlin, 2014).

Similarly, the effects of deprived early environments on emotional processing are likely to be partially explained through reward learning pathways. Pathways linking threatening early environments to habitual patterns of responding to distress, such as rumination, may also involve learning mechanisms including both observational (e.g., modeling responses utilized by caregivers) and instrumental (e.g., reinforcement of passive responses to distress when emotional displays are met with dismissive or punishing reactions from caregivers) learning.

Learning mechanisms may also be a central mechanism in the association between deprived early environments and the development of executive functioning. In particular, deprived environments such as institutional rearing, neglect, and poverty are characterized by the absence of learning opportunities, which is thought to directly contribute to later difficulties with complex higher order cognition. Specifically, reduced opportunities for learning due to the absence of complex and varied stimulus response contingencies or the presence of consistent rules, routines, and structures that allow children to learn concrete and abstract rules may influence the development of both cognitive and behavioral aspects of self regulation.

Moderators of the Link Between Distal and Proximal Risk Factors

Children vary markedly in their sensitivity to environmental context. Advances in theoretical conceptualizations of individual differences in sensitivity to context can be leveraged to understand variability in developmental processes among children exposed to adverse environments. A growing body of evidence suggests that certain characteristics make children particularly responsive to environmental influences; such factors confer not only vulnerability in the context of adverse environments but also benefits in the presence of supportive environments (Belsky, Bakermans Kranenburg, & Van Ijzendoorn, 2007; Belsky & Pluess, 2009; Boyce & Ellis, 2005; Ellis, Essex, & Boyce, 2005). Highly reactive temperament, vagal tone, and genetic polymorphisms that regulate the dopaminergic and serotonergic system have been identified as markers of plasticity and susceptibility to both negative and positive environmental influences (Belsky & Pluess, 2009). These plasticity markers represent potential moderators of the link between childhood adversity and disruptions in emotional processing and executive functioning.

Developmental timing of exposure to adversity also plays a meaningful role in moderating the impact of childhood adversity on emotional processing and executive functioning. For example, in recent work we have shown that early environmental deprivation has a particularly pronounced impact on the development of stress response systems during the first 2 years of life (McLaughlin et al., 2015). These findings suggest the possibility of an early sensitive period during which the environment exerts a disproportionate effect on the development of neurobiological systems that regulate responses to stress. As noted in the beginning of this article, additional research is needed to identify developmental periods of heightened plasticity in specific subdomains of emotional processing and executive functioning and to determine the degree to which disruptions in these domains vary as a function of the timing of exposure to childhood adversity.

Moderators of Trajectories From Proximal Risk Factors to Psychopathology

A key component of Nolen Hoeksema and Watkins’s (2011) transdiagnostic model of psychopathology involves moderators that determine the specific type of psychopathology that someone with a particular proximal risk factor will develop. Specifically, their model argues that ongoing environmental context and neurobiological factors can moderate the impact of proximal risk factors on psychopathology by raising concerns or themes that are acted upon by proximal risk factors and by shaping responses to and altering the reinforcement value of particular types of stimuli.

For example, the nature of ongoing environmental experiences might determine whether someone with an underlying vulnerability (e.g., neuroticism) develops anxiety or depression. Specifically, a person with high neuroticism who experiences a stressor involving a high degree of threat or danger (e.g., a mugging or a car accident) might develop an anxiety disorder, whereas a person with high neuroticism who experiences a loss (e.g., an unexpected death of a loved one) might develop major depression (Nolen Hoeksema & Watkins, 2011).

Neurobiological factors that influence the reinforcement value of certain stimuli (e.g., alcohol and other substances, food, social rejection) can also serve as moderators. For example, individual differences in rejection sensitivity might determine whether a child who is bullied develops an anxiety disorder. Although a review of these factors is beyond the scope of the current article, greater understanding of the role of ongoing environmental context as a moderator of the link between proximal risk factors and the emergence of psychopathology has relevance for research on childhood adversity. In particular, environmental factors that buffer against the emergence of psychopathology in children with disruptions in emotional processing and executive functioning can point to potential targets for preventive interventions for children exposed to adversity.

CONCLUSION

Exposure to childhood adversity represents one of the most potent risk factors for the onset of psychopathology. Recognition of the strong and pervasive influence of childhood adversity on risk for psychopathology throughout the life course has generated a burgeoning field of research focused on understanding the links between adverse early experience, developmental processes, and mental health. This article provides recommendations for future research in this area. In particular, future research must develop and utilize a consistent definition of childhood adversity across studies, as it is critical for the field to agree upon what the construct of childhood adversity represents and what types of experiences do and do not qualify.

Progress in identifying developmental mechanisms linking childhood adversity to psychopathology requires integration of studies of typical development with those focused on childhood adversity in order to characterize how experiences of adversity disrupt developmental trajectories in emotion, cognition, social behavior, and the neural circuits that support these processes, as well as greater efforts to distinguish between distinct dimensions of adverse environmental experience that differentially influence these domains of development. Greater understanding of the developmental pathways linking childhood adversity to the onset of psychopathology can inform efforts to identify protective factors that buffer children from the negative consequences of adversity by allowing a shift in focus from downstream outcomes like psychopathology to specific developmental processes that serve as intermediate phenotypes (i.e., mechanisms) linking adversity with psychopathology.

Progress in these domains will generate clinically useful knowledge regarding the mechanisms that explain how childhood adversity is associated with a wide range of psychopathology outcomes (i.e., multifinality) and identify moderators that shape divergent trajectories following adverse childhood experiences. This knowledge can be leveraged to develop and refine empirically informed interventions to prevent the long term consequences of adverse early environments on children’s development. Greater understanding of modifiable developmental processes underlying the associations of diverse forms of childhood adversity with psychopathology will provide critical information regarding the mechanisms that should be specifically targeted by intervention. Determining whether these mechanisms are general or specific is essential, as it is unlikely that a one size fits all approach to intervention will be effective for preventing the onset of psychopathology following all types of childhood adversity. Identifying processes that are disrupted following specific forms of adversity, but not others, will allow interventions to be tailored to address the developmental mechanisms that are most relevant for children exposed to particular types of adversity. Identification of moderators that buffer children either from disruptions in core developmental domains or from developing psychopathology in the presence of developmental disruptions, for example, among children with heightened emotional reactivity or poor executive functioning, will provide additional targets for intervention.

Finally, uncovering sensitive periods when emotional, cognitive, and neurobiological processes are most likely to be influenced by the environment will provide key information about when interventions are most likely to be successful. Together, these advances will help the field to generate innovative new approaches for preventing the onset of psychopathology among children who have experienced adversity.

Meditation can work for everybody – Eric Klein * The Buddha Pill: Can Meditation Change You? – Dr Miguel Farias and Dr Catherine Wikholm.

“When the body can be still, the mind can be still. Spirituality is what you do with those fires that burn within you.” Sister Elaine

Seven Reasons Why Meditation Doesn’t Work, And how to fix them.

by Eric Klein

We didn’t have air conditioning when we were living in Chicago in the 1970s. So, on hot, humid summer nights, Devi and I would ride our bikes to the shores of Lake Michigan. After securing our bikes, we’d head for the water.

The water was nice and cool. But, to enjoy it we had to move through the twigs, paper cups, and assorted debris that had accumulated at the water’s edge.

It’s the same with meditation. The deep waters of your inner mind are pure, clear, and refreshing. But, to get there you need to move through some inner... um... debris.

This debris isn’t life threatening. Just a bit messy. It’s made up of ideas, memories, sensations, misconceptions, and reasons. Reasons why meditation doesn’t work, at least for you.

Here are some of the common reasons that people give. You may find some of them familiar, if you’ve gone for a swim in the waters of meditation. Even if you’ve just dipped your toe in.

1) “Meditation is self-centered.”

As meditation has become more mainstream, pictures of people (slim, beautiful people) sitting in lotus postures show up in all kinds of advertising for spas, exotic vacations, skin cream, perfume, and jewelry.

It’s easy to get the impression that meditation is just the latest fashion accessory. Like a big spiritual mirror that you gaze into while putting on organic makeup to cover any imperfections.

But, meditation is the opposite. Meditation is about taking your self much less seriously and much more lightly. And in the process opening more fully and creatively to life.

The practice of meditation reveals that most of what’s scurrying around in the mind isn’t that significant, much less real. And that all the ideas about the self are more limiting than liberating. Meditation frees you from being overly preoccupied with protecting and preserving the self.

Through practice, you discover that there really is no hard and fast line between “me” and “life”.

You discover that you are part of life, not apart from life in any way. Thus, the practice of meditation shifts you from self centered to life centered living. Whether your attention is turned within or without, it’s all life.

2) “I don’t have time to meditate.”

The scattered mind never has time for what matters most. It’s busy, busy, busy. Driven by emotion fueled thoughts. The day is filled to overflowing with activities, demands, meetings, and requirements. There’s barely time to sit down for a meal much less to spend a few moments in silence and stillness.

In the mad rush to get more done, the mind becomes more fragmented and speedy.

When things do slow down, like in a traffic jam or on a grocery line, it’s intolerable. The mind rails against the waste of time and against slowing down. “There’s too much to do!!” it cries.

But, everyone has exactly the same amount of time each day: 1440 minutes.

It’s the experience of time that differs. The more scattered and sped up the mind, the more time seems to slip through your fingers like sand. Through meditation, the mind learns to slow down. As it does so, the feeling of pressure lifts. And with it another veil lifts as well.

The veil that concealed the richness of the moment lifts. Through meditation you touch and are touched by the richness of the present moment. You experience a fullness of time which reveals that this moment (yes, this very moment) is always enough.

3) “My back hurts when I meditate.”

This is likely a technical, postural issue that can be handled with some simple information about how to sit. Here are some practical guidelines.

You can sit on the floor or on a chair.

The key is to keep your spine straight but not stiff. Allow the chin to be parallel to the ground. When seated on the floor, elevate your body on a firm cushion or folded blanket. This reduces strain on the back. Experiment with different heights of cushion.

If you sit on a chair, make sure it is firm and not too cushiony. You don’t want to sink into it. You want to sit upright.

Once you have assumed a seated posture, find your physical center of gravity.

You do this by gently rocking from side to side. As you rock from left to right, feel into the core of your body. You will notice a physical sensation, which I call passing through the center of gravity, as your body shifts from side to side.

Slow down the shifting and feel more deeply into that center of gravity as you pass through. Then reduce the side to side movement and gradually settle your body so that it is aligned along the center of gravity. Do this all by feeling inwardly and sensing that place of balance.

As you settle the body in the center of gravity feel your spine gently lengthening. The back of your skull lifts slightly and the chin is parallel to the ground. The base of the body is grounded.

Your posture is aligned along the center of gravity and the spine is effortlessly extended. Let your eyes gaze gently at the root of the nose, between the eyebrows.

Sitting is a skill that becomes easier with practice.

4) “I’m not religious.”

It’s easy to assume that meditation is religious. When you think about monks, yogis, nuns, and other professionally religious people, concepts like meditation come to mind. And it’s true, that meditation or similar practices have been central to those on a religious quest.

But, does that mean that meditation is religious? Not really. Religions are based on articles of faith, on beliefs.

Meditation requires no beliefs. It’s based on practice and results. In this way, meditation is more like a science experiment than a religious exercise. You don’t need to believe anything in order to conduct an experiment. You just need to follow the protocol. Do the practice. It’s a self validating process. Follow the steps and see the results.

The practitioners who developed the meditation methods used their minds and bodies as laboratories. They conducted experiments in consciousness. They recorded their results. And passed them onto their students for validation testing.

Some of these experiments have stood the test of time. People have conducted these meditation experiments for thousands of years, with reliable results. It’s these tested and validated practices that have been passed from teacher to student for thousands of years.

So, whether you’re religious or not doesn’t matter in terms of meditation. If you are religious, meditation will enrich your understanding of your faith. If you’re not, you’ll discover that which is deeper than believing or not believing.

5) “My mind won’t get quiet.”

If you stop the average person on the street and ask them, “Is your mind basically quiet or filled with thoughts?” most will tell you, “Basically quiet.” But, sit them down on a meditation cushion for a few minutes without anything to distract them and, bam, most people are shocked to discover how noisy it is in there.

It’s not that meditation made their minds noisy. Rather, the practice revealed the noise that was already there. This revelation of the running, ranting mind is a movement forward on the path. Many people drop the practice at this point thinking, “I can’t meditate.” But, they are meditating! The practice is working by revealing the actual state of the conditioned mind. Don’t stop now. The key is to keep practicing, to stay with the process, which will lead to the quieting of the mind chatter.

The mind isn’t quieted by willing or by effort. You can’t quiet the mind through will power. That would be like pushing down on a spring. The harder you push, the more the spring pushes back. You quiet the mind in the same way that you allow a glass of muddy water to become clear. You just let the particles settle. When you don’t stir up the water, the mud settles on its own.

It’s the same in meditation.

Meditation lets the mud, the noisy thoughts, settle. The glass of muddy water becomes clear as gravity draws the mud together. The mind becomes clear as you shift from thinking about thoughts to being aware of what is arising. Just by being aware, present, and mindful of the activity of the mind, it settles down.

6) “Meditation is . . . boring.”

I remember when my parents would take me, as a child, to watch the sunset. I didn’t get it. I couldn’t see the beauty. To me, the sunset was boring.

Being bored is a symptom of not paying attention. If you pay attention deeply to anything, it becomes very, very interesting. Meditation, which is the practice of cultivating deep attention, dissolves boredom. As the mud of the mind settles, as you discover the richness of the present moment, even something as simple as a breath becomes the doorway to gratitude, wonder, and joy.

But, on the other hand, meditation is actually quite boring. I mean, you’re sitting there breathing in and breathing out. What could be more boring? In, out, in, out. Or you’re repeating the same mantra over and over. It is kind of boring by design. As the surface mind gets bored, it settles down.

And in that settling, an awareness of an all-encompassing, ever-present silence emerges. A sense of undisturbed stillness. This stillness and silence infuse everything with aliveness and presence. Not boring at all.

7) “I don’t want to be weird.”

There are two reasons that practicing meditation can feel weird. One is neurological, the other more psychological.

Let’s start neurologically: doing anything unfamiliar can feel weird. Your neurological patterns get used to doing things a certain way. Putting your left leg in your pants before your right. Brushing one side of your teeth before the other. Sitting in a certain chair (and in a certain posture) to watch television. The list goes on.

So, when you change a pattern of behavior, even in a positive direction, it feels weird. Inside your brain, new neurons are firing.

New connections are being made. And old connections, old patterns, are being restrained. Subjectively, it feels weird. The new neurological circuits aren’t totally grooved in yet, so you’re clumsy at the new pattern. And this clumsiness is where the weird feeling can turn more psychological.

Being clumsy can be embarrassing, even if you’re all by yourself. Sitting there alone with your eyes closed, you can still be “watching” what you’re doing and wondering, “Am I doing this right? Is this weird?”

Have you ever danced in front of the mirror? If you judge your dancing, it’s no fun. To enjoy the experience, you need to cut loose from any fixed ideas of what dancing should look like, and, even more so, of what you should look like.

It’s the same with meditation. Whether you want to or not, you have an idea about the kind of person who meditates. If you don’t think of yourself as that kind of person, then when you meditate, you’ll feel weird. You’ll get in your own way.

But you can relax, take a breath, and realize that your ideas about meditation are just that: ideas. You don’t have to live up to these self-imposed ideas of meditation. You can just cut loose and enjoy the process. When you do, you find a whole new and wonderful kind of weirdness.

In fact, one of the blessings of meditation is finding out that you indeed are weird, weird in the best possible sense of the word. The most ancient meaning of the word ‘weird’ has to do with following your unique fate, your path through life. You’re weird if you follow your path and listen to the direction of your inner soul.

So, meditation, in this most basic, ancient sense, helps you be weird. Meditation helps you find your path. Through practice, you discover how to live your true life more fully and more joyfully.

Those are the seven reasons.

Along with ideas on how to move through them.

Because there’s no reason to let a bit of debris stop you from enjoying a refreshing swim in the deep, clear waters of your inner mind.

Ready for the next step?

Our recommendation is for you to subscribe to the Wisdom Heart newsletter. You’ll receive information and inspiration on how to bring meditation alive in your life: practical ideas that you can use for peace of mind and the clarity to live with greater fulfillment and purpose.

Go to http://www.wisdomheart.org/subscribe

The Buddha Pill: Can Meditation Change You?

Dr Miguel Farias and Dr Catherine Wikholm

INTRODUCTION

My interest in meditation began at the age of six, when my parents did a course on Transcendental Meditation. I didn’t realize it then, but I was effectively being introduced to the idea that meditation can produce all manner of changes in who we are and in what we can achieve. Mind-over-matter stories are both inspiring and bewildering, hard to believe yet compelling. They have stirred me deeply enough to dedicate almost two decades of my life to researching what attracts some people to techniques like meditation and yoga and whether, as many claim, they can transform us in a fundamental way.

This book tells the story of the human ambition for personal change, with a primary focus on the techniques of meditation and yoga. Hundreds of millions of people around the world meditate daily. Mindfulness courses, directly inspired by Buddhist meditation, are offered in schools and universities, and mindfulness-based therapies are now available as psychological treatments in the UK’s National Health Service.

Many scientists and teachers claim that this spiritual practice is one of the most efficient and economic tools of personal change. Yoga is no less popular. According to a recent survey by the Yoga Health Foundation, more than 250 million people worldwide practise it regularly. Through yoga we learn to notice thoughts, feelings and sensations while working with physical postures. Often, yoga practice includes a period of lying or sitting meditation.

Psychologists have developed an arsenal of theories and techniques to understand and motivate personal change. But it wasn’t psychology that produced the twentieth century’s greatest surge of interest in this topic; it was meditation. By the 1970s millions of people worldwide were signing up to learn a technique that promised quick and dramatic personal change. Transcendental Meditation was introduced to the West by Maharishi Mahesh Yogi, and quickly spread after the Beatles declared themselves to be followers of this Indian guru. To gain respectability, Maharishi sponsored dozens of scientific studies on the effects of Transcendental Meditation, in academic fields ranging from psychophysiology to sociology, showing that its regular practice changed personality traits, improved mood and wellbeing and, not least, reduced criminality rates.

The publicity images for Transcendental Meditation included young people levitating in a cross-legged position and displaying blissful smiles. I recall, as a child, staring at the photographs of the levitating meditators used in the advertising brochures and thinking ‘Can they really do that?’ My parents’ enthusiasm for meditation, though, was short-lived. When I recently asked my mum about it, she just said, ‘It was a seventies thing; most of our friends were trying it out.’

Like my parents’ interest, research on meditation waned rapidly. Photos of levitating people didn’t help to persuade the scientific community that this was something worth studying. We had to wait almost thirty years before a new generation of researchers reignited interest in the field, conducting the first neuroimaging studies of Tibetan monks meditating, and the first explorations of the use of mindfulness in the treatment of depression. For yoga, too, there is increasing evidence that its practice can reduce depression.

Meditation and yoga are no longer taboo words in psychology, psychiatry and neuroscience departments. There are now dedicated conferences and journals on the topic and thousands of researchers worldwide using the most advanced scientific tools to study these techniques. Many of the studies are funded by national science agencies; just looking at US federally funded projects, from 1998 to 2009 the number increased from seven to more than 120. The idea of personal change is increasingly central to these studies. Recent articles show improvements in cognitive and affective skills after six to eight weeks of mindfulness, including an increase in empathy.

These are exciting findings. Meditation practices seem to have an impact on our thoughts, emotions and behaviours. Yet these studies report only modest changes, while many who use and teach these techniques make astonishing claims about their powers. At the Patanjali Research Foundation in northern India, the world’s largest yoga research centre, I hear miraculous claims about yoga from the mouth of its director-guru, Swami Ramdev: ‘Yoga can heal anything, whether it’s physical or mental illness.’

Teasing fact from fiction is a major aim of this book.

The first part explores ideas about the effects of meditation and yoga, contrasting them with the current scientific evidence of personal change. The second part puts the theories to the test: we carry out new research and scrutinize both the upsides and downsides of these practices. We have dedicated a full chapter to the darker aspects of meditation, which teachers and researchers seldom or never mention.

Although this isn’t a self-help book, it attempts to answer crucial questions for anyone interested in contemplative techniques: can these practices help me to change? If yes, how much and how do they work? And, if they do change me, is it always for the better?

These questions have shaped a significant part of my own life. In my teenage years I believed that to seek personal growth and transformation was the central goal of human existence; this led me to study psychology. I wanted to learn how to promote change through psychological therapy, although it was only later, while undergoing therapy training, that I came to appreciate the subtlety and difficulties of this process. My undergraduate psychology degree turned out not to shed much light on our potential for transformation; it rarely touched on ideas about how to make us more whole, healed, enlightened, or simply better people.

But rather than giving up, I read more about the areas of psychology I wasn’t being taught, like consciousness studies, and started doing research on the effects of spiritual practices. When I decided it was probably a good idea to do a doctorate, I browsed through hundreds of psychology websites in search of potential supervisors; I found one at Oxford whom I thought was open-minded enough to mentor my interests, and I moved to the city in 2000.

This is the pre-history of my motivation to write this book. Its history begins in the early summer of 2009, when Shirley du Boulay, a writer and former journalist with the BBC, invited a number of people to take part in the re-creation of a ceremony that blended Christian and Indian spirituality. Images, readings and songs from both traditions were woven together, following the instructions of Henri Le Saux, a French Benedictine monk who went to live in India and founded a number of Christian ashrams that adopted the simplicity of Indian spirituality (think of vegetarian food and a thin orange habit).

I met Catherine Wikholm, the co-author of this book, at this event. She had studied philosophy and theology at Oxford University before embarking on her psychology training, and was at the time doing research relating to young offenders. Catherine and I were both drawn to an elegant woman in her fifties called Sandy Chubb, who spoke in a gentle but authoritative manner. Sandy showed us a book she had recently published with cartoonish illustrations of yoga postures. I thought it was intended for children and asked her if kids enjoyed yoga. Sandy smiled and told us the book was meant for illiterate prisoners. That was the mission of the Prison Phoenix Trust, a small charity she directed: to teach yoga and meditation in prisons. Trying to escape my feeling of embarrassment, I praised the idea of bringing contemplative techniques to prisoners. ‘It must help them to cope with the lack of freedom,’ I suggested. Sandy frowned slightly.

‘That’s not the main purpose,’ she said. Although going to prison is a punishment, Sandy told us, with the help of meditation and yoga, being locked in a small cell can help prisoners realize their true life mission.

‘Which is?’ Catherine and I both asked at the same time. ‘To be saintly, enlightened beings,’ Sandy answered.

Catherine and I kept silent. We were mildly sceptical. But also intrigued. Sandy seemed to claim that meditation and yoga techniques could radically transform criminals. I went back to my office that same evening to search for studies of meditation and yoga in prisons and found only a handful. The results weren’t dramatic but pointed in the right direction: prisoners reported less aggression and higher self-esteem. Reading closely, I noticed there were serious methodological flaws: most had small sample sizes and none included a control group, a standard research practice that ensures results are not owing to chance or some variable the researcher forgot to take into account.

I wanted to know more. If Sandy’s claims were true, if meditation and yoga could transform prisoners, this could have tremendous implications for how psychologists understand and promote personal change in all individuals, not just those who are incarcerated. Having no experience of prisons, I contacted Catherine to ask if she’d be interested in working with me on this topic.

‘I’d love to!’ she said, more enthusiastic than I imagine most would be at the prospect of interviewing numerous convicted criminals and in the process spending weeks behind bars. Having started working for the prison service in her early twenties, Catherine had a strong forensic interest, particularly in the treatment of young offenders. She was passionate about the rehabilitation of prisoners in general and was curious as to whether yoga and meditation might represent an alternative means of facilitating positive, meaningful change for those who were unable or unwilling to engage with traditional rehabilitative efforts, such as offending behaviour programmes.

So Catherine and I arranged to meet with Sandy at the Prison Phoenix Trust. Walking through Oxford’s trendy Summertown, where the Trust is based, we wondered what the meeting would bring. On arriving at the offices, we received a warm welcome. Sandy gave us the guided tour of their floor of the building, which comprised four rooms: the office, where she and her colleagues had their desks; a dining room for communal meals; a meditation room with cushions on the floor; and, along a corridor, a room that was lined wall to wall with metal filing cabinets. These, Sandy explained, were full of the letters the Prison Phoenix Trust had received from prisoners, estimated at numbering more than ten thousand.

If we were intrigued before, we were now completely hooked. Our minds filled with questions, we sat down with Sandy as she began to reveal the unusual story of how a small charity had persuaded prison governors to let them teach meditation and yoga to a broad range of prisoners, including thieves, murderers and rapists.

This story made quite an impression on us. So much so, in fact, that it inspired us to dedicate much of the following two years to designing and implementing a study of the measurable effects of yoga and meditation on prisoners. The findings of our research (which we’ll reveal later on in the book) not only sparked a flurry of media interest, but inspired us to spend the two years after that writing this book.

Our initial focus on the potential of meditative techniques to transform the ‘worst of the worst’ broadened out as we became increasingly interested in exploring their full potential. Might Eastern contemplative techniques have the power to change all of us? As we engaged with more and more of the research literature, the inspiring stories of change we uncovered reinforced this broader view of the potential of yoga and meditation. Our own personal experiences, such as my ongoing research and Catherine’s clinical psychology doctoral training and subsequent acquaintance with mindfulness-based therapies and their application within the NHS, in turn increased our curiosity.

What began as a perhaps unlikely marriage of my interest in spirituality and Catherine’s in forensic and clinical psychology has evolved into a wider exploration of the science and delusions of personal change. Just as we worked on our research together, so we have written this book together. To reflect the dynamic process of our writing, with the combining of our ideas, and to avoid any messy jumping back and forth between us as narrators, we have chosen to write this book in first-person narrative, as a singular, joint ‘I’. Although inevitably it may sometimes be apparent which one of us is narrating at a particular point, if simply by virtue of our gender difference, we have sought to write as a shared voice. The personal stories, interviews and accounts depicted in this book are all drawn from our real experiences. However, when discussing any examples relating to therapeutic work, we have anonymized all names and identifying details.

Over the course of the book, we will examine the scientific evidence that actually exists for the claims of change that meditation, mindfulness and yoga practitioners, teachers and enthusiasts propagate.

We also bring together our own experiences as psychologists, one more research-oriented and one more practice-oriented, as well as the stories of some of the thought-provoking characters we’ve encountered along our journey. All that is to come. But for now let us begin by letting you in on the unique story that started it all.

The Prison Phoenix Trust

CHAPTER 1

AN ASHRAM IN A CELL

‘If we forget that in every criminal there is a potential saint, we are dishonouring all of the great spiritual traditions. Saul of Tarsus persecuted and killed Christians before becoming Saint Paul, author of much of the New Testament. Valmiki, the revealer of the Ramayana, was a highwayman, a robber, and a murderer. Milarepa, one of the greatest Tibetan Buddhist gurus, killed 37 people before he became a saint. We must remember that even the worst of us can change.’ Bo Lozoff (American prison reform activist and founder of the Prison Ashram Project and the Human Kindness Foundation)

Knocking on the door of a house in a quiet street in Oxfordshire, notepad and pen in hand, I stood and waited on the front step. A minute later the door opened. A smartly dressed, elderly lady smiled at me from inside.

‘Tigger?’ I asked. ‘Yes, do come in,’ she replied.

Still full of life at ninety years old, Tigger Ramsey-Brown was a pleasure to interview. I was there to find out from her more about the story of her late younger sister, who had founded the Prison Phoenix Trust. Over cups of tea in her sunny conservatory, Tigger began vividly to recount the story of her sister and how she had started the Trust around thirty years previously.

In the beginning

Tigger pointed out that if we were going to go right to the start, this story actually begins somewhat earlier, with the marine biologist and committed Darwinist Sir Alister Hardy. At one time a Professor of Zoology at Oxford University, Hardy had happened to teach Richard Dawkins, an evolutionary biologist and outspoken atheist. Knighted for his work in biology, Hardy had a strong interest in the evolution of humankind, developing novel theories such as the aquatic ape hypothesis (which proposes that humans went through an aquatic or semi-aquatic stage in our evolution).

But he was also particularly interested in the evolution of religion and religious experience. Hardy viewed humans as spiritual animals, theorizing that spirituality was a natural part of our human consciousness. He mooted that our awareness of something ‘other’ or ‘beyond’ had arisen through exploration of our environment and he wanted to explore this further.

However, aware that fellow scientists and academics were likely to consider his interest in researching spirituality unorthodox, he waited until he retired from Oxford University before he delved deeper and founded the then-called Religious Experience Research Unit (RERU) at Manchester College, Oxford. (It is now the Alister Hardy Religious Experience Research Centre and is based in Wales.)

The goal of Hardy’s research was to discover if people today still had the same kind of mystical experiences they seemed to have had in the past. He began his study by placing adverts in newspapers, asking people to write in with their mystical experiences, in response to what became known as ‘The Hardy Question’: ‘Have you ever been aware of or influenced by a presence or power, whether you call it God or not, which is different from your everyday self?’

‘Thousands of people replied to the adverts, writing about their dreams and spiritual experiences. These responses were compiled into a database to enable researchers to analyze the different natures and functions of people’s religious and spiritual experiences. This is where Ann came in,’ Tigger told me. And so it was that in the mid-1980s in Oxfordshire, a woman named Ann Wetherall spent her days collecting and categorizing people’s dreams, visions and other spiritual experiences.

Looking for a link

Over time, as she examined the letters, Ann began to wonder if there was a common denominator in the accounts.

She noticed that it didn’t seem to matter whether someone was religious or atheist, but, more often than not, it was people who were feeling hopeless or helpless who reported a direct experience of spirituality.

Ann hypothesized that imprisonment might be a context that particularly inspired such despondent feelings and that it therefore might also trigger spiritual experiences. She got in touch with convicted murderer turned sculptor Jimmy Boyle, one of Scotland’s most famous reformed criminals. Boyle helped her to get an advert published in prison newspapers, asking for prisoners to write in about their religious or spiritual episodes. She got quite a response: prisoners in their dozens wrote in to her describing their unusual experiences. Many of them had never mentioned these to anyone before and had wondered if they were going mad.

‘Ann wanted to write back and reassure them that they weren’t, and that these were valid spiritual experiences, which could be built on, but the Alister Hardy Foundation did not reply to letters,’ Tigger explained. ‘That’s why Ann broke away from the research, so that she could start corresponding with the prisoners who were writing in, and offer support.

‘Because of their confinement in cells and separation from the outside world, Ann thought that prisoners’ experience was perhaps rather similar to that of monks. While for prisoners this withdrawal from society was not voluntary, she believed that they too could use their cell as a space for spiritual growth.’

‘What was her interpretation of spiritual growth?’ I asked.

‘Not only becoming more in touch with a greater power, but also becoming more aware of inner feelings and thoughts, as well as more connected and sensitive to other people’s needs,’ Tigger explained.

‘And the means of bringing about this kind of change?’ I asked, already pre-empting the answer…

‘Through meditation, of course.’

From spiritual experience to spiritual development

Tigger explained that she and Ann had spent their childhoods in India, growing up among Buddhist monasteries. Because of this upbringing, Ann had had a lifelong involvement with meditation, and believed that prisoners could benefit from learning it. In her letters back and forth to prisoners, she began sharing with them what she knew about meditation, in order to encourage and support their spiritual development.

Over the next couple of years, Ann’s correspondence with convicts came to strengthen her belief that prisoners had real potential for spiritual development. ‘She thought they had a terrific spirituality, a hunger that wasn’t being met,’ Tigger explained, as our conversation moved on to Ann’s decision to set up a charitable trust, the Prison Ashram Project (now the Prison Phoenix Trust).

Founded in 1988, the organization was at first very small, comprising just Ann and three other volunteers, who wrote to prisoners, encouraging them to use their spiritual experiences as a springboard for future spiritual development.

‘You are more than you think you are’ was the project’s frequent message.

As the name suggests, the Prison Ashram Project had as its central premise that a prison cell can be used as an ashram, a Hindi word that refers to a spiritual hermitage, a place to develop deeper spiritual understanding through quiet contemplation or ascetic devotion.

Hermitage is not only an Eastern practice: in the Western Christian tradition, a monastery is a place of hermitage, too, because it is partially removed from the world. Furthermore, the word ‘cell’ is used in monasteries as well as in prisons, and there are a surprising number of similarities between the living conditions of monks and prisoners. Both live ascetic lives filled with restriction and limitation. Both monks and prisoners are able to meet their basic needs (but little more), both desist from sensual pleasures and the accumulation of wealth, and both follow a strict daily schedule.

Despite these parallels, however, there is undeniably a big difference in how monks and prisoners come to live in their respective cells. For monks living communally in monasteries, as well as hermits who live alone, living ascetically is an intentional choice, aimed at enabling them to better focus on spiritual goals. But for prisoners withdrawing from the world is not their choice; rather, it is imposed upon them as punishment. Which leads to the question: can involuntary confinement really open a door to inner freedom and personal change? Ann Wetherall believed so.

Being confined to a cell for much of the day, even against free will, could be a catalyst for spiritual development. The conditions were conducive; all that anyone needed was a radical shift in thinking. Rather than punishment, incarceration could be reconceived of as an opportunity for positive transformative experience. Prisoners had lost their physical liberty, but they could nevertheless gain spiritual freedom. Ann thought that meditation was the ideal tool with which prisoners could build spiritual growth, requiring only body, mind and breath.

So far, so good. But as Tigger talked, something struck me as a distinct obstacle to peaceful meditation behind bars: the undeniable fact that prisons are busy, noisy places. Granted, there might be some similarities between prisons, monasteries and spiritual retreats, I thought, but surely finding peace and quiet in a prison would be a bit of a mission impossible. Wouldn’t that render any attempt to meditate a bit futile?

‘No.’ Tigger smiled. ‘Ann believed this actually increased the importance and worth of meditation practice; the practice would enable prisoners to find a sense of peace despite their surroundings.’

Crossing continents

As it turned out, Ann was not the first to think of encouraging prisoners’ spiritual development through in-cell meditation. A couple of years after setting up the Prison Ashram Project, she heard about Bo Lozoff, a spiritual leader and prison reform activist doing similar work in the USA.

Curiously, his organization was also called the Prison Ashram Project. Bo first had the idea that a prison cell could be a kind of ashram when his brother-in-law was sentenced to prison for drug smuggling. At the time Bo and his wife Sita were living at an ashram in North Carolina. There, their daily routine involved waking early, wearing all white, working all day without getting paid, abstaining from sex and eating communally. Visiting his brother-in-law in prison, Bo realized there were remarkable parallels between their day-to-day lives.

Around the same time he came across a book by renowned spiritual teacher Ram Dass, entitled Be Here Now. The combination of these two events inspired Bo and Sita to set up their own Prison Ashram Project in 1973, in cooperation with Ram Dass.

Just like Ann, they had begun corresponding with prisoners, offering encouragement and instruction in meditation and also in yoga. They also sent prisoners copies of Ram Dass’s book, along with the book that Bo himself went on to write: We’re All Doing Time: A Guide for Getting Free. The central concept of this book is that it’s not only prisoners who are imprisoned, but that we are all ‘doing time’ because we allow ourselves to be so restricted by hang-ups, blocks and tensions. The message is that through meditation and yoga we can all learn to become free.

The birth of the Prison Phoenix Trust

Not long after meeting Bo, Ann changed her charity’s name to the Prison Phoenix Trust (PPT), in part because she was concerned that the word ‘ashram’ might prove an obstacle for the prison service. She was keen to step things up a notch from written correspondence and start setting up meditation and yoga workshops in prisons themselves. However, even with the new name, prison governors and officers were wary of the charity’s efforts. The Trust tried to get into prisons through the Chaplaincy; however, here too there was a surprising amount of resistance.

It’s worth remembering that in the late 1980s, prison chaplains were almost all Anglican. At that time the Anglican Church was still suspicious of practices such as meditation, which when compared with contemplation or silent prayer seemed ‘unChristian’. Many ministers thought that meditation centred on a spirituality that might be Hindu, Buddhist or even evil (stemming from the notion that to silence the mind also means making it available for the devil).

A 2011 article in the Daily Telegraph highlighted an extreme example of Christian opposition to yoga and meditation, reporting how a Catholic priest named Father Gabriele Amorth, appointed the Vatican’s chief exorcist in 1986, had publicly denounced yoga at a film festival where he had been invited to introduce The Rite (a film about exorcism, starring Anthony Hopkins): ‘Practising yoga is Satanic, it leads to evil just like reading Harry Potter,’ the priest is reported as stating, to an audience of bemused film fans.

Of course, not all devout Christians share such concerns that Christianity and Eastern spiritual practices are incompatible. Offering me another biscuit, Tigger revealed the next chapter of her sister’s tale, wherein Ann would join forces with ‘a very forceful and very amazing character’.

A CATHOLIC ZEN MASTER

‘Spirituality is what you do with those fires that burn within you.’ Sister Elaine

Thousands of miles away from Oxford and Ann’s fledgling charity lived a Catholic nun. As well as being a nun, Sister Elaine was a Zen master. She grew up in Canada, where in her youth she became a professional classical musician for the Calgary Symphony Orchestra. At the age of thirty, however, she realized her true calling and joined the convent of Our Lady’s Missionaries in Toronto. In 1961, after several years at the convent, she was sent to Japan for her first assignment as a Catholic missionary. Her mission was to set up a Conservatory and Cultural Centre in Osaka, where she would teach English and music to Japanese people, as well as to baptise as many of them as possible.

In order to get to know the Japanese people better, she began to practise Zen Buddhism. She started zazen (sitting meditation) and koan study under the guidance of Yamada Koun Roshi, a well-known Zen master from the Japanese Sanbo Kyodan order. Perhaps surprisingly, it did not matter to him that Sister Elaine was a Catholic nun with no intention of becoming a Buddhist. Yamada Koun Roshi did not draw a division between different people or religions, and similarly neither does Sister Elaine, who maintains, ‘There is no separation. We make separation.’

Devoted to her new discipline, Sister Elaine went on to spend some time living with Buddhist nuns in Kyoto, where the daily regime involved ten hours a day of sitting in silence.

To call the koan study lengthy would be an understatement; it took her nearly two decades of studying with her Zen teacher before she was made a roshi. This title, which translates literally as ‘old teacher’, marks the top echelon of Zen teachers. There are only an estimated 100 roshis worldwide. Very few of them are Westerners, but in 1980 Sister Elaine finally became one of them, an accredited Zen teacher of the Sanbo Kyodan order. Her achievement made her the first Canadian, and certainly the first Catholic nun, to be recognized as one of the world’s highest-ranking teachers of Zen.

In 1976, after 15 years in Japan, Our Lady’s Missionaries back in Toronto transferred Sister Elaine to the Philippines. This was during the worst years of the Marcos regime, and Sister Elaine was to be involved with animal husbandry. However, she did more than merely raise livestock. Once in the Philippines she set up a zendo (Zen meditation centre) for the Catholic Church in Manila. Word spread about her work and a leading dissident, Horacio ‘Boy’ Morales, who had headed the New People’s Army against the Marcos dictatorship, came to hear of her. Held as a political prisoner at the Bago Bantay detention centre, Morales asked Sister Elaine to come to prison to teach meditation to him and a group of fellow prisoners, each of whom had, like him, been tortured. His hope was that the practice could help them to cope with the stress of imprisonment and find inner peace.

Despite the hostility of the authorities and worrying reports of other prison visitors ‘vanishing’, Sister Elaine spent four-and-a-half years teaching meditation to those prisoners every week. During that time she witnessed a remarkable change: the prisoners transformed from being angry, tense men, trembling from torture, to being calm. This convinced her both of the therapeutic power of silent meditation and of the potential for prisoners to develop spiritually while incarcerated.

Sister Elaine’s life makes for quite an unusual story, and her work in the Philippines caught the attention of the media and subsequently of Ann Wetherall. Leaning forward in her seat, Ann’s sister, Tigger, told me of the unexpected events that would subsequently unfold.

Ann’s legacy

In 1992, four years after founding the Prison Phoenix Trust, Ann discovered she had terminal cancer. Coming to terms with this news, Ann felt fearful for the prisoners she was involved with; what would happen to her charity after she was gone? She had heard of Sister Elaine and wrote to her, asking if she would consider taking over as director after she died. Sister Elaine flew over from the Philippines to spend a week with Ann to try to come to a decision. Shortly after returning home, she phoned Ann to accept her offer, telling her ‘don’t die until I get there’.

Sadly, Ann passed away while Sister Elaine was on her way back to England. Over the six years Sister Elaine was director, the idea that yoga and meditation are beneficial for prisoners became increasingly accepted among prison governors and officers. They might not have been as interested in the potential spiritual development of prisoners, but many acknowledged the range of other, more down-to-earth benefits: prisoners doing yoga and meditation were reportedly calmer, slept better and felt less stressed and so were easier to work with.

While, like Ann, Sister Elaine believed that meditation was the key to stilling the mind, incorporating yoga into the classes was important: when the body can be still, the mind can be still.

Aged 75, Sister Elaine left the Trust not to retire, but to return to her native Canada to found a similar organization called Freeing the Human Spirit, based in Toronto.

In the years since Sister Elaine’s departure, the Prison Phoenix Trust (PPT) has continued to develop its work, with classes now running in the majority of UK prisons. Reflecting on the Trust’s progress, Sandy Chubb, the PPT’s subsequent director, remarked to me with a smile, ‘Yes, gone are the days when yoga teachers were branded yoghurt pots.’

Hearing the stories about Ann and Sister Elaine, so vividly recounted to me by Tigger and others, including the Trust’s current director Sam Settle, I could see why yoga and meditation might lead to personal change in prisoners. Certainly the PPT had a wealth of anecdotal evidence attesting to their benefits. Over the course of 25 years, PPT letter-writers have received more than 10,000 replies from prisoners reporting the positive effects of these techniques. The benefits range from increased self-esteem, better sleep and reduced dependence on drugs, medication or cigarettes, to improved emotional management and reduced stress.

Anecdote or evidence

I was invited to come and have a look through the filing cabinets that contained these letters; the amount of correspondence astounded me. Yet despite all those positive responses, as a psychologist I couldn’t help but be a little sceptical. Testimonials are all very well, but what was the empirical evidence that yoga and meditation can help incarcerated criminals change for the better? Searching scientific databases, I discovered there was very little rigorous research out there into the measurable psychological effects of these practices on prison populations.

The majority of studies that did exist focused specifically on meditation, with some interesting results. Research into the effects of Transcendental Meditation on criminals had been taking place since the 1970s. For example, a study by US researchers Abrams and Siegel found that those prisoners who received a 14-week course of TM training showed a significant reduction in anxiety, neuroticism, hostility and insomnia compared with the control group. This would seemingly constitute early evidence for the rehabilitative effects of TM. However, the study was criticized on the grounds that it had inadequate controls, limiting the conclusions we can draw from the findings and calling into question the authors’ somewhat liberal interpretation of their statistical results.

More recent studies using other meditation techniques also yielded some promising evidence. In these studies, researchers concluded that meditation led to such positive results as improved psychosocial functioning, a reduction in substance abuse, and decreased recidivism rates.

However, while all that sounds really promising, most of this research also had serious shortcomings. For example, sample sizes were usually very small, there was no control group, or the research drew evidence only from questionnaire measures.

I realized that if we were to draw any realistic conclusions about whether or not yoga and meditation are effective in bringing about measurable psychological changes in incarcerated criminals, we needed better research evidence. And so the seeds were sown for our Oxford Study, the journey and findings of which we reveal in Chapter 8. While this was in the planning, I wanted to gain a deeper understanding of the PPT’s rationale for encouraging prisoners to practise yoga and meditation, and their conceptualizations of personal change.

PERFECT PRISONERS

While the PPT does believe that yoga and meditation can lead to beneficial psychological effects in prisoners, what they’re really interested in is the possibility of a radical ‘self-change’. This involves a significant shift in perspective. Sandy Chubb told me that in her experience (of teaching yoga in prisons), prisoners are lovely to work with. This didn’t surprise me all that much; we all tend to be co-operative when we’re getting to do something we want to do.

What did surprise me was the comment that followed: Sandy told me that ‘prisoners are all perfect’.

Perfect is certainly not the adjective most of us would choose to describe murderers, rapists and paedophiles; for many it’s perhaps even the antonym of the word they would use. I needed Sandy to clarify. ‘What’s perfect about them?’ I asked.

The answer appears to lie in Sandy’s spiritual worldview. Like many others who believe in a universal spirituality, Sandy recognizes the divine nature of each of us, including criminals, and is convinced of the interconnectedness of all things. She smiles serenely when she tells me what to her is a simple, obvious truth: ‘We are a whole creation that works dynamically.’

The concept of unity or non-duality is a central premise in some Eastern spiritual belief systems, and one that effectively eliminates the ‘us’ and ‘them’ mentality that most of us have in relation to convicted criminals. Early in my interview with Sam Settle, the current director of the PPT and a former Buddhist monk, I encountered the same belief: ‘If prisoners realized that we are all connected,’ Sam told me, ‘then they would not commit crimes.’

So while reducing re-offending is not an asserted aim of the PPT, it is considered likely to occur as a side-effect of spiritual growth. The hypothesis is that it is criminals’ mistaken idea of separateness that allows them to act in a harmful way towards others. From Sandy and Sam’s perspective, there is no ‘other’, and there are no ‘bad’ people; we are all part of the same perfect whole and meditation and yoga can help people to realize this.

Later in the book I will discuss how many people share this perspective, people who believe that not just individual but worldwide change is possible, if only there are enough people meditating.

SILENT REHABILITATION

While we could dismiss some of these ideas about the transformative potential of meditation and yoga for prisoners as utopian, Romantic, or la-la-land spirituality, we can also consider them in a purely secular sense, in terms of psychological and behavioural changes.

But even if we cast aside, for now, the spiritual dimension, the notion that yoga and meditation can produce meaningful change in prisoners might still be considered somewhat ‘out there’. The very idea of the possibility of personal change is itself a loaded topic, especially in the context of prisons. Young repeat offenders are often labelled hopeless cases, written off by the time they have barely left their teens, undermining the ethos of rehabilitation that should be central to the prison system. However, for many offenders there are myriad factors that may obstruct attempts to rehabilitate, not only in terms of overcoming backgrounds of adversity but also in terms of their perceived (lack of) prospects for the future.

The institution of home

For many who have lived in prisons from an early age, the prospect of going outside is daunting.

I once worked with a prisoner, ‘John’, who was serving his tenth prison sentence at the age of only 21. He attended every session of the offending behaviour program I was facilitating, only to become suddenly aggressive and disruptive in the final session, to the point where he had to be removed from the group. Talking to him afterwards, trying to understand why he had sabotaged something that could have helped him towards securing an earlier release date, I learned he was scared of being released. ‘There is nothing for me outside,’ he said, visibly upset.

When John was a young child, one of his parents murdered the other; he went on to spend the rest of his childhood in numerous short-term foster care placements. Angry and distrusting of people, he would repeatedly run away from them. He committed his first offence aged ten and received his first custodial sentence aged 15. The frequency of his impulsive crimes meant that he had spent the majority of the past six years behind bars. There were no family or friends waiting for him on the outside. The uncertainty of how to build a meaningful life, alone, in the ‘real world’ was overwhelming. Prison was all he felt he knew.

Self-belief

All staff members working in prisons, from officers to psychologists to governors, are acutely aware that changing prisoners can be extraordinarily difficult, but it’s not impossible. In my own work with young male offenders, I lost count of the number of times I heard ‘he’ll never change’ from prison officers, who generally would have little idea of that individual’s backstory and the factors that contributed to his offending behaviour. Often the prisoners in question were boys still in their teens, some of them coming from such difficult backgrounds that it would have been a miracle if they hadn’t ended up in prison.

The desire to reform is often unsupported, sometimes owing to budget restrictions, but other times owing to a lack of belief. Changing is hard. And it’s even harder without a helping hand.

The support of others, whether friend, therapist or institution, can be fundamental in whether or not we succeed in bringing about a desired change. Feeling that others believe in us can significantly boost our sense of self-efficacy. Feeling that others don’t believe in us at all undermines our self-belief, so that we may start to feel a dramatic waning of our own confidence and motivation to try to change.

Changing attitudes

It was a Thursday afternoon and I was on my lunch break, in between research interviews at a West Midlands prison. I was accompanied by an officer in his late fifties, who had been assigned to facilitate the interviews, escorting prisoners from the wings to the interview room. As our break drew to a close, the officer suddenly deviated from his impromptu monologue on the joys of pigeon fancying, my knowledge of which had substantially increased over the hour, to ask whether I really thought that yoga and meditation would do anything at all for prisoners.

‘Well,’ I replied, ‘we think it might. There’s evidence that it works outside of prisons to reduce stress and increase positive emotions. So it may help prisoners to manage their emotions better and improve their self-control, which might also reduce their aggression.’

‘Ha!’ said the officer. ‘I doubt it.’

‘Why?’ I asked.

‘I don’t think any of these can change,’ he told me. ‘I’m a firm believer that leopards never change their spots.’

It wasn’t just yoga and meditation the officer was dismissing as futile. He went on to say that he thought nothing could be done to change prisoners for the better; each and every one of them was a hopeless cause. ‘No matter what,’ he told me, ‘they will always revert back to what they are. It’s like a man who used to be a philanderer; he could get married to a woman and be faithful for, let’s say, ten years, but in the end, he’ll always cheat again.’

My attempts to debate failed miserably. When I maintained that I did think we could rehabilitate prisoners, he delivered his closing argument: ‘Well I’m older than you and I’ve met quite a lot of different people, so I think I know.’

Fortunately, this old-style officer is not representative of the majority of prison staff I have encountered. Over the last twenty years, a number of accredited offending behaviour programs (psychological group interventions that aim to reduce re-offending) have been developed and shown to be effective in bringing about improvements in prisoner behaviour, such as reduced aggression.

Despite this positive progress, with the reduction rate for recidivism generally around 10 per cent for program-completers, there is still clearly room for new and additional approaches, particularly as many prisoners are reluctant or unable to engage with psychological treatment at all.

Arriving at a recent meeting at HMP Shrewsbury, I was escorted by a female officer who gave me a quick overview of the prison. She told me that the population was mostly sex offenders and that it was the most overcrowded prison in the country, adding, ‘We’re full of bed blockers.’

‘Bed blockers?’ I asked.

She explained that these are prisoners who had been through the sex offenders treatment program, but for one reason or another hadn’t been moved on to a different prison. The result was that they were taking up spaces that other, as yet untreated, offenders could use.

However, the main problem at Shrewsbury was not the ‘bed blockers’, who had accepted their offences and received treatment, but the many sex offenders who were in denial, and so could not be treated. Owing to the nature of their offences, such prisoners may be limited in what activities they can undertake during their sentences. Typically, for their own protection, sex offenders are segregated from ‘mainstream’ prisoners and even with good behaviour are not deemed suitable for outside work.

HMP Shrewsbury was one of the prisons that participated in our own research study. This prison had by far the biggest number of prisoners keen to do yoga and meditation, many more than we could actually manage to interview during the time we had allocated there.

As I interviewed prisoner after prisoner, all expressing a desire to do the yoga classes, it seemed to me that these techniques, if effective, could represent an alternative way to encourage positive personal change in prisoners whom the system might otherwise not be able to reach. Why? Because practising meditation and yoga doesn’t involve asking probing questions about offences of which prisoners may be deeply ashamed, feel in denial of, or simply not yet ready to address.

Sandy confirmed the particular utility of yoga and meditation for this demographic: ‘Not only is silence therapeutic and inclusive, it’s also safe for people with addiction and sex-offending histories.’ On the surface yoga is a physical activity, with desirable physiological benefits; it’s unthreatening, non-blaming and doesn’t require the admission of guilt. In this way it is possible that prisoners who would otherwise avoid explicit attempts to ‘change’ their behaviour may nevertheless engage with a technique that could bring about deep personal transformation.

FROM MONSTER TO BUDDHA

The concept of a prison cell as an ashram is an idea that captures the imagination, and the paradox of finding spiritual freedom through the loss of physical freedom is intriguing. Might there actually be truth in this unusual idea? Can daily yogic sun salutations and deep breathing really make convicted rapists and murderers less violent and impulsive?

While it’s unlikely that yoga and meditation could replace traditional rehabilitative approaches, it seems possible that they may have a unique ability to reach prisoners on a different level: to make them feel more at peace, and more valued and connected. Bo Lozoff summarizes the aim of organizations that teach contemplative techniques to prisoners worldwide when he says that we should ‘allow for transformation, not merely rehabilitation’.

In other words, the change that charities such as his and the PPT seek to encourage goes far beyond the cessation of offending behaviour; we are talking about a radical change in worldview. The PPT’s current director Sam Settle describes this transformation as ‘the forgetting of one’s self as one lives the forgetting of me’. In essence, this means moving from focusing on oneself as a separate individual to seeing oneself as part of a larger whole.

Whether or not we share these ideas about the possibility of the transformation of convicted criminals from sinner to saint, from ‘monster’ to Buddha, on a theoretical and anecdotal level there does seem to be reason to think that yoga and meditation can bring about positive personal change in prisoners.

In Chapter 8 we reveal how we put that theory to the test, but first let’s take a look at what science can tell us about the potential of Eastern techniques for bringing about meaningful change, not just for prisoners but for any of us.

CHAPTER 2

SET LIKE PLASTER

‘Change is an odd process, almost contradictory: you want it, but don’t want it,’ said my clinical supervisor, playing with his curled beard and looking at me. What was he talking about? I had started my training in cognitive behavioural therapy (CBT) eight weeks earlier and was discussing my first client, ‘Mary’, a woman in her thirties, whose husband had died while on a family holiday. He had killed himself jumping off a cliff, right in front of his wife and their young child. Six months after the incident, Mary found herself depressed and sleepless.

‘I felt shock and disbelief,’ she told me, remembering. ‘I felt like I had been disembowelled and bricks sewn inside. I had to register his death the next day and felt terrible anger at having to describe myself as a widow, 24 hours after I had been a wife. Bureaucracy shouldn’t require that, you know?’ I nodded but felt tense, eager to show empathy. For the past eight weeks, I’d spent most . . .

*

from

The Buddha Pill: Can Meditation Change You?

by Dr Miguel Farias and Dr Catherine Wikholm

get it at Amazon.com

Dr Miguel Farias writes about the psychology of belief and spiritual practices, including meditation. He was a lecturer at the University of Oxford and is now the leader of the Brain, Belief and Behaviour group at Coventry University.

Dr Catherine Wikholm is a Clinical Psychologist registered with the Health and Care Professions Council (HCPC) and a Chartered Psychologist with the British Psychological Society (BPS). She completed her undergraduate degree in Philosophy and Theology at Oxford University, before embarking on her psychology training and gaining a Postgraduate Diploma in Psychology, a Masters in Forensic Psychology and a Doctorate in Clinical Psychology. Catherine was previously employed by HM Prison Service, where she worked with young offenders. She went on to work alongside Dr Miguel Farias at the Department of Experimental Psychology, Oxford University, on a randomised controlled trial that looked at the psychological effects of yoga and meditation in prisoners. The findings of this research study sparked the idea for ‘The Buddha Pill’, which she co-wrote while completing her doctorate. Catherine currently works in an NHS child and adolescent mental health service (CAMHS) in London, UK.

What happened when the US last introduced tariffs? – Dominic Rushe.

Anyone?

Willis Hawley and Reed Smoot were reviled for a bill blamed for triggering the Great Depression. Will Trump follow their lead?

As America inches towards a potential trade war over steel prices, can Donald Trump hear whispering voices?

Alone in the Oval Office in the wee dark hours, illuminated by the glow of his Twitter app, does he feel the sudden chill flowing from those freshly hung gold drapes? It is the shades of Smoot and Hawley.

Willis Hawley and Reed Smoot have haunted Congress since the 1930s, when they were the architects of the Smoot Hawley tariff bill, among the most decried pieces of legislation in US history and a bill blamed by some not only for triggering the Great Depression but also for contributing to the start of the second world war.

Pilloried even in their own time, their bloodied names have been brought out like Jacob Marley’s ghost every time America has taken a protectionist turn on trade policy. And America has certainly taken a protectionist turn.

Successive presidents, including Barack Obama and Bill Clinton, have campaigned on the perils of free trade only to drop the rhetoric once installed in the White House. Trump called Mexicans “rapists” on the campaign trail. And China? “There are people who wish I wouldn’t refer to China as our enemy. But that’s exactly what they are,” Trump said.

As commander in chief he has shown no signs of softening, and this week took major action, announcing that steel imports would face a 25% tariff and aluminium 10%.

Canada and the EU said they would bring forward their own countermeasures. Mexico, China and Brazil have also said they are considering retaliatory steps.

Trump doesn’t seem worried. “Trade wars are good,” he tweeted, even as the usually friendly Wall Street Journal thundered that “Trump’s tariff folly” is the “biggest policy blunder of his Presidency”.

It is not his first protectionist move. In his first days in office the president vetoed the Trans-Pacific Partnership (TPP), the biggest trade deal in a generation, said he will review the North American Free Trade Agreement (Nafta), a deal he has called “the worst in history”, and had his visit with Mexico’s president cancelled over his plans to make them pay for a border wall.

Free traders may have become complacent after hearing tough talk on trade from so many presidential candidates on the campaign trail only to watch them furiously backpedal once in office, said Dartmouth professor and trade expert Douglas Irwin. “Unfortunately that pattern may have been broken,” he says. “It looks like we have to take Trump literally and seriously about his threats on trade.”

Not since Herbert Hoover has a US president been so down on free trade. And Hoover was the man who signed off on Smoot and Hawley’s bill.

Hawley, an Oregon congressman and a professor of history and economics, became a stock figure in the textbooks of his successors thanks to his partnership with the lean, patrician figure of Senator Reed Smoot, a Mormon apostle known as the “sugar senator” for his protectionist stance towards Utah’s sugar beet industry.

Before he was shackled to Hawley for eternity, Smoot was more famous for his Mormonism and his abhorrence of bawdy books, a disgust that inspired the immortal headline “Smoot Smites Smut” after he attacked the importation of Lady Chatterley’s Lover, Robert Burns’ more risque poems and similar texts as “worse than opium. I would rather have a child of mine use opium than read these books.”

But it was imports of another kind that secured Smoot and Hawley’s place in infamy.

The US economy was doing well in the 1920s as the consumer society was being born to the sound of jazz. The Tariff Act began life largely as a politically motivated response to appease the agricultural lobby that had fallen behind as American workers, and money, consolidated in the cities.

Foreign demand for US produce had soared during the first world war, and farm prices doubled between 1915 and 1918. A wave of land speculation followed and farmers took on debt as they looked to expand production. By the early 1920s farmers had found themselves heavily in debt and squeezed by tightening monetary policy and an unexpected collapse in commodity prices.

Nearly a quarter of the American labor force was then employed on the land, and Congress could not ignore heartland America. Cheap foreign imports and their toll on the domestic market became a hot issue in the 1928 election. Even bananas weren’t safe. Irwin quotes one critic in his book Peddling Protectionism: Smoot Hawley and the Great Depression: “The enormous imports of cheap bananas into the United States tend to curtail the domestic consumption of fresh fruits produced in the United States.”

Hoover won in a landslide against Alfred E Smith, an out-of-touch New Yorker who didn’t appeal to middle America, and soon after promised to pass “limited” tariff reforms.

Hawley started the bill, but with Smoot behind him it metastasized as lobby groups shoehorned their products into it, eventually proposing higher tariffs on more than 20,000 imported goods.

Siren voices warned of dire consequences. Henry Ford reportedly told Hoover the bill was “an economic stupidity”.

Critics of the tariffs were being aided and abetted by “internationalists” willing to “betray American interests”, said Smoot. Reports claiming the bill would harm the US economy were decried as fake news. Republican Frank Crowther dismissed press criticism as “demagoguery and untruth, scandalous untruth”.

In October 1929, as the Senate debated the tariff bill, the stock market crashed. When the bill finally made it to Hoover’s desk in June 1930 it had morphed from his original “limited” plan to the “highest rates ever known”, according to a New York Times editorial.

The extent to which Smoot and Hawley were to blame for the coming Great Depression is still a matter of debate. “Ask a thousand economists and you will get a thousand and five answers,” said Charles Geisst, professor of economics at Manhattan College and author of Wall Street: A History.

What is apparent is that the bill sparked international outrage and a backlash. Canada and Europe reacted with a wave of protectionist tariffs that deepened a global depression that presaged the rise of Hitler and the second world war. Myriad other factors contributed to the Depression, and to the second world war, but inarguably one consequence of Smoot Hawley in the US was that never again would a sitting US president be so avowedly anti-trade. Until today.

Franklin D Roosevelt swept into power in 1933, and for the first time the president was granted the authority to undertake trade negotiations to reduce foreign barriers on US exports in exchange for lower US tariffs.

The backlash against Smoot and Hawley has continued to the present day. The average tariff on dutiable imports was 45% in 1930; by 2010 it was 5%.

The lessons of Smoot Hawley used to be taught in high schools. Presidents from Lyndon Johnson to Ronald Reagan have enlisted the unhappy duo when facing off with free trade critics. “I have been around long enough to remember that when we did that once before in this century, something called Smoot Hawley, we lived through a nightmare,” Reagan, who came of age during the Great Depression, said in 1984.

They even got a mention in Ferris Bueller’s Day Off when actor Ben Stein’s teacher bores his class with it. “I don’t think the current generation are taught it. It’s in the past and we are more interested in the future.”

But that might be about to change. “The main lesson is that you have to worry about what other countries do. Countries will retaliate,” said Irwin. “When Congress was considering Smoot Hawley in the 1930s they didn’t consider what other countries might do in reaction. They thought other countries would remain passive. But other countries don’t remain passive.”

The consequences of a trade war today would be far worse than in the 1930s. Exports of goods and services account for about 13% of US gross domestic product (GDP), the broadest measure of an economy. It was roughly 5% back in 1920.

“The US is much more engaged in trade, it’s much more a part of the fabric of the country, than it was in the 1920s and 1930s. That means the ripple effects are widespread. Many more industries will be hit by it and the scope for foreign retaliation, which in the case of Smoot Hawley was quite limited, is going to be much more widespread if a trade war was to start.”

“When you start talking about withdrawing from trade agreements or imposing tariffs of 35%, if you are doing that as a protectionist measure, that would be blowing up the system.”

That the promise of “blowing up the system” got Trump elected may be why the ghosts of Smoot and Hawley are once again walking the halls of Congress.

The Guardian

CFT: Focusing on Compassion In Next Generation CBT Dennis Tirch Ph.D * Compassion Focused Therapy For Dummies – Mary Welford * Compassion Focused Therapy – Paul Gilbert.

Compassion Focused Therapy offers therapists new options.

Dennis Tirch Ph.D

Compassion is currently being studied and used as an evidence-based ingredient in effective psychotherapy more than ever before. This might not seem surprising, given that practicing compassion has been at the center of emotional healing in global wisdom traditions for at least 2,600 years. Empathy and emotional validation have been identified as some of the most important components of psychotherapy effectiveness for decades. However, compassion, as a process in itself, has only recently come to be seen as a core focus of psychotherapeutic work. A growing body of research continues to demonstrate how cultivating our compassionate minds can help us to alleviate and prevent a range of psychological problems, including anxiety and shame (Tirch and Gilbert, 2014). Rather than being a soft option, the deliberate activation of our compassion system can generate the courage and psychological flexibility we need to face life’s challenges, and step forward into lives of meaning, purpose and vitality.

Paul Gilbert (2009) has drawn upon developmental psychology, affective neuroscience, Buddhist practical philosophy, and evolutionary theory to develop a comprehensive form of experiential behavior therapy known as Compassion Focused Therapy (CFT). Gilbert describes compassion as a multifaceted process that has evolved from the caregiver mentality found in human parental care and child rearing. As such, compassion includes a number of emotional, cognitive, and motivational elements involved in the ability to create opportunities for growth and change with warmth and care. CFT involves training and enhancing this evolved capacity for compassion.

Gilbert defines the essence of compassion as “a basic kindness, with deep awareness of the suffering of oneself and of other living things, coupled with the wish and effort to relieve it” (2009, p. xiii). This definition involves two central dimensions of compassion. The first is known as the psychology of engagement and involves sensitivity to and awareness of the presence of suffering and its causes. The second dimension is known as the psychology of alleviation and constitutes both the motivation and the commitment to take actual steps to alleviate the suffering we encounter (Gilbert and Choden, 2013).

Over the last few years, the research base for compassion psychology generally and CFT specifically has been growing at a remarkable rate, with a rapid increase in the number of research and clinical publications addressing compassion. For example, the last ten years have seen a major upsurge in exploration into the benefits of cultivating compassion, especially through imagery practice (Fehr, Sprecher, and Underwood, 2008). Neuroscience and imaging research has demonstrated that practices of imagining compassion for others produce changes in the frontal cortex, the immune system, and overall well-being (Lutz et al., 2008). Notably, one study (Hutcherson, Seppala, and Gross, 2008) found that even just a brief loving-kindness meditation increased feelings of social connectedness and affiliation toward strangers.

Several compassion-focused intervention components have been found to enhance psychotherapy outcomes, and to serve as mediator variables in outcomes. For example, one study (Schanche, Stiles, McCullough, Svartberg, and Nielsen, 2011) found that self-compassion was an important mediator of reduction in negative emotions associated with personality disorders. In a study of the effectiveness of mindfulness-based cognitive therapy for depression (Kuyken et al., 2010), researchers found that self-compassion was a significant mediator between mindfulness and recovery. In fact, in a meta-analysis of research concerning both clinical and nonclinical settings, compassion-focused interventions were found to be significantly effective (Hofmann et al., 2011).

CFT is also seeing increasing empirical support through outcome research. An early clinical trial involving a group of people with chronic mental health problems who were attending a day hospital (Gilbert and Procter, 2006) found that CFT significantly reduced self-criticism, shame, sense of inferiority, depression, and anxiety. In other outcome research, CFT has been found to be significantly effective for the treatment of personality disorders (Lucre and Corten, 2012), eating disorders (Gale, Gilbert, Read, and Goss, 2012), psychosis (Braehler, Harper, and Gilbert, 2012), and in people presenting to community mental health teams (Judge, Cleghorn, McEwan, and Gilbert, 2012). As CFT continues to become more widely disseminated and growing numbers of clinicians and researchers acquire understanding and skill in its methods and philosophy, increasing outcome research will further test the model, leading to innovation and improvement.

The following brief tips can help psychotherapists begin to appreciate how useful a compassion focus can be in practicing ACT, CBT or, in fact, any form of psychotherapy. Furthermore, we can see how remembering to practice compassion for ourselves might help to restore the energy and attention we bring to our work of sharing compassion with our clients. Feel free to experiment with the following:

1. “It is not your fault…”

From a perspective of compassion, we remember how much of the pain and suffering in life is not of our choosing, and couldn’t really be our fault. In CFT we practice the “wisdom of no-blame” which means that taking responsibility for the direction you choose in life is essential, while languishing in shame, social fears and self-blame seldom leads to effective action. We know we didn’t choose our place in the genetic lottery. We didn’t choose to have a tricky human brain that is set up with a hair-trigger threat detection system and confusing loops of thoughts and actions. We didn’t choose our parents, our childhood or the myriad of social circumstances of life. By realizing that much of what we suffer with is simply not our fault, we can begin to activate compassion for ourselves and others, as we contact and engage with the tragedies of life.

2. Holding ourselves and others in warmth and kindness

When humans are in the presence of warmth, acceptance and affiliative emotions, we are likely to operate in our most flexible, empathic, responsive and healthiest mode. From the day we are born and throughout our lives, the presence of kindness, support and emotional strength has powerful impacts on every aspect of our health and behavior. In CFT, we use methods drawn from ancient visualization practices, as well as modern techniques drawn from method acting, to create the conditions and context that can allow for the experience of compassion. So, when we practice compassion for ourselves and others, we remember to slow down, to have a warm and caring expression on our face, and to use open and centered body language. Adopting a slow pace of breathing and a warm tone of voice, we do all that we can to invite an experience of compassion. Images that evoke compassion are also used to bring us into contact with our compassionate mind.

Can you imagine the most elegant cognitive reframe shouted at you in a cruel voice, such as a depressed client telling themselves, “The evidence doesn’t add up that you are a loser, so stop being so stupid about everything and suck it up and deal with life!”? Perhaps even worse, can you imagine the condemning inner monologue of a mindfulness practitioner saying something like, “You’re not supposed to be judgemental about judging your thoughts! My God, you are terrible at this!” No matter how clever the content of our minds may seem to be, an emotional tone of acceptance, kindness and compassion is an essential ingredient of our experience of well-being.

3. Practicing compassion as a flow

We all can feel distressed in our work as psychotherapists, when we repeatedly encounter the suffering of others, which activates sympathetic emotional pain that we experience within our own minds, hearts and brains. Practicing deliberate, consistent compassion for ourselves and for others can help us to prevent empathic distress fatigue, and can build our inner architecture of compassionate strength. When you find yourself feeling that your reservoir of empathy, wisdom and warmth is slightly drained, deliberately breathe in compassionate intentions for yourself. As you exhale, direct compassionate intentions towards your client. This can be done silently, secretly, and consistently. As we breathe in, we wish for our suffering to cease and for ourselves to find peace and happiness. As we breathe out, we wish for our client’s suffering to cease also, and we wish them happiness, wellness and an end to needless struggles. When this simple gesture becomes a therapist’s habit, they can quickly activate affiliative emotions to help them work towards their own compassionate mission of alleviating and preventing the suffering that they find in themselves and in others.

*

Dennis Tirch, Ph.D., is a compassion-focused psychologist, the author of The Compassionate Mind Guide to Overcoming Anxiety, and a faculty member at Weill Cornell Medical College.

Paul Gilbert, Ph.D., is currently a professor of clinical psychology at the University of Derby in the United Kingdom, and director of the Mental Health Research Unit at Derbyshire Mental Health Trust.

***

Compassion Focused Therapy For Dummies
Mary Welford.

Introduction

You can work through a never-ending list of things you could do to improve your wellbeing. Getting more sleep, taking regular exercise, eating a healthier diet, developing a positive mental attitude and drinking less alcohol are just some of the things you may benefit from. Advice comes from the TV, newspapers, self-help books, friends, relatives, colleagues, healthcare professionals and even the chats we have with ourselves! But it’s hard to motivate ourselves to make helpful changes. It’s even harder to maintain them.

Compassion Focused Therapy (CFT) is here to help. This approach offers life-changing insights into our amazing capacities and also the challenges we face in our everyday lives. By understanding ourselves, we become motivated to act out of true care for our wellbeing. This changes the relationship we have with ourselves and others. Practicing CFT won’t mean you suddenly turn into a ‘perfect’ version of yourself. It does however mean that you become more aware of the choices you have and you’re motivated to make ones that are more helpful to you. And yes, you find plenty of advice in here to guide you on your way too!

About This Book

Compassion Focused Therapy For Dummies contains a wealth of important information that can help you to understand yourself, and others, better. It also introduces you to practices that you can integrate into your everyday life, minute by minute, hour by hour, day by day…. I’ve used as little jargon as possible and avoided off-putting technical terms, so you don’t need to approach this book with a background knowledge of psychology. Simply put, if you’re in possession of a human brain and you’d like to discover more about CFT, this book is written for you.

That said, two factors may motivate you to continue developing your understanding of CFT once you finish this book:

– CFT is rooted in a scientific understanding of what it is to be human. As such, the approach constantly evolves to reflect the science. In the same way as it’s helpful to keep up with advancing technology, it’s also good to keep up with advancing our understanding of ourselves. We humans are highly complex.

– This book simply doesn’t have the room to do CFT complete justice – not if you want to be able to lift it up! When you finish reading, you may want to move on to explore the comprehensive work of Paul Gilbert (the originator of the CFT approach), his colleagues and collaborators.

Foolish Assumptions

In writing this book, I’ve had to make a few assumptions about you. I’ve assumed that:

– You’re interested in improving your wellbeing.
– You appreciate that CFT is based on an incredible amount of research – but you don’t necessarily want to plough through it all!
– You realise that I’ve had to make some tough decisions about what to include and what to leave out. Hopefully most of the choices I’ve made are right (but thankfully I won’t criticise myself if I’ve made a mistake; I hope you don’t either!).
– You recognise that I’m not trying to pass CFT off as my own creation. Instead, I set out to describe the work of Paul Gilbert and colleagues (of whom I am privileged to be one).

– You may be selective about which parts of the book you read. As such, I’ve written this book in a way that allows each chapter to ‘stand alone’ so that you can pick and choose the content you want to read, and when you want to read it.
– You’re prepared to give new things a go!
– If you’re a therapist or studying CFT, I also assume that you recognise the importance of learning the approach ‘from the inside out’, and as such that you’ll work through the book with this in mind.

Beyond the Book

In addition to the material in this book, I also provide a free access-anywhere Cheat Sheet that offers some helpful reminders about the many benefits of CFT. To get this Cheat Sheet, simply go to http://www.dummies.com and search for ‘Compassion Focused Therapy For Dummies Cheat Sheet’ in the Search box.

Where to Go from Here

If you’re new to CFT, you may find it helpful to start with Chapter 1 before you decide how to tackle the rest of the chapters (you may even decide that you want to read the book from start to finish – but you don’t have to take that approach, as you’ll find plenty of helpful cross-references to other useful chapters as you work through each chapter).

However you decide to begin, do this at a pace to suit both your understanding and emotional experience. If you have some experience of CFT, you may choose to skip to a particular topic due to a need or question you may have. If this is the case, use the table of contents and the index to help you find your way to the required information. Regardless of how you find your way around this book, I hope you appreciate the journey.

Finally, CFT aims to assist you in developing a compassionate understanding of, and relationship with, yourself and others. If you find the approach helpful, it’s likely to become a way of life. To support your journey, you can access a number of courses, which can also connect you with a wider group of people. You can find suitable courses advertised on a range of websites, including http://www.compassionatemind.co.uk, http://www.compassioninmind.co.uk and http://www.compassionatewellbeing.co.uk.

Part 1

Getting Started with Compassion Focused Therapy

IN THIS PART
– Discover what CFT is all about and how it can be helpful.
– Explore what compassion is, including the skills and attributes of compassion.
– Find out about the challenges we face and how our minds are organised.

Chapter 1
Introducing Compassion Focused Therapy

IN THIS CHAPTER
– Understanding how Compassion Focused Therapy works
– Discovering the benefits of compassion
– Exploring the effects of shame and self-criticism
– Beginning your journey
– Reaching out to others with compassion

People are more similar than different. We’re all born into a set of circumstances that we don’t choose, and in possession of a phenomenal yet very tricky brain. We’re all trying to get by, doing the best we can. The sooner we wake up to this reality the better.

Compassion Focused Therapy (CFT) is here to help. This approach aims to liberate you from shame and self-criticism, replacing these feelings with more helpful ways of relating to yourself. It helps you to choose the type of person you want to be and to develop ways to make this choice a reality. In this chapter, I introduce you to CFT, offering you an understanding of how it works and helping you to understand the benefits. I also point out the steps you may take along the way as you work with the information in this book. Finally, I take a moment to help you connect to the wider community around you as you begin this journey.

CFT advocates that you don’t rush to ‘learn’ about the approach but instead allow space to experience and ‘feel’ it. So take your time with this book as you apply it to your life, and really discover the benefits.

Getting to Grips with Compassion Focused Therapy

CFT was founded by UK clinical psychologist Paul Gilbert, OBE.

The name of the approach was chosen to represent three important aspects:

Compassion, in its simplest yet potentially most powerful definition, involves a sensitivity to our own, and other people’s, distress, plus a motivation to prevent or alleviate this distress. As such, it has two vital components. One involves engaging with suffering while the other involves doing something about it. Chapter 2 delves into the ins and outs of compassion in more detail.

Focused means that we actively develop and apply compassion to ourselves. It also involves accepting and experiencing compassion from and for others.

Therapy is a term to describe the processes and techniques used to address an issue or difficulty.

CFT looks to social, developmental and evolutionary psychology and neuroscience to help us understand how our minds develop and work, and the problems we encounter. This scientific understanding (of ourselves and others) calls into question our experiences of shame and self-criticism and helps us to develop the motivation to make helpful changes in our lives.

CFT utilises a range of Eastern and Western methods to enhance our wellbeing. Attention training, mindfulness and imagery combine with techniques used in Cognitive Behavioural Therapy (CBT), and Person Centred, Gestalt and Narrative therapies (to name but a few), resulting in a powerful mix of strategies that can help you become the version of yourself you wish to be.

CFT is often referred to as part of a ‘third-wave’ of cognitive behavioural therapy because it incorporates a number of CBT techniques. However, CFT derives from an evolutionary model (which you find out more about in Chapters 3, 4 and 5) and it uses techniques from many other therapies that have been found to be of benefit. As such, CFT builds upon and integrates with other therapies. As therapies become more rooted in science, we may see increasing overlap rather than diversification.

Compassion can involve kindness and warmth, but it also takes strength and courage to engage with suffering and to do something about it. CFT is by no means the easy or ‘fluffy’ option. Head to Chapter 6 to address some of the myths associated with compassion.

You may be reading this book because you want to find out more about this form of therapy. Alternatively, you may want to develop your compassionate mind and compassionate self out of care for your own wellbeing. The ‘why’, your motivation for reading this book, has a big effect on the experience and, potentially, the outcome. Personally, I hope that whatever your motivation, you consider applying the approach to yourself so that you can learn it ‘from the inside out’.

Defining common terms

You may find that some of the terms used in CFT are new to you. Here are a few common terms that I use throughout this book, along with an explanation of what they mean:

Common humanity: This refers to the fact that, as human beings, we all face difficulties and struggles. We’re more alike than different, and this realisation brings with it a sense of belonging to the human family.

Tricky brain: Our highly complex brains can cause us problems. For example, our capacity to think about the future and the past makes us prone to worry and rumination, while our inbuilt tendency to work out our place in a hierarchy can have a huge impact on our mood and self-esteem. In CFT, we use the term tricky brain to recognise our brain’s complexity and the problems this complexity can lead to. We consider our tricky brain in more detail in Chapter 3.

Compassionate mind: This is simply an aspect of our mind. It comes with a set of attributes and skills that are useful for us to cultivate (I introduce these attributes and skills in Chapter 2). This frame of mind is highly important for our wellbeing, relationships and communities. But just as we have a compassionate mind, we also have a competitive and threat-focused mind –which is highly useful, if not a necessity, at certain times (Chapter 4 takes a look at our threat-focused mind).

Compassionate mind training: This describes specific activities designed to develop compassionate attributes and skills, particularly those that influence and help us to regulate emotions. Attention training and mindfulness are used as a means to prepare us for this work, and we look at these practices in Part 3.

Compassionate self: This is the embodiment of your compassionate mind. It’s a whole mind and body experience. Your compassionate self incorporates your compassionate mind but also moves and interacts with the world.

Compassionate self cultivation: Your compassionate self is an identity that you can embody, cultivate and enhance. Compassionate self cultivation describes the range of activities that help you develop your compassionate self. Head to Chapter 10 for more on the cultivation of your compassionate self.

Engagement in the compassionate mind training and compassionate self cultivation activities provided in this book is often referred to as ‘physiotherapy for the brain’, as their use has been found to literally change the brain! Compassionate mind training and compassionate self cultivation are integral to CFT, but there’s so much more to CFT.

For many, getting to a point at which you can see the relevance and benefits of compassionate mind training and compassionate self cultivation, and overcome blocks and barriers to compassion, is the most significant aspect of your compassionate journey.

Exercises: These are activities for you to try. Sometimes they help to illustrate a point or provide a useful insight. Other exercises can give you an idea of what helps you to develop and maintain your compassionate mind.

Practice: Once you’re aware of which exercises are helpful to you, you can then incorporate these into your everyday life. Regular use of these exercises becomes your practice.

Observing the origins of CFT

CFT is closely tied to advances in our understanding of the mind and, because scientific advances never stop, the therapy continues to adapt and change based upon it. Much of this book focuses on sharing the science to help develop a compassionate understanding of yourself and a sense of connection with fellow travellers on this mortal coil.

CFT is also born out of a number of clinical observations:

– People demonstrating high levels of shame and self-criticism often struggle with standard psychological therapies. For example, using CBT, many find that they’re not reassured by the generation or discovery of alternative beliefs and views and that this doesn’t result in changes to the way they feel. Individuals may say ‘Logically, I know I’m not bad/not to blame, but I still feel it’ and ‘I know it’s unlikely that things will go wrong, but I still feel terrible’.

– What we say to ourselves is important, but how we say it is even more important.

Ever called yourself ‘idiot’ in a light-hearted and jovial manner? You probably did so without feeling any negative effects. But, have you ever called yourself an idiot in a harsh and judgemental manner? You probably felt much worse on that occasion, perhaps resulting in an urge to withdraw or isolate yourself.

Consider phrases such as, ‘look on the bright side’ or ‘count your blessings’.

Sometimes these phrases can be said in a life-affirming way, but using a condescending, frustrated or angry tone represents a whole different ball game. This helps illustrate that your emotional tone is important. Therapy can result in improvement in mood, self-esteem, sense of control and achievement, alongside a reduction in difficulties.

However, life events can trigger relapse. How we relate to ourselves, especially when life doesn’t go the way we hope, is pivotal to our ongoing wellbeing. Post therapy, many people report that they never disclosed to their therapist the things that caused them the most distress. This resulted from their sense of shame and the way they believed others (the therapist) would feel about them.

In addition to this, consider how many people simply don’t seek help at all because they fear what others think. People struggle to feel loved, valued, safe or content if they’ve never experienced these feelings. For some people, these feelings are alien concepts and, most of all, alien experiences, difficult to generate by discussion alone. As such, it’s important to develop the emotional resources and skills to deal with difficult emotions without turning to alcohol, food, drugs, work, excessive exercise or particular fixations.

– Most of us struggle with emotions such as anger, anxiety and vulnerability, but many also find positive emotions extremely difficult, even frightening. For some people, care, kindness, love and intimacy are terrifying, and to be avoided.

– People experiencing depression often worry that something bad will happen when their mood lifts.

– Likewise, feelings of connection and trust often stir up feelings of isolation and rejection, and a fear of loss.

These difficulties can interfere with the goals we set ourselves unless we address them.

CFT is an accumulation of years of research, clinical insights and teachings drawn from a broad range of areas. Much of this research and study is summarised and published in scientific papers, textbooks and self-help books by Paul Gilbert and colleagues. A number of websites also provide additional resources. You can find details of these in the Appendix. This book provides you with a starting point for your CFT journey and offers a framework upon which you can hang your future CFT practice – use these resources to develop your practice further.

TAKING A COMPASSIONATELY THERAPEUTIC APPROACH

It has long been established that compassionate, respectful and supportive relationships are key to our wellbeing and integral to effective psychotherapies. A key goal of many therapies is the development of a better relationship with yourself. However, different therapies place emphasis on different methods to account for and produce change, for example:

– CBT focuses primarily (but not exclusively) on the link between thoughts, feelings and behaviours, and helps you generate new thoughts and behaviours in order to change your feelings.
– Interpersonal therapy focuses on your relationships and how they affect you.
– Psychodynamic therapy aims to bring the unconscious mind into consciousness, helping you to experience and understand your true feelings in order to resolve them.

In contrast, CFT begins with your experience of compassion from your therapist (in person or through books like this one). This relationship with your therapist is pivotal. It then focuses on the personal development and cultivation of compassion to help you to make beneficial choices for yourself and for others.

With this in mind, this book contains quite a bit of me – as an author, as a psychologist and, most of all, as a human being who struggles too. I hope that the bits of me enhance your experience of reading the words I have chosen to write for you.

Making the Case for Compassion

If we view compassion as ‘a sensitivity to our own and other people’s distress plus a motivation to prevent or alleviate it’, we can easily appreciate the many individual, group and societal benefits to developing and maintaining compassion in our lives. It makes intuitive sense and it’s the reason why compassion has been a central component of many religious and spiritual traditions across the centuries.

Research studies support the benefits of bringing compassion into your life. Higher levels of compassion are associated with fewer psychological difficulties. Compassion enhances our social relationships and emotional wellbeing: it alters our neurophysiology in a positive way and can even strengthen our immune systems. Research also suggests that CFT can be successfully used to address difficulties associated with eating, trauma, mood and psychosis.

For me, however, the power of the CFT approach is most evident in the training of clinicians. As they discover the approach in order to help their clients, they often report that applying CFT in their personal lives can be transformative, leading many to develop and maintain their own personal practice. I believe that personal practice is vital for any clinician. I attribute much of my wellbeing, and my ability to engage with other people’s suffering, to the application of this approach in my own life.

SO I’LL NEVER FEEL BAD AGAIN?

CFT won’t rid you of life’s difficulties, and practising it doesn’t mean you’ll find yourself day after day serenely swanning around, impervious to whatever comes your way. We practise compassion because life is hard. Compassion can assist us to make helpful choices and, when ready, create a space in which we can work through strong emotions, and grieve for things we’ve lost and wish had been different. With compassion, we relate to our anger, anxiety and sadness with kindness, warmth and non-judgement. This allows us to consider the reasons such emotions are there, work through them and face the issues they’re alerting us to. The development and cultivation of compassion isn’t a quick fix. It’s a way of living our lives.

Understanding the Effects of Shame and Self-Criticism

Shame and self-criticism are common blocks to wellbeing, and CFT is designed to overcome them. The following sections help you consider how shame and self-criticism can affect you and what you can do to address and overcome these issues.

The isolating nature of shame

Shame is an excruciatingly difficult psychological state. The term comes from the Indo-European word ‘sham’, meaning ‘to hide’, and, as such, the experience of shame is isolating. When we feel shame, we feel bad about ourselves. We believe others judge us as inadequate, inferior or incompetent.

*

The next exercise helps you to explore the nature of shame and how it may affect you.

Begin by finding a place you can sit for a short time that is free of distractions. Allow yourself to settle for a few moments. It may help to lower your gaze or close your eyes during the exercise. Bring to mind a time when you felt ashamed (nothing too distressing, but something you feel okay to revisit briefly). Allow the experience to occupy your mind for a few moments.
Slowly ask yourself the following questions, allowing time after each question to properly explore your experience:
– How (and where) does shame feel as a sensation in your body?
– What thoughts go through your mind about yourself?
– What do you think other people thought/would think or make of you if they knew this about you?
– What emotions do you feel? What does it make you want to do?

Allow the experience to fade from your mind’s eye. Recall a time you’ve felt content or happy, perhaps on your own or with someone else, and let this memory fill your mind and body.

Depending upon the situation you brought to mind, a sense of anxiety, disgust or anger may have come to the fore. You may feel exposed, flawed, inadequate, disconnected or bad. Maybe you experience the urge to curl up, hide or run away, or perhaps feelings of anger and injustice leave you with the urge to defend yourself or confront someone.

*

Often, shame results in a feeling of disconnection. We don’t like ourselves (or a part of ourselves) and we don’t want to experience closeness to others because this may result in rejection. Our head goes down and we want to creep away. In addition, shame can affect our bodily sensations, maybe leading to tension, nausea or hotness. When you combine these negative views of yourself with predicted negative views from others, you create a very difficult concoction of experiences.

Shame brings with it a range of difficult experiences. Strong physical sensations, thoughts and images are just some of them. Emotions such as anxiety, sadness and anger can race through you as you feel the urge to withdraw, isolate or defend yourself.

Some of the things we feel shame about include:
– Our body (for example, its shape, or our facial features, hair or skin)
– Our body in action (for example, when sweating, urinating, defecating, burping, shaking, walking or running)
– Our health (for example, illnesses, infections, diseases or genetic conditions)
– Our mind (for example, our thoughts, including any intrusive images in our heads, our impulses, forgetfulness and our psychological health)
– Our emotions (for example, anxiety, anger, disgust, sadness, jealousy or envy)
– Our behaviour (for example, things we’ve said and the way we’ve said them, our use of alcohol and drugs, our compulsions, our eating patterns, or our tendency to avoid other people)
– Our environment (for example, our house, neighbourhood, car or bedroom)
– Other people (for example, our friends, family, cultural or religious group, or community)

Exploring why we feel shame

Human beings are social animals and need the protection, kindness and caring of others. Our brains are social organs. We like to feel valued, accepted and wanted by those around us in order to feel safe. There’s no shame in this. These needs represent a deep-rooted part of us that’s been highly significant in our evolution and survival. Shame begins in how you feel you live in the mind of another, and it is a social regulator. In other words, we’re programmed to try to work out, ‘What are they thinking about or feeling toward me?’, ‘Do they like me?’ and ‘Who can I trust?’

Just to add a further layer of complexity, we also try to work out, ‘Do I like myself or this aspect of me?’ and ‘Can I trust myself?’ If we perceive rejection from our social group or reject an aspect of ourselves, shame can be the result.

Although difficult to experience, shame can trigger us to make helpful changes and others to come to our aid in order to soothe the difficulties we experience. But what happens if we feel shame about things we are unable to change (such as our appearance, an aspect of our personality or our culture)? What happens if shame is attached to historical events that we blame ourselves for and can do nothing about? What happens when nobody comes to our assistance or we’re unable to accept the help offered to us?

*

Dr. Mary Welford, Consultant Clinical Psychologist, lives and works in the South West of England. She is a founding member of the Compassionate Mind Foundation, was Chair of the charity from 2009 to 2015, and authored The Compassionate Mind Guide to Building Self-Confidence.

*

from

Compassion Focused Therapy For Dummies

by Mary Welford

get it at Amazon.com

***

COMPASSION FOCUSED THERAPY

Paul Gilbert

Research into the beneficial effect of developing compassion has advanced enormously in the last ten years, with the development of inner compassion being an important therapeutic focus and goal.

This book explains how Compassion Focused Therapy (CFT)—a process of developing compassion for the self and others to increase well-being and aid recovery—varies from other forms of Cognitive Behaviour Therapy.

Comprising 30 key points this book explores the founding principles of CFT and outlines the detailed aspects of compassion in the CFT approach. Divided into two parts—Theory and Compassion Practice—this concise book provides a clear guide to the distinctive characteristics of CFT. Compassion Focused Therapy will be a valuable source for students and professionals in training as well as practising therapists who want to learn more about the distinctive features of CFT.

Paul Gilbert is Professor of Clinical Psychology at the University of Derby and has been actively involved in researching and treating people with shame-based and mood disorders for over 30 years. He is a past President of the British Association for Behavioural and Cognitive Psychotherapies, a fellow of the British Psychological Society, and has been developing CFT for twenty years.

Part 1

THEORY: UNDERSTANDING THE MODEL

1 Some basics

All psychotherapies believe that therapy should be conducted in a compassionate way that is respectful, supportive and generally kind to people (Gilbert, 2007a; Glasser, 2005). Rogers (1957) articulated core aspects of the therapeutic relationship involving positive regard, genuineness and empathy—which can be seen as “compassionate”. More recently, helping people develop self-compassion has received research attention (Gilbert & Procter, 2006; Leary, Tate, Adams, Allen, & Hancock, 2007; Neff, 2003a, 2003b) and become a focus for self-help (Germer, 2009; Gilbert, 2009a, 2009b; Rubin, 1975/1998; Salzberg, 1995).

Developing compassion for self and others, as a way to enhance well-being, has also been central to Buddhist practice for thousands of years (Dalai Lama, 1995; Leighton, 2003; Vessantara, 1993).

After exploring the background principles for developing Compassion Focused Therapy (CFT), Point 16 outlines the detailed aspects of compassion in the CFT approach. We can make a preliminary note, however, that different models of compassion are emerging based on different theories, traditions and research (Fehr, Sprecher, & Underwood, 2009).

The word “compassion” comes from the Latin word compati, which means “to suffer with”. Probably the best-known definition is that of the Dalai Lama who defined compassion as “a sensitivity to the suffering of self and others, with a deep commitment to try to relieve it”, i.e., sensitive attention-awareness plus motivation. In the Buddhist model true compassion arises from insight into the illusory nature of a separate self and the grasping to maintain its boundaries—from what is called an enlightened or awake mind.

Kristin Neff (2003a, 2003b; see http://www.self-compassion.org), a pioneer in the research on self-compassion, derived her model and self-report measures from Theravada Buddhism. Her approach to self-compassion involves three main components:
1. being mindful and open to one’s own suffering;
2. being kind and non-self-condemning; and
3. an awareness of sharing experiences of suffering with others rather than feeling ashamed and alone—an openness to our common humanity.

In contrast, CFT was developed with and for people who have chronic and complex mental-health problems linked to shame and self-criticism, and who often come from difficult (e.g., neglectful or abusive) backgrounds.

The CFT approach to compassion borrows from many Buddhist teachings (especially the roles of sensitivity to and motivation to relieve suffering) but its roots are derived from an evolutionary, neuroscience and social psychology approach, linked to the psychology and neurophysiology of caring—both giving and receiving (Gilbert, 1989, 2000a, 2005a, 2009a). Feeling cared for, accepted and having a sense of belonging and affiliation with others is fundamental to our physiological maturation and well-being (Cozolino, 2007; Siegel, 2001, 2007). These are linked to particular types of positive affect that are associated with well-being (Depue & Morrone-Strupinsky, 2005; Mikulincer & Shaver, 2007; Panksepp, 1998), and a neuro-hormonal profile of increased endorphins and oxytocin (Carter, 1998; Panksepp, 1998).

These calm, peaceful types of positive feelings can be distinguished from those psychomotor activating emotions associated with achievement, excitement and resource seeking (Depue & Morrone-Strupinsky, 2005). Feeling a positive sense of well-being, contentment and safeness, in contrast to feeling excited or achievement focused, can now be distinguished on self-report (Gilbert et al., 2008). In that study, we found that emotions of contentment and safeness were more strongly associated with lower depression, anxiety and stress, than were positive emotions of excitement or feeling energized. So, if there are different types of positive emotions—and there are different brain systems underpinning these positive emotions—then it makes sense that psychotherapists could focus on how to stimulate capacities for the positive emotions associated with calming and well-being.

As we will see, this involves helping clients (become motivated to) develop compassion for themselves, compassion for others and the ability to be sensitive to the compassion from others. There are compassionate (and non-compassionate) ways to engage with painful experiences, frightening feelings or traumatic memories.

CFT is not about avoidance of the painful, or trying to “soothe it away”, but rather is a way of engaging with the painful. In Point 29 we’ll note that many clients are fearful of compassionate feelings from others, and for the self, and it is working with that fear that can constitute the major focus of the work.

A second aspect of the CFT evolutionary approach suggests that self-evaluative systems operate through the same processing systems that we use when evaluating social and interpersonal processes (Gilbert, 1989, 2000a).

So, for example, as behaviourists have long noted, whether we see something sexual or fantasise about something sexual, the sexual arousal system is the same—there aren’t different systems for internal and external stimuli. Similarly, self-criticism and self-compassion can operate through similar brain processes that are stimulated when other people are critical of or compassionate to us. Increasing evidence for this view has come from the study of empathy and mirror neurons (Decety & Jackson, 2004) and our own recent fMRI study on self-criticism and self-compassion (Longe et al., 2010).

Interventions

CFT is a multimodal therapy that builds on a range of cognitive-behavioural (CBT) and other therapies and interventions.

Hence, it focuses on attention, reasoning and rumination, behaviour, emotions, motives and imagery.

It utilizes: the therapeutic relationship (see below); Socratic dialogues; guided discovery; psycho-education (of the CFT model); structured formulations; thought, emotion, behaviour and “body” monitoring; inference chaining; functional analysis; behavioural experiments; exposure and graded tasks; compassion focused imagery; chair work; enactment of different selves; mindfulness; learning emotional tolerance; learning to understand and cope with emotional complexities and conflicts; making commitments for effort and practice; illuminating safety strategies; mentalizing; expressive (letter) writing; forgiveness; distinguishing shame-based self-criticism from compassionate self-correction; and out-of-session work and guided practice—to name a few!

Feeling the change

CFT adds distinctive features to traditional CBT-type approaches in its compassion focus and its use of compassion imagery.

As with many of the recent developments in therapy, special attention is given to mindfulness in both client and therapist (Siegel, 2010). In the formulation CFT is focused on the affect-regulation model outlined in Point 6, and interventions are used to develop specific patterns of affect regulation, brain states and self-experiences that underpin change processes.

This is particularly important when it comes to working with self-criticism and shame in people from harsh backgrounds. Such individuals may not have experienced much in the way of caring or affiliative behaviour from others and therefore the (soothing) emotion-regulation system is less accessible to them. These are individuals who are likely to say, “I understand the logic of [say] CBT, but I can’t feel any different”. To feel different requires the ability to access affect systems (a specific neurophysiology) that give rise to our feelings of reassurance and safeness. This is a well-known issue in CBT (Leahy, 2001; Stott, 2007; Wills, 2009, p. 57).

Over twenty years ago I explored why “alternative thoughts” were not “experienced” as helpful. This revealed that the emotional tone, and the way that such clients “heard” alternative thoughts in their head, was often analytical, cold, detached or even aggressive. An alternative thought to feeling a failure, like “Come on, the evidence does not support this negative view; remember how much you achieved last week!”, will have a very different impact if said to oneself (experienced) aggressively and with irritation than if said slowly and with kindness and warmth. It was the same with exposures or homework tasks—the way they are done (bullying and forcing oneself versus encouraging and being kind to oneself) can be as important as what is done.

So, it seemed clear that we needed to focus far more on the feeling of alternative thoughts, not just their content—indeed, an overfocus on content was often not helpful.

So, my first steps into CFT simply involved encouraging clients to imagine a warm, kind voice offering them the alternatives, or working with them in their behavioural tasks. By the time of the second edition of Counselling for Depression (Gilbert, 2000b) a whole focus had become concentrated on “developing inner warmth” (see also Gilbert, 2000a).

So, CFT progressed from doing CBT and emotion work with a compassion (kindness) focus and, then, as the evidence for the model developed and more specific exercises proved helpful, on to CFT.

The therapeutic relationship

The therapeutic relationship plays a key role in CFT (Gilbert, 2007c; Gilbert & Leahy, 2007), paying particular attention to the micro-skills of therapeutic engagement (Ivey & Ivey, 2003), issues of transference/countertransference (Miranda & Andersen, 2007), expression, amplification, inhibition and/or fear of emotion (Elliott, Watson, Goldman, & Greenberg, 2003; Leahy, 2001), shame (Gilbert, 2007c), validation (Leahy, 2005), and mindfulness of the therapist (Siegel, 2010).

When training people from other approaches, particularly CBT, we find that we have to slow them down: to allow spaces and silences for reflection and experiencing within the therapy, rather than a series of Socratic questions or “target setting”. We teach how to use one’s voice speed and tone, nonverbal communication, the pacing of the therapy, being mindful (Katzow & Safran, 2007; Siegel, 2010) and the reflective process in the service of creating “safeness” to explore, discover, experiment and develop.

Key is to provide emotional contexts where the client can experience (and internalize) therapists as “compassionately alongside them”—no easy task because, as we will discuss below (see Point 10), shame often involves clients having emotional experiences (transference) of being misunderstood, getting things wrong, trying to work out what the other person wants them to do, and intense aloneness.

The emotional tone in the therapy is created partly by the whole manner and pacing of the therapist and is important in this process of experiencing “togetherness”. CF therapists are sensitive to how clients can actually find it hard to experience “togetherness” or “being cared about”, and wrap themselves in safety strategies of sealing the self off from “the feelings of togetherness and connectedness” (see Point 29; Gilbert, 1997, 2007a, especially Chapters 5 and 6, 2007c). CBT focuses on collaboration, where the therapist and client focus on the problem together—as a team.

CFT also focuses on (mind) “sharing”.

The evolution of sharing (and the motivation to share)—not only objects but also our thoughts, ideas and feelings—is one of humans’ most important adaptations. As an especially social species, we have an innate desire to share our knowledge, values and the contents of our minds—to be known, understood and validated. Thus, motivation to share versus fear of sharing (shame), empathy and theory of mind are important evolved motives and competencies. It is the felt barriers to this “flow of minds” that can be problematic for some people, and the way that the therapist “unblocks” this flow can be therapeutic.

Dialectical Behaviour Therapy (DBT; Linehan, 1993) addresses the key issue of therapy-interfering behaviours. CFT, like any other therapy, needs to be able to set clear boundaries and use authority as a containing process. Some clients can be “emotional bullies”, threatening the therapist (e.g., with litigation or suicide) and making demands. Frightened therapists may submit or back off. The client, at some level, is frightened of their own capacity to force others away from them.

For other clients, during painful moments, therapists might try to rescue rather than be silent. So, clarification of the therapeutic relationship is very important. This is why DBT wisely recommends a support group for therapists working with these kinds of clients. Research has shown that compassion can become a genuine part of self-identity but it can also be linked to self-image goals where people are compassionate in order to be liked (Crocker & Canevello, 2008). Compassion focused self-image goals are problematic in many ways.

Researchers are also beginning to explore attachment style and therapeutic relationships, with evidence that securely attached therapists develop therapeutic alliances more easily and with fewer problems than therapists with an insecure attachment style (Black, Hardy, Turpin, & Parry, 2005; see also Liotti, 2007). Leahy (2007) has also outlined how the personality and schema organization of the therapist can play a huge role in the therapeutic relationship—for example, autocratic therapists with dependent patients, or dependent therapists with autocratic patients. So, compassion is not about submissive “niceness”—it can be tough, setting boundaries, being honest and not giving clients what they want but what they need. An alcoholic wants another drink—that is not what they need; many people want to avoid pain and may try to do so in a variety of ways—but (kind) clarity, exposure and acceptance may be what actually facilitates change and growth (Siegel, 2010).

Evidence for the benefits of compassion

Although CFT is rooted in an evolutionary, neuro- and psychological science model, it is important to recognize its heavy borrowing from Buddhist influences. For over 2500 years Buddhism has focused on compassion and mindfulness as central to enlightenment and “healing our mind”. While Theravada Buddhism focuses on mindfulness and loving-(friendly)-kindness, Mahayana practices are specifically compassion focused (Leighton, 2003; Vessantara, 1993).

At the end of his life the Buddha said that his main teachings were mindfulness and compassion—to do no harm to self or others. The Buddha outlined an eight-fold path for practising and training one’s mind to avoid harming and to promote compassion. This includes: compassionate meditations and imagery, compassionate behaviour, compassionate thinking, compassionate attention, compassionate feeling, compassionate speech and compassionate livelihood.

It is these multimodal components that lead to a compassionate mind. We now know that the practice of various aspects of compassion increases well-being and affects brain functioning, especially in areas of emotional regulation (Begley, 2007; Davidson et al., 2003).

The last 10 years have seen a major upsurge in exploring the benefits of cultivating compassion (Fehr et al., 2009). In an early study, Rein, Atkinson and McCraty (1995) found that directing people in compassion imagery had positive effects on an indicator of immune functioning (S-IgA), while anger imagery had negative effects. Practices of imagining compassion for others produce changes in the frontal cortex, immune system and well-being (Lutz, Brefczynski-Lewis, Johnstone, & Davidson, 2008). Hutcherson, Seppala and Gross (2008) found that a brief loving-kindness meditation increased feelings of social connectedness and affiliation towards strangers. Fredrickson, Cohn, Coffey, Pek and Finkel (2008) allocated 67 Compuware employees to a loving-kindness meditation group and 72 to a waiting-list control.

They found that six 60-minute weekly group sessions with home practice based on a CD of loving kindness meditations (compassion directed to self, then others, then strangers) increased positive emotions, mindfulness, feelings of purpose in life and social support, and decreased illness symptoms. Pace, Negi and Adame (2008) found that compassion meditation (for six weeks) improved immune function and neuroendocrine and behavioural responses to stress. Rockliff, Gilbert, McEwan, Lightman and Glover (2008) found that compassionate imagery increased heart rate variability and reduced cortisol in low self-critics, but not in high self-critics.

In our recent fMRI study we found that self-criticism and self-reassurance in response to imagined threatening events (e.g., a job rejection) stimulated different brain areas, with self-compassion but not self-criticism stimulating the insula—a brain area associated with empathy (Longe et al., 2010). Viewing sad faces neutrally or with a compassionate attitude influences neurophysiological responses to faces (Ji-Woong et al., 2009). In a small uncontrolled study of people with chronic mental-health problems, compassion training significantly reduced shame, self-criticism, depression and anxiety (Gilbert & Procter, 2006). Compassion training has also been found to be helpful for psychotic voice hearers (Mayhew & Gilbert, 2008). In a study of group-based CFT for 19 clients in a high-security psychiatric setting, Laithwaite et al. (2009) found “…a large magnitude of change for levels of depression and self-esteem…. A moderate magnitude of change was found for the social comparison scale and general psychopathology, with a small magnitude of change for shame…. These changes were maintained at 6-week follow-up” (p. 521).

In the field of relationships and well-being, there is now good evidence that caring for others, showing appreciation and gratitude, having empathic and mentalizing skills, does much to build positive relationships, which significantly influence well-being and mental and physical health (Cacioppo, Berston, Sheridan, & McClintock, 2000; Cozolino, 2007, 2008).

There is increasing evidence that the kind of “self” we try to become will influence our well-being and social relationships, and compassionate rather than self-focused self-identities are associated with the better outcomes (Crocker & Canevello, 2008).

Taken together there are good grounds for the further development of and research into CFT.

Neff (2003a, 2003b) has been a pioneer in studies of self-compassion (see pages 3–4). She has shown that self-compassion can be distinguished from self-esteem and predicts some aspects of well-being better than self-esteem (Neff & Vonk, 2009), and that self-compassion aids in coping with academic failure (Neff, Hsieh, & Dejitterat, 2005; Neely, Schallert, Mohammed, Roberts, & Chen, 2009). Compassionate letter writing to oneself improves coping with life events and reduces depression (Leary et al., 2007).

As noted, however, Neff’s concepts of compassion are different from the evolutionary and attachment-rooted model outlined here and, as yet, there is no agreed definition of compassion—indeed, the word compassion can have slightly (but importantly) different meanings in different languages. So, here compassion will be defined as a “mind set”, a basic mentality, and explored in detail in Point 16.

2 A personal journey

My interest in developing people’s capacities for compassion and self-compassion was fuelled by a number of issues:
• First was a long-standing interest in evolutionary approaches to human behaviour, suffering and growth (Gilbert, 1984, 1989, 1995, 2001a, 2001b, 2005a, 2005b, 2007a, 2007b, 2009a). The idea that cognitive systems tap underlying evolved motivational and emotional mechanisms has also been central to Beck’s cognitive approach (Beck, 1987, 1996; Beck, Emery, & Greenberg, 1985), with a special edition dedicated to exploring the evolutionary-cognitive interface (Gilbert, 2002, 2004).
• Second, evolutionary psychology has focused significantly on the issue of altruism and caring (Gilbert, 2005a) with increasing recognition of just how important these have been in our evolution (Bowlby, 1969; Hrdy, 2009) and now are to our physical and psychological development (Cozolino, 2007) and well-being (Cozolino, 2008; Gilbert, 2009a; Siegel, 2007).
• Third, people with chronic mental-health problems often come from backgrounds of high stress and/or low altruism and caring (Bifulco & Moran, 1998), backgrounds that significantly affect physical and psychological development (Cozolino, 2007; Gerhardt, 2004; Teicher, 2002).
• Fourth, partly as a consequence of these life experiences, people with chronic and complex problems can be especially, deeply troubled by shame and self-criticism and/or self-hatred and find it enormously difficult to be open to the kindness of others or to be kind to themselves (Gilbert, 1992, 2000a, 2007a, 2007c; Gilbert & Procter, 2006).
• Fifth, as noted on page 6, when using CBT they would typically say, “I can see the logic of alternative thoughts but I still feel X, or Y. I can understand why I wasn’t to blame for my abuse but I still feel I’m to blame”, or, “I still feel there is something bad about me”.
• Sixth, there is increasing awareness that the way clients are able to think about and reflect on the contents of their own minds (e.g., competencies to mentalize in contrast to being alexithymic) has major implications for the process and focus of therapy (Bateman & Fonagy, 2006; Choi-Kain & Gunderson, 2008; Liotti & Gilbert, in press; Liotti & Prunetti, 2010).
• Last, but not least, is a long personal interest in the philosophies and practices of Buddhism—although I do not regard myself as a Buddhist as such. Compassion practices, such as becoming the compassionate self (see Part 2), may create a sense of safeness that aids the development of mindfulness and mentalizing.

In Buddhist psychology compassion “transforms” the mind.

Logic and emotion

It has been known for a long time that logic and emotion can be in conflict. Indeed, since the 1980s research has shown that we have quite different processing systems in our minds.

One is linked to what is called implicit (automatic) processing, which is non-conscious, fast, emotional, requires little effort, is subject to classical conditioning, serves self-identity functions, and may generate feelings and fantasies even against conscious desires. This is the system which gives that “felt sense of something”.

This can be contrasted with an explicit (controlled) processing system, which is slower, consciously focused, reflective, verbal and effortful (Haidt, 2001; Hassin, Uleman, & Bargh, 2005).

These findings have been usefully formulated for clinical work (e.g., Power & Dalgleish, 1997) with more complex models being offered by Teasdale and Barnard (1993).

But the basic point is that there is no simple connection of cognition to emotion, and there are different neurophysiological systems underpinning them (Panksepp, 1998).

So, one of the problems linking thinking and feeling (“I know it but I don’t feel it”) can be attributed to (different) implicit and explicit systems coming up with different processing strategies and conclusions.

Cognitive, and many other, therapists and psychologists have not helped matters by using the concepts of cognition and information processing interchangeably, as if they were the same thing. They are not.

Your computer and DNA—indeed every cell in your body—are information processing mechanisms but I don’t think that they have “cognitions”.

This failure to define what is and is not “a cognition” or “cognitive” in contrast to a motive or an emotion has caused difficulties in this area of research.

Various solutions have been offered to work with the problems of feelings not following cognitions or logical reasoning, such as: needing more time to practise; most change is slow and hard work; more exposure to problematic emotions; identifying “roadblocks” and their functions (Leahy, 2001); a need for a particular therapeutic relationship (Wallin, 2007); or developing mindfulness and acceptance (Hayes, Follette, & Linehan, 2004; Liotti & Prunetti, 2010).

CFT offers an additional position

CFT suggests that there can be a fundamental problem in an implicit emotional system that evolved with mammalian and human caring systems and which gives rise to feelings of reassurance, safeness and connectedness (see Point 6).

The inability to access that affect system is what underpins this problem. Indeed, as noted (page 6), some people can cognitively (logically) generate “alternative thoughts” but hear them in their head as cold, detached or aggressive. There is no warmth or encouragement in their alternative thoughts—the emotional tone is more like cold instruction.

I have found that the idea of feeling (inner) kindness and supportiveness as part of generating alternative “thoughts” is anathema to them. So, they just cannot “feel” their alternative thoughts and images.

*

Paul Gilbert, Ph.D., is currently a professor of clinical psychology at the University of Derby in the United Kingdom, and director of the Mental Health Research Unit at Derbyshire Mental Health Trust.

*

from

Compassion Focused Therapy

by Paul Gilbert

get it at Amazon.com

***

Authoritative Websites on CFT

Centre for Mindful Self Compassion

Mindful Self Compassion for Teens

Chris Germer

Mindful.org

The Mindfulness

The Compassion

Center For Healthy Minds

Mindfulness Research

Mindfulness Exercises

Compassionate Living

Foundation For Active Compassion

Mindsight Institute

Center For Nonviolent Communication

Awareness In Action

Center for Compassion and Altruism Research and Education

Greater Good: The Science of a Meaningful Life

Charter For Compassion

Compassionate Mind Foundation

Christopher Germer, PhD, Author of The Mindful Path to Self-Compassion

Mindful Awareness Research Center at University of California Los Angeles

University of Massachusetts Center for Mindfulness

Institute for Meditation and Psychotherapy

University of California at San Diego Center for Mindfulness

Mind And Life Institute

Centre for Mindfulness Research and Practice

Mindfulness page maintained by David Fresco

Mindfulness page maintained by Christopher Walsh

Center for Contemplative Mind in Society

Wellspring Institute for Neuroscience and Contemplative Wisdom

Centre for Mindfulness Studies

Recommended Reading:

  • Highly Recommended: Germer, C. K. (2009). The mindful path to self-compassion: Freeing yourself from destructive thoughts and emotions. New York: Guilford Press.
  • Bennett-Goleman, T. (2001). Emotional alchemy: How the mind can heal the heart. New York: Three Rivers Press.
  • Brach, T. (2003). Radical acceptance: Embracing your life with the heart of a Buddha. New York: Bantam.
  • Brown, B. (1999). Soul without shame: A guide to liberating yourself from the judge within. Boston: Shambhala.
  • Brown, B. (2010). The gifts of imperfection. Center City, MN: Hazelden.
  • Feldman, C. (2005). Compassion: Listening to the cries of the world. Berkeley: Rodmell Press.
  • Gilbert, P. (2009). The compassionate mind. London: Constable.
  • Goldstein, E. (2015). Uncovering happiness: Overcoming depression with mindfulness and self-compassion. New York: Simon & Schuster.
  • Goldstein, J., & Kornfield, J. (1987). Seeking the heart of wisdom: The path of insight meditation. Boston: Shambhala.
  • Hanh, T. N. (1997). Teachings on love. Berkeley, CA: Parallax Press.
  • Kornfield, J. (1993). A path with heart. New York: Bantam Books.
  • Marlowe, S. (2016). My new best friend. Somerville, MA: Wisdom Publications.
  • Rosenberg, M. (2003). Nonviolent communication: A language of life. Encinitas, CA: Puddledancer Press.
  • Salzberg, S. (1997). Lovingkindness: The revolutionary art of happiness. Boston: Shambhala.
  • Salzberg, S. (2005). The force of kindness: Change your life with love and compassion. Boulder, CO: Sounds True.

Out Of The Woods. Sir Arthur Williams. A hidden life of Depression and Abuse – Cherie Howie.

In the high-powered, influential world in which he spent much of his adult life, Sir Arthur Williams was charming and generous.

Another Sir – Robert Muldoon – was among those the married entrepreneur brought home to his family of five children. The then Finance Minister watched across the dinner table as Sir Arthur, smiling and laughing, told stories.

Also at the table was Sir Arthur’s middle child, Brent Williams.

The scene, repeated whenever the property developer and philanthropist brought colleagues, church leaders, businessmen – and future Prime Ministers – to the family home in Karori, was confusing and intriguing for those close to him.

“I would just sit in awe. I would just sit there thinking: ‘Who is this other man?’ He was so different, he was animated, he was fun,” Williams tells the Herald on Sunday.

When his father, who died at 73 in 2001, came home without guests, things were very different.

“We were physically prepared, we were verbally prepared, before he arrived. We sort of ran around like headless chickens, trying to make sure everything was perfect.”

A light left on was enough to spark his father’s rage. He would scream and shout about the waste of electricity. If all the lights were off, he’d find something else. From a young age, Williams learned to hide in corners and under the bed.

Sir Arthur went into the construction business after emigrating from the United Kingdom in the late 1940s. He was later responsible for building dozens of commercial buildings in Wellington, and used Valium to get through the day and tranquilisers to get through the night, Williams says.

He took all the stress in his life out on his family. “From as early as I can remember it wasn’t a case of ‘Yay, Dad’s home.’ You’d go into a state of anxiety. What was going to happen?

“There was every form of abuse, in different ways, in different forms, in different levels, but every form of violence was carried out.”

It took decades for Williams to understand the awful toll his childhood took.

Despite a successful career and becoming the proud dad of four children, he broke down in his late 40s.

Now he has written an innovative graphic novel-style memoir charting his way back from depression. He hopes it will help others struggling to find their way back to health, and also lay his own ghosts to rest.

Sir Arthur imposed his will on everything his family did – from the partners they chose to the subjects they took at school. He even took ownership of their dreams.

“My father wanted me to be a lawyer. He told me, since the age of 5: ‘Brent’s going to be a lawyer.’ And I believed that.”

Williams went to law school, but another man with a large presence and a powerful voice lit the spark that would become his life’s work – helping the vulnerable.

“This wonderful big man came and gave a guest lecture one day and told me what he was doing with his practice in Mangere and it totally inspired me.

“That man was David Lange.”

The community law movement was gaining traction overseas and Williams realised he wanted to work not in a traditional legal way, but by offering people legal resources.

His father didn’t approve but, with law student friends, Williams set up a community law centre in Wellington in 1981.

They helped street kids, tenants’ groups and victims of domestic abuse and child abuse.

Later, he took his skills to the Legal Resources Trust and the Family Court.

But although he walked among the vulnerable, he did not count himself among their ranks.

“My work was totally my life experience. There was a lot of anger there that I was able to vent in a very constructive way by being an advocate for people who were vulnerable.

“But in a way it totally hid the fact that I was actually vulnerable and I’d experienced this. It was really weird to think that I was making videos that were very much based on my personal story, but I was totally unaware of it.”

His work revealed to him the truth he had been fighting to hide.

Williams was stressed and exhausted, and being hard on the photographers trying to capture an image he was obsessed with – a child hiding under a bed as his parents screamed and shouted at each other.

“I had no awareness that it was me. Then I was getting the publication reviewed and … the woman, she just stopped and looked at me and said, ‘Now, Brent, what has brought you to this?’

“I just started crying, and that was the start of my journey.”

The first decade of the 21st century was coming to a close and Williams was about to crash. He’d been fighting it for a while – refusing to accept he was depressed. Eventually, he had to give up work.

His journey back to health would be long.

Almost a decade on, Williams holds firm to routines that keep him well.

But in those dark early days, putting his thoughts in writing was a first step, which eventually turned into his book.

“As time went on and I got a bit stronger, when I was partly acknowledging that I had this illness called depression and anxiety, I started doing some research and writing more. My writing had shifted from being personal to trying to understand the illness.”

Because Williams’ job was producing material to help people, it felt natural to turn to writing a book.

“I didn’t start off writing a book. I was literally just writing to help myself.”

The result – Out of the Woods, out on September 19 – is as honest as it is simply told.

Williams tells his story entirely through 700 watercolour illustrations by Turkish artist Korkut Oztekin – from his realisation something was wrong to finding his way back to health, and the setbacks along the way.

Williams says he always knew his book had to be in pictures.

“When I was depressed I couldn’t take on board information or advice from people. I certainly couldn’t read good advice – and I think there’s a lot of good advice out there.”

Each illustration chronicles his battle to accept his illness and how he became well – neither one a neatly linear experience.

Some events are condensed – a panic attack over a baked beans purchase came from several events, one of which did involve buying beans.

“It’s faithful to the feelings I had. The brain is struggling so much that a simple decision becomes overwhelming and then something else can spark it – a noise, a bump, an unfriendly interaction.”

Other experiences ar