Category Archives: Depression

CHILDHOOD TRAUMA AND MENTAL ‘ILLNESS’. Beyond the smoke – Johann Hari.

Depression isn’t a disease; depression is a normal response to abnormal life experiences.

The medical team, and all their friends, expected these people, who had been restored to health, to react with joy. Except they didn’t react that way. The people who did best and lost the most weight were often thrown into a brutal depression, or panic, or rage. Some of them became suicidal.

Was there anything else that happened in your life when you were eleven? “Well,” Susan replied, “that was when my grandfather began to rape me.”

“Overweight is overlooked, and that’s the way I need to be.”

What we had perceived as the problem, major obesity, was, in fact, very frequently, the solution to problems that the rest of us knew nothing about. Obesity, he realized, isn’t the fire. It’s the smoke.

For every category of traumatic experience you go through as a kid, you are radically more likely to become depressed as an adult. The greater the trauma, the greater your risk of depression, anxiety, or suicide.

Emotional abuse, especially, is more likely to cause depression than any other kind of trauma, even sexual molestation. Being treated cruelly by your parents is the biggest driver of depression, out of all these categories.

We have failed to see depression as a symptom of something deeper that needs to be dealt with. There’s a house fire inside many of us, and we’ve been concentrating on the smoke.

When the women first came into Dr. Vincent Felitti’s office some of them found it hard to fit through the door. These patients weren’t just a bit overweight: they were eating so much that they were rendering themselves diabetic and destroying their own internal organs. They didn’t seem to be able to stop themselves. They were assigned here, to his clinic, as their last chance.

It was the mid-1980s, and in the California city of San Diego, Vincent had been commissioned by the not-for-profit medical provider Kaiser Permanente to look into the fastest-growing driver of their costs, obesity. Nothing they were trying was working, so he was given a blank sheet of paper. Start from scratch, they said. Total blue-sky thinking. Figure out what we can do to deal with this. And so the patients began to come. But what he was going to learn from them led, in fact, to a major breakthrough in a very different area: how we think about depression and anxiety.

As he tried to scrape away all the assumptions that surround obesity, Vincent learned about a new diet plan based on a maddeningly simple thought. It asked: What if these severely overweight people simply stopped eating, and lived off the fat stores they’d built up in their bodies until they were down to a normal weight? What would happen?

In the news, curiously, there had recently been an experiment in which this was tried, eight thousand miles away, for somewhat strange reasons. For years in Northern Ireland, if you were put in jail for being part of the Irish Republican Army’s violent campaign to drive the British out of Northern Ireland, you were classed as a political prisoner. That meant you were treated differently from people who committed (say) bank robberies. You were allowed to wear your own clothes, and you didn’t have to perform the same work as other inmates.

The British government decided to shut down that distinction, and they argued that the prisoners were simply common criminals and shouldn’t get this different treatment anymore. So the prisoners decided to protest by going on a hunger strike. They began, slowly, to waste away.

So the designers of this new diet proposal looked into the medical evidence about these Northern Ireland hunger strikers to find out what killed them. It turns out that the first problem they faced was a lack of potassium and magnesium. Without them, your heart stops beating properly. Okay, the radical dieters thought, what if you give people supplements of potassium and magnesium? Then that doesn’t happen. If you have enough fat on you, you get a few months more to live, until a protein deficiency kills you.

Okay, what if you also give people the supplements that will prevent that? Then, it turns out, you get a year to live, provided there’s enough fat. Then you’ll die from a lack of vitamin C, scurvy, or other deficiencies.

Okay, what if you give people supplements for that, too? Then it looks as though you’ll stay alive, Vincent discovered in the medical literature, and healthy, and you’ll lose three hundred pounds a year. Then you can start eating again, at a healthy level.

All this suggested that in theory, even the most obese person would be down to a normal weight within a manageable time. The patients coming to him had been through everything, every fad diet, every shaming, every prodding and pulling. Nothing had worked. They were ready to try anything. So, under careful monitoring, and with lots of supervision, they began this program. And as the months passed, Vincent noticed something. It worked. The patients were shedding weight. They were not getting sick; in fact, they were returning to health. People who had been rendered disabled by constant eating started to see their bodies transform in front of them.

Their friends and relatives applauded. People who knew them were amazed. Vincent believed he might have found the solution to extreme overweight. “I thought, my God, we’ve got this problem licked,” he said.

And then something happened that Vincent never expected.

In the program, there were some stars, people who shed remarkable amounts of weight, remarkably quickly. The medical team, and all their friends, expected these people, who had been restored to health, to react with joy. Except they didn’t react that way.

The people who did best and lost the most weight were often thrown into a brutal depression, or panic, or rage. Some of them became suicidal. Without their bulk, they felt they couldn’t cope. They felt unbelievably vulnerable. They often fled the program, gorged on fast food, and put their weight back on very fast.

Vincent was baffled. They were fleeing from a healthy body they now knew they could achieve, toward an unhealthy body they knew would kill them. Why? He didn’t want to be an arrogant, moralistic doctor, standing over his patients, wagging his finger and telling them they were ruining their lives; that’s not his character. He genuinely wanted to help them save themselves. So he felt desperate. That’s why he did something no scientist in this field had done with really obese people before. He stopped telling them what to do, and started listening to them instead. He called in the people who had panicked when they started to shed the pounds, and asked them: What happened when you lost weight? How did you feel?

There was one twenty-eight-year-old woman, whom I’ll call Susan to protect her medical confidentiality. In fifty-one weeks, Vincent had taken Susan down from 408 pounds to 132 pounds. It looked like he had saved her life. Then, quite suddenly, for no reason anyone could see, she put on 37 pounds in the space of three weeks. Before long, she was back above 400 pounds. So Vincent asked her gently what had changed when she started to lose weight. It seemed mysterious to both of them. They talked for a long time. There was, she said eventually, one thing. When she was very obese, men never hit on her, but when she got down to a healthy weight, one day she was propositioned by a man, a colleague who she happened to know was married. She fled, and right away began to eat compulsively, and she couldn’t stop.

This was when Vincent thought to ask a question he hadn’t asked his patients before. When did you start to put on weight? If it was (say) when you were thirteen, or when you went to college, why then, and not a year before, or a year after?

Susan thought about the question. She had started to put on weight when she was eleven years old, she said. So he asked: Was there anything else that happened in your life when you were eleven? “Well,” Susan replied, “that was when my grandfather began to rape me.”

Vincent began to ask all his patients these three simple questions. How did you feel when you lost weight? When in your life did you start to put on weight? What else happened around that time? As he spoke to the 183 people on the program, he started to notice some patterns. One woman started to rapidly put on weight when she was twenty-three. What happened then? She was raped. She looked at the ground after she confessed this, and said softly: “Overweight is overlooked, and that’s the way I need to be.”

“I was incredulous,” he told me when I sat with him in San Diego. “It seemed every other person I was asking was acknowledging such a history. I kept thinking, it can’t be. People would know if this was true. Somebody would’ve told me. Isn’t that what medical school is for?” When five of his colleagues came in to conduct further interviews, it turned out some 55 percent of the patients in the program had been sexually abused, far more than people in the wider population. And even more, including most of the men, had had severely traumatic childhoods.

Many of these women had been making themselves obese for an unconscious reason: to protect themselves from the attention of men, who they believed would hurt them. Being very fat stops most men from looking at you that way. It works. It was when he was listening to another grueling account of sexual abuse that it hit Vincent. He told me later:

“What we had perceived as the problem, major obesity, was, in fact, very frequently, the solution to problems that the rest of us knew nothing about.”

Vincent began to wonder if the anti-obesity programs, including his own, had been doing it all wrong, by (for example) giving out nutritional advice. Obese people didn’t need to be told what to eat; they knew the nutritional advice better than he did. They needed someone to understand why they ate. After meeting a person who had been raped, he told me, “I thought with a tremendously clear insight that sending this woman to see a dietitian to learn how to eat right would be grotesque.”

Far from teaching the obese people, he realized, they were the people who could teach him what was really going on. So he gathered the patients in groups of around fifteen, and asked them: “Why do you think people get fat? Not how. How is obvious. I’m asking why. What are the benefits?” Encouraged to think about it for the first time, they told him. The answers came in three different categories. The first was that it is sexually protective: men are less interested in you, so you are safer. The second was that it is physically protective: for example, in the program there were two prison guards, who lost between 100 and 150 pounds each. Suddenly, as they shed their bulk, they felt much more vulnerable among the prisoners; they could be more easily beaten up. To walk through those cell blocks with confidence, they explained, they needed to be the size of a refrigerator.

And the third category was that it reduced people’s expectations of them. “You apply for a job weighing four hundred pounds, people assume you’re stupid, lazy,” Vincent said. If you’ve been badly hurt by the world, and sexual abuse is not the only way this can happen, you often want to retreat. Putting on a lot of weight is, paradoxically, a way of becoming invisible to a lot of humanity.

“When you look at a house burning down, the most obvious manifestation is the huge smoke billowing out,” he told me. It would be easy, then, to think that the smoke is the problem, and if you deal with the smoke, you’ve solved it. But “thank God that fire departments understand that the piece that you treat is the piece you don’t see, the flames inside, not the smoke billowing out. Otherwise, house fires would be treated by bringing big fans to blow the smoke away. [And that would] make the house burn down faster.”

Obesity, he realized, isn’t the fire. It’s the smoke.

One day, Vincent went to a medical conference dedicated to obesity to present his findings. After he had spoken, a doctor stood up in the audience and explained: “People who are more familiar with these matters recognize that these statements by patients describing their sexual abuse are basically fabrications, to provide a cover for their failed lives.” It turned out people treating obesity had noticed before that a disproportionate number of obese people described being abused. They just assumed that they were making excuses.

Vincent was horrified. He had in fact verified the abuse claims of many of his patients, by talking to their relatives, or to law enforcement officials who had investigated them. But he knew he didn’t have hard scientific proof yet to rebut people like this. His impressions from talking to individual patients, even gathering the figures from within his group, didn’t prove much. He wanted to gather proper scientific data. So he teamed up with a scientist named Dr. Robert Anda, who had specialized for years in the study of why people do self-destructive things like smoking. Together, funded by the Centers for Disease Control, a major U.S. agency funding medical research, they drew up a way of testing all this, to see if it was true beyond the small sample of people in Vincent’s program.

They called it the Adverse Childhood Experiences (ACE) Study, and it’s quite simple. It’s a questionnaire. You are asked about ten different categories of terrible things that can happen to you when you’re a kid, from being sexually abused, to being emotionally abused, to being neglected. And then there’s a detailed medical questionnaire, to test for all sorts of things that could be going wrong with you, like obesity, or addiction. One of the things they added to the list, almost as an afterthought, was the question: Are you suffering from depression?

This survey was then given to seventeen thousand people who were seeking health care, for a whole range of reasons, from Kaiser Permanente in San Diego. The people who filled in the form were somewhat wealthier and a little older than the general population, but otherwise fairly representative of the city’s population.

When the results came in, they added them up, at first, to see if there were any correlations.

It turned out that for every category of traumatic experience you went through as a kid, you were radically more likely to become depressed as an adult. If you had six categories of traumatic events in your childhood, you were five times more likely to become depressed as an adult than somebody who didn’t have any. If you had seven categories of traumatic events as a child, you were 3,100 percent more likely to attempt suicide as an adult.

“When the results came out, I was in a state of disbelief,” Dr. Anda told me. “I looked at it and I said, really? This can’t be true.” You just don’t get figures like this in medicine very often. Crucially, they hadn’t just stumbled on proof that there is a correlation, that these two things happen at the same time. They seemed to have found evidence that these traumas help cause these problems. How do we know? The greater the trauma, the greater your risk of depression, anxiety, or suicide. The technical term for this is “dose-response effect.” The more cigarettes you smoke, the more your risk of lung cancer goes up; that’s one reason we know smoking causes cancer. In the same way, the more you were traumatized as a child, the more your risk of depression rises.

Curiously, it turned out emotional abuse was more likely to cause depression than any other kind of trauma, even sexual molestation. Being treated cruelly by your parents was the biggest driver of depression, out of all these categories.

When they showed the results to other scientists, including the Centers for Disease Control (CDC), who cofunded the research, they too were incredulous. “The study shocked people,” Dr. Anda told me. “People didn’t want to believe it. People at the CDC didn’t want to believe it. There was resistance within the CDC when I brought the data around, and the medical journals, initially, didn’t want to believe it, because it was so astonishing that they had to doubt it. Because it made them challenge the way they thought about childhood. It challenged so many things, all at one time.” In the years that followed, the study has been replicated many times, and it always finds similar results. But we have barely begun, Vincent told me, to think through its implications.

So Vincent, as he absorbed all this, came to believe that we have been making the same mistake with depression that he had been making before with obesity. We have failed to see it as a symptom of something deeper that needs to be dealt with. There’s a house fire inside many of us, Vincent had come to believe, and we’ve been concentrating on the smoke.

Many scientists and psychologists had been presenting depression as an irrational malfunction in your brain or in your genes, but he learned that Allen Barbour, an internist at Stanford University, had said that depression isn’t a disease; depression is a normal response to abnormal life experiences. “I think that’s a very important idea,” Vincent told me. “It takes you beyond the comforting, limited idea that the reason I’m depressed is I have a serotonin imbalance, or a dopamine imbalance, or what have you.” It is true that something is happening in your brain when you become depressed, he says, but that “is not a causal explanation”; it is “a necessary intermediary mechanism.”

Some people don’t want to see this because, at least at first, “it’s more comforting,” Vincent said, to think it’s all happening simply because of changes in the brain. “It takes away an experiential process and substitutes a mechanistic process.” It turns your pain into a trick of the light that can be banished with drugs. But they don’t ultimately solve the problem, he says, any more than just getting the obese patients to stop eating solved their problems. “Medications have a role,” he told me. “Are they the ultimate be-all and end-all? No. Do they sometimes short-change people? Absolutely.”

To solve the problem for his obese patients, Vincent said, they had all realized, together, that they had to solve the problems that were leading them to eat obsessively in the first place. So he set up support groups where they could discuss the real reasons why they ate and talk about what they had been through. Once that was in place, far more people became able to keep going through the fasting program and stay at a safe weight. He was going to start exploring a way to do this with depression, with startling results.

More than anyone else I spoke to about the hidden causes of depression, Vincent made me angry. After I met with him, I went to the beach in San Diego and raged against what he had said. I was looking hard for reasons to dismiss it. Then I asked myself: Why are you so angry about this? It seemed peculiar, and I didn’t really understand it. Then, as I discussed it with some people I trust, I began to understand.

If you believe that your depression is due solely to a broken brain, you don’t have to think about your life, or about what anyone might have done to you. The belief that it all comes down to biology protects you, in a way, for a while. If you absorb this different story, though, you have to think about those things. And that hurts.

I asked Vincent why he thinks traumatic childhoods so often produce depressed and anxious adults, and he said that he honestly doesn’t know. He’s a good scientist. He didn’t want to speculate. But I think I might know, although it goes beyond anything I can prove scientifically.

When you are a child and you experience something really traumatic, you almost always think it is your fault. There’s a reason for this, and it’s not irrational; like obesity, it is, in fact, a solution to a problem most people can’t see. When I was young, my mother was ill a lot, and my father was mostly gone, usually in a different country. In the chaos of that, I experienced some extreme acts of violence from an adult in my life. For example, I was strangled with an electrical cord on one occasion. When I was sixteen, I left to go and live in another city, away from any adults I knew, and when I was there, I found myself, like many people who have been treated this way at a formative age, seeking out dangerous situations where I was again treated in ways I should not have been treated.

Even now, as a thirty-seven-year-old adult, I feel like writing this down, and saying it to you, is an act of betrayal of the adult who carried out these acts of violence, and the other adults who behaved in ways they shouldn’t have.

I know you can’t figure out who these people are from what I’ve written. I know that if I saw an adult strangling a child with an electrical cord, it would not even occur to me to blame the child, and that if I heard somebody try to suggest such a thing, I would assume they were insane. I know rationally where the real betrayal lies in this situation. But still, I feel it. It’s there, and that feeling almost stopped me from saying this.

Why do so many people who experience violence in childhood feel the same way? Why does it lead many of them to self-destructive behavior, like obesity, or hard-core addiction, or suicide? I have spent a lot of time thinking about this. When you’re a child, you have very little power to change your environment. You can’t move away, or force somebody to stop hurting you. So you have two choices. You can admit to yourself that you are powerless, that at any moment, you could be badly hurt, and there’s simply nothing you can do about it. Or you can tell yourself it’s your fault. If you do that, you actually gain some power, at least in your own mind. If it’s your fault, then there’s something you can do that might make it different. You aren’t a pinball being smacked around a pinball machine. You’re the person controlling the machine. You have your hands on the dangerous levers.

In this way, just like obesity protected those women from the men they feared would rape them, blaming yourself for your childhood traumas protects you from seeing how vulnerable you were and are. You can become the powerful one. If it’s your fault, it’s under your control.

But that comes at a cost. If you were responsible for being hurt, then at some level, you have to think you deserved it. A person who thinks they deserved to be injured as a child isn’t going to think they deserve much as an adult, either.

This is no way to live. But it’s a misfiring of the thing that made it possible for you to survive at an earlier point in your life.

You might have noticed that this cause of depression and anxiety is a little different from the ones I have discussed up to now, and it’s different from the ones I’m going to discuss next.

As I mentioned before, most people who have studied the scientific evidence accept that there are three different kinds of causes of depression and anxiety: biological, psychological, and social. The causes I’ve discussed up to now, and will come back to in a moment, are environmental. I’ll come to biological factors soon.

But childhood trauma belongs in a different category. It’s a psychological cause. By discussing it here, I’m hoping childhood trauma can stand in for the many other psychological causes of depression that are too specific to be discussed in a big, broad way. The ways our psyches can be damaged are almost infinite. I know somebody whose wife cheated on him for years with his best friend and who became deeply depressed when he found out. I know somebody who survived a terror attack and was almost constantly anxious for a decade after. I know someone whose mother was perfectly competent and never cruel to her but was relentlessly negative and taught her always to see the worst in people and to keep them at a distance. You can’t squeeze these experiences into neat categories; it wouldn’t make sense to list “adultery,” “terror attacks,” or “cold parents” as causes of depression and anxiety.

But here’s what we know.

Psychological damage doesn’t have to be as extreme as childhood violence to affect you profoundly. Your wife cheating on you with your best friend isn’t a malfunction in your brain. But it is a cause of deep psychological distress, and it can cause depression and anxiety. If you are ever told a story about these problems that doesn’t talk about your personal psychology, don’t take it seriously.

Dr. Anda, one of the pioneers of this research, told me it had forced him to turn his thinking about depression and other problems inside out.

“When people have these kind of problems, it’s time to stop asking what’s wrong with them,” he said, “and time to start asking what happened to them.”

from

Lost Connections: Uncovering the Real Causes of Depression and the Unexpected Solutions

by Johann Hari

get it at Amazon.com

JUNK VALUES. CONSUMERISM LITERALLY IS DEPRESSING – Johann Hari.

Just as we have shifted en masse from eating food to eating junk food, we have also shifted from having meaningful values to having junk values.

All this mass-produced fried chicken looks like food, and it appeals to the part of us that evolved to need food; yet it doesn’t give us what we need from food, nutrition. Instead, it fills us with toxins.

In the same way, all these materialistic values, telling us to spend our way to happiness, look like real values; they appeal to the part of us that has evolved to need some basic principles to guide us through life; yet they don’t give us what we need from values, a path to a satisfying life.

Studies show that materialistic people are having a worse time, day by day, on all sorts of fronts. They feel sicker, and they are angrier. Something about a strong desire for materialistic pursuits actually affects their day-to-day lives and decreases the quality of their daily experience. They experience less joy and more despair.

For thousands of years, philosophers have been suggesting that if you overvalue money and possessions, or if you think about life mainly in terms of how you look to other people, you will be unhappy.

Modern research indicates that materialistic people, who think happiness comes from accumulating stuff and a superior status, have much higher levels of depression and anxiety. The more our kids value getting things and being seen to have things, the more likely they are to be suffering from depression and anxiety.

The pressure, in our culture, runs overwhelmingly one way: spend more, work more. We live under a system that constantly distracts us from what’s really good about life. We are being propagandized to live in a way that doesn’t meet our basic psychological needs, so we are left with a permanent, puzzling sense of dissatisfaction.

The more materialistic and extrinsically motivated you become, the more depressed you will be.

When I was in my late twenties, I got really fat. It was partly a side effect of antidepressants, and partly a side effect of fried chicken. I could still, from memory, talk you through the relative merits of all the fried chicken shops in East London that were the staples of my diet, from Chicken Cottage to Tennessee Fried Chicken (with its logo of a smiling cartoon chicken holding a bucket of fried chicken legs: who knew cannibalism could be an effective marketing tool?). My own favorite was the brilliantly named Chicken Chicken Chicken. Their hot wings were, to me, the Mona Lisa of grease.

One Christmas Eve, I went to my local branch of Kentucky Fried Chicken, and one of the staff behind the counter saw me approaching and beamed. “Johann!” he said. “We have something for you!” The other staff turned and looked at me expectantly. From somewhere behind the grill and the grizzle, he took out a Christmas card. I was forced, by their expectant smiles, to open it in front of them. “To our best customer,” it said, next to personal messages from every member of the staff.

I never ate at KFC again.

Most of us know there is something wrong with our physical diets. We aren’t all gold medalists in the consumption of lard like I was, but more and more of us are eating the wrong things, and it is making us physically sick. As I investigated depression and anxiety, I began to learn something similar is happening to our values, and it is making many of us emotionally sick.

This was discovered by an American psychologist named Tim Kasser, so I went to see him, to learn his story.

As a little boy, Tim arrived in the middle of a long stretch of swampland and open beaches. His dad worked as a manager at an insurance company, and in the early 1970s, he was posted to a place called Pinellas County, on the west coast of Florida. The area was mostly undeveloped and had plenty of big, broad outdoor spaces for a kid to play, but this county soon became the fastest growing in the entire United States, and it was about to be transformed in front of Tim’s eyes. “By the time I left Florida,” he told me, “it was a completely different physical environment. You couldn’t drive along the beach roads anymore and see the water, because it was all condos and high-rises. Areas that had been open land with alligators and rattlesnakes became subdivision after subdivision after shopping mall.”

Tim was drawn to the shopping malls that replaced the beaches and marshes, like all the other kids he knew. There, he would play Asteroids and Space Invaders for hours. He soon found himself longing for stuff, the toys he saw in ads.

It sounds like Edgware, where I am from. I was eight or nine when its shopping mall, the Broadwalk Centre, opened, and I remember wandering around its bright storefronts and gazing at the things I wanted to buy in a thrilled trance. I obsessively coveted the green plastic toy of Castle Grayskull, the fortress where the cartoon character He-Man lived, and Care-a-Lot, the home in the clouds of some animated creatures called the Care Bears. One Christmas, my mother missed my hints and failed to buy me Care-a-Lot, and I was crestfallen for months. I ached and pined for that lump of plastic.

Like most kids at the time, I spent at least three hours a day watching TV, usually more, and whole days would pass in the summer when my only break from television would be to go to the Broadwalk Centre and back again. I don’t remember anyone ever telling me this explicitly, but it seemed to me then that happiness meant being able to buy lots of the things on display there. I think my nine-year-old self, if you had asked him what it meant to be happy, would have said: somebody who could walk through the Broadwalk Centre and buy whatever he wanted. I would ask my dad how much each famous person I saw on television earned, and he would guess, and we would both marvel at what we would do with the money. It was a little bonding ritual, over a fantasy of spending.

I asked Tim if, in Pinellas County where he grew up, he ever heard anyone talking about a different way of valuing things, beyond the idea that happiness came from getting and possessing stuff. “Well, I think, not growing up. No,” he said. In Edgware, there must have been people who acted on different values, but I don’t think I ever saw them.

When Tim was a teenager, his swim coach moved away one summer and gave him a small record collection, and it included albums by John Lennon and Bob Dylan. As he listened to them, he realized they seemed to be expressing something he didn’t really hear anywhere else. He began to wonder if there were hints of a different way to live lying in their lyrics, but he couldn’t find anyone to discuss it with.

It was only when Tim went to study at Vanderbilt University, a very conservative college in the South, at the height of the Reagan years, that it occurred to him, slowly, to think more deeply about this. In 1984, he voted for Ronald Reagan, but he was starting to think a lot about the question of authenticity. “I was stumbling around,” he told me. “I think I was questioning just about everything. I wasn’t just questioning these values. I was questioning lots about myself, I was questioning lots about the nature of reality and the values of society.” He feels like there were piñatas all around him and he was hitting at them all chaotically. He added: “I think I went through that phase for a long time, to be honest.”

When he went to graduate school, he started to read a lot about psychology. It was around this time that Tim realized something odd.

For thousands of years, philosophers had been suggesting that if you overvalue money and possessions, or if you think about life mainly in terms of how you look to other people, you will be unhappy: that the values of Pinellas County and Edgware were, in some deep sense, mistaken. It had been talked about a lot, by some of the finest minds who ever lived, and Tim thought it might be true. But nobody had ever conducted a scientific investigation to see whether all these philosophers were right.

This realization is what launched him on a project that he was going to pursue for the next twenty-five years. It led him to discover subtle evidence about why we feel the way we do, and why it is getting worse.

It all started in grad school, with a simple survey.

Tim came up with a way of measuring how much a person really values getting things and having money compared to other values, like spending time with their family or trying to make the world a better place. He called it the Aspiration Index, and it is pretty straightforward. You ask people how much they agree with statements such as “It is important to have expensive possessions” and how much they agree with very different statements such as “It is important to make the world a better place for others.” You can then calculate their values.

At the same time, you can ask people lots of other questions, and one of them is whether they are unhappy or if they are suffering (or have suffered) from depression or anxiety. Then, as a first step, you see if they match.
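The matching step Tim describes can be sketched in a few lines. This is a hypothetical illustration, not Kasser’s actual instrument: the scoring rule (mean agreement with extrinsic statements minus mean agreement with intrinsic ones), the variable names, and the toy numbers are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of an Aspiration-Index-style analysis.
# Scoring rule and data are illustrative only, not Kasser's real method.

def aspiration_index(ratings_extrinsic, ratings_intrinsic):
    """Relative weight of extrinsic values: mean agreement with extrinsic
    statements minus mean agreement with intrinsic ones (1-7 scale).
    Positive = more materialistic."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(ratings_extrinsic) - mean(ratings_intrinsic)

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy data: each person's 1-7 agreement with extrinsic vs. intrinsic
# statements, plus a score from a separate depression questionnaire.
people = [
    {"extrinsic": [7, 6, 7], "intrinsic": [2, 3, 2], "depression": 18},
    {"extrinsic": [3, 4, 2], "intrinsic": [6, 7, 6], "depression": 6},
    {"extrinsic": [5, 5, 6], "intrinsic": [4, 4, 5], "depression": 12},
    {"extrinsic": [2, 1, 2], "intrinsic": [7, 6, 7], "depression": 4},
]

scores = [aspiration_index(p["extrinsic"], p["intrinsic"]) for p in people]
depression = [p["depression"] for p in people]
print(round(pearson(scores, depression), 2))
```

With these made-up numbers the two measures correlate strongly, which is the shape of the result Tim found; the real studies, of course, used validated scales and far larger samples.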

Tim’s first tentative piece of research was to give this survey to 316 students. When the results came back and were all calculated out, Tim was struck by what he found: materialistic people, who think happiness comes from accumulating stuff and a superior status, had much higher levels of depression and anxiety.

This was, he knew, just a primitive first shot in the dark. So Tim’s next step was, as part of a larger study, to get a clinical psychologist to assess 140 eighteen-year-olds in depth, calculating where they were on the Aspiration Index and if they were depressed or anxious. When the results were added up, they were the same: the more the kids valued getting things and being seen to have things, the more likely they were to be suffering from depression and anxiety.

Was this something that happened only with young people? To find out, Tim measured one hundred citizens of Rochester in upstate New York, who came from a range of age groups and economic backgrounds. The result was the same.

But how could he figure out what was really happening, and why?

Tim’s next step was to conduct a more detailed study, to track how these values affect you over time. He got 192 students to keep a detailed mood diary in which, twice a day, they had to record how much they were feeling nine different emotions, such as happiness or anger, and how much they were experiencing any of nine physical symptoms, such as backache. When he calculated out the results, he found, again, higher depression among the materialistic students; but there was a result more important than that. It really did seem that materialistic people were having a worse time, day by day, on all sorts of fronts. They felt sicker, and they were angrier. “Something about a strong desire for materialistic pursuits,” he was starting to believe, “actually affected the participants’ day-to-day lives, and decreased the quality of their daily experience.” They experienced less joy, and more despair.

Why would this be? What could be happening here? Ever since the 1960s, psychologists have known that there are two different ways you can motivate yourself to get out of bed in the morning. The first are called intrinsic motives: they are the things you do purely because you value them in and of themselves, not because of anything you get out of them. When a kid plays, she’s acting totally on intrinsic motives: she’s doing it because it gives her joy. The other day, I asked my friend’s five-year-old son why he was playing. “Because I love it,” he said. Then he scrunched up his face and said “You’re silly!” and ran off, pretending to be Batman. These intrinsic motivations persist all through our lives, long after childhood.

At the same time, there’s a rival set of values, which are called extrinsic motives. They’re the things you do not because you actually want to do them, but because you’ll get something in return, whether it’s money, or admiration, or sex, or superior status. Joe, who you met two chapters ago, went to work every day in the paint shop for purely extrinsic reasons: he hated the job, but he needed to be able to pay the rent, buy the Oxy that would numb his way through the day, and have the car and clothes that he thought made people respect him. We all have some motives like that.

Imagine you play the piano. If you play it for yourself because you love it, then you are being driven to do it by intrinsic values. If you play in a dive bar you hate, just to make enough cash to ensure you don’t get thrown out of your apartment, then you are being driven to do it by extrinsic values.

These rival sets of values exist in all of us. Nobody is driven totally by one or the other.

Tim began to wonder if looking into this conflict more deeply could reveal something important. So he started to study a group of two hundred people in detail over time. He got them to lay out their goals for the future. He then figured out with them if these were extrinsic goals, like getting a promotion, or a bigger apartment, or intrinsic goals, like being a better friend or a more loving son or a better piano player. And then he got them to keep a detailed mood diary.

What he wanted to know was, Does achieving extrinsic goals make you happy? And how does that compare to achieving intrinsic goals?

The results, when he calculated them out, were quite startling. People who achieved their extrinsic goals didn’t experience any increase in day-to-day happiness, none. They spent a huge amount of energy chasing these goals, but when they fulfilled them, they felt the same as they had at the start. Your promotion? Your fancy car? The new iPhone? The expensive necklace? They won’t improve your happiness even one inch.

But people who achieved their intrinsic goals did become significantly happier, and less depressed and anxious. You could track the movement. As they worked at it and felt they became (for example) a better friend, not because they wanted anything out of it but because they felt it was a good thing to do, they became more satisfied with life. Being a better dad? Dancing for the sheer joy of it? Helping another person, just because it’s the right thing to do? They do significantly boost your happiness.

Yet most of us, most of the time, spend our time chasing extrinsic goals, the very thing that will give us nothing. Our whole culture is set up to get us to think this way. Get the right grades. Get the best-paying job. Rise through the ranks. Display your earnings through clothes and cars. That’s how to make yourself feel good.

What Tim had discovered is that the message our culture is telling us about how to have a decent and satisfying life, virtually all the time, is not true. The more this was studied, the clearer it became. Twenty-two different studies have, in the years since, found that the more materialistic and extrinsically motivated you become, the more depressed you will be. Twelve different studies found that the more materialistic and extrinsically motivated you become, the more anxious you will be. Similar studies, inspired by Tim’s work and using similar techniques, have now been carried out in Britain, Denmark, Germany, India, South Korea, Russia, Romania, Australia, and Canada, and the results, all over the world, keep coming back the same.

Just as we have shifted en masse from eating food to eating junk food, Tim has discovered, in effect, that we have shifted from having meaningful values to having junk values. All this mass-produced fried chicken looks like food, and it appeals to the part of us that evolved to need food; yet it doesn’t give us what we need from food: nutrition. Instead, it fills us with toxins.

In the same way, all these materialistic values, telling us to spend our way to happiness, look like real values; they appeal to the part of us that has evolved to need some basic principles to guide us through life; yet they don’t give us what we need from values: a path to a satisfying life. Instead, they fill us with psychological toxins. Junk food is distorting our bodies. Junk values are distorting our minds.

Materialism is KFC for the soul.

When Tim studied this in greater depth, he was able to identify at least four key reasons why junk values are making us feel so bad.

The first is that thinking extrinsically poisons your relationships with other people. He teamed up again with another professor, Richard Ryan, who had been an ally from the start, to study two hundred people in depth, and they found that the more materialistic you become, the shorter your relationships will be, and the worse their quality will be. If you value people for how they look, or how they impress other people, it’s easy to see that you’ll be happy to dump them if someone hotter or more impressive comes along. And at the same time, if all you’re interested in is the surface of another person, it’s easy to see why you’ll be less rewarding to be around, and they’ll be more likely to dump you, too. You will have fewer friends and connections, and they won’t last as long.

Their second finding relates to another change that happens as you become more driven by junk values. Let’s go back to the example of playing the piano. Every day, Tim spends at least half an hour playing the piano and singing, often with his kids. He does it for no reason except that he loves it; it makes him, on a good day, feel satisfied and joyful. He feels his ego dissolve, and he is purely present in the moment. There’s strong scientific evidence that we all get most pleasure from what are called “flow states” like this, moments when we simply lose ourselves doing something we love and are carried along in the moment. They’re proof we can maintain the pure intrinsic motivation that a child feels when she is playing.

But when Tim studied highly materialistic people, he discovered they experience significantly fewer flow states than the rest of us. Why would that be?

He seems to have found an explanation. Imagine if, when Tim was playing the piano every day, he kept thinking: Am I the best piano player in Illinois? Are people going to applaud this performance? Am I going to get paid for this? How much? Suddenly his joy would shrivel up like a salted snail. Instead of his ego dissolving, his ego would be aggravated and jabbed and poked.

That is what your head starts to look like when you become more materialistic. If you are doing something not for itself but to achieve an effect, you can’t relax into the pleasure of a moment. You are constantly monitoring yourself. Your ego will shriek like an alarm you can’t shut off.

This leads to a third reason why junk values make you feel so bad. When you are extremely materialistic, Tim said to me, “you’ve always kind of got to be wondering about yourself, how are people judging you?” It forces you to “focus on other people’s opinions of you, and their praise of you, and then you’re kind of locked into having to worry what other people think about you, and if other people are going to give you those rewards that you want. That’s a heavy load to bear, instead of walking around doing what it is you’re interested in doing, or being around people who love you just for who you are.”

If “your self-esteem, your sense of self-worth, is contingent upon how much money you’ve got, or what your clothes are like, or how big your house is,” you are forced into constant external comparisons, Tim says. “There’s always somebody who’s got a nicer house or better clothes or more money.” Even if you’re the richest person in the world, how long will that last? Materialism leaves you constantly vulnerable to a world beyond your control.

And then, he says, there is a crucial fourth reason. It’s worth pausing on this one, because I think it’s the most important.

All of us have certain innate needs: to feel connected, to feel valued, to feel secure, to feel we make a difference in the world, to have autonomy, to feel we’re good at something. Materialistic people, he believes, are less happy because they are chasing a way of life that does a bad job of meeting these needs.

What you really need are connections. But what you are told you need, in our culture, is stuff and a superior status, and in the gap between those two signals, from yourself and from society, depression and anxiety will grow as your real needs go unmet.

You have to picture all the values that guide why you do things in your life, Tim said, as being like a pie. “Each value” you have, he explained, “is like a slice of that pie. So you’ve got your spirituality slice, and your family slice, and your money slice, and your hedonism slice. We’ve all got all the slices.” When you become obsessed with materialism and status, that slice gets bigger. And “the bigger one slice gets, the smaller other slices have to get.” So if you become fixated on getting stuff and a superior status, the parts of the pie that care about tending to your relationships, or finding meaning, or making the world better have to shrink, to make way.

“On Friday at four, I can stay [in my office] and work more, or I can go home and play with my kids,” he told me. “I can’t do both. It’s one or the other. If my materialistic values are bigger, I’m going to stay and work. If my family values are bigger, I’m going to go home and play with my kids.” It’s not that materialistic people don’t care about their kids, but “as the materialistic values get bigger, other values are necessarily going to be crowded out,” he says, even if you tell yourself they won’t.

And the pressure, in our culture, runs overwhelmingly one way: spend more, work more. We live under a system, Tim says, that constantly “distracts us from what’s really good about life.” We are being propagandized to live in a way that doesn’t meet our basic psychological needs, so we are left with a permanent, puzzling sense of dissatisfaction.

For millennia, humans have talked about something called the Golden Rule. It’s the idea that you should do unto others as you would have them do unto you. Tim, I think, has discovered something we should call the I-Want-Golden-Things Rule. The more you think life is about having stuff and superiority and showing it off, the more unhappy, and the more depressed and anxious, you will be.

But why would human beings turn, so dramatically, to something that made us less happy and more depressed? Isn’t it implausible that we would do something so irrational? In the later phase of his research, Tim began to dig into the question.

Nobody’s values are totally fixed. Your level of junk values, Tim discovered by following people in his studies, can change over your lifetime. You can become more materialistic, and more unhappy; or you can become less materialistic, and less unhappy. So we shouldn’t be asking, Tim believes, “Who is materialistic?” We should be asking: “When are people materialistic?” Tim wanted to know: What causes the variation?

There’s an experiment, by a different group of social scientists, that gives us one early clue. In 1978, two Canadian social scientists got a bunch of four- and five-year-old kids and divided them into two groups. The first group was shown no commercials. The second group was shown two commercials for a particular toy. Then they offered these four- and five-year-old kids a choice. They told them: You have to choose, now, to play with one of these two boys here. You can play with this little boy who has the toy from the commercials, but we have to warn you, he’s not a nice boy. He’s mean. Or you can play with a boy who doesn’t have the toy, but who is really nice.

If they had seen the commercial for the toy, the kids mostly chose to play with the mean boy with the toy. If they hadn’t seen the commercial, they mostly chose to play with the nice boy who had no toys.

In other words, the advertisements led them to choose an inferior human connection over a superior human connection, because they’d been primed to think that a lump of plastic is what really matters.

Two commercials, just two, did that. Today, every person sees way more advertising messages than that in an average morning. More eighteen-month-olds can recognize the McDonald’s M than know their own surname. By the time an average child is thirty-six months old she already knows a hundred brand logos.

Tim suspected that advertising plays a key role in why we are, every day, choosing a value system that makes us feel worse. So with another social scientist named Jean Twenge, he tracked the percentage of total U.S. national wealth that’s spent on advertising, from 1976 to 2003, and he discovered that the more money is spent on ads, the more materialistic teenagers become.

A few years ago, an advertising agency head named Nancy Shalek explained approvingly: “Advertising at its best is making people feel that without their product, you’re a loser. Kids are very sensitive to that. You open up emotional vulnerabilities, and it’s very easy to do with kids because they’re the most emotionally vulnerable.”

This sounds harsh, until you think through the logic. Imagine if I watched an ad and it told me, Johann, you’re fine how you are. You look good. You smell good. You’re likable. People want to be around you. You’ve got enough stuff now. You don’t need any more. Enjoy life.

That would, from the perspective of the advertising industry, be the worst ad in human history, because I wouldn’t want to go out shopping, or lunge at my laptop to spend, or do any of the other things that feed my junk values. It would make me want to pursue my intrinsic values, which involve a whole lot less spending, and a whole lot more happiness.

When they talk among themselves, advertising people have been admitting since the 1920s that their job is to make people feel inadequate, and then offer their product as the solution to the sense of inadequacy they have created. Ads are the ultimate frenemy, always saying: Oh babe, I want you to look/smell/feel great; it makes me so sad that at the moment you’re ugly/stinking/miserable; here’s this thing that will make you into the person you and I really want you to be. Oh, did I mention you have to pay a few bucks? I just want you to be the person you deserve to be. Isn’t that worth a few dollars? You’re worth it.

This logic radiates out through the culture, and we start to impose it on each other, even when ads aren’t there. Why did I, as a child, crave Nike air-pumps, even though I was as likely to play basketball as I was to go to the moon? It was partly because of the ads, but mostly because the ads created a group dynamic among everyone I knew. It created a marker of status, that we then policed. As adults, we do the same, only in slightly more subtle ways.

This system trains us, Tim says, to feel “there’s never enough. When you’re focused on money and status and possessions, consumer society is always telling you more, more, more, more. Capitalism is always telling you more, more, more. Your boss is telling you work more, work more, work more. You internalize that and you think: Oh, I’ve got to work more, because my self depends on my status and my achievement. You internalize that. It’s a kind of form of internalized oppression.”

He believes it also explains why junk values lead to such an increase in anxiety. “You’re always thinking: Are they going to reward me? Does the person love me for who I am, or for my handbag? Am I going to be able to climb the ladder of success?” he said. You are hollow, and exist only in other people’s reflections. “That’s going to be anxiety-provoking.”

We are all vulnerable to this, he believes. “The way I understand the intrinsic values,” Tim told me, is that they “are a fundamental part of what we are as humans, but they’re fragile. It’s easy to distract us from them. You give people social models of consumerism and they move in an extrinsic way.” The desire to find meaningful intrinsic values is “there, it’s a powerful part of who we are, but it’s not hard to distract us.” And we have an economic system built around doing precisely that.

As I sat with Tim, discussing all this for hours, I kept thinking of a middle-class married couple who live in a nice semidetached house in the suburbs in Edgware, where we grew up. They are close to me; I have known them all my life; I love them.

If you peeked through their window, you’d think they have everything you need for happiness: each other, two kids, a good home, all the consumer goods we’re told to buy. Both of them work really hard at jobs they have little interest in, so that they can earn money, and with the money they earn, they buy the things that we have learned from television will make us happy: clothes and cars, gadgets and status symbols. They display these things to people they know on social media, and they get lots of likes and comments like “OMG, so jealous!” After the brief buzz that comes from displaying their goods, they usually find they become dissatisfied and down again. They are puzzled by this, and they often assume it’s because they didn’t buy the right thing. So they work harder, and they buy more goods, display them through their devices, feel the buzz, and then slump back to where they started.

They both seem to me to be depressed. They alternate between being blank, or angry, or engaging in compulsive behaviors. She had a drug problem for a long time, although not anymore; he gambles online at least two hours a day. They are furious a lot of the time, at each other, at their children, at their colleagues, and, diffusely, at the world, at anyone else on the road when they are driving, for example, who they scream and swear at. They have a sense of anxiety they can’t shake off, and they often attach it to things outside them: she obsessively monitors where her teenage son is at any moment, and is afraid all the time that he will be a victim of crime or terrorism.

This couple has no vocabulary to understand why they feel so bad. They are doing what the culture has been priming them to do since we were infants, they are working hard and buying the right things, the expensive things. They are every advertising slogan made flesh.

Like the kids in the sandbox, they have been primed to lunge for objects and ignore the prospect of interaction with the people around them.

I see now they aren’t just suffering from the absence of something, such as meaningful work, or community. They are also suffering from the presence of something, an incorrect set of values telling them to seek happiness in all the wrong places, and to ignore the potential human connections that are right in front of them.

When Tim discovered all these facts, it didn’t just guide his scientific work. He began to move toward a life that made it possible for him to live consistent with his own findings, to go back, in a sense, to something more like the beach he had discovered joyfully in Florida as a kid. “You’ve got to pull yourself out of the materialistic environments, the environments that are reinforcing the materialistic values,” he says, because they cripple your internal satisfactions. And then, he says, to make that sustainable, you have to “replace them with actions that are going to provide those intrinsic satisfactions, and encourage those intrinsic goals.”

So, with his wife and his two sons, he moved to a farmhouse on ten acres of land in Illinois, where they live with a donkey and a herd of goats. They have a small TV in the basement, but it isn’t connected to any stations or to cable, it’s just to watch old movies on sometimes. They only recently got the Internet (against his protestations), and they don’t use it much. He works part time, and so does his wife, “so we could spend more time with our kids, and be in the garden more and do volunteer work and do activism work and I could write more”, all the things that give them intrinsic satisfaction. “We play a lot of games. We play a lot of music. We have a lot of family conversations.” They sing together.

Where they live in western Illinois is “not the most exciting place in the world,” Tim says, “but I have ten acres of land, I have a twelve-minute commute with one flashing light and three stop signs on my way to my office, and we afford that on one [combined full-time] salary.”

I ask him if he had withdrawal symptoms from the materialistic world we were both immersed in for so long. “Never,” he says right away. “People ask me that: ‘Don’t you miss this? Don’t you wish you had that?’ No, I don’t, because I am never exposed to the messages telling me that I should want it. I don’t expose myself to those things, so no, I don’t have that.”

One of his proudest moments was when one of his sons came home one day and said: “Dad, some kids at school are making fun of my sneakers.” They were not a brand name, or shiny-new. “Oh, what’d you say to them?” Tim asked. His son explained he looked at them and said: “Why do you care?” He was nonplussed, he could see that what they valued was empty, and absurd.

By living without these polluting values, Tim has, he says, discovered a secret. This way of life is more pleasurable than materialism. “It’s more fun to play these games with your kids,” he told me. “It’s more fun to do the intrinsically motivated stuff than to go to work and do stuff you don’t necessarily want to do. It’s more fun to feel like people love you for who you are, instead of loving you because you gave them a big diamond ring.”

Most people know all this in their hearts, he believes. “At some level I really believe that most people know that intrinsic values are what’s going to give them a good life,” he told me. When you do surveys and ask people what’s most important in life, they almost always name personal growth and relationships as the top two. “But I think part of why people are depressed is that our society is not set up in order to help people live lifestyles, have jobs, participate in the economy, or participate in their neighborhoods” in ways that support their intrinsic values. The change Tim saw happening in Florida as a kid, when the beachfronts were transformed into shopping malls and people shifted their attention there, has happened to the whole culture.

Tim told me people can apply these insights to their own life, on their own, to some extent. “The first thing is for people to ask themselves, Am I setting up my life so I can have a chance of succeeding at my intrinsic values? Am I hanging out with the right people, who are going to make me feel loved, as opposed to making me feel like I made it? Those are hard choices sometimes.” But often, he says, you will hit up against a limit in our culture. You can make improvements, but often “the solutions to the problems that I’m interested in can’t be easily solved at the individual person level, or in the therapeutic consulting room, or by a pill.” They require something more, as I was going to explore later.

When I interviewed Tim, I felt he solved a mystery for me. I had been puzzled back in Philadelphia about why Joe didn’t leave the job he hated at the paint company and go become a fisherman in Florida, when he knew life in the Sunshine State would make him so much happier. It seemed like a metaphor for why so many of us stay in situations we know make us miserable.

I think I see why now. Joe is constantly bombarded with messages that he shouldn’t do the thing that his heart is telling him would make him feel calm and satisfied. The whole logic of our culture tells him to stay on the consumerist treadmill, to go shopping when he feels lousy, to chase junk values. He has been immersed in those messages since the day he was born. So he has been trained to distrust his own wisest instincts.

When I yelled after him “Go to Florida!” I was yelling into a hurricane of messages, and a whole value system, that is saying the exact opposite.

from

Lost Connections. Uncovering the Real Causes of Depression and the Unexpected Solutions

by Johann Hari

get it at Amazon.com

DEPRESSION, THE CHEMICAL IMBALANCE MYTH – Johann Hari * THE EMPEROR’S NEW DRUGS – Irving Kirsch.

There is a problem with what everyone knows about antidepressant drugs. It isn’t true. The whole idea of mental distress being caused simply by a chemical imbalance is “a myth,” sold to us by the drug companies.

In the United States, 40 percent of the regulators’ wages are paid by the drug companies, and in Britain, it’s 100 percent. The rules they have written are designed to make it extraordinarily easy to get a drug approved.

“There was never any basis for it, ever. It was just marketing copy. At the time the drugs came out in the early 1990s, you couldn’t have got any decent expert to go on a platform and say, ‘Look, there’s a lowering of serotonin in the brains of people who are depressed.’ There wasn’t ever any evidence for it.” It hasn’t been discredited, because “it didn’t ever get ‘credited.’” We don’t know what a “chemically balanced” brain would look like. The effects of these drugs on depression itself are in reality tiny. No matter what chemical you tinker with, you get the same outcome. Antidepressants are little more than active placebos, drugs with very little specific therapeutic benefit, but with serious side effects.

What do the people taking these different drugs actually have in common? Only one thing: the belief that the drugs work, because you believe you are being looked after and offered a solution. Clever marketing over solid empirical evidence.

The serotonin theory “is a lie. I don’t think we should dress it up and say, ‘Oh, well, maybe there’s evidence to support that.’ There isn’t.” Most people on these drugs, after an initial kick, remain depressed or anxious. The belief that antidepressants can cure depression chemically is simply wrong.

The year after I swallowed my first antidepressant, Tipper Gore, the wife of Vice President Al Gore, explained to the newspaper USA Today why she had recently become depressed. “It was definitely a clinical depression, one that I was going to have to have help to overcome,” she said. “What I learned about is your brain needs a certain amount of serotonin and when you run out of that, it’s like running out of gas.” Tens of millions of people, including me, were being told the same thing.

When Irving Kirsch discovered that these serotonin-boosting drugs were not having the effects that everyone was being sold (the complete, unfiltered FDA records of the drug companies’ own studies show that the effects of these drugs on depression itself are in reality tiny), he began, to his surprise, to ask an even more basic question.

What’s the evidence, he began to wonder, that depression is caused primarily by an imbalance of serotonin, or any other chemical, in the brain? Where did it come from?

The serotonin story began, Irving learned, quite by accident in a tuberculosis ward in New York City in the clammy summer of 1952, when some patients began to dance uncontrollably down a hospital corridor. A new drug named Marsilid had come along that doctors thought might help TB patients. It turned out it didn’t have much effect on TB, but the doctors noticed it did something else entirely. They could hardly miss it. It made the patients gleefully, joyfully euphoric, some began to dance frenetically.

So it wasn’t long before somebody decided, perfectly logically, to try giving it to depressed people, and it seemed to have a similar effect on them, for a short time. Not long after that, other drugs came along that seemed to have similar effects (also for short periods), ones named iproniazid and imipramine. So what, people started to ask, could these new drugs have in common? And whatever it was, could it hold the key to unlocking depression?

Nobody really knew where to look, and so for a decade the question hung in the air, tantalizing researchers. And then in 1965, a British doctor called Alec Coppen came up with a theory. What if, he asked, all these drugs were increasing levels of serotonin in the brain? If that were true, it would suggest that depression might be caused by low levels of serotonin.

“It’s hard to overstate just how far out on a limb these scientists were climbing,” Dr. Gary Greenberg, who has written the history of this period, explains. “They really had no idea what serotonin was doing in the brain.” To be fair to the scientists who first put forward the idea, he says, they put it forward tentatively, as a suggestion. One of them said it was “at best a reductionist simplification,” and said it couldn’t be shown to be true “on the basis of data currently available.”

But a few years later, in the 1970s, it was finally possible to start testing these theories. It was discovered that you can give people a chemical brew that lowers their serotonin levels. So if this theory was right, if low serotonin caused depression, what should happen? After taking this brew, people should become depressed. So they tried it. They gave people a drug to lower their serotonin levels and watched to see what would happen. And, unless they had already been taking powerful drugs, they didn’t become depressed. In fact, in the vast majority of patients, it didn’t affect their mood at all.

I went to see one of the first scientists to study these new antidepressants in Britain, Professor David Healy, in his clinic in Bangor, a town in the north of Wales. He has written the most detailed history of antidepressants we have. When it comes to the idea that depression is caused by low serotonin, he told me: “There was never any basis for it, ever. It was just marketing copy. At the time the drugs came out in the early 1990s, you couldn’t have got any decent expert to go on a platform and say, ‘Look, there’s a lowering of serotonin in the brains of people who are depressed.’ There wasn’t ever any evidence for it.” It hasn’t been discredited, he said, because “it didn’t ever get ‘credited,’ in a sense. There wasn’t ever a point in time when 50 percent of the field actually believed it.” The biggest study of serotonin’s effects on humans found no direct relationship with depression. Professor Andrew Scull of Princeton has said attributing depression to low serotonin is “deeply misleading and unscientific.”

It had been useful in only one sense. When the drug companies wanted to sell antidepressants to people like me and Tipper Gore, it was a great metaphor. It’s easy to grasp, and it gives you the impression that what antidepressants do is restore you to a natural state, the kind of balance that everyone else enjoys.

Irving learned that once serotonin was abandoned by scientists (but certainly not by drug company PR teams) as an explanation for depression and anxiety, there was a shift in scientific research. Okay, they said: if it’s not low serotonin that’s causing depression and anxiety, then it must be the lack of some other chemical. It was still taken for granted that these problems are caused by a chemical imbalance in the brain, and that antidepressants work by correcting that chemical imbalance. If one chemical turned out not to be the culprit, the search simply moved on to another.

But Irving began to ask an awkward question. If depression and anxiety are caused by a chemical imbalance, and antidepressants work by fixing that imbalance, then you have to account for something odd that he kept finding. Antidepressant drugs that increase serotonin in the brain have the same modest effect, in clinical trials, as drugs that reduce serotonin in the brain. And they have the same effect as drugs that increase another chemical, norepinephrine. And they have the same effect as drugs that increase another chemical, dopamine. In other words, no matter what chemical you tinker with, you get the same outcome.

So Irving asked: What do the people taking these different drugs actually have in common? Only, he found, one thing: the belief that the drugs work. It works, Irving believes, largely for the same reason that John Haygarth’s wand worked: because you believe you are being looked after and offered a solution.

After twenty years researching this at the highest level, Irving has come to believe that the notion depression is caused by a chemical imbalance is just “an accident of history,” produced by scientists initially misreading what they were seeing, and then drug companies selling that misperception to the world to cash in.

And so, Irving says, the primary explanation for depression offered in our culture starts to fall apart. The idea you feel terrible because of a “chemical imbalance” was built on a series of mistakes and errors. It has come as close to being proved wrong, he told me, as you ever get in science. It’s lying broken on the floor, like a neurochemical Humpty Dumpty with a very sad smile.

I had traveled a long way with Irving on his journey, but I stopped there, startled. Could this really be true? I am trained in the social sciences, which is the kind of evidence that I’ll be discussing in the rest of this book. I’m not trained in the kind of science he is a specialist in. I wondered if I was misunderstanding him, or if he was a scientific outlier. So I read all that I could, and I got as many other scientists to explain it to me as possible. “There’s no evidence that there’s a chemical imbalance” in depressed or anxious people’s brains, Professor Joanna Moncrieff, one of the leading experts on this question, explained to me bluntly in her office at University College London. The term doesn’t really make any sense, she said: we don’t know what a “chemically balanced” brain would look like. People are told that drugs like antidepressants restore a natural balance to your brain, she said, but it’s not true: they create an artificial state. The whole idea of mental distress being caused simply by a chemical imbalance is “a myth,” she has come to believe, sold to us by the drug companies.

The clinical psychologist Dr. Lucy Johnstone was more blunt still. “Almost everything you were told was bullshit,” she said to me over coffee. The serotonin theory “is a lie. I don’t think we should dress it up and say, ‘Oh, well, maybe there’s evidence to support that.’ There isn’t.”

Yet it seemed wildly implausible to me that one of the most popular drugs in the world, taken by so many people all around me, could be so wrong. Obviously, there are meant to be protections against this happening: huge hurdles of scientific testing that a drug has to clear before it gets to our bathroom cabinets. I felt as if I had just stepped off a flight from JFK to LAX, only to be told that the plane had been flown the whole way by a monkey. Surely there were procedures in place to stop something like this from happening? How could these drugs have gotten through those procedures, if they were really as limited as this deeper research suggested?

I discussed this with one of the leading scientists in this field, Professor John Ioannidis, who the Atlantic Monthly has said “may be one of the most influential scientists alive.” He says it is not surprising that the drug companies could simply override the evidence and get the drugs to market anyway, because in fact it happens all the time. He talked me through how these antidepressants got from the development stage to my mouth.

It works like this: “The companies are often running their own trials on their own products,” he said. That means they set up the clinical trial, and they get to decide who gets to see any results. So “they are judging their own products. They’re involving all these poor researchers who have no other source of funding, and who have little control over how the results will be written up and presented.” Once the scientific evidence is gathered, it’s not even the scientists who write it up much of the time. “Typically, it’s the company people who write up the published scientific reports.”

This evidence then goes to the regulators, whose job is to decide whether to allow the drug onto the market. But in the United States, 40 percent of the regulators’ wages are paid by the drug companies, and in Britain, it’s 100 percent. When a society is trying to figure out which drug is safe to put on the market, there are meant to be two teams: the drug company making the case for it, and a referee working for us, the public, figuring out if it properly works. But Professor Ioannidis was telling me that in this match, the referee is paid by the drug company team, and that team almost always wins.

The rules they have written are designed to make it extraordinarily easy to get a drug approved. All you have to do is produce two trials, any time, anywhere in the world, that suggest some positive effect of the drug. If there are two, and there is some effect, that’s enough. So you could have a situation in which there are one thousand scientific trials, and 998 find the drug doesn’t work at all, and two find there is a tiny effect, and that means the drug will be making its way to your local pharmacy.

“I think that this is a field that is seriously sick,” Professor Ioannidis told me. “The field is just sick and bought and corrupted, and I can’t describe it otherwise.” I asked him how it made him feel to have learned all of this. “It’s depressing,” he said. That’s ironic, I replied. “But it’s not depressing,” he responded, “to the severe extent that I would take SSRIs [antidepressants].”

I tried to laugh, but it caught in my throat.

Some people said to Irving, so what? Okay, so say it’s a placebo effect. Whatever the reason, people still feel better. Why break the spell? He explained: the evidence from the clinical trials suggests that the antidepressant effects are largely a placebo, but the side effects are mostly the result of the chemicals themselves, and they can be very severe.

“Of course,” Irving says, there’s “weight gain.” I massively ballooned, and saw the weight fall off almost as soon as I stopped. “We know that SSRIs [the new type of antidepressants] in particular contribute to sexual dysfunction, and the rates for most SSRIs are around 75 percent of treatment-engendered sexual dysfunction,” he continued. Though it’s painful to talk about, this rang true for me, too. In the years I was taking Paxil, I found my genitals were a lot less sensitive, and it took a really long time to ejaculate. This made sex painful and it reduced the pleasure I took from it. It was only when I stopped taking the drug and I started having more pleasurable sex again that I remembered regular sex is one of the best natural antidepressants in the world.

“In young people, these chemical antidepressants increase the risk of suicide. There’s a new Swedish study showing that it increases the risk of violent criminal behavior,” Irving continued. “In older people it increases the risk of death from all causes, increases the risk of stroke. In everybody, it increases the risk of type 2 diabetes. In pregnant women, it increases the risk of miscarriage and of having children born with autism or physical deformities. So all of these things are known.” And once you start taking these drugs, it can be hard to stop: about 20 percent of people experience serious withdrawal symptoms.

So, he says, “if you want to use something to get its placebo effect, at least use something that’s safe.” We could be giving people the herb St. John’s Wort, Irving says, and we’d have all the positive placebo effects and none of these drawbacks. Although, of course, St. John’s Wort isn’t patented by the drug companies, so nobody would be making much profit off it.

By this time, Irving was starting, he told me softly, to feel “guilty” for having pushed those pills for all those years.

In 1802, John Haygarth revealed the true story of the wands to the public. Some people are really recovering from their pain for a time, he explained, but it’s not because of the power in the wands. It’s because of the power in their minds. It was a placebo effect, and it likely wouldn’t last, because it wasn’t solving the underlying problem.

This message angered almost everyone. Some felt duped by the people who had sold the expensive wands in the first place, but many more felt furious with Haygarth himself, and said he was clearly talking rubbish. “The intelligence excited great commotions, accompanied by threats and abuse,” he wrote. “A counterdeclaration was to be signed by a great number of very respectable persons,” including some leading scientists of the day, explaining that the wand worked, and its powers were physical, and real.

Since Irving published his early results, and as he has built on them over the years, the reaction has been similar. Nobody denies that the drug companies’ own data, submitted to the FDA, shows that antidepressants have only a really small effect over and above placebo. Nobody denies that my own drug company admitted privately that the drug I was given, Paxil, was not going to work for people like me, and they had to make a payout in court for their deception.

But some scientists, a considerable number, do dispute many of Kirsch’s wider arguments. I wanted to study carefully what they say. I hoped the old story could still, somehow, be saved. I turned to a man who, more than anyone else alive, successfully sold antidepressants to the wider public, and he did it because he believed it: he never took a cent from the drug companies.

In the 1990s, Dr. Peter Kramer was watching as patient after patient walked into his therapy office in Rhode Island, transformed before his eyes after they were given the new antidepressant drugs. It’s not just that they seemed to have improved; they became, he argued, “better than well”: they had more resilience and energy than the average person. The book he wrote about this, Listening to Prozac, became the bestselling book ever about antidepressants. I read it soon after I started taking the drugs. I was sure the process Peter described so compellingly was happening to me. I wrote about it, and I made his case to the public in articles and interviews.

So when Irving started to present his evidence, Peter, by then a professor at Brown Medical School, was horrified. He started taking apart Irving’s critique of antidepressants at length, in public, both in books and in a series of charged public debates.

His first argument is that Irving is not giving antidepressants enough time. The clinical trials he has analyzed, almost all the ones submitted to the regulator, typically last for four to eight weeks. But that isn’t enough. It takes longer for these drugs to have a real effect.

This seemed to me to be an important objection. Irving thought so, too. So he looked for any drug trials that had lasted longer, to find their results. It turned out there were two: in the first, the placebo did the same as the drug, and in the second, the placebo did better.

Peter then pointed to another mistake he believed Irving had made. The antidepressant trials that Irving is looking at lump together two groups: moderately depressed people and severely depressed people. Maybe these drugs don’t work much for moderately depressed people, Peter concedes, but they do work for severely depressed people. He’s seen it. So when Irving adds up an average for everyone, lumping together the mildly depressed and the severely depressed, the effect of the drugs looks small, but that’s only because he’s diluting the real effect, as surely as Coke will lose its flavor if you mix it with pints and pints of water.

Again, Irving thought this was a potentially important point, and one he was keen to understand, so he went back over the studies he had drawn his data from. He discovered that, with a single exception, he had looked only at studies of people classed as having very severe depression.

This then led Peter to turn to his most powerful argument. It’s the heart of his case against Irving and for antidepressants.

In 2012, Peter went to watch some clinical trials being conducted, in a medical center that looked like a beautiful glass cube and gazed out over expensive houses.

When the company there wants to conduct trials into antidepressants, it has two headaches. It has to recruit volunteers who will swallow potentially dangerous pills over a sustained period of time, but it is restricted by law to paying them only small amounts: between $40 and $75. At the same time, it has to find people with very specific mental health disorders: for a depression trial, for example, the volunteers must have depression and no other complicating factors. Given all that, it’s pretty difficult to find anyone who will take part, so the companies often turn to quite desperate people, and they have to offer other things to tempt them. Peter watched as poor people were bused in from across the city to be offered a gorgeous buffet of care they’d never normally receive at home: therapy, a whole community of people who’d listen to them, a warm place to be during the day, medication, and money that could double their poverty-level income.

As he watched this, he was struck by something. The people who turn up at this center have a strong incentive to pretend to have any condition they happen to be studying there, and the for-profit companies conducting the clinical trials have a strong incentive to pretend to believe them. Peter looked on as both sides seemed to be effectively bullshitting each other. When he saw people being asked to rate how well the drugs had worked, he thought they were often clearly just giving the interviewer whatever answer they wanted.

So Peter concluded that the results from clinical trials of antidepressants, all the data we have, are meaningless. That means Irving is building his conclusion that their effect is very small (at best) on a heap of garbage, Peter declared. The trials themselves are fraudulent.

It’s a devastating point, and Peter makes it quite powerfully. But it puzzled Irving when he heard it, and it puzzled me. The leading scientific defender of antidepressants, Peter Kramer, is making the case for them by saying that the scientific evidence for them is junk.

When I spoke to Peter, I told him that if he is right (and I think he is), then that’s not a case for the drugs. It’s a case against them. It means that, by law, they should never have been brought to market.

When I began to ask about this in a friendly tone, Peter became quite irritable, and said even bad trials can yield usable results. He soon changed the subject. Given that he puts so much weight on what he’s seen with his own eyes, I asked Peter what he would say to the people who claimed that John Haygarth’s wand worked because they, too, were just believing what they saw with their own eyes. He said that in cases like that, “the collection of experts isn’t as expert or as numerous as what we’re talking about here. I mean, this would be [an] orders-of-magnitude bigger scandal if these were [like] just bones wrapped in cloth.”

Shortly after, he said: “I think I want to cut off this conversation.”

Even Peter Kramer had one note of caution to offer about these drugs. He stressed to me that the evidence he has seen only makes the case for prescribing antidepressants for six to twenty weeks. Beyond that, he said, “I think that the evidence is thinner, and my dedication to the arguments is less as you get to long-term use. I mean, does anyone really know about what fourteen years of use does in terms of harm and benefit? I think the answer is we don’t really know.” I felt anxious as he said that; I had already told him that I had used the drugs for almost that long. Perhaps because he sensed my anxiety, he added: “Although I do think we’ve been reasonably lucky. People like you come off and function.”

Very few scientists now defend the idea that depression is simply caused by low levels of serotonin, but the debate about whether chemical antidepressants work for some other reason we don’t fully understand is still ongoing. There is no scientific consensus. Many distinguished scientists agree with Irving Kirsch; many agree with Peter Kramer. I wasn’t sure what to take away from all of this, until Irving led me to one last piece of evidence. I think it tells us the most important fact we need to know about chemical antidepressants.

In the late 1990s, a group of scientists wanted to test the effects of the new SSRI antidepressants in a situation that wasn’t just a lab, or a clinical trial. They wanted to look at what happens in a more everyday situation, so they set up something called the Star-D Trial. It was pretty simple. A normal patient goes to the doctor and explains he’s depressed. The doctor talks through the options with him, and if they both agree, he starts taking an antidepressant. At this point, the scientists conducting the trial start to monitor the patient. If the antidepressant doesn’t work for him, he’s given another one. If that one doesn’t work, he’s given another one, and on and on until he gets one that feels as though it works. This is how it works for most of us out there in the real world: a majority of people who get prescribed antidepressants try more than one, or try more than one dosage, until they find the effect they’re looking for.

And what the trial found is that the drugs worked. Some 67 percent of patients did feel better, just like I did in those first months.

But then they found something else. Within a year, half of the patients were fully depressed again. Only one in three of the people who stayed on the pills had a lasting, proper recovery from their depression. (And even that exaggerates the effect, since we know many of those people would have recovered naturally without the pills.)

It seemed like my story, played out line by line. I felt better at first; the effect wore off; I tried increasing the dose, and then that wore off, too. When I realized that antidepressants weren’t working for me any more, that no matter how much I jacked up the dose, the sadness would still seep back through, I assumed there was something wrong with me.

Now I was reading the Star-D Trial’s results, and I realized I was normal. My experience was straight from the textbook: far from being an outlier, I had the typical antidepressant experience.

This evidence has been followed up several times since, and the proportion of people on antidepressants who continue to be depressed is found to be between 65 and 80 percent.

To me, this seems like the most crucial piece of evidence about antidepressants of all: most people on these drugs, after an initial kick, remain depressed or anxious.

I want to stress that some reputable scientists still believe these drugs genuinely work for a minority of the people who take them, due to a real chemical effect. It’s possible. Chemical antidepressants may well be a partial solution for a minority of depressed and anxious people, and I certainly don’t want to take away anything that’s giving relief to anyone. If you feel helped by them, and the positives outweigh the side effects, you should carry on. (And if you are going to stop taking them, it’s essential that you don’t do it overnight, because you can experience severe physical withdrawal symptoms and a great deal of panic as a result. I reduced my dose very gradually, over six months, in consultation with my doctor, to prevent this from happening.)

But it is impossible, in the face of this evidence, to say they are enough for the big majority of depressed and anxious people.

I couldn’t deny it any longer: for the vast majority, we clearly needed to find a different story about what is making us feel this way, and a different set of solutions. But what, I asked myself, bewildered, could they be?

The Emperor’s New Drugs

Irving Kirsch

Everyone knows that antidepressant drugs are miracles of modern medicine. Professor Irving Kirsch knew this as well as anyone. But, as he discovered during his research, there is a problem with what everyone knows about antidepressant drugs. It isn’t true.

How did antidepressant drugs gain their reputation as a magic bullet for depression? And why has it taken so long for the story to become public? Answering these questions takes us to the point where the lines between clinical research and marketing disappear altogether.

Using the Freedom of Information Act, Kirsch accessed clinical trials that were withheld, by drug companies, from the public and from the doctors who prescribe antidepressants. What he found, and what he documents here, promises to bring revolutionary change to the way our society perceives, and consumes, antidepressants.

The Emperor’s New Drugs exposes what we have failed to see before: depression is not caused by a chemical imbalance in the brain; antidepressants are significantly more dangerous than other forms of treatment and are only marginally more effective than placebos; and there are other ways to combat depression, treatments that don’t include the empty promise of the antidepressant prescription.

This is not a book about alternative medicine and its outlandish claims. This is a book about fantasy and wishful thinking in the heart of clinical medicine, about the seductions of myth, and the final stubbornness of facts.

Irving Kirsch is a lecturer in medicine at the Harvard Medical School and a professor of psychology at Plymouth University, as well as professor emeritus of psychology at the University of Hull and the University of Connecticut. He has published eight books and numerous scientific articles on placebo effects, antidepressant medication, hypnosis, and suggestion. His work has appeared in Science, Science News, New Scientist, the New York Times, Newsweek, and BBC Focus, as well as in many other leading magazines, newspapers, and television documentaries.

Like most people, I used to think that antidepressants worked. As a clinical psychologist, I referred depressed psychotherapy clients to psychiatric colleagues for the prescription of medication, believing that it might help. Sometimes the antidepressant seemed to work; sometimes it did not. When it did work, I assumed it was the active ingredient in the antidepressant that was helping my clients cope with their psychological condition.

According to drug companies, more than 80 per cent of depressed patients can be treated successfully by antidepressants. Claims like this made these medications one of the most widely prescribed classes of prescription drugs in the world, with global sales that make them a $19-billion-a-year industry. Newspaper and magazine articles heralded antidepressants as miracle drugs that had changed the lives of millions of people. Depression, we were told, is an illness, a disease of the brain, that can be cured by medication. I was not so sure that depression was really an illness, but I did believe that the drugs worked and that they could be a helpful adjunct to psychotherapy for very severely depressed clients. That is why I referred these clients to psychiatrists who could prescribe antidepressants that the clients could take while continuing in psychotherapy to work on the psychological issues that had made them depressed.

But was it really the drug they were taking that made my clients feel better? Perhaps I should have suspected that the improvement they reported might not have been a drug effect. People obtain considerable benefits from many medications, but they also can experience symptom improvement just by knowing they are being treated. This is called the placebo effect. As a researcher at the University of Connecticut, I had been studying placebo effects for many years. I was well aware of the power of belief to alleviate depression, and I understood that this was an important part of any treatment, be it psychological or pharmacological. But I also believed that antidepressant drugs added something substantial over and beyond the placebo effect.

As I wrote in my first book, ‘comparisons of antidepressive medication with placebo pills indicate that the former has a greater effect’, and ‘the existing data suggest a pharmacologically specific effect of imipramine on depression’. As a researcher, I trusted the data as it had been presented in the published literature. I believed that antidepressants like imipramine were highly effective drugs, and I referred to this as ‘the established superiority of imipramine over placebo treatment’.

When I began the research that I describe in this book, I was not particularly interested in investigating the effects of antidepressants. But I was definitely interested in investigating placebo effects wherever I could find them, and it seemed to me that depression was a perfect place to look. Why did I expect to find a large placebo effect in the treatment of depression? If you ask depressed people to tell you what the most depressing thing in their lives is, many answer that it is their depression. Clinical depression is a debilitating condition. People with severe depression feel unbearably sad and anxious, at times to the point of considering suicide as a way to relieve the burden. They may be racked with feelings of worthlessness and guilt. Many suffer from insomnia, whereas others sleep too much and find it difficult to get out of bed in the morning. Some have difficulty concentrating and have lost interest in all of the activities that previously brought pleasure and meaning into their lives. Worst of all, they feel hopeless about ever recovering from this terrible state, and this sense of hopelessness may lead them to feel that life is not worth living. In short, depression is depressing. John Teasdale, a leading researcher on depression at Oxford and Cambridge universities, labelled this phenomenon ‘depression about depression’ and claimed that effective treatments for depression work at least in part by altering the sense of hopelessness that comes from being depressed about one’s own depression.

Whereas hopelessness is a central feature of depression, hope lies at the core of the placebo effect. Placebos instil hope in patients by promising them relief from their distress. Genuine medical treatments also instil hope, and this is the placebo component of their effectiveness.

When the promise of relief instils hope, it counters a fundamental attribute of depression. Indeed, it is difficult to imagine any treatment successfully treating depression without reducing the sense of hopelessness that depressed people feel. Conversely, any treatment that reduces hopelessness must also assuage depression. So a convincing placebo ought to relieve depression.

It was with that in mind that one of my postgraduate students, Guy Sapirstein, and I set out to investigate the placebo effect in depression, an investigation that I describe in the first chapter of this book, and that produced the first of a series of surprises that transformed my views about antidepressants and their role in the treatment of depression. In this book I invite you to share this journey in which I moved from acceptance to dissent, and finally to a thorough rejection of the conventional view of antidepressants.

The drug companies claimed and still maintain that the effectiveness of antidepressants has been proven in published clinical trials showing that the drugs are substantially better than placebos (dummy pills with no active ingredients at all). But the data that Sapirstein and I examined told a very different story. Although many depressed patients improve when given medication, so do many who are given a placebo, and the difference between the drug response and the placebo response is not all that great.

What the published studies really indicate is that most of the improvement shown by depressed people when they take antidepressants is due to the placebo effect.

Our finding that most of the effects of antidepressants could be explained as a placebo effect was only the first of a number of surprises that changed my views about antidepressants. Following up on this research, I learned that the published clinical trials we had analysed were not the only studies assessing the effectiveness of antidepressants. I discovered that approximately 40 per cent of the clinical trials conducted had been withheld from publication by the drug companies that had sponsored them. By and large, these were studies that had failed to show a significant benefit from taking the actual drug. When we analysed all of the data, those that had been published and those that had been suppressed, my colleagues and I were led to the inescapable conclusion that antidepressants are little more than active placebos, drugs with very little specific therapeutic benefit, but with serious side effects. I describe these analyses and the reaction to them in Chapters 3 and 4.

How can this be? Before a new drug is put on the market, it is subjected to rigorous testing. The drug companies sponsor expensive clinical trials, in which some patients are given medication and others are given placebos. The drug is considered effective only if patients given the real drug improve significantly more than patients given the placebos. Reports of these trials are then sent out to medical journals, where they are subjected to rigorous peer review before they are published. They are also sent to regulatory agencies, like the Food and Drug Administration (FDA) in the US, the Medicines and Healthcare products Regulatory Agency (MHRA) in the UK and the European Medicines Agency (EMEA) in the EU. These regulatory agencies carefully review the data on safety and effectiveness, before deciding whether to approve the drugs for marketing. So there must be substantial evidence backing the effectiveness of any medication that has reached the market.

And yet I remain convinced that antidepressant drugs are not effective treatments and that the idea of depression as a chemical imbalance in the brain is a myth. When I began to write this book, my claim was more modest. I believed that the clinical effectiveness of antidepressants had not been proven for most of the millions of patients to whom they are prescribed, but I also acknowledged that they might be beneficial to at least a subset of depressed patients. During the process of putting all of the data together, those that I had analysed over the years and newer data that have just recently seen the light of day, I realized that the situation was even worse than I thought.

The belief that antidepressants can cure depression chemically is simply wrong.

In this book I will share with you the process by which I came to this conclusion and the scientific evidence on which it is based. This includes evidence that was known to the pharmaceutical companies and to regulatory agencies, but that was intentionally withheld from prescribing physicians, their patients and even from the National Institute for Health and Clinical Excellence (NICE) when it was drawing up treatment guidelines for the National Health Service (NHS).

My colleagues and I obtained some of these hidden data by using the Freedom of Information Act in the US. We analysed the data and submitted the results for peer review to medical and psychological journals, where they were then published. Our analyses have become the focus of a national and international debate, in which many doctors have changed their prescribing habits and others have reacted with anger and incredulity.

My intention in this book is to present the data in a plain and straightforward way, so that you will be able to decide for yourself whether my conclusions about antidepressants are justified.

The conventional view of depression is that it is caused by a chemical imbalance in the brain. The basis for this idea was the belief that antidepressant drugs were effective treatments. Our analysis showing that most if not all of the effects of these medications are really placebo effects challenges this widespread view of depression. In Chapter 4 I examine the chemical-imbalance theory. You may be surprised to learn that it is actually a rather controversial theory and that there is not much scientific evidence to support it. While writing this chapter I came to an even stronger conclusion. It is not just that there is not much supportive evidence; rather, there is a ton of data indicating that the chemical-imbalance theory is simply wrong.

The chemical effect of antidepressant drugs may be small or even non-existent, but these medications do produce a powerful placebo effect. In Chapters 5 and 6 I examine the placebo effect itself. I look at the myriad of effects that placebos have been shown to have and explore the theories of how these effects are produced. I explain how placebos are able to produce substantial relief from depression, almost as much as that produced by medication, and the implications that this has for the treatment of depression.

Finally, in Chapter 7, I describe some of the alternatives to medication for the treatment of depression and assess the evidence for their effectiveness. One of my aims is to provide essential scientifically grounded information for making informed choices between the various treatment options that are available.

Much of what I write in this book will seem controversial, but it is all thoroughly grounded on scientific evidence, evidence that I describe in detail in this book. Furthermore, as controversial as my conclusions seem, there has been a growing acceptance of them. NICE has acknowledged the failure of antidepressant treatment to provide clinically meaningful benefits to most depressed patients; the UK government has instituted plans for providing alternative treatments; and neuroscientists have noted the inability of the chemical-imbalance theory to explain depression. We seem to be on the cusp of a revolution in the way we understand and treat depression.

Learning the facts behind the myths about antidepressants has been, for me, a journey of discovery. It was a journey filled with shocks and surprises, surprises about how drugs are tested and how they are approved, what doctors are told and what is kept hidden from them, what regulatory agencies know and what they don’t want you to know, and the myth of depression as a brain disease. I would like to share that journey with you. Perhaps you will find it as surprising and shocking as I did. It is my hope that making this information public will foster changes in the way new drugs are tested and approved in the future, in the public availability of the data and in the treatment of depression.

1

Listening to Prozac, but Hearing Placebo

In 1995 Guy Sapirstein and I set out to assess the placebo effect in the treatment of depression. Instead of doing a brand-new study, we decided to pool the results of previous studies in which placebos had been used to treat depression and analyse them together. What we did is called a meta-analysis, and it is a common technique for making sense of the data when a large number of studies have been done to answer a particular question. It was once considered somewhat controversial, but meta-analyses are now common features in all of the leading medical journals. Indeed, it is hard to see how one could interpret the results of large numbers of studies without the aid of a meta-analysis.

In doing our meta-analysis, it was not enough to find studies in which depressed patients had been given placebos. We also needed to find studies in which depression had been tracked in patients who were not given any treatment at all. This was to make sure that any effect we found was really due to the administration of the placebo. To better understand the reason for this, imagine that you are investigating a new remedy for colds. If the patients are given the new medicine, they get better; if they are given placebos, they also get better. Seeing these data, you might be tempted to think that the improvement was a placebo effect. But people recover from colds even if you give them nothing at all. So when the patients in our imaginary study took a dummy pill and their colds got better, the improvement may have had nothing to do with the placebo effect. It might simply have been due to the passage of time and the fact that colds are short-lasting illnesses.

Spontaneous improvement is not limited to colds. It can also happen when people are depressed. Because people sometimes recover from bouts of depression with no treatment at all, seeing that a person has become less depressed after taking a placebo does not mean that the person has experienced a placebo effect. The improvement could have been due to any of a number of other factors. For example, people can get better because of positive changes in life circumstances, such as finding a job after a period of unemployment or meeting a new romantic partner. Improvement can also be facilitated by the loving support of friends and family. Sometimes a good friend can function as a surrogate therapist. In fact, a very influential book on psychotherapy bore the title Psychotherapy: The Purchase of Friendship. The author did not claim that psychotherapy was merely friendship, but the title does make the point that it can be very therapeutic to have a friend who is empathic and knows how to listen.

The point is that without comparing the effect of placebos against rates of spontaneous recovery, it is impossible to assess the placebo effect. Just as we have to control for the placebo effect to evaluate the effect of a drug, so too we have to control for the passage of time when assessing the placebo effect. The drug effect is the difference between what happens when people are given the active drug and what happens when they are given the placebo. Analogously, the placebo effect is the difference between what happens when people are given placebos and what happens when they are not treated at all.

It is rare for a study to focus on the placebo effect or on the effect of the simple passage of time, for that matter. So where were we to find our placebo data and no-treatment data? We found our placebo data in clinical studies of antidepressants, and our no-treatment data in clinical studies of psychotherapy. It is common to have no-treatment or wait-list control groups in studies of the effects of psychotherapy. These groups consist of patients who are not given any treatment at all during the course of the study, although they may be placed on a wait list and given treatment after the research is concluded.

For the purpose of our research, Sapirstein and I were not particularly interested in the effects of the antidepressants or psychotherapy. What we were interested in was the placebo effect. But since we had the treatment data to hand, we looked at them as well. And, as it turned out, it was the comparison of drug and placebo that proved to be the most interesting part of our study.

All told, we analysed 38 clinical trials involving more than 3,000 depressed patients. We looked at the average improvement during the course of the study in each of the four types of groups: drug, placebo, psychotherapy and no-treatment. I am going to use a graph here (Figure 1.1) to show what the data tell us. Although the text will have a couple more such charts, I am going to keep them to a minimum. But this is one that I think we need, to make the point clearly. What the graph shows is that there was substantial improvement in both the drug and psychotherapy groups. People got better when given either form of treatment, and the difference between the two was not significant. People also got better when given placebos, and here too the improvement was remarkably large, although not as great as the improvement following drugs or psychotherapy. In contrast, the patients who had not been given any treatment at all showed relatively little improvement.

The first thing to notice in this graph is the difference in improvement between patients given placebos and patients not given any treatment at all. This difference shows that most of the improvement in the placebo groups was produced by the fact that they had been given placebos. The reduction in depression that people experienced was not just caused by the passage of time, the natural course of depression or any of the other factors that might produce an improvement in untreated patients. It was a placebo effect, and it was powerful.

Figure 1.1. Average improvement on drug, psychotherapy, placebo and no treatment. ‘Improvement’ refers to the reduction of symptoms on scales used to measure depression. The numbers are called ‘effect sizes’. They are commonly used when the results of different studies are pooled together. Typically, effect sizes of 0.5 are considered moderate, whereas effect sizes of 0.8 are considered large. So the graph shows that antidepressants, psychotherapy and placebos produce large changes in the symptoms of depression, but there was only a relatively small average improvement in people who were not given any treatment at all.
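To make the caption's thresholds concrete: an effect size of this kind is, roughly, the average symptom change divided by the standard deviation of the scores. The function and numbers below are a generic illustration of that idea, not the exact computation used in the meta-analysis:

```python
# Standardized effect size: average improvement divided by the standard
# deviation of the depression scores. Illustrative only; the meta-analysis
# may compute its effect sizes somewhat differently.
def effect_size(mean_change, sd):
    return mean_change / sd

# If scores on a depression scale have a standard deviation of 8 points,
# a 4-point average improvement is a "moderate" effect and a 6.4-point
# average improvement is a "large" one.
moderate = effect_size(4, 8)    # 0.5
large = effect_size(6.4, 8)     # 0.8
```

The point of standardizing in this way is that improvements measured on different depression scales can be pooled on a common footing.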

One thing to learn from these data is that doing nothing is not the best way to respond to depression. People should not just wait to recover spontaneously from clinical depression, nor should they be expected just to snap out of it. There may be some improvement that is associated with the simple passage of time, but compared to doing nothing at all, treatment, even if it is just placebo treatment, provides substantial benefit.

Sapirstein and I were not surprised to find that there was a powerful placebo effect in the treatment of depression. Actually, we were quite pleased. That was our hypothesis and our reason for doing the study. What did surprise us, however, was how small the difference was between the response to the drug and the response to the placebo. That difference is the drug effect. Although the drug effect in the published clinical trials that we had analysed was statistically significant, it was much smaller than we had anticipated. Much of the therapeutic response to the drug was due to the placebo effect.

The relatively small size of the drug effect was the first of a series of surprises that the antidepressant data had in store for us.

One way to understand the size of the drug effect is to think about it as only a part of the improvement that patients experience when taking medication. Part of the improvement might be spontaneous, that is, it might have occurred without any treatment at all, and part may be a placebo effect. What is left over after you subtract spontaneous improvement and the placebo effect is the drug effect. You can see in Figure 1.1 that improvement in patients who had been given a placebo was about 75 per cent of the response to the real medication. That means that only 25 per cent of the benefit of antidepressant treatment was really due to the chemical effect of the drug. It also means that 50 per cent of the improvement was a placebo effect. In other words, the placebo effect was twice as large as the drug effect.
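The percentages above follow from simple subtraction and division. Here is a minimal sketch using hypothetical effect sizes, chosen only so that the ratios match the ones described in the text; they are not the actual numbers from the meta-analysis:

```python
# Hypothetical effect sizes, chosen only to illustrate the arithmetic;
# they are not the actual values from the meta-analysis.
drug_response = 1.6      # total improvement in the drug groups
placebo_response = 1.2   # improvement in the placebo groups
no_treatment = 0.4       # spontaneous improvement with no treatment at all

drug_effect = drug_response - placebo_response    # drug minus placebo
placebo_effect = placebo_response - no_treatment  # placebo minus no treatment

print(f"placebo response / drug response: {placebo_response / drug_response:.0%}")  # 75%
print(f"drug effect:    {drug_effect / drug_response:.0%} of total improvement")    # 25%
print(f"placebo effect: {placebo_effect / drug_response:.0%} of total improvement") # 50%
```

With these illustrative numbers, the placebo effect (0.8) is twice the drug effect (0.4), exactly the relationship the paragraph describes.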

The drug effect seemed rather small to us, considering that these medications had been heralded as a revolution in the treatment of depression, blockbuster drugs that have been prescribed to hundreds of millions of patients, with annual sales totalling billions of pounds. Surely Sapirstein and I must have done something wrong in either collecting or analysing the data. But what? We spent months trying to figure it out.

ARE ALL DRUGS CREATED EQUAL? DOUBLE-BLIND OR DOUBLE-TALK

One thing that occurred to us, when considering how surprisingly small the drug effect was in the clinical trials we had analysed, was that a number of different medications had been assessed in those studies. Perhaps some of them were effective, whereas others were not. If this were the case, we had underestimated the benefits of effective drugs by lumping them together with ineffective medications. So before we sent our paper out for review, we went back to the data and examined the type of drugs that had been administered in each of the clinical trials in our meta-analysis.

We found that some of these trials had assessed tricyclic antidepressants, an older type of medication that was the most commonly used antidepressant in the 1960s and 1970s. In other trials, the focus was on selective serotonin reuptake inhibitors (SSRIs) like Prozac (fluoxetine), the first of the ‘new-generation’ drugs that replaced tricyclics as the top-selling type of antidepressant. And there were other types of antidepressants investigated in these trials as well. When we reanalysed the data, examining the drug effect and the placebo effect for each type of medication separately, we found that the diversity of drugs had not affected the outcome of our analysis. In fact, the data were remarkably consistent, much more so than is usually the case when one analyses different groups of data. Not only did all of these medications produce the same degree of improvement in depression, but also, in each case, only 25 per cent of the improvement was due to the effect of the drug. The rest could be explained by the passage of time and the placebo effect.

The lack of difference we found between one class of antidepressants and another is now a rather frequent finding in antidepressant research. The newer antidepressants (SSRIs, for example) are no more effective than the older medications. Their advantage is that their side effects are less troubling, so that patients are more likely to stay on them rather than discontinue treatment. Still, the consistency of the size of the drug effect was surprising. It was not just that the percentages were close; they were virtually identical. They ranged from 24 to 26 per cent. At the time I thought, ‘What a nice coincidence! It will look great in a PowerPoint slide when I am invited to speak on this topic.’ But since then I have been struck by similar instances in which the consistency of the data is remarkable, and it is part of what has transformed me from a doubter to a disbeliever. I will note similar consistencies as we encounter them in this book.

The consistency of the effects of different types of antidepressants meant that we had not underestimated the antidepressant drug effect by lumping together the effects of more effective and less effective drugs. But our re-examination of the data in our meta-analysis held another surprise for us. Some of the medications we had analysed were not antidepressants at all, even though they had been evaluated for their effects on depression. One was a barbiturate, a depressant that had been used as a sleeping aid, before being replaced by less dangerous medications. Another was a benzodiazepine, a sedative that has largely replaced the more dangerous barbiturates. Yet another was a synthetic thyroid hormone that had been given to depressed patients who did not have a thyroid disorder. Although none of these drugs are considered antidepressants, their effects on depression were every bit as great as those of antidepressants and significantly better than placebos. Joanna Moncrieff, a psychiatrist at University College London, has since listed other drugs that have been shown to be as effective as medications for depression. These include antipsychotic drugs, stimulants and herbal remedies. Opiates are also better than placebos, but I have not seen them compared to antidepressants.

If sedatives, barbiturates, antipsychotic drugs, stimulants, opiates and thyroid medications all outperform inert placebos in the treatment of depression, does this mean that any active drug can function as an antidepressant? Apparently not. In September 1998 the pharmaceutical company Merck announced the discovery of a novel antidepressant with a completely different mode of action than other medications for depression. This new drug, which they later marketed under the trade name Emend for the prevention of nausea and vomiting due to chemotherapy, seemed to show considerable promise as an antidepressant in early clinical trials. Four months later the company announced its decision to pull the plug on the drug as a treatment for depression. The reason? It could not find a significant benefit for the active drug over placebos in subsequent clinical trials.

This was unfortunate for a number of reasons. One is that the announcement caused a 5 per cent drop in the value of the company’s stock. Another is that the drug had an important advantage over current antidepressants: it produced substantially fewer side effects. The relative lack of side effects had been one reason for the enthusiasm about Merck’s new antidepressant. However, it may also have been the reason for its subsequent failure in controlled clinical trials. It seems that easily noticeable side effects are needed to show antidepressant benefit for an active drug compared to a placebo.

. . .

from

THE EMPEROR’S NEW DRUGS

by Irving Kirsch

get it at Amazon.com

ANGER CORRODES THE VESSEL THAT CONTAINS IT. Self Compassion and Anger in Relationships – Kristin Neff and Christopher Germer.

Our deepest need as human beings is the need to be loved. Our brains communicate emotions to one another, regardless of how carefully we choose our words.

Much of our relationship suffering is unnecessary and can be prevented by cultivating a loving relationship with ourselves. Cultivating self-compassion is one of the best things we can do for our relationship interactions.

Anger has a way of popping up around disconnection and can sometimes linger for years, long after the relationship has ended.

Sometimes we turn the anger against ourselves in the form of harsh self-criticism, which is a surefire way to become depressed. And if we get stuck in angry rumination, who did what to whom and what they deserve for it, we live with an agitated state of mind and may end up getting angry at others for no apparent reason.

“Anger corrodes the vessel that contains it.”

To have the type of close, connected relationships we really want with others, we first need to feel close and connected to ourselves. Cultivating self-compassion is far from selfish.

Much of our suffering arises in relationship with others. As Sartre famously wrote, “Hell is other people.” The good news is that much of our relationship suffering is unnecessary and can be prevented by cultivating a loving relationship with ourselves.

There are at least two types of relational pain. One is the pain of connection, when those we care about are suffering.

The other type is the pain of disconnection, when we experience loss or rejection and feel hurt, angry, or alone.

Our capacity for emotional resonance means that emotions are contagious. This is especially true in intimate relationships. If you are irritated with your partner but try to hide it, for instance, your partner will often pick up on your irritation. He might say, “Are you angry at me?” Even if you deny it, your partner will feel the irritation; it will affect his mood, leading to an irritated tone of voice. You will feel this, in turn, and become even more irritated, and your responses will have a harsher tone, and on it goes. This is because our brains communicate emotions to one another regardless of how carefully we choose our words.

In social interactions, there can be a downward spiral of negative emotions: when one person has a negative attitude, the other person becomes even more negative, and so on. This means that other people are partly responsible for our state of mind, but we are also partly responsible for their state of mind. The good news is that emotional contagion gives us more power than we realize to change the emotional tenor of our relationships. Self-compassion can interrupt a downward spiral and start an upward spiral instead.

Compassion is actually a positive emotion and activates the reward centers of our brain, even though it arises in the presence of suffering. A very useful way to change the direction of a negative relationship interaction, therefore, is to have compassion for the pain we’re feeling in the moment. The positive feelings of compassion we have for ourselves will also be felt by others, manifested in our tone and subtle facial expressions, and help to interrupt the negative cycle. In this way, cultivating self-compassion is one of the best things we can do for our relationship interactions as well as for ourselves.

Not surprisingly, research shows that self-compassionate people have happier and more satisfying romantic relationships. In one study, for instance, individuals with higher levels of self-compassion were described by their partners as being more accepting and nonjudgmental than those who lacked self-compassion. Rather than trying to change their partners, self-compassionate people tended to respect their opinions and consider their point of view. They were also described as being more caring, connected, affectionate, intimate, and willing to talk over relationship problems than those who lacked self-compassion. At the same time, self-compassionate people were described as giving their partners more freedom and autonomy in their relationships. They tended to encourage partners to make their own decisions and to follow their own interests. In contrast, people who lacked self-compassion were described as being more critical and controlling of their partners. They were also described as being more self-centered, inflexibly wanting everything their own way.

Steve met Sheila in college, and after 15 years of marriage he still loved her dearly. He hated to admit this to himself, but she was also starting to drive him crazy. Sheila was terribly insecure and constantly needed Steve to reassure her of his love and affection. Wasn’t sticking around for 15 years enough? If he didn’t tell her “I love you” every day, she would start to worry, and if a few days went by she got into a proper sulk. He felt controlled by her need for reassurance and resented the fact that she didn’t honor his own need to express himself authentically. Their relationship was starting to suffer.

To have the type of close, connected relationships we really want with others, we first need to feel close and connected to ourselves. By being supportive toward ourselves in times of struggle, we gain the emotional resources needed to care for our significant others. When we meet our own needs for love and acceptance, we can place fewer demands on our partners, allowing them to be more fully themselves. Cultivating self-compassion is far from selfish.

It gives us the resilience we need to build and sustain happy and healthy relationships in our lives.

Over time Sheila was able to see how her constant need for reassurance from Steve was driving him away. She realized that she had become a black hole and that Steve would never be able to fully satisfy her insecurity by giving her “enough” love. It would never be enough. So Sheila started a practice of journaling at night to give herself the love and affection she craved. She would write the type of tender words to herself that she was hoping to hear from Steve, like “I love you sweetheart. I won’t ever leave you.” Then, first thing in the morning, she would read what she had written and let it soak in. She began giving herself the reassurance she was desperately seeking from Steve and let him off the hook. It wasn’t quite as nice, she had to admit, but she liked the fact that she wasn’t so dependent. As the pressure eased, Steve started to be more naturally expressive in their relationship, and they became closer. The more secure she felt in her own self-acceptance, the more she was able to accept his love as it was, not just how she wanted it to be. Ironically, by meeting her own needs she became less self-focused and started to feel an independence that was new and delicious.

Self-Compassion and Anger in Relationships

Another type of relational pain is disconnection, which occurs whenever there is a loss or rupture in a relationship. Anger is a common reaction to disconnection. We might get angry when we feel rejected or dismissed, but also when disconnection is unavoidable, such as when someone dies. The reaction may not be rational, but it still happens. Anger has a way of popping up around disconnection and can sometimes linger for years, long after the relationship has ended.

Although anger gets a bad rap, it isn’t necessarily bad. Like all emotions, anger has positive functions. For instance, anger can give us information that someone has overstepped our boundaries or hurt us in some way, and it may be a powerful signal that something needs to change. Anger can also provide us with the energy and determination to protect ourselves in the face of threat, take action to stop harmful behavior, or end a toxic relationship.

While anger in and of itself is not a problem, we often have an unhealthy relationship with anger. For instance, we may not allow ourselves to feel our anger and suppress it instead. This can be especially true for women, who are taught to be “nice,” i.e., not angry. When we try to stuff down our anger, it can lead to anxiety, emotional constriction, or numbness. Sometimes we turn the anger against ourselves in the form of harsh self-criticism, which is a surefire way to become depressed. And if we get stuck in angry rumination, who did what to whom and what they deserve for it, we live with an agitated state of mind and may end up getting angry at others for no apparent reason.

Nate was an electrician who lived in Chicago. He had split from his wife, Lila, over five years ago, but he still got enraged every time he thought about her. It turns out that Lila had an affair with a close friend of theirs, someone they often socialized with, and that this went on behind his back for almost a year. As soon as Nate found out about it, he was seething with anger. He somehow managed to refrain from calling her every name in the book, but he was sick to his stomach whenever he thought about what had happened. He filed for divorce almost immediately; thank goodness they didn’t have children, so the process was relatively quick and easy. Although he hadn’t had any contact with Lila for several years, his anger never really subsided. And the trauma of the affair kept Nate from forming new relationships because he had such a hard time trusting anyone.

If we continually harden our emotions in an attempt to protect ourselves against those we’re angry at, over time we may develop bitterness and resentment. Anger, bitterness, and resentment are “hard feelings.” Hard feelings are resistant to change and often stick with us long past the time when they are useful. (How many of us are still angry at someone we are unlikely to ever see again?) Furthermore, chronic anger causes chronic stress, which is bad for all the systems of the body: cardiovascular, endocrine, nervous, even the reproductive system. As the saying goes, “Anger corrodes the vessel that contains it.” Or “Anger is the poison we drink to kill another person.” When anger is no longer helpful to us, the most compassionate thing we can do is change our relationship to it, especially by applying the resources of mindfulness and self-compassion.

How? The first step is to identify the soft feelings behind the hard feelings of anger. Often anger is protecting more tender, sensitive emotions, such as feeling hurt, scared, unloved, alone, or vulnerable. When we peel back the outer layer of anger to see what is underneath, we are often surprised by the fullness and complexity of our feelings. Hard feelings are difficult to work with directly because they are typically defensive and outward focused. When we identify our soft feelings, however, we turn inward and can begin the transformation process.

To truly heal, however, we need to peel back the layers even further and discover the unmet needs that are giving rise to our soft feelings. Unmet needs are universal human needs, those experiences that are core to any human being. The Center for Nonviolent Communication offers a comprehensive list of needs at www.cnvc.org/training/needs-inventory. Some examples are the need to be safe, connected, validated, heard, included, autonomous, and respected. And our deepest need as human beings is the need to be loved.

By having the courage to turn toward and experience our authentic feelings and needs, we can begin to have insight into what is really going on for us. Once we contact the pain and respond with self-compassion, things can start to transform on a deep level. We can use self-compassion to meet our needs directly.

Self-compassion in response to unmet needs means that we can begin to give ourselves what we have yearned to receive from others, perhaps for many years. We can be our own source of support, respect, love, validation, or safety. Of course, we need relationships and connection with others. We aren’t automatons. But when others are unable to meet our needs, for whatever reason, and have harmed us in the process, we can recover by holding the hurt, the soft feelings, in a compassionate embrace and fill the hole in our hearts with loving, connected presence.

Nate worked hard at transforming his anger because he realized it was holding him back. He had tried catharsis to get it out, punching pillows and yelling at the top of his lungs, but it didn’t work. Eventually Nate signed up for an MSC course because a friend was very enthusiastic about it and said it would reduce his stress.

When Nate came to the part of the MSC course that focused on transforming anger by meeting his unmet needs, he felt nervous but did it anyway. It was easy to get in touch with his anger, and even the hurt behind it, and feel it in his body. The toughest part was identifying his unmet need. Certainly Nate felt betrayed and unloved, but that wasn’t what seemed to be holding him back. Nate stuck with the exercise, and finally the unmet need revealed itself, and his whole body relaxed. Respect!

Nate came from a hardworking blue-collar family, and his parents were still happily married after 30 years. He tried to do everything right in his own marriage, to the best of his ability, and he took his vows very seriously. Honesty and respect were core values for Nate. Knowing that Lila would never give him the respect he needed (it was too late for that), he took the plunge and tried to give it to himself. “I respect you,” he told himself. At first it felt silly and empty and hollow. So he paused and tried to say the words as if they were true. He thought about how much he had sacrificed to get his master electrician’s certification and open a business, the long hours he had put in to pay the mortgage and build a savings account. “I respect you,” he repeated, over and over, though it still just sounded like words. Then he thought of how honest and hardworking he had tried to be in his marriage, even though that wasn’t enough for Lila.

Very, very slowly, Nate started to take it in. Finally he put his hand on his heart and said it like he really meant it: “I respect you.” He started to tear up, because he actually felt it. Once he did, the anger at his wife started to melt away. He began to see her unmet needs, different from his, for more closeness and affection. Not that what Lila did was okay, but Nate realized that her behavior had nothing to do with his worth or value as a person. He couldn’t rely on any outside person, even one who was reliable and faithful, to give him the respect he needed. It had to come from within.

Self-Compassion and Forgiveness

When someone has harmed us and we still feel anger and bitterness, sometimes the most compassionate thing to do is to forgive. Forgiveness involves letting go of anger at someone who has caused us harm. But forgiveness must involve grieving before letting go. The central point of forgiveness practice is that we cannot forgive others without first opening to the hurt that we have experienced. Similarly, to forgive ourselves, we must first open to the pain, remorse, and guilt of hurting others.

Forgiveness doesn’t mean condoning bad behavior or resuming a relationship that causes harm. If we are being harmed in a relationship, we need to protect ourselves before we can forgive. If we are harming another in a relationship, we cannot forgive ourselves if we are using this as an excuse for acting badly. We must first stop the behavior, then acknowledge and take responsibility for the harm we have caused.

At the same time, it’s helpful to remember that the harm done is usually the product of a universe of interacting causes and conditions stretching back through time. We have partly inherited our temperament from our parents and grandparents, and our actions are shaped by our early childhood history, culture, health status, current events, and so forth. Therefore, we don’t have complete control over precisely what we say and do from one moment to the next.

Sometimes we cause pain in life without intending it, and we may still feel sorry about causing such pain. An example is when we move across the country to start a new life, leaving friends and family behind, or when we can’t give our elderly parents the attention they need because of our work situation. This kind of pain is not the fault of anyone, but it can still be acknowledged and healed with self-compassion.

The capacity to forgive requires keen awareness of our common humanity. We are all imperfect human beings whose actions stem from a web of interdependent conditions that are much larger than ourselves. In other words, we don’t have to take our mistakes so personally.

Paradoxically, this understanding helps us take more responsibility for our actions because we feel more emotionally secure. One research study asked participants to recall a recent action they felt guilty about, such as cheating on an exam, lying to a romantic partner, or saying something harmful, one that still made them feel bad about themselves when they thought about it. The researchers found that participants who were helped to be self-compassionate about their transgression reported being more motivated to apologize for the harm done, and more committed to not repeating the behavior, than those who were not helped to be self-compassionate.

Anneka really struggled to forgive herself after getting super angry at her friend and coworker Hilde, whom she told to f-off. Anneka had been under a tremendous amount of pressure at work to secure a contract with new clients and was all set to close the deal at a dinner that they were hosting. The clients were pretty conservative, and Anneka knew she had to be on time and look appropriate for them to trust her. Hilde was supposed to pick her up for the dinner, but she wasn’t there at the appointed time. Frantic, Anneka called her. “Where are you?” Hilde had completely forgotten about the event. “Oh, I’m so sorry,” she offered lamely. Anneka dropped the f-bomb, said a few more unpleasant things, then hung up and called a taxi. Immediately afterward, Anneka felt terrible. This was her friend! Hilde hadn’t done anything purposefully harmful, she simply forgot, and Anneka had been too busy to remind her. The truth was that Anneka was so anxious about closing the deal that she lost perspective and overreacted.

There are five steps to forgiveness:

1. Opening to pain, being present with the distress of what happened.

2. Self-compassion, allowing our hearts to melt with sympathy for the pain, no matter what caused it.

3. Wisdom, beginning to recognize that the situation wasn’t entirely personal, but was the consequence of many interdependent causes and conditions.

4. Intention to forgive. “May I begin to forgive myself [another] for what I [he/she] did, wittingly or unwittingly, to have caused them [me] pain.”

5. Responsibility to protect, committing ourselves to not repeat the same mistake; to stay out of harm’s way, to the best of our ability.

At first Anneka harshly berated herself for her behavior, but she knew that beating up on herself wouldn’t help anyone. Instead, Anneka needed to forgive herself for having made a mistake, just as everyone makes mistakes.

Anneka had learned the five steps to forgiveness from her MSC course, so she knew what to do. First, she had to accept the pain she had caused Hilde. This was really tough for Anneka, especially since she didn’t get the contract she was hoping for. Her mind wanted to pin all the blame on Hilde. It was Hilde’s fault! But Anneka knew the truth. There was no excuse for talking to Hilde that way. It was wrong.

Anneka allowed herself to feel in her bones what it must have been like for Hilde to hear those words, from someone she considered a friend. That took some courage because Anneka felt so bad about it. Then Anneka gave herself compassion for the pain of hurting someone she loved. “Everyone makes mistakes. I’m so sorry you wounded your friend in this manner. I know you deeply regret it.” Giving herself compassion provided a bit of perspective, and Anneka was able to acknowledge the incredible stress she was under. The circumstances brought out the worst in her. Then Anneka tried to forgive herself, at least in a preliminary way, for her behavior. “May I begin to forgive myself for the pain I unwittingly inflicted on my dear friend Hilde.” Anneka also made a commitment to take at least one deep breath before speaking when she felt angry. Anneka knew this might take some time because she didn’t always know when she felt angry, but she was determined to try to be less reactive when under stress.

The central point of forgiveness is first opening to the hurt that we experienced or caused to others. Timing is very important because we are naturally ambivalent about feeling the guilt of hurting others or making ourselves vulnerable to being hurt again. As the saying goes, first we need to “give up all hope of a better past.”

Embracing the Good

One of the biggest benefits of self-compassion is that it doesn’t just help you cope with negative emotions; it actively generates positive emotions. When we embrace ourselves and our experience with loving, connected presence, it feels good. It doesn’t feel good in a saccharine way, nor does it resist or avoid what feels bad. Rather, self-compassion allows us to have the full range of experience, the bitter and the sweet.

Typically, however, we tend to focus much more on what’s wrong than on what’s right in our lives. For example, when you get an annual review at work, what do you remember the most, the points of praise or criticism? Or if you go shopping at the mall and interact with five polite salespeople and one rude one, which is most likely to stick in your mind?

The psychological term for this is negativity bias. Rick Hanson says the brain is like “Velcro for bad experiences and Teflon for good ones.” Evolutionarily speaking, the reason we have a negativity bias is that our ancestors who fretted and worried at the end of the day, wondering where that pack of hyenas was yesterday and where it might be hanging out tomorrow, were more likely to survive than our ancestors who kicked back and relaxed. This is evolutionarily adaptive when we face physical danger. However, since most of the dangers we face nowadays are threats to our sense of self, it is self-compassionate to correct the negativity bias because it distorts reality.

We need to intentionally recognize and absorb positive experiences to develop more realistic, balanced awareness that is not skewed toward the negative. This requires some training, just like mindfulness and self-compassion require training. Furthermore, since compassion training includes opening to pain, we may need the energy boost of focusing on positive experience to support our compassion practice.

Focusing on the positive also has important benefits. Barbara Fredrickson, who developed the “broaden and build” theory, posits that the evolutionary purpose of positive emotions is to broaden attention. In other words, when people feel safe and content, they become curious and start exploring their environment, noticing opportunities for food, shelter, or rest. This allows us to take advantage of opportunities that would otherwise go unnoticed.

“When one door of happiness closes, another opens, but often we look so long at the closed door that we do not see the one that has been opened for us.” Helen Keller

Recently there has been a movement in psychology that focuses on finding the most effective ways to help people cultivate positive emotions, and two powerful practices that have been identified are savoring and gratitude.

Savoring

Savoring involves noticing and appreciating the positive aspects of life, taking them in, letting them linger, and then letting them go. It is more than pleasure: savoring involves mindful awareness of the experience of pleasure. In other words, it means being aware that something good is happening while it’s happening.

Given our natural tendency to skip over what’s right and focus on what’s wrong, we need to put a little extra effort into paying attention to what gives us pleasure. Luckily, savoring is a simple practice: noticing the tart and juicy taste of a fresh apple, a gentle cool breeze on your cheek, the warm smile of your coworker, the hand of your partner gently holding your own. Research suggests that simply taking the time to notice and linger with these sorts of positive experiences can greatly increase our happiness.

Gratitude

Gratitude involves recognizing, acknowledging, and being grateful for the good things in our lives. If we just focus on what we want but don’t have, we’ll remain in a negative state of mind. But when we focus on what we do have, and give thanks for it, we radically reframe our experience.

Whereas savoring is primarily an experiential practice, gratitude is a wisdom practice. Wisdom refers to understanding how everything arises interdependently. The confluence of factors required for even a simple event to occur is mind-boggling and can inspire an attitude of awe and reverence. Gratitude involves recognizing the myriad people and events that contribute to the good in our lives. As an MSC participant once remarked, “The texture of wisdom is gratitude.”

Gratitude can be aimed at the big things in life, like our health and family, but the effect of gratitude may be even more powerful when it is aimed at small things, such as when the bus arrives on time or the air conditioning is working on a hot summer day. Research shows that gratitude is also strongly linked to happiness. As the philosopher Mark Nepo wrote: “One key to knowing joy is to be easily pleased.”

The meditation teacher James Baraz tells this wonderful story about the power of gratitude in his book Awakening Joy, which we’ve adapted here by permission.

One year I was visiting my then eighty-nine-year-old mother and brought along a magazine with an article on the beneficial effects of gratitude. As we ate dinner I told her about some of the findings. She said she was impressed by the reports but admitted she had a lifetime habit of looking at the glass half empty. “I know I’m very fortunate and have so many things to be thankful for, but little things just set me off.” She said she wished she could change the habit but had doubts whether that was possible. “I’m just more used to seeing what’s going wrong,” she concluded.

“You know, Mom, the key to gratitude is really in the way we frame a situation,” I began. “For instance, suppose all of a sudden your television isn’t getting good reception.”

“That’s a scenario I can relate to,” she agreed with a knowing smile.

“One way to describe your experience would be to say, ‘This is so annoying; I could scream.’ Or you could say, ‘This is so annoying . . . and my life is really very blessed.’” She agreed that could make a big difference.

“But I don’t think I can remember to do that,” she sighed.

So together we made up a gratitude game to remind her. Each time she complained about something, I would simply say “and . . . ,” to which she would respond “and my life is very blessed.” I was elated to see that she was willing to try it out. Although it had started as just a fun game, after a while it began to have some real impact. Her mood grew brighter as our weeks became filled with gratitude. To my delight and amazement, my mother has continued doing the practice, and the change has been revolutionary.

Self-Appreciation

Most people recognize the importance of expressing gratitude and appreciation toward others. But what about ourselves? That one doesn’t come so easily.

The negativity bias is especially strong toward ourselves. Self-appreciation not only feels unnatural; it can feel downright wrong. Because our tendency is to focus on our inadequacies rather than appreciate our strengths, we often have a skewed perspective of who we are. Think about it: when you receive a compliment, do you take it in, or does it bounce off you almost immediately? We usually feel uncomfortable just thinking about our good qualities. The counterargument immediately arises: “I’m not always that way” or “I have a lot of bad qualities too.” Again, this reaction demonstrates the negativity bias, because when we receive unpleasant feedback, our first thoughts are not typically “Yes, but I’m not always that way” or “Are you aware of all my good qualities?”

Many of us are actually afraid to acknowledge our own goodness. Some common reasons given for this are:

– I don’t want to alienate my friends by being arrogant.

– My good qualities are not a problem that needs to be fixed, so I don’t need to focus on them.

– I’m afraid I would be putting myself on a pedestal, only to fall off.

– It will make me feel superior and separate from others.

Of course, there is a big difference between simply acknowledging what’s true, that we have good as well as not so good qualities, and saying that we’re perfect or better than others. It’s important to appreciate our strengths as well as have compassion for our weaknesses so that we embrace the whole of ourselves, exactly as we are.

We can apply the three components of self-compassion, self-kindness, common humanity, and mindfulness, to our positive qualities as well as our negative ones. These three factors together allow us to appreciate ourselves in a healthy and balanced way.

Self-Appreciation

Self-Kindness: Part of being kind to ourselves involves expressing appreciation for our good qualities, just as we would do with a good friend.

Common Humanity: When we remember that having good qualities is part of being human, we can acknowledge our strengths without feeling isolated or better than others.

Mindfulness: To appreciate ourselves, we need to pay attention to our good qualities rather than taking them for granted.

It’s important to recognize that the practice of self-appreciation is not selfish or self-centered. Rather, it simply recognizes that good qualities are part of being human. Although some children may have been raised with the belief that humility means not recognizing their accomplishments, that approach can harm children’s self-concept and get in the way of knowing themselves properly. Self-appreciation is a way to correct our negativity bias toward ourselves and see ourselves more clearly as a whole person. Self-appreciation also provides the emotional resilience and self-confidence needed to give to others.

The bestselling author and spiritual teacher Marianne Williamson writes, “We are all meant to shine, as children do. . . . And as we let our own light shine, we unconsciously give other people permission to do the same. As we are liberated from our own fear, our presence automatically liberates others.”

Wisdom and gratitude are central to self-appreciation as well. These qualities help us to see our good qualities in a broader context. When we appreciate ourselves, we’re also appreciating the causes, conditions, and people, including friends, parents, and teachers, who helped us develop those good qualities in the first place. This means we don’t need to take our own good qualities so personally!

Alice grew up in a stern Protestant family where humility and self effacement were the expected norm. When she was eight years old and came home with a trophy for winning her third-grade spelling bee, she remembers, her mother just raised her eyebrows and said, “Now don’t you be getting too big for your britches.” Every time Alice accomplished anything she felt she had to downplay it or else receive the disapproval of her family.

Later on in life, Alice started dating a man named Theo who thought she was beautiful and kind and smart and wonderful and liked to tell her so. Alice would not only cringe with embarrassment; Theo’s comments made her anxious. What if Theo finds out I’m not perfect? What happens if I let him down? She would continually push aside his comments when he said something nice, leaving Theo feeling perplexed and on the other side of an invisible wall.

Alice was becoming adept at self-compassion, especially the capacity to see her personal inadequacies as part of common humanity. Self-appreciation made sense to Alice conceptually, but she knew she had a way to go. First Alice made a mental note of everything good that she did during the day: a moment of kindness, a success, a small accomplishment. Then she tried to say something appreciative about it, such as “That was well done, Alice.” When Alice spoke to herself like this, she felt like she was violating an invisible contract from childhood and it made her uneasy, but she persisted. “I’m not saying I’m better than anyone else or that I’m perfect; I’m simply acknowledging that this too is true.”

Eventually Alice made a commitment to take in and savor the heartfelt compliments Theo gave her. Theo was so delighted by this turn of events that he bought her a bracelet that said on the inside, I may not be perfect, but parts of me are excellent!

from

The Mindful Self-Compassion Workbook. A proven way to accept yourself, build inner strength, and thrive

by Kristin Neff, PhD and Christopher Germer, PhD

get it at Amazon.com


HOW TO ACCEPT YOURSELF, BUILD INNER STRENGTH, AND THRIVE. The Mindful Self-Compassion Workbook – Kristin Neff and Christopher Germer.

“Be kind to yourself in the midst of suffering.”

Western culture places great emphasis on being kind to our friends, family, and neighbors who are struggling. Not so when it comes to ourselves.

Are you kinder to others than you are to yourself? More than a thousand research studies show the benefits of being a supportive friend to yourself, especially in times of need.

In the blink of an eye we can go from “I don’t like this feeling” to “I don’t want this feeling” to “I shouldn’t have this feeling” to “Something is wrong with me for having this feeling” to “I’m bad!”

The seeds of self-compassion already lie within you, learn how you can uncover this powerful inner resource and transform your life.

Self-compassion is the perfect alternative to self-esteem because it offers a sense of self-worth that doesn’t require being perfect or better than others.

This science-based workbook offers a step-by-step approach to breaking free of harsh self-judgments and impossible standards in order to cultivate emotional well-being. The book is based on the authors’ groundbreaking eight-week Mindful Self-Compassion (MSC) program, which has helped tens of thousands of people worldwide. It is packed with guided meditations (with audio downloads); informal practices to do anytime, anywhere; exercises; and vivid stories of people using the techniques to address relationship stress, weight and body image issues, health concerns, anxiety, and other common problems.

Kristin Neff, PhD, is Associate Professor of Human Development and Culture at the University of Texas at Austin and a pioneer in the field of self-compassion research.

Christopher Germer, PhD, has a private practice in mindfulness- and compassion-based psychotherapy in Arlington, Massachusetts, and is a part-time Lecturer on Psychiatry at Harvard Medical School/Cambridge Health Alliance. He is a founding faculty member of the Institute for Meditation and Psychotherapy and of the Center for Mindfulness and Compassion.

Introduction:

How To Approach This Workbook

“Our task is not to seek for love, but merely to seek and find all the barriers within yourself that you have built against it.” Rumi

We have all built barriers to love. We’ve had to in order to protect ourselves from the harsh realities of living a human life. But there is another way to feel safe and protected. When we are mindful of our struggles, and respond to ourselves with compassion, kindness, and support in times of difficulty, things start to change. We can learn to embrace ourselves and our lives, despite inner and outer imperfections, and provide ourselves with the strength needed to thrive.

An explosion of research into self-compassion over the last decade has shown its benefits for well-being. Individuals who are more self-compassionate tend to have greater happiness, life satisfaction, and motivation, better relationships and physical health, and less anxiety and depression. They also have the resilience needed to cope with stressful life events such as divorce, health crises, academic failure, even combat trauma.

When we struggle, however, when we suffer, fail, or feel inadequate, it’s hard to be mindful toward what’s occurring; we’d rather scream and beat our fists on the table. Not only do we not like what’s happening, we think there is something wrong with us because it’s happening. In the blink of an eye we can go from “I don’t like this feeling” to “I don’t want this feeling” to “I shouldn’t have this feeling” to “Something is wrong with me for having this feeling” to “I’m bad!”

That’s where self-compassion comes in. Sometimes we need to comfort and soothe ourselves for how hard it is to be a human being before we can relate to our lives in a more mindful way.

Self-compassion emerges from the heart of mindfulness when we meet suffering in our lives. Mindfulness invites us to open to suffering with loving, spacious awareness. Self-compassion adds, “be kind to yourself in the midst of suffering.” Together, mindfulness and self-compassion form a state of warmhearted, connected presence during difficult moments in our lives.

MINDFUL SELF-COMPASSION

Mindful Self-Compassion (MSC) was the first training program specifically designed to enhance a person’s self-compassion. Mindfulness-based training programs such as mindfulness-based stress reduction and mindfulness-based cognitive therapy also increase self-compassion, but they do so more implicitly, as a welcome byproduct of mindfulness. MSC was created as a way to explicitly teach the general public the skills needed to be self-compassionate in daily life. MSC is an eight-week course in which trained teachers lead a group of 8 to 25 participants through the program for 2¾ hours each week, plus a half-day meditation retreat. Research indicates that the program produces long-lasting increases in self-compassion and mindfulness, reduces anxiety and depression, enhances overall well-being, and even stabilizes glucose levels among people with diabetes.

The idea for MSC started back in 2008 when the authors met at a meditation retreat for scientists. One of us (Kristin) is a developmental psychologist and pioneering researcher into self-compassion. The other (Chris) is a clinical psychologist who has been at the forefront of integrating mindfulness into psychotherapy since the mid-1990s. We were sharing a ride to the airport after the retreat and realized we could combine our skills to create a program to teach self-compassion.

I (Kristin) first came across the idea of self-compassion in 1997 during my last year of graduate school, when, basically, my life was a mess. I had just gotten through a messy divorce and was under incredible stress at school. I thought I would learn to practice Buddhist meditation to help me deal with my stress. To my great surprise the woman leading the meditation class talked about how important it was to develop self-compassion. Although I knew that Buddhists talked a lot about the importance of compassion for others, I never considered that having compassion for myself might be just as important. My initial reaction was “What? You mean I’m allowed to be kind to myself? Isn’t that selfish?” But I was so desperate for some peace of mind I gave it a try. Soon I realized how helpful self-compassion could be. I learned to be a good, supportive friend to myself when I struggled. When I started to be kinder to and less judgmental of myself, my life transformed.

After receiving my PhD, I did two years of postdoctoral training with a leading self-esteem researcher and began to learn about some of the downsides of the self-esteem movement.

Though it’s beneficial to feel good about ourselves, the need to be “special and above average” was being shown to lead to narcissism, constant comparisons with others, ego-defensive anger, prejudice, and so on.

The other limitation of self-esteem is that it tends to be contingent: it’s there for us in times of success but often deserts us in times of failure, precisely when we need it most!

I realized that self-compassion was the perfect alternative to self-esteem because it offered a sense of self-worth that didn’t require being perfect or better than others.

After getting a job as an assistant professor at the University of Texas at Austin, I decided to conduct research on self-compassion. At that point, no one had studied self-compassion from an academic perspective, so I tried to define what self-compassion is and created a scale to measure it, which started what is now an avalanche of self-compassion research.

The reason I really know self-compassion works, however, is because I’ve seen the benefits of it in my personal life. My son, Rowan, was diagnosed with autism in 2007, and it was the most challenging experience I had ever faced. I don’t know how I would have gotten through it if it weren’t for my self-compassion practice. I remember the day I got the diagnosis, I was actually on my way to a meditation retreat. I had told my husband that I would cancel the retreat so we could process, and he said, “No, go to your retreat and do that self-compassion thing, then come back and help me.”

So while I was on retreat, I flooded myself with compassion. I allowed myself to feel whatever I was feeling without judgment, even feelings I thought I “shouldn’t” be having. Feelings of disappointment, even of irrational shame. How could I possibly feel this about the person I love most in the world? But I knew I had to open my heart and let it all in. I let in the sadness, the grief, the fear. And fairly soon I realized I had the stability to hold it, that the resource of self-compassion would not only get me through, but would help me be the best, most unconditionally loving parent to Rowan I could be. And what a difference it made!

Because of the intense sensory issues experienced by children with autism, they are prone to violent tantrums. The only thing you can do as a parent is to try to keep your child safe and wait until the storm passes. When my son screamed and flailed away in the grocery store for no discernible reason, and strangers gave me nasty looks because they thought I wasn’t disciplining my child properly, I would practice self-compassion. I would comfort myself for feeling confused, ashamed, stressed, and helpless, providing myself the emotional support I desperately needed in the moment. Self-compassion helped me steer clear of anger and self-pity, allowing me to remain patient and loving toward Rowan despite the feelings of stress and despair that would inevitably arise. I’m not saying that I didn’t have times when I lost it. I had many. But I could rebound from my missteps much more quickly with self-compassion and refocus on supporting and loving Rowan.

I (Chris) also learned self-compassion primarily for personal reasons. I had been practicing meditation since the late ’70s, became a clinical psychologist in the early ’80s, and joined a study group on mindfulness and psychotherapy. This dual passion for mindfulness and therapy eventually led to the publication of Mindfulness and Psychotherapy.

As mindfulness became more popular, I was being asked to do more public speaking. The problem, however, was that I suffered from terrible public speaking anxiety. Despite maintaining a regular practice of meditation my whole adult life and trying every clinical trick in the book to manage anxiety, before any public talk my heart would pound, my hands would sweat, and I found it impossible to think clearly. The breaking point came when I was scheduled to speak at an upcoming Harvard Medical School conference that I had helped to organize. (I still tried to expose myself to every possible speaking opportunity.) I’d been safely tucked in the shadows of the medical school as a clinical instructor, but now I’d have to give a speech and expose my shameful secret to all my esteemed colleagues.

Around that time, a very experienced meditation teacher advised me to shift the focus of my meditation to loving-kindness, and to simply repeat phrases such as “May I be safe,” “May I be happy,” “May I be healthy,” “May I live with ease.” So I gave it a try. In spite of all the years I’d been meditating and reflecting on my inner life as a psychologist, I’d never spoken to myself in a tender, comforting way. Right off the bat, I started to feel better and my mind also became clearer. I adopted loving-kindness as my primary meditation practice.

Whenever anxiety arose as I anticipated the upcoming conference, I just said the loving-kindness phrases to myself, day after day, week after week. I didn’t do this particularly to calm down, but simply because there was nothing else I could do. Eventually, however, the day of the conference arrived. When I was called to the podium to speak, the typical dread rose up in the usual way. But this time there was something new, a faint background whisper saying, “May you be safe. May you be happy . . .” In that moment, for the first time, something rose up and took the place of fear: self-compassion.

Upon later reflection, I realized that I had been unable to mindfully accept my anxiety because public speaking anxiety isn’t an anxiety disorder after all; it’s a shame disorder, and the shame was just too overwhelming to bear. Imagine being unable to speak about the topic of mindfulness due to anxiety! I felt like a fraud, incompetent, and a bit stupid. What I discovered on that fateful day was that sometimes, especially when we’re engulfed in intense emotions like shame, we need to hold ourselves before we can hold our moment-to-moment experience. I had begun to learn self-compassion, and saw its power firsthand.

In 2009, I published The Mindful Path to Self-Compassion in an effort to share what I had learned, especially in terms of how self-compassion helped the clients I saw in clinical practice. The following year, Kristin published Self-Compassion, which told her personal story, reviewed the theory and research on self-compassion, and provided many techniques for enhancing self-compassion.

Together we held the first public MSC program in 2010. Since then we, along with a worldwide community of fellow teachers and practitioners, have devoted a tremendous amount of time and energy to developing MSC and making it safe, enjoyable, and effective for just about everyone. The benefits of the program have been supported in multiple research studies, and to date tens of thousands of people have taken MSC around the globe.

HOW TO USE THIS WORKBOOK

Most of the MSC curriculum is contained in this workbook, in an easy-to-use format that will help you start to be more self-compassionate right away. Some people who use this workbook will be currently taking an MSC course, some may want to refresh what they previously learned, but for many people this will be their first experience with MSC.

This workbook is designed to also be a stand-alone pathway for you to learn the skills you need to be more self-compassionate in daily life. It follows the general structure of the MSC course, with the chapters organized in a carefully sequenced manner so the skills build upon one another. Each chapter provides basic information about a topic followed by practices and exercises that allow you to experience the concepts firsthand.

Most of the chapters also contain illustrations of the personal experiences of participants in the MSC course, to help you know how the practices may play out in your life. These are composite illustrations that don’t compromise the privacy of any particular participant, and the names are not real. In this book, we also alternate between masculine and feminine pronouns when referring to a single individual. We have made this choice to promote ease of reading as our language continues to evolve and not out of disrespect toward readers who identify with other personal pronouns. We sincerely hope that all will feel included.

We recommend that you go through the chapters in order, giving the time needed in between to do the practices a few times. A rough guideline would be to practice about 30 minutes a day and to do about one or two chapters per week. Go at your own pace, however. If you feel you need to go more slowly or spend extra time on a particular topic, please do so. Make the program your own. If you are interested in taking the MSC course in person from a trained MSC teacher, you can find a program near you at http://www.centerformsc.org. Online training is also available. For professionals who want to learn more about the theory, research, and practice of MSC, including how to teach self-compassion to clients, we recommend reading the MSC professional training manual, to be published by The Guilford Press in 2019.

The ideas and practices in this workbook are largely based on scientific research (notes at the back of the book point to the relevant research). However, they are also based on our experience teaching thousands of people how to be more self-compassionate. The MSC program is itself an organic entity, continuing to evolve as we and our participants learn and grow together.

Also, while MSC isn’t therapy, it’s very therapeutic: it will help you access the resource of self-compassion to meet and transform difficulties that inevitably emerge as we live our lives. However, the practice of self-compassion can sometimes activate old wounds, so if you have a history of trauma or are currently having mental health challenges, we recommend that you complete this workbook under the supervision of a therapist.

Tips for Practice

As you go through this workbook, it’s important to keep some points in mind to get the most out of it.

– MSC is an adventure that will take you into uncharted territory, and unexpected experiences will arise. See if you can approach this workbook as an experiment in self-discovery and self-transformation. You will be working in the laboratory of your own experience; see what happens.

– While you will be learning numerous techniques and principles of mindfulness and self-compassion, feel free to tailor and adapt them in a way that works for you. The goal is for you to become your own best teacher.

– Know that tough spots will show up as you learn to turn toward your struggles in a new way. You are likely to get in touch with difficult emotions or painful self-judgments. Fortunately, this book is about building the emotional resources, skills, strengths, and capacities to deal with these difficulties.

– While self-compassion work can be challenging, the goal is to find a way to practice that’s pleasant and easy. Ideally, every moment of self-compassion involves less stress, less striving, and less work, not more.

– It is good to be a “slow learner.” Some people defeat the purpose of self-compassion training by pushing themselves too hard to become self-compassionate. Allow yourself to go at your own pace.

– The workbook itself is a training ground for self-compassion. The way you approach this course should be self-compassionate. In other words, the means and ends are the same.

– It is important to allow yourself to go through a process of opening and closing as you work through this book. Just as our lungs expand and contract, our hearts and minds also naturally open and close. It is self-compassionate to allow ourselves to close when needed and to open up again when that naturally happens. Signs of opening might be laughter, tears, or more vivid thoughts and sensations. Signs of closing might be distraction, sleepiness, annoyance, numbness, or self-criticism.

– See if you can find the right balance between opening and closing. Just like a faucet in the shower has a range of water flow between off and full force that you can control, you can also regulate the degree of openness you experience. Your needs will vary: sometimes you may not be in the right space to do a particular practice, and other times it will be exactly what you need. Please take responsibility for your own emotional safety, and don’t push yourself through something if it doesn’t feel right in the moment. You can always come back to it later, or do the practice with the help and guidance of a trusted friend or therapist.

1. What is Self-Compassion?

Self-compassion involves treating yourself the way you would treat a friend who is having a hard time, even if your friend blew it, is feeling inadequate, or is just facing a tough life challenge. Western culture places great emphasis on being kind to our friends, family, and neighbors who are struggling. Not so when it comes to ourselves. Self-compassion is a practice in which we learn to be a good friend to ourselves when we need it most, to become an inner ally rather than an inner enemy. But typically we don’t treat ourselves as well as we treat our friends.

The golden rule says “Do unto others as you would have them do unto you.” However, you probably don’t want to do unto others as you do unto yourself! Imagine that your best friend calls you after she just got dumped by her partner, and this is how the conversation goes.

“Hey,” you say, picking up the phone. “How are you?”

“Terrible,” she says, choking back tears. “You know that guy Michael I’ve been dating? Well, he’s the first man I’ve been really excited about since my divorce. Last night he told me that I was putting too much pressure on him and that he just wants to be friends. I’m devastated.”

You sigh and say, “Well, to be perfectly honest, it’s probably because you’re old, ugly, and boring, not to mention needy and dependent. And you’re at least 20 pounds overweight. I’d just give up now, because there’s really no hope of finding anyone who will ever love you. I mean, frankly you don’t deserve it!”

Would you ever talk this way to someone you cared about? Of course not. But strangely, this is precisely the type of thing we say to ourselves in such situations, or worse. With self-compassion, we learn to speak to ourselves like a good friend. “I’m so sorry. Are you okay? You must be so upset. Remember I’m here for you and I deeply appreciate you. Is there anything I can do to help?”

Although a simple way to think about self-compassion is treating yourself as you would treat a good friend, the more complete definition involves three core elements that we bring to bear when we are in pain: self-kindness, common humanity, and mindfulness.

Self-Kindness. When we make a mistake or fail in some way, we are more likely to beat ourselves up than put a supportive arm around our own shoulder. Think of all the generous, caring people you know who constantly tear themselves down (this may even be you). Self-kindness counters this tendency so that we are as caring toward ourselves as we are toward others. Rather than being harshly critical when noticing personal shortcomings, we are supportive and encouraging and aim to protect ourselves from harm. Instead of attacking and berating ourselves for being inadequate, we offer ourselves warmth and unconditional acceptance. Similarly, when external life circumstances are challenging and feel too difficult to bear, we actively soothe and comfort ourselves.

Theresa was excited. “I did it! I can’t believe I did it! I was at an office party last week and blurted out something inappropriate to a coworker. Instead of doing my usual thing of calling myself terrible names, I tried to be kind and understanding. I told myself, ‘Oh well, it’s not the end of the world. I meant well even if it didn’t come out in the best way.’”

Common Humanity. A sense of interconnectedness is central to self-compassion. It’s recognizing that all humans are flawed works-in-progress, that everyone fails, makes mistakes, and experiences hardship in life. Self-compassion honors the unavoidable fact that life entails suffering, for everyone, without exception. While this may seem obvious, it’s so easy to forget. We fall into the trap of believing that . . .

from

The Mindful Self-Compassion Workbook. A proven way to accept yourself, build inner strength, and thrive

by Kristin Neff, PhD and Christopher Germer, PhD



MIGRAINE AND DEPRESSION: It’s All The Same Brain – Gale Scott * Migraine May Permanently Change Brain Structure – American Academy of Neurology * Migraine: Multiple Processes, Complex Pathophysiology – Rami Burstein, Rodrigo Noseda, and David Borsook.

The symptoms that accompany migraine suggest that multiple neuronal systems function abnormally. Neuroimaging studies show that brain networks, brain morphology, and brain chemistry are altered in episodic and chronic migraineurs. As a consequence of the disease itself or its genetic underpinnings, the migraine brain is altered structurally and functionally.

Migraine tends to run in families and as such is considered a genetic disorder. Genetic predisposition to migraine resides in multiple susceptible gene variants, many of which encode proteins that participate in the regulation of glutamate neurotransmission and proper formation of synaptic plasticity.

Migraine is a leading cause of suicide, an indisputable proof of the severity of the distress that the disease may inflict on the individual. 40% of migraine patients are also depressed.

Migraine and Depression: It’s All The Same Brain

Gale Scott

When a patient suffers from both migraines and depression or other psychiatric comorbidities, physicians have to treat both. It’s a common situation, since 40% of migraine patients are also depressed. Anxiety is even more prevalent in these patients: an estimated 50% of migraine patients are anxious, whether with generalized anxiety, phobias, panic attacks, or other forms of anxiety, said Mia Minen, MD, Director of Headache Services at NYU Langone Medical Center.

Health care costs in treating these co-morbid patients are 1.5 times higher than for migraine patients without accompanying psychiatric disorders.

But sorting out whether one problem is causing the other is not always easy, she said in a recent interview at NYU Langone. “It’s really interesting, which came first,” she said. “We really don’t know.”

There may be a bidirectional relationship with depression. Anxiety may precede migraines, then depression may follow.

Fortunately, she said, the question of which problem came first doesn’t really matter that much.

“It’s all one brain, one organ, and some of the same neurotransmitters are implicated in both disorders.” Serotonin is affected in migraine just as it is in depression and anxiety, she said. Dopamine and norepinephrine are also related both to migraines and psychiatric comorbidities.

“So it’s really one organ that’s controlling all these things,” Minen said.

The first step in treatment is having patients keep a headache diary to track the intensity and frequency and what they take when they feel it coming on.

For a mild migraine that might be ibuprofen or another over-the-counter pain killer.

If the migraine is moderately severe, there are 7 migraine-specific medications that are effective, she said. There are oral, nasal, and injectable forms of triptans, a family of tryptamine-based drugs. “We sometimes tell patients to combine the triptan with Naprosyn,” she said.

If triptans are contraindicated because the patient has other health problems, there are still more pharmaceutical options.

Those include some classes of beta blockers, antiseizure medications, and tricyclic antidepressants at low doses.

The drug regimen may vary with the particular comorbidity. For instance, for migraines with anxiety, venlafaxine might work. If patients have sleep disturbances, amitriptyline might be effective. “Lack of sleep is also a trigger for migraine,” she noted.

The toughest co-morbidity to treat in patients with migraine is finding a regimen that works for patients who are taking a lot of psychiatric medications, like SSRIs and antipsychotics.

For older patients with migraines plus cardiovascular disease, drug choices are also limited.

Botox injections seem promising, but Minen is cautious. “It’s a great treatment for patients with chronic migraines but they have to have failed 2 or 3 medications before they qualify for Botox.” Treatment involves 31 injections over the forehead, the back of the head and the neck. Relief lasts about 3 months, she said.

In addition to pharmaceutical treatment, there are cognitive behavior approaches that can work, like biofeedback and progressive muscle relaxation therapy.

Opioids are not the treatment of choice, she said. They have not been shown to be effective, Minen said, and they reduce the body’s ability to respond to triptans.

“For chronic migraine, studies don’t show opioids enable patients to return to work; there is no objective study showing they work,” and their use raises other problems. Patients used to taking opioids who go to an emergency room and request them may find themselves suspected of drug-seeking behavior. “It’s hard for doctors and patients,” she said. When patients ask for opioids, “it puts doctors in a predicament.”

New drugs are on the horizon, she said. “Calcitonin gene-related peptide antagonists look good,” and unlike triptans they are not contraindicated for people at risk of strokes or heart attacks.

For now, Minen said, treating migraine and comorbidities “is more an art than a science. But a large majority of patients do get better.”

Migraine May Permanently Change Brain Structure

American Academy of Neurology

Migraine may have long-lasting effects on the brain’s structure, according to a study published in the August 28, 2013, online issue of Neurology®, the medical journal of the American Academy of Neurology.

“Traditionally, migraine has been considered a benign disorder without long-term consequences for the brain,” said study author Messoud Ashina, MD, PhD, with the University of Copenhagen in Denmark. “Our review and meta-analysis study suggests that the disorder may permanently alter brain structure in multiple ways.”

The study found that migraine raised the risk of brain lesions, white matter abnormalities and altered brain volume compared to people without the disorder. The association was even stronger in those with migraine with aura.

Migraine with aura
A common type of migraine featuring additional neurological symptoms. Aura is a term used to describe a neurological symptom of migraine, most commonly visual disturbances.
People who experience ‘migraine with aura’ will have many or all of the symptoms of a ‘migraine without aura’ plus additional neurological symptoms which develop over a 5 to 20 minute period and last less than an hour.
Visual disturbances can include: blind spots in the field of eyesight, coloured spots, sparkles or stars, flashing lights before the eyes, tunnel vision, zig zag lines, temporary blindness.
Other symptoms include: numbness or tingling, pins and needles in the arms and legs, weakness on one side of the body, dizziness, a feeling of spinning (vertigo).
Speech and hearing can be affected and some people have reported memory changes, feelings of fear and confusion and, more rarely, partial paralysis or fainting.
These neurological symptoms usually happen before a headache, which could be mild, or no headache may follow.

For the meta-analysis, researchers reviewed six population-based studies and 13 clinic-based studies to see whether people who experienced migraine or migraine with aura had an increased risk of brain lesions, silent abnormalities or brain volume changes on MRI brain scans compared to those without the conditions.

The results showed that migraine with aura increased the risk of white matter brain lesions by 68 percent and migraine with no aura increased the risk by 34 percent, compared to those without migraine. The risk for infarct-like abnormalities increased by 44 percent for those without aura. Brain volume changes were more common in people with migraine and migraine with aura than those with no migraines.

“Migraine affects about 10 to 15 percent of the general population and can cause a substantial personal, occupational and social burden,” said Ashina. “We hope that through more study, we can clarify the association of brain structure changes to attack frequency and length of the disease. We also want to find out how these lesions may influence brain function.”

The study was supported by the Lundbeck Foundation and the Novo Nordisk Foundation.

To learn more about migraine, please visit American Academy of Neurology.

Migraine: Multiple Processes, Complex Pathophysiology

Rami Burstein (1,3), Rodrigo Noseda (1,3) and David Borsook (2,3)
1 – Department of Anesthesia, Critical Care, and Pain Medicine, Beth Israel Deaconess Medical Center, Boston
2 – Department of Anesthesiology, Perioperative and Pain Medicine, Boston Children’s Hospital
3 – Harvard Medical School

Migraine is a common, multifactorial, disabling, recurrent, hereditary neurovascular headache disorder. It usually strikes sufferers a few times per year in childhood and then progresses to a few times per week in adulthood, particularly in females. Attacks often begin with warning signs (prodromes) and aura (transient focal neurological symptoms) whose origin is thought to involve the hypothalamus, brainstem, and cortex.

Once the headache develops, it typically throbs, intensifies with an increase in intracranial pressure, and presents itself in association with nausea, vomiting, and abnormal sensitivity to light, noise, and smell. It can also be accompanied by abnormal skin sensitivity (allodynia) and muscle tenderness.

Collectively, the symptoms that accompany migraine from the prodromal stage through the headache phase suggest that multiple neuronal systems function abnormally.

As a consequence of the disease itself or its genetic underpinnings, the migraine brain is altered structurally and functionally. These molecular, anatomical, and functional abnormalities provide a neuronal substrate for an extreme sensitivity to fluctuations in homeostasis, a decreased ability to adapt, and the recurrence of headache.

Homeostasis is the state of steady internal conditions maintained by living things.

Advances in understanding the genetic predisposition to migraine, and the discovery of multiple susceptible gene variants (many of which encode proteins that participate in the regulation of glutamate neurotransmission and proper formation of synaptic plasticity) define the most compelling hypothesis for the generalized neuronal hyperexcitability and the anatomical alterations seen in the migraine brain.

Regarding the headache pain itself, attempts to understand its unique qualities point to activation of the trigeminovascular pathway as a prerequisite for explaining why the pain is restricted to the head, often affecting the periorbital area and the eye, and intensifies when intracranial pressure increases.

Introduction

Migraine is a recurrent headache disorder affecting 15% of the population during the formative and most productive periods of their lives, between the ages of 22 and 55 years. It frequently starts in childhood, particularly around puberty, and affects women more than men.

It tends to run in families and as such is considered a genetic disorder.

In some cases, the headache begins with no warning signs and ends with sleep. In other cases, the headache may be preceded by a prodromal phase that includes fatigue; euphoria; depression; irritability; food cravings; constipation; neck stiffness; increased yawning; and/or abnormal sensitivity to light, sound, and smell, and an aura phase that includes a variety of focal cortically mediated neurological symptoms that appear just before and/or during the headache phase. Symptoms of migraine aura develop gradually, feature excitatory and inhibitory phases, and resolve completely. Positive (gain of function) and negative (loss of function) symptoms may present as scintillating lights and scotomas when affecting the visual cortex; paresthesia and numbness of the face and hands when affecting the somatosensory cortex; tremor and unilateral muscle weakness when affecting the motor cortex or basal ganglia; and difficulty saying words (aphasia) when affecting the speech area.

The pursuant headache is commonly unilateral, pulsating, aggravated by routine physical activity, and can last a few hours to a few days (Headache Classification Committee of the International Headache Society, 2013). As the headache progresses, it may be accompanied by a variety of autonomic symptoms (nausea, vomiting, nasal/sinus congestion, rhinorrhea, lacrimation, ptosis, yawning, frequent urination, and diarrhea), affective symptoms (depression and irritability), cognitive symptoms (attention deficit, difficulty finding words, transient amnesia, and reduced ability to navigate in familiar environments), and sensory symptoms (photophobia, phonophobia, osmophobia, muscle tenderness, and cutaneous allodynia).

The extent of these diverse symptoms suggests that migraine is more than a headache. It is now viewed as a complex neurological disorder that affects multiple cortical, subcortical, and brainstem areas that regulate autonomic, affective, cognitive, and sensory functions. As such, it is evident that the migraine brain differs from the non-migraine brain and that an effort to unravel the pathophysiology of migraine must expand beyond the simplistic view that there are “migraine generator” areas.

In studying migraine pathophysiology, we must consider how different neural networks interact with each other to allow migraine to commence with stressors such as insufficient sleep, skipping meals, stressful or post stressful periods, hormonal fluctuations, alcohol, certain foods, flickering lights, noise, or certain scents, and why migraine attacks are sometimes initiated by these triggers and sometimes not.

We must tackle the enigma of how attacks are resolved on their own or just weaken and become bearable by sleep, relaxation, food, and/or darkness. We must explore the mechanisms by which the frequency of episodic migraine increases over time (from monthly to weekly to daily), and why progression from episodic to chronic migraine is uncommon.

Disease mechanisms

In many cases, migraine attacks are likely to begin centrally, in brain areas capable of generating the classical neurological symptoms of prodromes and aura, whereas the headache phase begins with consequential activation of meningeal nociceptors at the origin of the trigeminovascular system.

A nociceptor is a sensory neuron that responds to damaging or potentially damaging stimuli by sending “possible threat” signals to the spinal cord and the brain. If the brain perceives the threat as credible, it creates the sensation of pain to direct attention to the body part, so the threat can hopefully be mitigated; this process is called nociception.

The meninges are the three membranes that envelop the brain and spinal cord. In mammals, the meninges are the dura mater, the arachnoid mater, and the pia mater. Cerebrospinal fluid is located in the subarachnoid space between the arachnoid mater and the pia mater. The primary function of the meninges is to protect the central nervous system.

While some clues about how the occurrence of aura can activate nociceptors in the meninges exist, nothing is known about the mechanisms by which common prodromes initiate the headache phase or what sequence of events they trigger that results in activation of the meningeal nociceptors. A mechanistic search for a common denominator in migraine symptomatology and characteristics points heavily toward a genetic predisposition to generalized neuronal hyperexcitability. Mounting evidence for alterations in brain structure and function that are secondary to the repetitive state of headache can explain the progression of disease.

Prodromes

In the context of migraine, prodromes are symptoms that precede the headache by several hours. Examination of the symptoms most commonly described by patients points to the potential involvement of the hypothalamus (fatigue, depression, irritability, food cravings, and yawning), brainstem (muscle tenderness and neck stiffness), cortex (abnormal sensitivity to light, sound, and smell), and limbic system (depression and anhedonia) in the prodromal phase of a migraine attack. Given that symptoms such as fatigue, yawning, food craving, and transient mood changes occur naturally in all humans, it is critical that we understand how their occurrence triggers a headache; whether the routine occurrence of these symptoms in migraineurs (i.e., when no headache develops) differs mechanistically from their occurrence before the onset of migraine; and why yawning, food craving, and fatigue do not trigger a migraine in healthy subjects.

Recently, much attention has been given to the hypothalamus because it plays a key role in many aspects of human circadian rhythms (wake-sleep cycle, body temperature, food intake, and hormonal fluctuations) and in the continuous effort to maintain homeostasis. Because the migraine brain is extremely sensitive to deviations from homeostasis, it seems reasonable that hypothalamic neurons that regulate homeostasis and circadian cycles are at the origin of some of the migraine prodromes.

Unraveling the mechanisms by which hypothalamic and brainstem neurons can trigger a headache is central to our ability to develop therapies that can intercept the headache during the prodromal phase (i.e., before the headache begins). The ongoing effort to answer this question focuses on two very different possibilities (Fig. 1). The first suggests that hypothalamic neurons that respond to changes in physiological and emotional homeostasis can activate meningeal nociceptors by shifting the balance between parasympathetic and sympathetic tone in the meninges toward a predominance of parasympathetic tone. Support for this proposal is based on the following: (1) hypothalamic neurons are in a position to regulate the firing of preganglionic parasympathetic neurons in the superior salivatory nucleus (SSN) and sympathetic preganglionic neurons in the spinal intermediolateral nucleus; (2) the SSN can stimulate the release of acetylcholine, vasoactive intestinal peptide, and nitric oxide from meningeal terminals of postganglionic parasympathetic neurons in the sphenopalatine ganglion (SPG), leading to dilation of intracranial blood vessels, plasma protein extravasation, and local release of inflammatory molecules capable of activating pial and dural branches of meningeal nociceptors; (3) meningeal blood vessels are densely innervated by parasympathetic fibers; (4) activation of SSN neurons can modulate the activity of central trigeminovascular neurons in the spinal trigeminal nucleus; (5) activation of meningeal nociceptors appears to depend partially on enhanced activity in the SPG; (6) enhanced cranial parasympathetic tone during migraine is evident in lacrimation and nasal congestion; and, finally, (7) blockade of the sphenopalatine ganglion provides partial or complete relief of migraine pain.

The second proposal suggests that hypothalamic and brainstem neurons that regulate responses to deviations from physiological and emotional homeostasis can lower the threshold for the transmission of nociceptive trigeminovascular signals from the thalamus to the cortex, a critical step in establishing the headache experience. This proposal is based on understanding how the thalamus selects, amplifies, and prioritizes the information it eventually transfers to the cortex, and how hypothalamic and brainstem nuclei regulate relay thalamocortical neurons. It is constructed from recent evidence that relay trigeminothalamic neurons in sensory thalamic nuclei receive direct input from hypothalamic neurons that contain dopamine, histamine, orexin, and melanin-concentrating hormone (MCH), and from brainstem neurons that contain noradrenaline and serotonin. In principle, each of these neuropeptides/neurotransmitters can shift the activity of thalamic neurons from burst to tonic mode if it is excitatory (dopamine and high concentrations of serotonin, noradrenaline, histamine, and orexin), and from tonic to burst mode if it is inhibitory (MCH and low concentrations of serotonin). The opposing factors that regulate the firing of relay trigeminovascular thalamic neurons provide an anatomical foundation for explaining why prodromes give rise to some migraine attacks but not to others, and why external conditions (e.g., exposure to strong perfume) and internal conditions (e.g., skipping a meal and feeling hungry, sleeping too little and being tired, or simple stress) trigger migraine attacks so inconsistently.

In the context of migraine, the convergence of these hypothalamic and brainstem neurons on thalamic trigeminovascular neurons can establish high and low set points for the allostatic load of the migraine brain. The allostatic load, defined as the amount of brain activity required to appropriately manage the level of emotional or physiological stress at any given time, can explain why external and internal conditions trigger headache only some of the time: when they coincide with the right circadian phase of the cyclic rhythmicity of the brainstem, hypothalamic, and thalamic neurons that preserve homeostasis.
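The set-point idea above can be sketched as a toy calculation (not from the article: the sinusoidal circadian term, the threshold, and all numbers are illustrative assumptions). It shows why an identical trigger can initiate an attack at some circadian phases and fail at others:

```python
# Toy illustration of a circadian-modulated allostatic set point.
# An "attack" initiates only when trigger load plus circadian
# susceptibility exceeds a fixed threshold. All values are hypothetical.
import math

THRESHOLD = 1.0      # hypothetical allostatic set point
TRIGGER_LOAD = 0.7   # the same trigger (e.g., skipped meal) every time

def attack_initiated(hour_of_day: float) -> bool:
    """Susceptibility oscillates sinusoidally over a 24 h cycle."""
    circadian = 0.5 * math.sin(2 * math.pi * hour_of_day / 24.0)
    return TRIGGER_LOAD + circadian > THRESHOLD

# The identical trigger succeeds at some hours and fails at others.
susceptible_hours = [h for h in range(24) if attack_initiated(h)]
resistant_hours = [h for h in range(24) if not attack_initiated(h)]
```

Under these made-up parameters the trigger only crosses threshold during a window of the cycle, mirroring the inconsistency the authors describe.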

Cortical spreading depression

Clinical and preclinical studies suggest that migraine aura is caused by cortical spreading depression (CSD), a slowly propagating wave of depolarization/excitation followed by hyperpolarization/inhibition in cortical neurons and glia. While the specific processes that initiate CSD in humans are not known, mechanisms that involve inflammatory molecules arising from emotional or physiological stress, such as lack of sleep, may play a role. In the cortex, the initial membrane depolarization is associated with a large efflux of potassium; influx of sodium and calcium; release of glutamate, ATP, and hydrogen ions; neuronal swelling; upregulation of genes involved in inflammatory processing; and a host of changes in cortical perfusion and enzymatic activity that include opening of the megachannel Panx1, activation of caspase-1, and breakdown of the blood-brain barrier.

Outside the brain, caspase-1 activation can initiate inflammation by releasing high-mobility group protein B1 and interleukin-1 into the CSF, which then activate nuclear factor κB in astrocytes, with the consequent release of cyclooxygenase-2 and inducible nitric oxide synthase (iNOS) into the subarachnoid space. The introduction of these proinflammatory molecules into the meninges, along with calcitonin gene-related peptide (CGRP) and nitric oxide, may be the link between aura and headache, because the meninges are densely innervated by pain fibers whose activation distinguishes headaches of intracranial origin (e.g., migraine, meningitis, and subarachnoid bleeds) from headaches of extracranial origin (e.g., tension-type headache, cervicogenic headache, or headaches caused by mild trauma to the cranium).

Anatomy and physiology of the trigeminovascular pathway: from activation to sensitization

Anatomical description

The trigeminovascular pathway conveys nociceptive information from the meninges to the brain. The pathway originates in trigeminal ganglion neurons whose peripheral axons reach the pia, dura, and large cerebral arteries, and whose central axons reach the nociceptive dorsal horn laminae of the SpV. In the SpV, the nociceptors converge on neurons that receive additional input from the periorbital skin and pericranial muscles. The ascending axonal projections of trigeminovascular SpV neurons transmit monosynaptic nociceptive signals to (1) brainstem nuclei, such as the ventrolateral periaqueductal gray, reticular formation, superior salivatory, parabrachial, cuneiform, and the nucleus of the solitary tract; (2) hypothalamic nuclei, such as the anterior, lateral, perifornical, dorsomedial, suprachiasmatic, and supraoptic; and (3) basal ganglia nuclei, such as the caudate putamen, globus pallidus, and substantia innominata. These projections may be critical for the initiation of nausea, vomiting, yawning, lacrimation, urination, loss of appetite, fatigue, anxiety, irritability, and depression by the headache itself.

Additional projections of trigeminovascular SpV neurons are found in the thalamic ventral posteromedial (VPM), posterior (PO), and parafascicular nuclei. Relay trigeminovascular thalamic neurons that project to the somatosensory, insular, motor, parietal association, retrosplenial, auditory, visual, and olfactory cortices are in a position to construct the specific nature of migraine pain (i.e., location, intensity, and quality) and many of the cortically mediated symptoms that distinguish between migraine headache and other pains. These include transient symptoms of motor clumsiness, difficulty focusing, amnesia, allodynia, phonophobia, photophobia, and osmophobia. Figure 2A illustrates the complexity of the trigeminovascular pathway.

Activation

Studies in animals show that CSD initiates delayed activation (Fig. 2B,C) and immediate activation (Fig. 2D) of peripheral and central trigeminovascular neurons, in a fashion that resembles the classic delayed, and occasionally immediate, onset of headache after aura, and that systemic administration of an M-type potassium channel (KCNQ2/3) opener can prevent the CSD-induced activation of the nociceptors.

These findings support the notion that the onset of the headache phase of migraine with aura coincides with the activation of meningeal nociceptors at the peripheral origin of the trigeminovascular pathway. Whereas the vascular, cellular, and molecular events involved in the activation of meningeal nociceptors by CSD are not well understood, a large body of data suggests that transient constriction and dilatation of pial arteries and the development of dural plasma protein extravasation, neurogenic inflammation, platelet aggregation, and mast cell degranulation, many of which may be driven by CSD-dependent peripheral CGRP release, can introduce proinflammatory molecules into the meninges, such as histamine, bradykinin, serotonin, and prostaglandins (prostaglandin E2), along with a high level of hydrogen ions, thus altering the molecular environment in which meningeal nociceptors exist.

Sensitization

When activated in the altered molecular environment described above, peripheral trigeminovascular neurons become sensitized (their response threshold decreases and their response magnitude increases) and begin to respond to dural stimuli to which they showed minimal or no response at baseline. When central trigeminovascular neurons in laminae I and V of SpV (Fig. 2F) and in the thalamic PO/VPM nuclei (Fig. 2G) become sensitized, their spontaneous activity increases, their receptive fields expand, and they begin to respond to innocuous mechanical and thermal stimulation of cephalic and extracephalic skin areas as if it were noxious. The human correlates of the electrophysiological measures of neuronal sensitization in animal studies are evident in contrast analysis of BOLD signals registered in MRI scans of the human trigeminal ganglion (Fig. 2H), spinal trigeminal nucleus (Fig. 2I), and the thalamus (Fig. 2J), all measured during migraine attacks.

The clinical manifestation of peripheral sensitization during migraine, which takes roughly 10 min to develop, includes the perception of throbbing headache and the transient intensification of headache while bending over or coughing, activities that momentarily increase intracranial pressure.

The clinical manifestation of sensitization of central trigeminovascular neurons in the SpV, which takes 30-60 min to develop and 120 min to reach full extent, includes the development of cephalic allodynia signs, such as scalp and muscle tenderness and hypersensitivity to touch. These signs are often recognized in patients who report avoiding wearing glasses, earrings, hats, or any other object that comes in contact with the facial skin during a migraine.

The clinical manifestation of thalamic sensitization during migraine, which takes 2-4 h to develop, also includes extracephalic allodynia signs that cause patients to remove tight clothing and jewelry, and avoid being touched, massaged, or hugged.
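The definition of sensitization given above (lower response threshold, higher response magnitude) can be sketched as a toy calculation. This is an illustration only, not the authors' model; the threshold/gain function and all numbers are invented for the example:

```python
# Toy sketch of sensitization: a neuron's response threshold drops and its
# response magnitude (gain) rises. All numbers are hypothetical.

def response(stimulus: float, threshold: float, gain: float) -> float:
    """Firing above baseline: zero below threshold, linear above it."""
    return max(0.0, (stimulus - threshold) * gain)

normal = dict(threshold=5.0, gain=1.0)      # hypothetical baseline neuron
sensitized = dict(threshold=2.0, gain=3.0)  # same neuron after sensitization

brush = 3.0   # innocuous stimulus (e.g., glasses touching the skin)
pinch = 8.0   # noxious stimulus

# Before sensitization the innocuous stimulus evokes no response; afterwards
# it does (allodynia), and the response to the noxious stimulus is amplified.
innocuous_before = response(brush, **normal)
innocuous_after = response(brush, **sensitized)
noxious_before = response(pinch, **normal)
noxious_after = response(pinch, **sensitized)
```

With these made-up parameters, a previously sub-threshold touch now evokes a response, which is the allodynia described in the preceding paragraphs.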

Evidence that triptans, 5-HT agonists that disrupt communication between peripheral and central trigeminovascular neurons in the dorsal horn, are more effective in aborting migraine when administered early (i.e., before the development of central sensitization and allodynia) rather than late (i.e., after the development of allodynia) provides further support for the notion that meningeal nociceptors drive the initial phase of the headache. Further support for this concept was provided recently by studies showing that humanized monoclonal antibodies against CGRP, molecules that are too big to penetrate the blood-brain barrier and act centrally (according to the companies that developed them), are effective in preventing migraine. Along this line, it was also reported that drugs that act on central trigeminovascular neurons, e.g., dihydroergotamine (DHE), are equally effective in reversing an already developed central sensitization, a possible explanation for DHE's effectiveness in aborting migraine after the failure of triptan therapy.

Genetics and the hyperexcitable brain

Family history points to a genetic predisposition to migraine. A genetic association with migraine was first observed and defined in patients with familial hemiplegic migraine (FHM).

The three genes identified with FHM encode proteins that regulate glutamate availability in the synapse. FHM1 (CACNA1A) encodes the pore-forming α1 subunit of the P/Q-type calcium channel; FHM2 (ATP1A2) encodes the α2 subunit of the Na+/K+-ATPase pump; and FHM3 (SCN1A) encodes the α1 subunit of the neuronal voltage-gated Nav1.1 channel.

Collectively, these genes regulate transmitter release, glial ability to clear (reuptake) glutamate from the synapse, and the generation of action potentials.

Since these early findings, large genome-wide association studies have identified 13 susceptibility gene variants for migraine with and without aura, three of which regulate glutamatergic neurotransmission (MTDH/AEG-1 downregulates glutamate transporters, LRP1 modulates synaptic transmission through the NMDA receptor, and MEF-2D regulates the glutamatergic excitatory synapse), and two of which regulate synaptic development and plasticity (ASTN2 is involved in the structural development of cortical layers, and FHL5 regulates cAMP-sensitive CREB proteins involved in synaptic plasticity).

These findings provide the most plausible explanation for the “generalized” neuronal hyperexcitability of the migraine brain.

In the context of migraine, increased activity in glutamatergic systems can lead to excessive occupation of the NMDA receptor, which in turn may amplify and reinforce pain transmission and the development of allodynia and central sensitization. Network-wise, widespread neuronal hyperexcitability may also be driven by thalamocortical dysrhythmia; by defective modulatory brainstem circuits that regulate excitability at multiple levels along the neuraxis; and by inherently improper regulation/habituation of cortical, thalamic, and brainstem functions by limbic structures such as the hypothalamus, amygdala, nucleus accumbens, caudate, putamen, and globus pallidus. Given that 2 of the 13 susceptibility genes regulate synaptic development and plasticity, it is reasonable to speculate that some of the networks mentioned above may not be properly wired to set a normal level of habituation throughout the brain, thus explaining the multifactorial nature of migraine. Along this line, it is also tempting to propose that at least some of the structural alterations seen in the migraine brain may be inherited and, as such, may be the "cause" of migraine, rather than being secondary to (i.e., caused by) the repeated headache attacks. But this concept awaits evidence.

Structural and functional brain alterations

Brain alterations can be categorized into the following two processes: (1) alteration in brain function and (2) alterations in brain structure (Fig. 3). Functionally, a variety of imaging techniques used to measure relative activation in different brain areas in migraineurs (vs control subjects) revealed enhanced activation in the periaqueductal gray; red nucleus and substantia nigra; hypothalamus; posterior thalamus; cerebellum, insula, cingulate and prefrontal cortices, anterior temporal pole, and the hippocampus; and decreased activation in the somatosensory cortex, nucleus cuneiformis, caudate, putamen, and pallidum. All of these activity changes occurred in response to nonrepetitive stimuli, and in the cingulate and prefrontal cortex they occurred in response to repetitive stimuli.

Collectively, these studies support the concept that the migraine brain lacks the ability to habituate itself and consequently becomes hyperexcitable. It is a matter of debate, however, whether such changes are unique to migraine headache. Evidence for nearly identical activation patterns in other pain conditions, such as lower back pain, neuropathic pain, fibromyalgia, irritable bowel syndrome, and cardiac pain, raises the possibility that differences between somatic pain and migraine pain are not due to differences in central pain processing.

Anatomically, voxel-based morphometry and diffusion tensor imaging studies in migraine patients (vs control subjects) have revealed thickening of the somatosensory cortex; increased gray matter density in the caudate; and gray matter volume loss in the superior temporal gyrus, inferior frontal gyrus, precentral gyrus, anterior cingulate cortex, amygdala, parietal operculum, middle frontal gyrus, and bilateral insula.

In a number of cortical and subcortical regions, these structural changes may also depend on the frequency of migraine attacks. As discussed above, it is unclear whether such changes are genetically predetermined or simply a result of repetitive exposure to pain/stress. Favoring the latter are studies showing that similar gray matter changes occurring in patients experiencing other chronic pain conditions are reversible, and that the magnitude of these changes can be correlated with the duration of disease.

Further complicating our ability to determine how the migraine brain differs from the brain of a patient experiencing other chronic pain conditions are anatomical findings showing decreased gray matter density in the prefrontal cortex, thalamus, posterior insula, secondary somatosensory cortex, precentral and postcentral gyri, hippocampus, and temporal pole of chronic back pain patients; in the anterior insula and orbitofrontal cortex of complex regional pain syndrome patients; and in the insula, mid-anterior cingulate cortex, hippocampus, and inferior temporal cortex of osteoarthritis patients with chronic back pain.

Whereas some of the brain alterations seen in migraineurs depend on the sex of the patient, little can be said about the role played by sex in patients who experience other pain conditions.

Treatments in development

Migraine therapy has two goals: to terminate acute attacks, and to prevent the next attack from happening. The latter can potentially prevent the progression from the episodic to the chronic state. Regarding the effort to terminate acute attacks, migraine represents one of the few pain conditions for which a specific drug (i.e., the triptan class) has been developed based on understanding the mechanisms of the disease. In contrast, the effort to prevent migraine from happening is likely to face a much larger challenge, given that migraine can originate in an unknown number of brain areas (see above) and is associated with generalized functional and structural brain abnormalities.

A number of treatments that attract attention are briefly reviewed below.

Medications

The most exciting drugs currently under development are humanized monoclonal antibodies directed at CGRP and at its receptor. The concept is based on CGRP's localization in the trigeminal ganglion and its relevance to migraine pathophysiology. In recent phase II randomized placebo-controlled trials, neutralizing humanized monoclonal antibodies against CGRP, administered by injection for the prevention of episodic migraine, showed promising results. Remarkably, a single injection may prevent or significantly reduce migraine attacks for 3 months.

Given our growing understanding of the importance of prodromes (likely representing abnormal sensitivity to fluctuations in hypothalamically regulated homeostasis) and aura (likely representing inherited cortical hyperexcitability) in the pathophysiology of migraine, drugs that target ghrelin, leptin, and orexin receptors may be considered for therapeutic development, based on their ability to restore proper hypothalamic control of stress, hyperphagia, adiposity, and sleep. All may be critical in reducing allostatic load and, consequently, in preventing the initiation of the next migraine attack.

Brain modification

Neuroimaging studies showing that brain networks, brain morphology, and brain chemistry are altered in episodic and chronic migraineurs justify attempts to develop therapies that widely modify brain networks and their functions. Transcranial magnetic stimulation, which is thought to modify cortical hyperexcitability, is one such approach. Another approach for generalized brain modification is cognitive behavioral therapy.

Conclusions

Migraine is a common and undertreated disease. For those who suffer, it is a major cause of disability, including missed work or school, and it frequently has associated comorbidities such as anxiety and depression. To put this in context, it is a leading cause of suicide, indisputable proof of the severity of the distress the disease can inflict on the individual.

There is currently no objective diagnostic test, and no treatment that is universally effective in aborting or preventing attacks. As an intermittent disorder, migraine represents a neurological condition wherein the systems that continuously evaluate errors (error detection) frequently fail, thus adding to the allostatic load of the disease.

Given the enormous burden to society, there is an urgent imperative to focus on better understanding the neurobiology of the disease to enable the discovery of novel treatment approaches.

AN ANXIOUS PARADISE. Crisis in New Zealand mental health services as depression and anxiety soar – Eleanor Ainge Roy * World in mental health crisis of monumental suffering – Sarah Boseley.

System neglects ‘missing middle’ of the population who face common problems.
50-80% of New Zealanders experience mental distress or addiction challenges at some point in their lives, while each year one in five people experience mental illness or significant mental distress.

A landmark inquiry has found New Zealand’s mental health services are overwhelmed and geared towards crisis care rather than the wider population who are experiencing increasing rates of depression, trauma and substance abuse.

It has urged the government to widen provision of mental health care from 3% of the population in critical need to “the missing middle” – the 20% of the population who struggle with “common, disabling problems” such as anxiety.

New Zealand has one of the highest rates of suicide in the OECD, especially among young people. In 2017, 20,000 people tried to take their own life.

. . . The Guardian


World in mental health crisis of ‘monumental suffering’, say experts – Sarah Boseley.

“Mental health problems kill more young people than any other cause around the world.” Prof. Vikram Patel, Harvard Medical School
Lancet report says 13.5 million lives could be saved every year if mental illness were addressed.

OFC, Brain Stimulation for Depression – Janice Wood * Direct Electrical Stimulation of Lateral OFC Acutely Improves Mood.

“You could see the improvements in patients’ body language. They smiled, they sat up straighter, they started to speak more quickly and naturally. They said things like ‘Wow, I feel better,’ ‘I feel less anxious,’ ‘I feel calm, cool and collected.”

An important step toward developing a therapy for people with treatment-resistant depression.

Converging lines of evidence from lesion studies, functional neuroimaging, and intracranial physiology point to a role of OFC in emotion processing. Clinically depressed individuals have abnormally high levels of activity in OFC as ascertained by functional neuroimaging, and recovery from depression is associated with decreased OFC activity.

We found that lateral OFC stimulation acutely improved mood in subjects with baseline depression and that these therapeutic effects correlated with modulation of large-scale brain networks implicated in emotion processing.

Our results suggest that lateral OFC stimulation improves mood state at least partly through mechanisms that underlie natural mood variation, and they are consistent with the notion that OFC integrates multiple streams of information relevant to affective cognition.

Unilateral stimulation of lateral OFC consistently produced acute, dose-dependent mood-state improvement across subjects with baseline depression traits.

In a new study, patients with moderate to severe depression reported significant improvements in mood when researchers stimulated the orbitofrontal cortex (OFC).

The orbitofrontal cortex (OFC) is a prefrontal cortex region in the frontal lobes of the brain that is involved in the cognitive processing of decision-making.


Researchers at the University of California San Francisco say the study’s findings are “an important step toward developing a therapy for people with treatment-resistant depression, which affects as many as 30 percent of depression patients.”

Using electrical current to directly stimulate affected regions of the brain has proven to be an effective therapy for treating certain forms of epilepsy and Parkinson’s disease, but efforts to develop therapeutic brain stimulation for depression have so far been inconclusive, according to the researchers.

“The OFC has been called one of the least understood regions in the brain, but it is richly connected to various brain structures linked to mood, depression, and decision making, making it very well positioned to coordinate activity between emotion and cognition.”

Two additional observations suggested that OFC stimulation could have therapeutic potential.

First, the researchers found that applying current to the lateral OFC triggered widespread patterns of brain activity that resembled what had naturally occurred in volunteers' brains during positive moods in the days before brain stimulation. Equally promising was the fact that stimulation only improved mood in patients with moderate to severe depression symptoms but had no effect on those with milder symptoms.

“These two observations suggest that stimulation was helping patients with serious depression experience something like a naturally positive mood state, rather than artificially boosting mood in everyone.

This is in line with previous observations that OFC activity is elevated in patients with severe depression and suggests electrical stimulation may affect the brain in a way that removes an impediment to positive mood that occurs in people with depression.”

Psych Central

Direct Electrical Stimulation of Lateral Orbitofrontal Cortex Acutely Improves Mood in Individuals with Symptoms of Depression

Vikram R. Rao, Kristin K. Sellers, Deanna L. Wallace, Maryam M. Shanechi, Heather E. Dawes, Edward F. Chang.

Mood disorders cause significant morbidity and mortality, and existing therapies fail 20%–30% of patients. Deep brain stimulation (DBS) is an emerging treatment for refractory mood disorders, but its success depends critically on target selection. DBS focused on known targets within mood-related frontostriatal and limbic circuits has been variably efficacious.

Here, we examine the effects of stimulation in orbitofrontal cortex (OFC), a key hub for mood-related circuitry that has not been well characterized as a stimulation target. We studied 25 subjects with epilepsy who were implanted with intracranial electrodes for seizure localization. Baseline depression traits ranged from mild to severe. We serially assayed mood state over several days using a validated questionnaire. Continuous electrocorticography enabled investigation of neurophysiological correlates of mood-state changes.

We used implanted electrodes to stimulate OFC and other brain regions while collecting verbal mood reports and questionnaire scores. We found that unilateral stimulation of the lateral OFC produced acute, dose-dependent mood-state improvement in subjects with moderate-to-severe baseline depression. Stimulation suppressed low-frequency power in OFC, mirroring neurophysiological features that were associated with positive mood states during natural mood fluctuation. Stimulation potentiated single-pulse-evoked responses in OFC and modulated activity within distributed structures implicated in mood regulation.

Behavioral responses to stimulation did not include hypomania and indicated an acute restoration to non-depressed mood state.

Together, these findings indicate that lateral OFC stimulation broadly modulates mood-related circuitry to improve mood state in depressed patients, revealing lateral OFC as a promising new target for therapeutic brain stimulation in mood disorders.

Experimental Design and Locations of Stimulated Sites


Introduction

A modern conception of mood disorders holds that the signs and symptoms of emotional dysregulation are manifestations of abnormal activity within large-scale brain networks. This view, evolved from earlier hypotheses based on chemical imbalances in the brain, has fueled interest in selective neural network modulation with deep brain stimulation (DBS). Although the potential for precise therapeutic intervention with DBS is promising, its efficacy is sensitive to target selection. In treatment-resistant depression (TRD), for example, well-studied targets for DBS include the subgenual cingulate cortex (SCC) and subcortical structures, but the benefits of DBS in these areas are not clearly established.

A major challenge in this regard relates to the fact that clinical manifestations of mood disorders like TRD are heterogeneous and involve dysfunction in cognitive, affective, and reward systems. Therefore, brain regions that represent a functional confluence of these systems are attractive targets for therapeutic brain stimulation.

Residing within prefrontal cortex, the orbitofrontal cortex (OFC) shares reciprocal connections with amygdala, ventral striatum, insula, and cingulate cortex, areas implicated in emotion regulation. As such, OFC is anatomically well positioned to regulate mood. Functionally, OFC serves as a nexus for sensory integration and has myriad roles related to emotional experience, including predicting and evaluating outcomes, representing reward-driven learning and behavior, and mediating subjective hedonic experience.

Converging lines of evidence from lesion studies, functional neuroimaging, and intracranial physiology point to a role of OFC in emotion processing. Clinically depressed individuals have abnormally high levels of activity in OFC as ascertained by functional neuroimaging, and recovery from depression is associated with decreased OFC activity.

Repetitive transcranial magnetic stimulation (rTMS) of OFC was shown to improve mood in a single-subject case study and in a series of patients who otherwise did not respond to rTMS delivered to conventional (non-OFC) targets, but whether intracranial OFC stimulation can reliably alleviate mood symptoms is not known.

Furthermore, OFC is relatively large, and functional distinctions between medial and lateral subregions are known, raising the possibility that subregions of OFC may play distinct roles in mood regulation.

More generally, it remains poorly understood how direct brain stimulation affects local and network-level neural activity to produce complex emotional responses.

We hypothesized that brain networks involved in emotion processing include regions, like OFC, that represent previously unrecognized stimulation targets for alleviation of neuropsychiatric symptoms. To test this hypothesis, we developed a system for studying mood-related neural activity in subjects with epilepsy who were undergoing intracranial electroencephalography (iEEG) for seizure localization. In addition to direct recording of neural activity, iEEG allows delivery of defined electrical stimulation pulses with high spatiotemporal precision and concurrent measurement of behavioral correlates.

Using serial quantitative mood assessments and continuous iEEG recordings, we investigated the acute effects of OFC stimulation on mood state and characterized corresponding changes in neural activity locally and in distributed brain regions. We found that lateral OFC stimulation acutely improved mood in subjects with baseline depression and that these therapeutic effects correlated with modulation of large-scale brain networks implicated in emotion processing.

Our results suggest that lateral OFC stimulation improves mood state at least partly through mechanisms that underlie natural mood variation, and they are consistent with the notion that OFC integrates multiple streams of information relevant to affective cognition.

Discussion

Here, we show that human lateral OFC is a promising target for brain stimulation to alleviate mood symptoms. Unilateral stimulation of lateral OFC consistently produced acute, dose-dependent mood-state improvement across subjects with baseline depression traits. Locally, lateral OFC stimulation increased cortical excitability and suppressed low-frequency power, a feature we found to be negatively correlated with mood state. At the network level, lateral OFC stimulation modulated activity within a network of limbic and paralimbic structures implicated in mood regulation.

Relief of mood symptoms afforded by lateral OFC stimulation may arise from OFC acting as a hub within brain networks that mediate affective cognition.

Previous studies identify OFC as a key node within an emotional salience network activated by anticipation of aversive events. Within this network, OFC is thought to integrate multimodal sensory information and guide emotion-related decisions by evaluating expected outcomes.

Stimulation of other brain regions that encode value information, such as the subgenual cingulate cortex (SCC) and ventral striatum, has also been found to improve mood, highlighting the relevance of reward circuits to mood state.

Here, using iEEG, we extend previous studies that employed indirect imaging biomarkers, such as glucose metabolism or blood oxygen level, to show that direct OFC stimulation modulates neural activity within a distributed network of brain regions. Our finding that lateral OFC stimulation was more effective than medial OFC stimulation for mood symptom relief advances the idea that these regions have differential contributions to depression, likely due to differences in network connectivity.

We did not observe consistent differences based on laterality of stimulation, but future studies powered to discern such differences may reveal additional layers of specificity.

Although few behavioral variables have been identified to predict which individuals will respond to stimulation of a given target for depression, we found that only patients with significant trait depression experienced mood-state improvement with lateral OFC stimulation. Based on speech-rate analysis, lateral OFC stimulation did not produce supraphysiological mood states, as can be seen with stimulation of other targets, but did specifically elevate speech rate in trait-depressed subjects, resulting in a level similar to that of the non-depressed subjects. Local neurophysiological changes induced by stimulation were opposite of those observed during spontaneous negative mood states. Taken together, these findings suggest that the effect of lateral OFC stimulation is to normalize or suppress pathological activity in circuits that mediate natural mood variation.

Our observations provide potential clues about how lateral OFC stimulation may impact mood. Although functional imaging biomarkers of depression are not firmly established, increased activity in lateral OFC is seen in patients with depression and normalizes with effective antidepressant treatment, and lateral OFC hyperactivity has been proposed as a mood-state marker of depression.

Thus, a speculative possibility is that our stimulation paradigm works by decreasing OFC theta power in a way that may impact baseline hyperactivity. We cannot exclude the possibility that the mechanisms underlying mood improvement with lateral OFC stimulation involve multiple regions and may at least partially overlap with mechanisms responsible for mood improvement with stimulation of SCC. In fact, based on anatomic and functional connectivity between these regions, and the constellation of white matter tracts likely affected by stimulation of these sites, some mechanistic overlap seems probable.

Our results have potential implications for interventional treatments for psychiatric disorders like treatment-resistant depression (TRD) and anxiety. Deep brain stimulation (DBS) efficacy for TRD is inconsistent, and a major thrust of the field has been to understand and circumvent inter-subject variability. For example, the heterogeneous responses seen with SCC stimulation may relate to laterality and precise anatomic electrode position. In our study, positive mood responses were induced by unilateral stimulation of the OFC in either hemisphere, and although stimulation of lateral OFC improved mood more than stimulation of medial OFC, we observed mood improvement with stimulation across lateral OFC and did not see evidence of fine subregion specificity. These findings suggest that lateral OFC may be a more forgiving site for therapeutic stimulation than previously reported targets.

Another practical advantage of OFC relative to other targets is that the cortical surface is generally more surgically accessible than deep brain targets and that the ability to forego parenchymal penetration may impart lower risk during electrode implantation. Although seizures are a theoretical risk with any cortical stimulation, this risk is thought to be acceptably low, and we did not observe seizures during OFC stimulation.

Despite the widespread use of DBS in clinical and research applications, the mechanisms by which focal brain stimulation modulates network activity to produce complex behavioral changes remain largely unknown. The effects of stimulation are not limited to the targeted region, and stimulation-induced activity can propagate through anatomical connections to influence distributed networks in the brain. Previous studies have shown that target connectivity may determine likelihood of response to DBS.

Deciphering the precise mechanism of mood improvement with OFC stimulation requires future study, but our observation that stimulation suppresses low-frequency activity broadly across multiple sites suggests a possible local inhibitory effect that reverberates through connected brain regions. Consistent with this, inhibitory transcranial magnetic stimulation of OFC was recently reported to improve mood in one depressed patient. Since the OFC is relatively large and bilateral, it is possible that the mood effects we observed could be improved by more widespread stimulation.

Our study has limitations. The sample size was relatively small, reflecting the rare opportunity to directly and precisely target brain stimulation in human subjects. Although electrode coverage was generally extensive in our subjects, basal ganglia structures known to be important for mood are not typically implanted with electrodes for the purposes of seizure localization. Subjective self-report of mood has intrinsic limitations but remains the best instrument available to measure internal experience.

Our subjects, who had medically refractory epilepsy, may not be representative of all patients with mood disorders. While we cannot rule out the possibility that mood symptoms in our subjects had a seizure-specific etiology, the observed effects of lateral OFC stimulation were robust in a patient group with diverse underlying seizure pathology. To establish generalizability, our findings will need to be replicated in other cohorts.

Finally, it is possible that the acute effects of stimulation we observed may not translate into chronic efficacy for mood disorders in clinical settings. Indeed, rapid mood changes have been previously reported in TRD patients treated with bilateral DBS of SCC and subcortical targets. Whether chronic OFC stimulation can produce durable mood improvement is an important question for future study, ideally under controlled clinical trial conditions with appropriate monitoring of relevant outcomes and adverse events.

The clinical heterogeneity of mood disorders suggests that brain stimulation paradigms may need to be tailored for individual patients. Importantly, this study is one of few to assess the functional consequences of brain stimulation with direct neural recordings. The approach we used for serial quantitative mood state assessment may be useful for sensitively tracking symptoms of mood disorders during clinical interventions, including DBS trials. Our identification of a novel, robust stimulation target and our observation of stimulation-induced changes in endogenous mood-related neural features together set the stage for the next generation of stimulation therapies. OFC theta power may be useful for optimization of stimulation parameters for non-invasive stimulation modalities targeting the OFC in depression, and further characterization of mood biomarkers might enable personalized closed-loop stimulation devices that ameliorate debilitating mood symptoms.

Although the OFC is currently among the least understood brain regions, it may ultimately prove important for the treatment of refractory mood disorders.

Study: Current Biology

University of California, San Francisco (UCSF)

MIND IN THE MIRROR * Neuroplasticity in a Nutshell * Mindsight, Change Your Brain and Your Life – Daniel J. Siegel MD.

We come to know our own minds through our interactions with others.

As we welcome the neural reality of our interconnected lives, we can gain new clarity about who we are, what shapes us, and how we in turn can shape our lives.

Riding the Resonance Circuits

It’s folk wisdom that couples in long and happy relationships look more and more alike as the years go by. Peer closely at those old photographs, and you’ll see that the couples haven’t actually grown similar noses or chins. Instead, they have reflected each other’s expressions so frequently and so accurately that the hundreds of tiny muscle attachments to their skin have reshaped their faces to mirror their union. How this happens gives us a window on one of the most fascinating recent discoveries about the brain, and about how we come to “feel felt” by one another.

Some of what I’ll describe here is still speculative, but it can shed light on the most intimate ways we experience mindsight in our daily lives.

Neurons That Mirror Our Minds

In the mid-1990s, a group of Italian neuroscientists were studying the premotor area of a monkey’s cortex. They were using implanted electrodes to monitor individual neurons, and when the monkey ate a peanut, a certain motor neuron fired. No surprise there; that’s what they expected. But what happened next changed the course of our insight into the mind. When the monkey simply watched one of the researchers eat a peanut, that same motor neuron fired. Even more startling: the researchers discovered that this happened only when the motion being observed was goal-directed. Somehow, the circuits they had discovered were activated only by an intentional act.

This mirror neuron system has since been identified in human beings and is now considered the root of empathy. Beginning from the perception of a basic behavioral intention, our more elaborated human prefrontal cortex enables us to map out the minds of others. Our brains use sensory information to create representations of others’ minds, just as they use sensory input to create images of the physical world. The key is that mirror neurons respond only to an act with intention, with a predictable sequence or sense of purpose. If I simply lift up my hand and wave it randomly, your mirror neurons will not respond. But if I carry out any act you can predict from experience, your mirror neurons will “figure out” what I intend to do before I do it. So when I lift up my hand with a cup in it, you can predict at a synaptic level that I intend to drink from the cup. Not only that, the mirror neurons in the premotor area of your frontal cortex will get you ready to drink as well.

We see an act and we ready ourselves to imitate it. At the simplest level, that’s why we get thirsty when others drink, and why we yawn when others yawn. At the most complex level, mirror neurons help us understand the nature of culture and how our shared behaviors bind us together, mind to mind. The internal maps created by mirror neurons are automatic, they do not require consciousness or effort. We are hardwired from birth to detect sequences and make maps in our brains of the internal state, the intentional stance, of other people. And this mirroring is “cross-modal”, it operates in all sensory channels, not just vision, so that a sound, a touch, a smell, can cue us to the internal state and intentions of another.

By embedding the mind of another into our own firing patterns, our mirror neurons may provide the foundation for our mindsight maps.

Now let’s take another step. Based on these sensory inputs, we can mirror not only the behavioral intentions of others, but also their emotional states. In other words, this is the way we not only imitate others’ behaviors but actually come to resonate with their feelings, the internal mental flow of their minds. We sense not only what action is coming next, but also the emotional energy that underlies the behavior. In developmental terms, if the behavioral patterns we see in our caregivers are straightforward, we can then map sequences with security, knowing what might happen next, embedding intentions of kindness and care, and so create in ourselves a mindsight lens that is focused and clear. If, on the other hand, we’ve had parents who are confusing and hard to “read,” our own sequencing circuits may create distorted maps.

So from our earliest days, the basic circuitry of mindsight can be laid down with a solid foundation, or created on shaky ground.

Knowing Me, Knowing You

I once organized an interdisciplinary think tank of researchers to explore how the mind might use the brain to perceive itself. One idea we discussed is that we make maps of intention using our cortically based mirror neurons and then transfer this information downward to our subcortical regions. A neural circuit called the insula seems to be the information superhighway between the mirror neurons and the limbic areas, which in turn send messages to the brainstem and the body proper. This is how we can come to resonate physiologically with others, how even our respiration, blood pressure, and heart rate can rise and fall in sync with another’s internal state.

These signals from our body, brainstem, and limbic areas then travel back up the insula to the middle prefrontal areas. I’ve come to call this set of circuits, from mirror neurons to subcortical regions, back up to the middle prefrontal areas, the “resonance circuits.” This is the pathway that connects us to one another. Notice what happens when you’re at a party with friends. If you approach a group that is laughing, you’ll probably find yourself smiling or chuckling even before you’ve heard the joke. Or perhaps you’ve gone to dinner with people who’ve suffered a recent loss. Without their saying anything, you may begin to sense a feeling of heaviness in your chest, a welling up in your throat, tears in your eyes. Scientists call this emotional contagion. The internal states of others, from joy and play to sadness and fear, directly affect our own state of mind. This contagion can even make us interpret unrelated events with a particular bias, so that, for example, after we’ve been around someone who is depressed we interpret someone else’s seriousness as sadness.

For therapists, it’s crucial to keep this bias in mind. Otherwise a prior session may shape our internal state so much that we aren’t open and receptive to the new person with whom we need to be resonating.

Our awareness of another person’s state of mind depends on how well we know our own. The insula brings the resonating state within us upward into the middle prefrontal cortex, where we make a map of our internal world. So we feel others’ feelings by actually feeling our own, we notice the belly fill with laughter at the party or with sadness at the funeral home. All of our subcortical data, our heart rate, breathing, and muscle tension, our limbic coloring of emotion, travels up the insula to inform the cortex of our state of mind. This is the main reason that people who are more aware of their bodies have been found to be more empathic.

The insula is the key: When we can sense our own internal state, the fundamental pathway for resonating with others is open as well.

The mind we first see in our development is the internal state of our caregiver. We coo and she smiles, we laugh and his face lights up. So we first know ourselves as reflected in the other. One of the most interesting ideas we discussed in our study group is that our resonance with others may actually precede our awareness of ourselves. Developmentally and evolutionarily, our modern self-awareness circuitry may be built upon the more ancient resonance circuits that root us in our social world.

How, then, do we discern who is “me” and who is “you”? The scientists in our group suggested that we may adjust the location and firing pattern of the prefrontal images to perceive our own mind. Increases in the registration of our own bodily sensations combined with a decrease in our mirror neuron response may help us know that these tears are mine, not yours, or that this anger is indeed from me, not from you. This may seem like a purely philosophical and theoretical question until you are in the midst of a marital conflict and find yourself arguing about who is the angry one, you or your spouse. And certainly, as a therapist, if I do not track the distinction between me and other, I can become flooded with my patients’ feelings, lose my ability to help, and also burn out quickly.

When resonance literally becomes mirroring, when we confuse me with you, then objectivity is lost. Resonance requires that we remain differentiated, that we know who we are, while also becoming linked. We let our own internal states be influenced by, but not become identical with, those of the other person.

It will take much more research to elucidate the exact way our mindsight maps make this distinction, but the basic issues are clear. The energy and information flow that we sense both in ourselves and in others rides the resonance circuits to enable mindsight.

As I consider the resonance circuits, two mind lessons stand out for me. One is that becoming open to our body’s states, the feelings in our heart, the sensations in our belly, the rhythm of our breathing, is a powerful source of knowledge. The insula flow that brings up this information and energy colors our cortical awareness, shaping how we reason and make decisions. We cannot successfully ignore or suppress these subcortical springs. Becoming open to them is a gateway to clear mindsight.

The second lesson is that relationships are woven into the fabric of our interior world. We come to know our own minds through our interactions with others. Our mirror neuron perceptions, and the resonance they create, act quickly and often outside of awareness. Mindsight permits us to invite these fast and automatic sources of our mental life into the theater of consciousness. As we welcome the neural reality of our interconnected lives, we can gain new clarity about who we are, what shapes us, and how we in turn can shape our lives.

from Mindsight: Change Your Brain and Your Life, by Daniel J. Siegel MD

SOUL DAMAGE. The Precious Resources of Our Time and Attention – Linda and Charlie Bloom.

How do we nurture our soul?

Soul-damage occurs when we deny ourselves the kinds of enriching experiences that we need to have in order to thrive, rather than simply survive, experiences that make our heart sing, that infuse our lives with a sense of passion and vitality.

Most of us know that there are certain kinds of experiences that nurture our souls and others that don’t. We also know that “experiences” are not “things,” and that although things like money, homes, and motor vehicles do matter in our lives, they do not nourish our hearts and spirits. There’s nothing wrong with something that enhances only the material, as long as we don’t expect more from it than that. For most of us, separating unrealistic expectations from realistic ones is not easy.

If we spent even 10% as much of our time and energy on matters of the heart as we do on matters that relate to the fulfillment of our ego’s desires, our quality of life would transform.

For most of us, even 10% would represent a several-fold increase of our time. Many of us give more time and concern to the maintenance of our cars than to our deeper needs. We may insist that what we most value is love, inner peace, family, or “truth,” yet our lives may not reflect this priority.

It has been said that you can know a person by the way in which they spend their time, not by their words. What we truly love is what we give our energies to, and this may not be what we say matters most to us.

Looking honestly at how we actually spend our time and attention is the first and most critical step in the process of bringing integrity into our lives. Until we have done so, self-deception and rationalization will permeate our daily existence.

Psychology Today

Does using testosterone to treat depression work? – Tim Newman.

Medical professionals have been discussing whether testosterone treatment can actually reduce depressive symptoms in men for many years. A recent meta-analysis attempts to draw a clearer picture.

Depression is a major global concern. Each year, major depressive disorder affects an estimated 16.1 million adults in the United States alone.

The World Health Organization (WHO) describes depression as “the leading cause of ill health and disability worldwide.”

There are drugs available to manage depressive symptoms, but they do not work for everyone. In fact, a significant percentage of people do not experience long-term relief, even after trying multiple drugs. Existing depression therapies only work for a subset of the population. For this reason, it is vital to understand whether testosterone might help in treatment-resistant cases.

Medical News Today

MINDING THE BRAIN. Neuroplasticity in a Nutshell – Daniel J. Siegel MD * Neuroplasticity – Wikipedia.

“The brain is so complicated it staggers its own imagination.”

“Neurons that fire together, wire together”, “neurons that fire out of sync, fail to link”.

We can use the power of our mind to change the firing patterns of our brain and thereby alter our feelings, perceptions, and responses. The power to direct our attention, focus, has within it the power to shape our brain’s firing patterns, as well as the power to shape the architecture of the brain itself.

The causal arrows between brain and mind point in both directions. When we focus our attention in specific ways, we create neural firing patterns that permit previously separated areas to become linked and integrated. The synaptic linkages are strengthened, the brain becomes more interconnected, and the mind becomes more adaptive.

Daniel J. Siegel, MD, is a clinical professor of psychiatry at the UCLA School of Medicine, co-director of the UCLA Mindful Awareness Research Center, and executive director of the Mindsight Institute.

Neuroplasticity

/ˌnjʊərəʊplæˈstɪsɪti/ noun

The ability of the brain to form and reorganize synaptic connections, especially in response to learning or experience or following injury. “neuroplasticity offers real hope to everyone from stroke victims to dyslexics”

It’s easy to get overwhelmed thinking about the brain. With more than one hundred billion interconnected neurons stuffed into a small, skull-enclosed space, the brain is both dense and intricate. And as if that weren’t complicated enough, each of your neurons has, on average, ten thousand connections, or synapses, linking it to other neurons. In the skull portion of the nervous system alone, there are hundreds of trillions of connections linking the various neural groupings into a vast spiderweb-like network. Even if we wanted to, we couldn’t live long enough to count each of those synaptic linkages.

Given this number of synaptic connections, the brain’s possible on-off firing patterns, its potential for various states of activation, has been calculated to be ten to the millionth power, or ten times ten one million times. This number is thought to be larger than the number of atoms in the known universe. It also far exceeds our ability to experience in one lifetime even a small percentage of these firing possibilities. As a neuroscientist once said, “The brain is so complicated it staggers its own imagination.” The brain’s complexity gives us virtually infinite choices for how our mind will use those firing patterns to create itself. If we get stuck in one pattern or the other, we’re limiting our potential.
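The scale of these figures is easy to check with back-of-envelope arithmetic. The sketch below is purely illustrative; the ~10^80 atoms quoted for the observable universe is a common outside estimate, not a figure from the text:

```python
# Back-of-envelope arithmetic for the figures quoted above.
# All numbers are rough estimates, not measurements.
import math

neurons = 100_000_000_000      # ~one hundred billion neurons
synapses_per_neuron = 10_000   # ~ten thousand connections each

total_synapses = neurons * synapses_per_neuron
print(f"total synaptic connections ~ 10^{int(math.log10(total_synapses))}")
# prints: total synaptic connections ~ 10^15 (a quadrillion-scale number)

# The quoted count of possible firing patterns, 10 to the millionth power,
# dwarfs the ~10^80 atoms commonly estimated for the observable universe.
# Comparing exponents is enough: the numbers themselves are too big to print.
log_patterns = 1_000_000
log_atoms_in_universe = 80
print(log_patterns > log_atoms_in_universe)  # prints: True
```

Even written as exponents, the gap is so large that no lifetime of experience could sample more than a vanishing fraction of those states, which is the point the paragraph above is making.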

Patterns of neural firing are what we are looking for when we watch a brain scanner “light up” as a certain task is being performed. What scans often measure is blood flow. Since neural activity increases oxygen use, an increased flow of blood to a given area of the brain implies that neurons are firing there. Research studies correlate this inferred neural firing with specific mental functions, such as focusing attention, recalling a past event, or feeling pain.

We can only imagine how a scan of my brain might have looked when I went down the low road in a tense encounter with my son one day: an abundance of limbic firing with increased blood flow to my irritated amygdala and a diminished flow to my prefrontal areas as they began to shut down. Sometimes the out-of-control firing of our brain drives what we feel, how we perceive what is happening, and how we respond. Once my prefrontal region was offline, the firing patterns from throughout my subcortical regions could dominate my internal experience and my interactions with my kids. But it is also true that when we’re not traveling down the low road we can use the power of our mind to change the firing patterns of our brain and thereby alter our feelings, perceptions, and responses.

One of the key practical lessons of modern neuroscience is that the power to direct our attention has within it the power to shape our brain’s firing patterns, as well as the power to shape the architecture of the brain itself.

As you become more familiar with the various parts of the brain, you can more easily grasp how the mind uses the firing patterns in these various parts to create itself. It bears repeating that while the physical property of neurons firing is correlated with the subjective experience we call mental activity, no one knows exactly how this actually occurs. But keep this in the front of your mind:

Mental activity stimulates brain firing as much as brain firing creates mental activity.

When you voluntarily choose to focus your attention, say, on remembering how the Golden Gate Bridge looked one foggy day last fall, your mind has just activated the visual areas in the posterior part of your cortex. On the other hand, if you were undergoing brain surgery, the physician might place an electrical probe to stimulate neural firing in that posterior area, and you’d also experience a mental image of some sort.

The causal arrows between brain and mind point in both directions.

Keeping the brain in mind in this way is like knowing how to exercise properly. As we work out, we need to coordinate and balance the different muscle groups in order to keep ourselves fit. Similarly, we can focus our minds to build the specific “muscle groups” of the brain, reinforcing their connections, establishing new circuitry, and linking them together in new and helpful ways. There are no muscles in the brain, of course, but rather differentiated clusters of neurons that form various groupings called nuclei, parts, areas, zones, regions, circuits, or hemispheres.

And just as we can intentionally activate our muscles by flexing them, we can “flex” our circuits by focusing our attention to stimulate the firing in those neuronal groups. Using mindsight to focus our attention in ways that integrate these neural circuits can be seen as a form of “brain hygiene.”

WHAT FIRES TOGETHER, WIRES TOGETHER

You may have heard this before: as neurons fire together, they wire together. But let’s unpack this statement piece by piece. When we have an experience, our neurons become activated. What this means is that the long length of the neuron, the axon, has a flow of ions in and out of its encasing membrane that functions like an electrical current. At the far end of the axon, the electrical flow leads to the release of a chemical neurotransmitter into the small synaptic space that joins the firing neuron to the next, postsynaptic neuron. This chemical release activates or deactivates the downstream neuron. Under the right conditions, neural firing can lead to the strengthening of synaptic connections. These conditions include repetition, emotional arousal, novelty, and the careful focus of attention. Strengthening synaptic linkages between neurons is how we learn from experience.
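The Hebbian slogan can be captured as a toy weight-update rule. The sketch below is deliberately minimal and hypothetical; the learning rate and the simple co-firing condition are illustrative assumptions, not a biological model:

```python
# Toy sketch of the Hebbian rule: "neurons that fire together, wire together,"
# and "neurons that fire out of sync, fail to link."
# The rate constant and firing values are illustrative assumptions, not biology.

def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen a synapse only when pre- and postsynaptic neurons co-fire."""
    if pre_active and post_active:
        return weight + rate   # fire together -> wire together
    return weight              # out of sync -> no strengthening

w = 0.5
w = hebbian_update(w, pre_active=True, post_active=True)   # co-firing strengthens
w = hebbian_update(w, pre_active=True, post_active=False)  # out of sync: unchanged
print(round(w, 2))  # prints: 0.6
```

Real synaptic plasticity also depends on timing, neuromodulators, and the conditions listed above (repetition, arousal, novelty, focused attention), none of which this toy rule models; it only illustrates the core idea that co-activation strengthens the link.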

One reason that we are so open to learning from experience is that, from the earliest days in the womb and continuing into our childhood and adolescence, the basic architecture of the brain is very much a work in progress.

During gestation, the brain takes shape from the bottom up, with the brainstem maturing first. By the time we are born, the limbic areas are partially developed but the neurons of the cortex lack extensive connections to one another.

This immaturity, the lack of connections within and among the different regions of the brain, is what gives us that openness to experience that is so critical to learning.

A massive proliferation of synapses occurs during the first years of life. These connections are shaped by genes and chance as well as experience, with some aspects of ourselves being less amenable to the influence of experience than others. Our temperament, for example, has a nonexperiential basis; it is determined in large part by genes and by chance. For instance, we may have a robust approach to novelty and love to explore new things, or we may tend to hang back in response to new situations, needing to “warm up” before we can overcome our initial shyness. Such neural propensities are set up before birth and then directly shape how we respond to the world, and how others respond to us.

From our first days of life, our immature brain is also directly shaped by our interactions with the world, and especially by our relationships. Our experiences stimulate neural firing and sculpt our emerging synaptic connections. This is how experience changes the structure of the brain itself, and could even end up having an influence on our innate temperament.

As we grow, then, an intricate weaving together of the genetic, chance, and experiential input into the brain shapes what we call our “personality,” with all its habits, likes, dislikes, and patterns of response. If you’ve always had positive experiences with dogs and have enjoyed having them in your life, you may feel pleasure and excitement when a neighbor’s new dog comes bounding toward you. But if you’ve ever been severely bitten, your neural firing patterns may instead help create a sense of dread and panic, causing your entire body to shrink away from the pooch. If on top of having had a prior bad experience with a dog you also have a shy temperament, such an encounter may be even more fraught with fear. But whatever your experience and underlying temperament, transformation is possible. Learning to focus your attention in specific therapeutic ways can help you override that old coupling of fear with dogs.

The intentional focus of attention is actually a form of self-directed experience: It stimulates new patterns of neural firing to create new synaptic linkages.

You may be wondering, “How can experience, even a mental activity such as directing attention, actually shape the structure of the brain?” As we’ve seen, experience means neural firing. When neurons fire together, the genes in their nuclei, their master control centers, become activated and “express” themselves. Gene expression means that certain proteins are produced. These proteins then enable the synaptic linkages to be constructed anew or to be strengthened.

Experience also stimulates the production of myelin, the fatty sheath around axons, resulting in as much as a hundredfold increase in the speed of conduction down the neuron’s length. And as we now know, experience can also stimulate neural stem cells to differentiate into wholly new neurons in the brain.

This neurogenesis, along with synapse formation and myelin growth, can take place in response to experience throughout our lives. As discussed before, the capacity of the brain to change is called neuroplasticity. We are now discovering how the careful focus of attention amplifies neuroplasticity by stimulating the release of neurochemicals that enhance the structural growth of synaptic linkages among the activated neurons.

Epigenesis

An additional piece of the puzzle is now emerging. Researchers have discovered that early experiences can change the long-term regulation of the genetic machinery within the nuclei of neurons through a process called epigenesis.

If early experiences are positive, for example, chemical controls over how genes are expressed in specific areas of the brain can alter the regulation of our nervous system in such a way as to reinforce the quality of emotional resilience. If early experiences are negative, however, it has been shown that alterations in the control of genes influencing the stress response may diminish resilience in children and compromise their ability to adjust to stressful events in the future.

The changes wrought through epigenesis will continue to be in the science news as part of our exploration of how experience shapes who we are.

In sum, experience creates the repeated neural firing that can lead to gene expression, protein production, and changes in both the genetic regulation of neurons and the structural connections in the brain. By harnessing the power of awareness to strategically stimulate the brain’s firing, mindsight enables us to voluntarily change a firing pattern that was laid down involuntarily. When we focus our attention in specific ways, we create neural firing patterns that permit previously separated areas to become linked and integrated. The synaptic linkages are strengthened, the brain becomes more interconnected, and the mind becomes more adaptive.

THE BRAIN IN THE BODY

It’s important to remember that the activity of what we’re calling the “brain” is not just in our heads. For example, the heart has an extensive network of nerves that process complex information and relay data upward to the brain in the skull. So, too, do the intestines, and all the other major organ systems of the body. The dispersion of nerve cells throughout the body begins during our earliest development in the womb, when the cells that form the outer layer of the embryo fold inward to become the origin of our spinal cord. Clusters of these wandering cells then start to gather at one end of the spinal cord, ultimately to become the skull-encased brain. But other neural tissue becomes intricately woven with our musculature, our skin, our heart, our lungs, and our intestines. Some of these neural extensions form part of the autonomic nervous system, which keeps the body working in balance whether we are awake or asleep; other circuitry forms the voluntary portion of the nervous system, which allows us to intentionally move our limbs and control our respiration. The simple connection of sensory nerves from the periphery to our spinal cord and then upward through the various layers of the skull-encased brain allows signals from the outer world to reach the cortex, where we can become aware of them. This input comes to us via the five senses that permit us to perceive the outer physical world.

The neural networks throughout the interior of the body, including those surrounding the hollow organs, such as the intestines and the heart, send complex sensory input to the skull-based brain. This data forms the foundation for visceral maps that help us have a “gut feeling” or a “heartfelt” sense. Such input from the body forms a vital source of intuition and powerfully influences our reasoning and the way we create meaning in our lives.

Other bodily input comes from the impact of molecules known as hormones. The body’s hormones, together with chemicals from the foods and drugs we ingest, flow into our bloodstream and directly affect the signals sent along neural routes. And, as we now know, even our immune system interacts with our nervous system. Many of these effects influence the neurotransmitters that operate at the synapses. These chemical messengers come in hundreds of varieties, some of which, such as dopamine and serotonin, have become household names thanks in part to drug company advertising. These substances have specific and complex effects on different regions of our nervous system. For example, dopamine is involved in the reward systems of the brain; behaviors and substances can become addictive because they stimulate dopamine release. Serotonin helps smooth out anxiety, depression, and mood fluctuations. Another chemical messenger is oxytocin, which is released when we feel close and attached to someone.

Throughout this book, I use the general term brain to encompass all of this wonderful complexity of the body proper as it intimately intertwines with its chemical environment and with the portion of neural tissue in the head. This is the brain that both shapes and is shaped by our mind. This is also the brain that forms one point of the triangle of well-being that is so central to mindsight.

By looking at the brain as an embodied system beyond its skull case, we can actually make sense of the intimate dance of the brain, the mind, and our relationships with one another. We can also recruit the power of neuroplasticity to repair damaged connections and create new, more satisfying patterns in our everyday lives.

from

Mindsight: Change your brain and your life

by Daniel J. Siegel MD

get it at Amazon.com

See also: MINDSIGHT, OUR SEVENTH SENSE, an introduction. Change your brain and your life – Daniel J. Siegel MD.

Neuroplasticity – Wikipedia

Neuroplasticity, also known as brain plasticity and neural plasticity, is the ability of the brain to change throughout an individual’s life, e.g., brain activity associated with a given function can be transferred to a different location, the proportion of grey matter can change, and synapses may strengthen or weaken over time.

Research in the latter half of the 20th century showed that many aspects of the brain can be altered (or are “plastic”) even through adulthood. However, the developing brain exhibits a higher degree of plasticity than the adult brain.

Neuroplasticity can be observed at multiple scales, from microscopic changes in individual neurons to larger-scale changes such as cortical remapping in response to injury. Behavior, environmental stimuli, thought, and emotions may also cause neuroplastic change through activity-dependent plasticity, which has significant implications for healthy development, learning, memory, and recovery from brain damage.

At the single cell level, synaptic plasticity refers to changes in the connections between neurons, whereas non-synaptic plasticity refers to changes in their intrinsic excitability.

Neurobiology

One of the fundamental principles underlying neuroplasticity is based on the idea that individual synaptic connections are constantly being removed or recreated, largely dependent upon the activity of the neurons that bear them. The activity-dependence of synaptic plasticity is captured in the aphorism which is often used to summarize Hebbian theory: “neurons that fire together, wire together”/”neurons that fire out of sync, fail to link”. If two nearby neurons often produce an impulse in close temporal proximity, their functional properties may converge. Conversely, neurons that are not regularly activated simultaneously may be less likely to functionally converge.
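The Hebbian rule summarized above can be illustrated with a toy rate-free simulation. This is a minimal sketch under stated assumptions, not a biophysical model: the function name, learning rate, and decay constant are all illustrative inventions, and the weight is a single scalar standing in for a synapse's strength.

```python
# Toy Hebbian learning: the synaptic weight grows when the two neurons
# are active together ("fire together, wire together") and decays
# slightly when they fire out of sync ("fail to link").
# All names and constants here are illustrative assumptions.

def hebbian_update(weight, pre_active, post_active,
                   rate=0.1, decay=0.01):
    """Strengthen the synapse on coincident activity; otherwise let it decay."""
    if pre_active and post_active:
        return weight + rate * (1.0 - weight)   # saturating growth toward 1.0
    return weight * (1.0 - decay)               # slow weakening otherwise

# Two neurons repeatedly co-active: the synapse strengthens.
w = 0.2
for _ in range(50):
    w = hebbian_update(w, pre_active=True, post_active=True)
assert w > 0.9

# The same starting synapse with uncorrelated firing weakens instead.
w2 = 0.2
for _ in range(50):
    w2 = hebbian_update(w2, pre_active=True, post_active=False)
assert w2 < 0.2
```

The saturating growth term keeps the weight bounded, a crude stand-in for the homeostatic limits real synapses have; without it, pure Hebbian strengthening grows without bound.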

Cortical maps

Cortical organization, especially in sensory systems, is often described in terms of maps. For example, sensory information from the foot projects to one cortical site and the projections from the hand target another site. As a result, the cortical representation of sensory inputs from the body resembles a somatotopic map, often described as the sensory homunculus.

In the late 1970s and early 1980s, several groups began exploring the impact of interfering with sensory inputs on cortical map reorganization. Michael Merzenich, Jon Kaas and Doug Rasmusson were some of those researchers. They found that if the cortical map is deprived of its input, it activates at a later time in response to other, usually adjacent inputs. Their findings have since been corroborated and extended by many research groups. Merzenich’s (1984) study involved the mapping of owl monkey hands before and after amputation of the third digit. Before amputation, there were five distinct areas, one corresponding to each digit of the experimental hand. Sixty-two days following amputation of the third digit, the area in the cortical map formerly occupied by that digit had been invaded by the previously adjacent second and fourth digit zones. The areas representing digits one and five are not located directly beside the area representing digit three, so these regions remained, for the most part, unchanged following amputation. This study demonstrates that only those regions that border a certain area invade it to alter the cortical map. In the somatic sensory system, in which this phenomenon has been most thoroughly investigated, JT Wall and J Xu have traced the mechanisms underlying this plasticity. Reorganization is not cortically emergent, but occurs at every level in the processing hierarchy; this produces the map changes observed in the cerebral cortex.
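The invasion rule in the amputation study — only the territories bordering the deprived zone expand into it — can be sketched as a toy one-dimensional map. This is an illustrative abstraction, not the experimenters' method: the map layout, digit labels, and the even split of the vacated territory are all assumptions made for the example.

```python
# Toy 1-D cortical map: each site is labeled with the digit whose input
# drives it. When a digit's input is lost, only its two bordering digits
# invade the vacated territory; non-adjacent digits are unchanged.
# Layout and labels are illustrative assumptions.

def remap(cortical_map, lost):
    """Split the deprived territory between its two bordering digits."""
    idx = [i for i, d in enumerate(cortical_map) if d == lost]
    if not idx:
        return list(cortical_map)
    lo, hi = idx[0], idx[-1]
    left = cortical_map[lo - 1] if lo > 0 else cortical_map[hi + 1]
    right = cortical_map[hi + 1] if hi + 1 < len(cortical_map) else left
    new_map = list(cortical_map)
    mid = (lo + hi + 1) // 2
    for i in idx:
        new_map[i] = left if i < mid else right
    return new_map

# Five digits, three cortical sites each; digit 3's input is removed.
hand = [1]*3 + [2]*3 + [3]*3 + [4]*3 + [5]*3
after = remap(hand, lost=3)
# Digits 2 and 4 invade the vacated zone; digits 1 and 5 stay put.
assert after == [1]*3 + [2]*4 + [4]*5 + [5]*3
```

The point the sketch makes explicit is locality: because only adjacent representations take over the deprived sites, the territories of digits one and five end up exactly where they started, matching the study's finding.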

Merzenich and William Jenkins (1990) initiated studies relating sensory experience, without pathological perturbation, to cortically observed plasticity in the primate somatosensory system, with the finding that sensory sites activated in an attended operant behavior increase in their cortical representation. Shortly thereafter, Ford Ebner and colleagues (1994) made similar efforts in the rodent whisker barrel cortex (also part of the somatosensory system). These two groups largely diverged over the years. The rodent whisker barrel efforts became a focus for Ebner, Matthew Diamond, Michael Armstrong-James, Robert Sachdev, and Kevin Fox. Great inroads were made in identifying the locus of change as being at cortical synapses expressing NMDA receptors, and in implicating cholinergic inputs as necessary for normal expression. The work of Ron Frostig and Daniel Polley (1999, 2004) identified behavioral manipulations causing a substantial impact on the cortical plasticity in that system.

Merzenich and DT Blake (2002, 2005, 2006) went on to use cortical implants to study the evolution of plasticity in both the somatosensory and auditory systems. Both systems show similar changes with respect to behavior. When a stimulus is cognitively associated with reinforcement, its cortical representation is strengthened and enlarged. In some cases, cortical representations can increase two to threefold in 1-2 days when a new sensory motor behavior is first acquired, and changes are largely finalised within at most a few weeks. Control studies show that these changes are not caused by sensory experience alone: they require learning about the sensory experience, they are strongest for the stimuli that are associated with reward, and they occur with equal ease in operant and classical conditioning behaviors.

An interesting phenomenon involving plasticity of cortical maps is the phenomenon of phantom limb sensation. Phantom limb sensation is experienced by people who have undergone amputations in hands, arms, and legs, but it is not limited to extremities. Although the neurological basis of phantom limb sensation is still not entirely understood it is believed that cortical reorganization plays an important role.

Norman Doidge, following the lead of Michael Merzenich, separates manifestations of neuroplasticity into adaptations that have positive or negative behavioral consequences. For example, if an organism can recover after a stroke to normal levels of performance, that adaptiveness could be considered an example of “positive plasticity”. Changes such as an excessive level of neuronal growth leading to spasticity or tonic paralysis, or excessive neurotransmitter release in response to injury that could result in nerve cell death, are considered as an example of “negative” plasticity. In addition, drug addiction and obsessive-compulsive disorder are both deemed examples of “negative plasticity” by Dr. Doidge, as the synaptic rewiring resulting in these behaviors is also highly maladaptive.

A 2005 study found that the effects of neuroplasticity occur even more rapidly than previously expected. Medical students’ brains were imaged during the period of studying for their exams. In a matter of months, the students’ gray matter increased significantly in the posterior and lateral parietal cortex.

Applications and example

The adult brain is not entirely “hard-wired” with fixed neuronal circuits. There are many instances of cortical and subcortical rewiring of neuronal circuits in response to training as well as in response to injury. There is solid evidence that neurogenesis (birth of brain cells) occurs in the adult mammalian brain, and such changes can persist well into old age. The evidence for neurogenesis is mainly restricted to the hippocampus and olfactory bulb, but current research has revealed that other parts of the brain, including the cerebellum, may be involved as well. However, the degree of rewiring induced by the integration of new neurons in the established circuits is not known, and such rewiring may well be functionally redundant.

There is now ample evidence for the active, experience-dependent reorganization of the synaptic networks of the brain involving multiple inter-related structures including the cerebral cortex. The specific details of how this process occurs at the molecular and ultrastructural levels are topics of active neuroscience research. The way experience can influence the synaptic organization of the brain is also the basis for a number of theories of brain function including the general theory of mind and Neural Darwinism. The concept of neuroplasticity is also central to theories of memory and learning that are associated with experience-driven alteration of synaptic structure and function in studies of classical conditioning in invertebrate animal models such as Aplysia.

Treatment of brain damage

A surprising consequence of neuroplasticity is that the brain activity associated with a given function can be transferred to a different location; this can result from normal experience and also occurs in the process of recovery from brain injury. Neuroplasticity is the fundamental issue that supports the scientific basis for treatment of acquired brain injury with goal-directed experiential therapeutic programs in the context of rehabilitation approaches to the functional consequences of the injury.

Neuroplasticity is gaining popularity as a theory that, at least in part, explains improvements in functional outcomes with physical therapy post-stroke. Rehabilitation techniques that are supported by evidence which suggest cortical reorganization as the mechanism of change include constraint-induced movement therapy, functional electrical stimulation, treadmill training with body-weight support, and virtual reality therapy. Robot assisted therapy is an emerging technique, which is also hypothesized to work by way of neuroplasticity, though there is currently insufficient evidence to determine the exact mechanisms of change when using this method.

One group has developed a treatment that includes increased levels of progesterone injections in brain-injured patients. “Administration of progesterone after traumatic brain injury (TBI) and stroke reduces edema, inflammation, and neuronal cell death, and enhances spatial reference memory and sensory motor recovery.” In a clinical trial, a group of severely injured patients had a 60% reduction in mortality after three days of progesterone injections. However, a study published in the New England Journal of Medicine in 2014 detailing the results of a multi-center NIH-funded phase III clinical trial of 882 patients found that treatment of acute traumatic brain injury with the hormone progesterone provides no significant benefit to patients when compared with placebo.

Vision

For decades, researchers assumed that humans had to acquire binocular vision, in particular stereopsis, in early childhood or they would never gain it. In recent years, however, successful improvements in persons with amblyopia, convergence insufficiency or other stereo vision anomalies have become prime examples of neuroplasticity; binocular vision improvements and stereopsis recovery are now active areas of scientific and clinical research.

Brain training

Several companies have offered so-called cognitive training software programs for various purposes that claim to work via neuroplasticity; one example is Fast ForWord which is marketed to help children with learning disabilities. A systematic meta-analytic review found that “There is no evidence from the analysis carried out that Fast ForWord is effective as a treatment for children’s oral language or reading difficulties”. A 2016 review found very little evidence supporting any of the claims of Fast ForWord and other commercial products, as their task-specific effects fail to generalise to other tasks.

Sensory prostheses

Neuroplasticity is involved in the development of sensory function. The brain is born immature and it adapts to sensory inputs after birth. In the auditory system, congenital hearing impairment, a rather frequent inborn condition affecting 1 of 1000 newborns, has been shown to affect auditory development, and implantation of a sensory prosthesis activating the auditory system has prevented the deficits and induced functional maturation of the auditory system. Due to a sensitive period for plasticity, there is also a sensitive period for such intervention within the first 2-4 years of life. Consequently, in prelingually deaf children, early cochlear implantation, as a rule, allows the children to learn the mother language and acquire acoustic communication.

Phantom limb sensation

In the phenomenon of phantom limb sensation, a person continues to feel pain or sensation within a part of their body that has been amputated. This is strangely common, occurring in 60-80% of amputees. An explanation for this is based on the concept of neuroplasticity, as the cortical maps of the removed limbs are believed to have become engaged with the area around them in the postcentral gyrus. This results in activity within the surrounding area of the cortex being misinterpreted by the area of the cortex formerly responsible for the amputated limb.

The relationship between phantom limb sensation and neuroplasticity is a complex one. In the early 1990s V.S. Ramachandran theorized that phantom limbs were the result of cortical remapping. However, in 1995 Herta Flor and her colleagues demonstrated that cortical remapping occurs only in patients who have phantom pain. Her research showed that phantom limb pain (rather than referred sensations) was the perceptual correlate of cortical reorganization. This phenomenon is sometimes referred to as maladaptive plasticity.

In 2009 Lorimer Moseley and Peter Brugger carried out a remarkable experiment in which they encouraged arm amputee subjects to use visual imagery to contort their phantom limbs into impossible configurations. Four of the seven subjects succeeded in performing impossible movements of the phantom limb. This experiment suggests that the subjects had modified the neural representation of their phantom limbs and generated the motor commands needed to execute impossible movements in the absence of feedback from the body. The authors stated that: “In fact, this finding extends our understanding of the brain’s plasticity because it is evidence that profound changes in the mental representation of the body can be induced purely by internal brain mechanisms, the brain truly does change itself.”

Chronic pain

Individuals who suffer from chronic pain experience prolonged pain at sites that may have been previously injured, yet are otherwise currently healthy. This phenomenon is related to neuroplasticity due to a maladaptive reorganization of the nervous system, both peripherally and centrally. During the period of tissue damage, noxious stimuli and inflammation cause an elevation of nociceptive input from the periphery to the central nervous system. Prolonged nociception from the periphery then elicits a neuroplastic response at the cortical level to change its somatotopic organization for the painful site, inducing central sensitization. For instance, individuals experiencing complex regional pain syndrome demonstrate a diminished cortical somatotopic representation of the hand contralaterally as well as a decreased spacing between the hand and the mouth.

Additionally, chronic pain has been reported to significantly reduce the volume of grey matter in the brain globally, and more specifically at the prefrontal cortex and right thalamus. However, following treatment, these abnormalities in cortical reorganization and grey matter volume resolve, along with the symptoms. Similar results have been reported for phantom limb pain, chronic low back pain and carpal tunnel syndrome.

Meditation

A number of studies have linked meditation practice to differences in cortical thickness or density of gray matter. One of the most well-known studies to demonstrate this was led by Sara Lazar, from Harvard University, in 2000. Richard Davidson, a neuroscientist at the University of Wisconsin, has led experiments in cooperation with the Dalai Lama on effects of meditation on the brain. His results suggest that long-term or short-term practice of meditation results in different levels of activity in brain regions associated with such qualities as attention, anxiety, depression, fear, anger, and the ability of the body to heal itself. These functional changes may be caused by changes in the physical structure of the brain.

Fitness and exercise

Aerobic exercise promotes adult neurogenesis by increasing the production of neurotrophic factors (compounds that promote growth or survival of neurons), such as brain-derived neurotrophic factor (BDNF), insulin-like growth factor (IGF-1), and vascular endothelial growth factor (VEGF). Exercise-induced neurogenesis in the hippocampus is associated with measurable improvements in spatial memory. Consistent aerobic exercise over a period of several months induces marked clinically significant improvements in executive function (i.e., the “cognitive control” of behavior) and increased gray matter volume in multiple brain regions, particularly those that give rise to cognitive control. The brain structures that show the greatest improvements in gray matter volume in response to aerobic exercise are the prefrontal cortex and hippocampus; moderate improvements are seen in the anterior cingulate cortex, parietal cortex, cerebellum, caudate nucleus, and nucleus accumbens. Higher physical fitness scores (measured by VO2 max) are associated with better executive function, faster processing speed, and greater volume of the hippocampus, caudate nucleus, and nucleus accumbens.

Human echolocation

Human echolocation is a learned ability for humans to sense their environment from echoes. This ability is used by some blind people to navigate their environment and sense their surroundings in detail. Studies in 2010 and 2011 using functional magnetic resonance imaging techniques have shown that parts of the brain associated with visual processing are adapted for the new skill of echolocation. Studies with blind patients, for example, suggest that the click-echoes heard by these patients were processed by brain regions devoted to vision rather than audition.

ADHD stimulants

Reviews of MRI studies on individuals with ADHD suggest that the long-term treatment of attention deficit hyperactivity disorder (ADHD) with stimulants, such as amphetamine or methylphenidate, decreases abnormalities in brain structure and function found in subjects with ADHD, and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia.

In children

Neuroplasticity is most active in childhood as a part of normal human development, and can also be seen as an especially important mechanism for children in terms of risk and resiliency. Trauma is considered a great risk as it negatively affects many areas of the brain and puts strain on the sympathetic nervous system from constant activation. Trauma thus alters the brain’s connections such that children who have experienced trauma may be hypervigilant or overly aroused. However, a child’s brain can cope with these adverse effects through the actions of neuroplasticity.

In animals

In a single lifespan, individuals of an animal species may encounter various changes in brain morphology. Many of these differences are caused by the release of hormones in the brain; others are the product of evolutionary factors or developmental stages. Some changes occur seasonally in species to enhance or generate response behaviors.

Seasonal brain changes

Changing brain behavior and morphology to suit other seasonal behaviors is relatively common in animals. These changes can improve the chances of mating during breeding season. Examples of seasonal brain morphology change can be found within many classes and species.

Within the class Aves, black-capped chickadees experience an increase in the volume of their hippocampus and strength of neural connections to the hippocampus during fall months. These morphological changes within the hippocampus which are related to spatial memory are not limited to birds, as they can also be observed in rodents and amphibians. In songbirds, many song control nuclei in the brain increase in size during mating season. Among birds, changes in brain morphology to influence song patterns, frequency, and volume are common. Gonadotropin-releasing hormone (GnRH) immunoreactivity, or the reception of the hormone, is lowered in European starlings exposed to longer periods of light during the day.

The California sea hare, a gastropod, has more successful inhibition of egg-laying hormones outside of mating season due to increased effectiveness of inhibitors in the brain. Changes to the inhibitory nature of regions of the brain can also be found in humans and other mammals. In the amphibian Bufo japonicus, part of the amygdala is larger before breeding and during hibernation than it is after breeding.

Seasonal brain variation occurs within many mammals. Part of the hypothalamus of the common ewe is more receptive to GnRH during breeding season than at other times of the year.

Humans experience a change in the size of the hypothalamic suprachiasmatic nucleus and vasopressin-immunoreactive neurons within it during the fall, when these parts are larger. In the spring, both reduce in size.

Traumatic brain injury research

Randy Nudo’s group found that if a small stroke (an infarction) is induced by obstructing blood flow to a portion of a monkey’s motor cortex, the body part that had been controlled by the damaged area moves when areas adjacent to the damaged brain area are stimulated. In one study, intracortical microstimulation (ICMS) mapping techniques were used in nine normal monkeys. Some underwent ischemic-infarction procedures and the others, ICMS procedures. The monkeys with ischemic infarctions retained more finger flexion during food retrieval and after several months this deficit returned to preoperative levels.

With respect to the distal forelimb representation, “postinfarction mapping procedures revealed that movement representations underwent reorganization throughout the adjacent, undamaged cortex.” Understanding of interaction between the damaged and undamaged areas provides a basis for better treatment plans in stroke patients. Current research includes the tracking of changes that occur in the motor areas of the cerebral cortex as a result of a stroke. Thus, events that occur in the reorganization process of the brain can be ascertained. Nudo is also involved in studying the treatment plans that may enhance recovery from strokes, such as physiotherapy, pharmacotherapy, and electrical-stimulation therapy.

Jon Kaas, a professor at Vanderbilt University, has been able to show “how somatosensory area 3b and ventroposterior (VP) nucleus of the thalamus are affected by longstanding unilateral dorsal-column lesions at cervical levels in macaque monkeys.” Adult brains have the ability to change as a result of injury but the extent of the reorganization depends on the extent of the injury. His recent research focuses on the somatosensory system, which involves a sense of the body and its movements using many senses. Usually, damage of the somatosensory cortex results in impairment of the body perception. Kaas’ research project is focused on how these systems (somatosensory, cognitive, motor systems) respond with plastic changes resulting from injury.

One recent study of neuroplasticity involves work done by a team of doctors and researchers at Emory University, specifically Dr. Donald Stein and Dr. David Wright. This is the first treatment in 40 years that has significant results in treating traumatic brain injuries while also incurring no known side effects and being cheap to administer. Dr. Stein noticed that female mice seemed to recover from brain injuries better than male mice, and that at certain points in the estrus cycle, females recovered even better. This difference may be attributed to different levels of progesterone, with higher levels of progesterone leading to the faster recovery from brain injury in mice. However, clinical trials showed progesterone offers no significant benefit for traumatic brain injury human patients.

History

Origin

The term “plasticity” was first applied to behavior in 1890 by William James in The Principles of Psychology. The first person to use the term neural plasticity appears to have been the Polish neuroscientist Jerzy Konorski.

See also: William James’s Revolutionary 1884 Theory of How Our Bodies Affect Our Feelings – Maria Popova * What is an Emotion? – William James (1884).

In 1793, Italian anatomist Michele Vicenzo Malacarne described experiments in which he paired animals, trained one of the pair extensively for years, and then dissected both. He discovered that the cerebellums of the trained animals were substantially larger. But these findings were eventually forgotten.

The idea that the brain and its function are not fixed throughout adulthood was proposed in 1890 by William James in The Principles of Psychology, though the idea was largely neglected. Until around the 1970s, neuroscientists believed that the brain’s structure and function was essentially fixed throughout adulthood.

The term has since been broadly applied:

“Given the central importance of neuroplasticity, an outsider would be forgiven for assuming that it was well defined and that a basic and universal framework served to direct current and future hypotheses and experimentation. Sadly, however, this is not the case. While many neuroscientists use the word neuroplasticity as an umbrella term, it means different things to different researchers in different subfields. In brief, a mutually agreed upon framework does not appear to exist.”

Research and discovery

In 1923, Karl Lashley conducted experiments on rhesus monkeys that demonstrated changes in neuronal pathways, which he concluded were evidence of plasticity. Despite this, and other research that suggested plasticity took place, neuroscientists did not widely accept the idea of neuroplasticity.

In 1945, Justo Gonzalo concluded from his research on brain dynamics that, contrary to the activity of the projection areas, the “central” cortical mass (more or less equidistant from the visual, tactile, and auditory projection areas) acts as a “maneuvering mass,” rather unspecific or multisensory, with the capacity to increase neural excitability and reorganize activity by means of plasticity properties. As a first example of such adaptation he cited learning to see upright with reversing glasses in the Stratton experiment, and especially several first-hand cases of brain injury in which he observed dynamic and adaptive properties in the resulting disorders, in particular the inverted-perception disorder. He proposed that a sensory signal in a projection area would be only an inverted and constricted outline, which would be magnified by the increase in recruited cerebral mass and re-inverted, through an effect of brain plasticity, in more central areas, following a spiral growth.

Marian Diamond of the University of California, Berkeley, produced the first scientific evidence of anatomical brain plasticity, publishing her research in 1964.

Other significant evidence was produced in the 1960s and after, notably from scientists including Paul Bach-y-Rita, Michael Merzenich along with Jon Kaas, as well as several others.

In the 1960s, Paul Bach-y-Rita invented a device, tested on a small number of people, in which a person sat in a chair embedded with nubs that were made to vibrate in ways that translated images received by a camera, allowing a form of vision via sensory substitution.

Studies in people recovering from stroke also provided support for neuroplasticity, as regions of the brain that remained healthy could sometimes take over, at least in part, functions that had been destroyed; Shepherd Ivory Franz did work in this area.

Eleanor Maguire documented changes in hippocampal structure associated with acquiring the knowledge of London’s layout in local taxi drivers. A redistribution of grey matter was indicated in London taxi drivers compared to controls. This work on hippocampal plasticity not only interested scientists, but also engaged the public and media worldwide.

Michael Merzenich is a neuroscientist who has been one of the pioneers of neuroplasticity for over three decades. He has made some of “the most ambitious claims for the field: that brain exercises may be as useful as drugs to treat diseases as severe as schizophrenia; that plasticity exists from cradle to the grave; and that radical improvements in cognitive functioning, how we learn, think, perceive, and remember, are possible even in the elderly.”

Merzenich’s work was affected by a crucial discovery made by David Hubel and Torsten Wiesel in their work with kittens. The experiment involved sewing one eye shut and recording the cortical brain maps. Hubel and Wiesel saw that the portion of the kitten’s brain associated with the shut eye was not idle, as expected. Instead, it processed visual information from the open eye. It was “…as though the brain didn’t want to waste any ‘cortical real estate’ and had found a way to rewire itself.”

This implied neuroplasticity during the critical period. However, Merzenich argued that neuroplasticity could occur beyond the critical period. His first encounter with adult plasticity came when he was engaged in a postdoctoral study with Clinton Woolsey. The experiment was based on observation of what occurred in the brain when one peripheral nerve was cut and subsequently regenerated. The two scientists micromapped the hand maps of monkey brains before and after cutting a peripheral nerve and sewing the ends together. Afterwards, the hand map in the brain that they expected to be jumbled was nearly normal. This was a substantial breakthrough. Merzenich asserted that, “If the brain map could normalize its structure in response to abnormal input, the prevailing view that we are born with a hardwired system had to be wrong. The brain had to be plastic.” Merzenich received the 2016 Kavli Prize in Neuroscience “for the discovery of mechanisms that allow experience and neural activity to remodel brain function.”

MINDSIGHT, OUR SEVENTH SENSE, an introduction. Change your brain and your life – Daniel J. Siegel MD.

Mindsight, the brain’s capacity for both insight and empathy.

How can we be receptive to the mind’s riches and not just reactive to its reflexes? How can we direct our thoughts and feelings rather than be driven by them?

And how can we know the minds of others, so that we truly understand “where they are coming from” and can respond more effectively and compassionately?

Mindsight is a kind of focused attention that allows us to see the internal workings of our own minds, making it possible to see what is inside, to accept it, and in the accepting to let it go, and finally, to transform it.

When we develop the skill of mindsight, we actually change the physical structure of our brain. How we focus our attention shapes the structure of the brain.

Mindsight has the potential to free us from patterns of mind that are getting in the way of living our lives to the fullest.

Mindsight, our ability to look within and perceive the mind, to reflect on our experience, is every bit as essential to our wellbeing as our six senses. Mindsight is our seventh sense.

What is Mindsight?

“Mindsight” is a term coined by Dr. Dan Siegel to describe our human capacity to perceive the mind of the self and others. It is a powerful lens through which we can understand our inner lives with more clarity, integrate the brain, and enhance our relationships with others. Mindsight is a kind of focused attention that allows us to see the internal workings of our own minds. It helps us get ourselves off of the autopilot of ingrained behaviors and habitual responses. It lets us “name and tame” the emotions we are experiencing, rather than being overwhelmed by them.

“I am sad” vs. “I feel sad”

Mindsight is the difference between saying “I am sad” and “I feel sad.” Similar as those two statements may seem, they are profoundly different. “I am sad” is a kind of limited self-definition. “I feel sad” suggests the ability to recognize and acknowledge a feeling, without being consumed by it. The focusing skills that are part of mindsight make it possible to see what is inside, to accept it, and in the accepting to let it go, and finally, to transform it.

Mindsight: A Skill that Can Change Your Brain

Mindsight is a learnable skill. It is the basic skill that underlies what we mean when we speak of having emotional and social intelligence. When we develop the skill of mindsight, we actually change the physical structure of the brain. This revelation is based on one of the most exciting scientific discoveries of the last twenty years: How we focus our attention shapes the structure of the brain. Neuroscience has also definitively shown that we can grow these new connections throughout our lives, not just in childhood.

What’s Interpersonal Neurobiology?

Interpersonal neurobiology, a term coined by Dr. Siegel in The Developing Mind (1999), is an interdisciplinary field which seeks to understand the mind and mental health. This field is based on science but is not constrained by science. What this means is that we attempt to construct a picture of the “whole elephant” of human reality. We build on the research of different disciplines to reveal the details of individual components, while also assembling these pieces to create a coherent view of the whole.

The Mindsight Institute

Through the Mindsight Institute, Dr. Siegel offers a scientifically based way of understanding human development. The Mindsight Institute serves as the organization from which interpersonal neurobiology first developed and it continues to be a key source for learning in this area. The Mindsight Institute links science, clinical practice, education, the arts, and contemplation, serving as an educational hub from which these various domains of knowing and practice can enrich their individual efforts. Through the Mindsight Institute’s online program, people from six continents participate weekly in our global conversation about the ways to create more health and compassion in the world.

Mindsight Institute

Daniel J. Siegel, MD, is a clinical professor of psychiatry at the UCLA School of Medicine, co-director of the UCLA Mindful Awareness Research Center, and executive director of the Mindsight Institute. A graduate of Harvard Medical School, he is the author of the internationally acclaimed professional texts The Mindful Brain and The Developing Mind, and the co-author of Parenting from the Inside Out.

The groundbreaking bestseller on how your capacity for insight and empathy allows you to make positive changes in your brain and in your life.

Daniel J. Siegel, widely recognised as a pioneer in the field of mental health, coined the term ‘mindsight’ to describe the innovative integration of brain science with the practice of psychotherapy. Combining the latest research findings with case studies from his practice, he demonstrates how mindsight can be applied to alleviate a range of psychological and interpersonal problems from anxiety disorders to ingrained patterns of behaviour.

With warmth and humour, Dr Siegel shows us how to observe the working of our minds, allowing us to understand why we think, feel, and act the way we do; and how, by following the proper steps, we can literally change the wiring and architecture of our brains.

Both practical and profound, Mindsight offers exciting new proof that we have the ability at any stage of our lives to transform our thinking, our wellbeing, and our relationships.

Mindsight, change your brain and your life

Daniel J. Siegel MD

FOREWORD by Daniel Goleman

The great leaps forward in psychology have come from original insights that suddenly clarify our experience from a fresh angle, revealing hidden patterns of connection. Freud’s theory of the unconscious and Darwin’s model of evolution continue to help us understand the findings from current research on human behavior and some of the mysteries of our daily lives. Daniel Siegel’s theory of Mindsight, the brain’s capacity for both insight and empathy, offers a similar “Aha!” He makes sense for us out of the cluttered confusions of our sometimes maddening and messy emotions.

Our ability to know our own minds as well as to sense the inner world of others may be the singular human talent, the key to nurturing healthy minds and hearts. I’ve explored this terrain in my own work on emotional and social intelligence. Self-awareness and empathy are (along with self-mastery and social skills) domains of human ability essential for success in life. Excellence in these capacities helps people flourish in relationships, family life, and marriage, as well as in work and leadership.

Of these four key life skills, self-awareness lays the foundation for the rest. If we lack the capacity to monitor our emotions, for example, we will be poorly suited to manage or learn from them. Tuned out of a range of our own experience, we will find it all the harder to attune to that same range in others. Effective interactions depend on the smooth integration of self-awareness, mastery, and empathy. Or so I’ve argued. Dr. Siegel casts the discussion in a fresh light, putting these dynamics in terms of mindsight, and marshals compelling evidence for its crucial role in our lives.

A gifted and sensitive clinician, as well as a master synthesizer of research findings from neuroscience and child development, Dr. Siegel gives us a map forward. Over the years he has continually broken new ground in his writing on the brain, psychotherapy, and childrearing; his seminars for professionals are immensely popular.

The brain, he reminds us, is a social organ. Mindsight is the core concept in “interpersonal neurobiology,” a field Dr. Siegel has pioneered. This two-person view of what goes on in the brain lets us understand how our daily interactions matter neurologically, shaping neural circuits. Every parent helps sculpt the growing brain of a child; the ingredients of a healthy mind include an attuned, empathetic parent, one with mindsight. Such parenting fosters this same crucial ability in a child.

Mindsight plays an integrative role in the triangle connecting relationships, mind, and brain. As energy and information flow among these elements of human experience, patterns emerge that shape all three (and the brain here includes its extensions via the nervous system throughout the body). This vision is holistic in the true sense of the word, inclusive of our whole being. With mindsight we can better know and manage this vital flow of being.

Dr. Siegel’s biographical details are impressive. Harvard-trained and a clinical professor of psychiatry at UCLA and co-director of the Mindful Awareness Research Center there, he also founded and directs the Mindsight Institute. But far more impressive is his actual being, a mindful, attuned, and nurturing presence that is nourishing in itself. Dr. Siegel embodies what he teaches.

For professionals who want to delve into this new science, I recommend Dr. Siegel’s 1999 text on interpersonal neurobiology, The Developing Mind: Toward a Neurobiology of Interpersonal Experience. For parents, his book with Mary Hartzell is invaluable: Parenting from the Inside Out: How a Deeper Self-Understanding Can Help You Raise Children Who Thrive. But for anyone who seeks a more rewarding life, the book you hold in your hands has compelling and practical answers.

Daniel Goleman

INTRODUCTION

Diving into the Sea Inside

Within each of us there is an internal mental world that I have come to think of as the sea inside: a wonderfully rich place, filled with thoughts and feelings, memories and dreams, hopes and wishes. Of course it can also be a turbulent place, where we experience the dark side of all those wonderful feelings and thoughts: fears, sorrows, dreads, regrets, nightmares. When this inner sea seems to crash in on us, threatening to drag us down below to the dark depths, it can make us feel as if we are drowning.

Who among us has not at one time or another felt overwhelmed by the sensations from within our own minds? Sometimes these feelings are just a passing thing, a bad day at work, a fight with someone we love, an attack of nerves about a test we have to take or a presentation we have to give, or just an inexplicable case of the blues for a day or two.

But sometimes they seem to be something much more intractable, so much part of the very essence of who we are that it may not even occur to us that we can change them. This is where the skill that I have called “mindsight” comes in, for mindsight, once mastered, is a truly transformational tool. Mindsight has the potential to free us from patterns of mind that are getting in the way of living our lives to the fullest.

WHAT IS MINDSIGHT?

Mindsight is a kind of focused attention that allows us to see the internal workings of our own minds. It helps us to be aware of our mental processes without being swept away by them, enables us to get ourselves off the autopilot of ingrained behaviors and habitual responses, and moves us beyond the reactive emotional loops we all have a tendency to get trapped in. It lets us “name and tame” the emotions we are experiencing, rather than being overwhelmed by them. Consider the difference between saying “I am sad” and “I feel sad.” Similar as those two statements may seem, there is actually a profound difference between them. “I am sad” is a kind of self-definition, and a very limiting one. “I feel sad” suggests the ability to recognize and acknowledge a feeling, without being consumed by it. The focusing skills that are part of mindsight make it possible to see what is inside, to accept it, and in the accepting to let it go, and, finally, to transform it.

You can also think of mindsight as a very special lens that gives us the capacity to perceive the mind with greater clarity than ever before. This lens is something that virtually everyone can develop, and once we have it we can dive deeply into the mental sea inside, exploring our own inner lives and those of others. A uniquely human ability, mindsight allows us to examine closely, in detail and in depth, the processes by which we think, feel, and behave. And it allows us to reshape and redirect our inner experiences so that we have more freedom of choice in our everyday actions, more power to create the future, to become the author of our own story. Another way to put it is that mindsight is the basic skill that underlies everything we mean when we speak of having social and emotional intelligence.

Interestingly enough, we now know from the findings of neuroscience that the mental and emotional changes we can create through cultivation of the skill of mindsight are transformational at the very physical level of the brain. By developing the ability to focus our attention on our internal world, we are picking up a “scalpel” we can use to resculpt our neural pathways, stimulating the growth of areas of the brain that are crucial to mental health. I will talk a lot about this in the chapters that follow because I believe that a basic understanding of how the brain works helps people see how much potential there is for change.

But change never just happens. It’s something we have to work at. Though the ability to navigate the inner sea of our minds, to have mindsight, is our birthright, and some of us, for reasons that will become clear later, have a lot more of it than others, it does not come automatically, any more than being born with muscles makes us athletes. The scientific reality is that we need certain experiences to develop this essential human capacity. I like to say that parents and other caregivers offer us our first swimming lessons in that inner sea, and if we’ve been fortunate enough to have nurturing relationships early in life, we’ve developed the basics of mindsight on which we can build. But even if such early support was lacking, there are specific activities and experiences that can nurture mindsight throughout the lifespan. As you will see, mindsight is a form of expertise that can be honed in each of us, whatever our early history.

When I first began to explore the nature of the mind professionally, there was no term in our everyday language that captured the way we perceive our thoughts, feelings, sensations, memories, beliefs, attitudes, hopes, dreams, and fantasies. Of course, these activities of the mind fill our day-to-day lives, we don’t need to learn a skill in order to experience them. But how do we actually develop the ability to perceive a thought, not just have one, and to know it as an activity of our minds so that we are not taken over by it? How can we be receptive to the mind’s riches and not just reactive to its reflexes? How can we direct our thoughts and feelings rather than be driven by them?

And how can we know the minds of others, so that we truly understand “where they are coming from” and can respond more effectively and compassionately? When I was a young psychiatrist, there weren’t many readily accessible scientific or even clinical terms to describe the whole of this ability. To be able to help my patients, I coined the term mindsight so that together we could discuss this important ability that allows us to see and shape the inner workings of our own minds.

Our first five senses allow us to perceive the outside world, to hear a bird’s song or a snake’s warning rattle, to make our way down a busy street or smell the warming earth of spring. What has been called our sixth sense allows us to perceive our internal bodily states, the quickly beating heart that signals fear or excitement, the sensation of butterflies in our stomach, the pain that demands our attention.

Mindsight, our ability to look within and perceive the mind, to reflect on our experience, is every bit as essential to our wellbeing. Mindsight is our seventh sense.

As I hope to show you in this book, this essential skill can help us build social and emotional brainpower, move our lives from disorder to well-being, and create satisfying relationships filled with connection and compassion. Business and government leaders have told me that understanding how the mind functions in groups has helped them be more effective and enabled their organizations to become more productive. Clinicians in medicine and mental health have said that mindsight has changed the way they approach their patients, and that putting the mind at the heart of their healing work has helped them create novel and useful interventions. Teachers introduced to mindsight have learned to “teach with the brain in mind” and are reaching and teaching their students in deeper and more lasting ways.

In our individual lives, mindsight offers us the opportunity to explore the subjective essence of who we are, to create a life of deeper meaning with a richer and more understandable internal world. With mindsight we are better able to balance our emotions, achieving an internal equilibrium that enables us to cope with the small and large stresses of our lives. Through our ability to focus attention, mindsight also helps the body and brain achieve homeostasis, the internal balance, coordination, and adaptiveness that forms the core of health. Finally, mindsight can improve our relationships with our friends, colleagues, spouses, and children, and even the relationship we have with our own selves.

A NEW APPROACH TO WELLBEING

Everything that follows rests on three fundamental principles.

The first is that mindsight can be cultivated through very practical steps. This means that creating wellbeing in our mental life, in our close relationships, and even in our bodies, is a learnable skill. Each chapter of this book explores these skills, from basic to advanced, for navigating the sea inside.

Second, as mentioned above, when we develop the skill of mindsight, we actually change the physical structure of the brain. Developing the lens that enables us to see the mind more clearly stimulates the brain to grow important new connections. This revelation is based on one of the most exciting scientific discoveries of the last twenty years: How we focus our attention shapes the structure of the brain. Neuroscience supports the idea that developing the reflective skills of mindsight activates the very circuits that create resilience and wellbeing and that underlie empathy and compassion as well. Neuroscience has also definitively shown that we can grow these new connections throughout our lives, not just in childhood. The short Minding the Brain sections interspersed throughout part 1 are a traveler’s guide to this new territory.

The third principle is at the heart of my work as a psychotherapist, educator, and scientist. Wellbeing emerges when we create connections in our lives, when we learn to use mindsight to help the brain achieve and maintain integration, a process by which separate elements are linked together into a working whole.

I know this may sound both unfamiliar and abstract at first, but I hope you’ll soon find that it is a natural and useful way of thinking about our lives. For example, integration is at the heart of how we connect to one another in healthy ways, honoring one another’s differences while keeping our lines of communication wide open. Linking separate entities to one another, integration, is also important for releasing the creativity that emerges when the left and right sides of the brain are functioning together.

Integration enables us to be flexible and free; the lack of such connections promotes a life that is either rigid or chaotic, stuck and dull on the one hand or explosive and unpredictable on the other. With the connecting freedom of integration comes a sense of vitality and the ease of wellbeing. Without integration we can become imprisoned in behavioral ruts, anxiety and depression, greed, obsession, and addiction.

By acquiring mindsight skills, we can alter the way the mind functions and move our lives toward integration, away from these extremes of chaos or rigidity. With mindsight we are able to focus our mind in ways that literally integrate the brain and move it toward resilience and health.

MINDSIGHT MISUNDERSTOOD

It’s wonderful to receive an email from an audience member or patient who says, “My whole view of reality has changed.” But not everyone new to mindsight gets it right away. Some people are concerned that it’s just another way to become more self-absorbed, a form of navel-gazing, of becoming preoccupied with “reflection” instead of living fully. Perhaps you’ve also read some of the recent research (or the ancient wisdom) that tells us that happiness depends on “getting out of yourself.” Does mindsight turn us away from this greater good?

While it is true that being self-obsessed decreases happiness, mindsight actually frees you to become less self-absorbed, not more. When we are not taken over by our thoughts and feelings, we can become clearer in our own internal world as well as more receptive to the inner world of another. Scientific studies support this idea, revealing that individuals with more mindsight skills show more interest and empathy toward others. Research has also clearly shown that mindsight supports not only internal and interpersonal wellbeing but also greater effectiveness and achievement in school and work.

Another quite poignant concern about mindsight came up one day when I was talking with a group of teachers. “How can you ask us to have children reflect on their own minds?” one teacher said to me. “Isn’t that opening a Pandora’s box?” Recall that when Pandora’s box was opened, all the troubles of humanity flew out. Is this how we imagine our inner lives or the inner lives of our children? In my own experience, a great transformation begins when we look at our minds with curiosity and respect rather than fear and avoidance. Inviting our thoughts and feelings into awareness allows us to learn from them rather than be driven by them. We can calm them without ignoring them; we can hear their wisdom without being terrified by their screaming voices. And as you will see in some of the stories in this book, even surprisingly young children can develop the ability to pause and make choices about how to act when they are more aware of their impulses.

HOW DO WE CULTIVATE MINDSIGHT?

Mindsight is not an all-or-nothing ability, something you either have or don’t have. As a form of expertise, mindsight can be developed when we put in effort, time, and practice.

Most people come into the world with the brain potential to develop mindsight, but the neural circuits that underlie it need experiences to develop properly. For some, such as those with autism and related neurological conditions, the neural circuits of mindsight may not develop well even with the best caregiving. In most children, however, the ability to see the mind develops through everyday interactions with others, especially through attentive communication with parents and caregivers. When adults are in tune with a child, when they reflect back to the child an accurate picture of his internal world, he comes to sense his own mind with clarity. This is the foundation of mindsight. Neuroscientists are now identifying the circuits of the brain that participate in this intimate dance and exploring how a caregiver’s attunement to the child’s internal world stimulates the development of those neural circuits.

If parents are unresponsive, distant, or confusing in their responses, however, their lack of attunement means that they cannot reflect back to the child an accurate picture of the child’s inner world. In this case, research suggests, the child’s mindsight lens may become cloudy or distorted. The child may then be able to see only part of the sea inside, or see it dimly. Or the child may develop a lens that sees well but is fragile, easily disrupted by stress and intense emotions.

The good news is that whatever our early history, it is never too late to stimulate the growth of the neural fibers that enable mindsight to flourish. You’ll soon meet a ninety-two-year-old man who was able to overcome a painful and twisted childhood to emerge a mindsight maven. Here we see living evidence for another exciting discovery of modern neuroscience: that the brain never stops growing in response to experience. And this is true for people with happy childhoods, too. Even if we had positive relationships with our caregivers and parents early on, and even if we write books on the subject, we can continue as long as we live to keep developing our vital seventh sense and promoting the connections and integration that are at the heart of wellbeing.

We’ll begin our journey in part 1 by exploring situations in which the vital skills of mindsight are absent. These stories reveal how seeing the mind clearly and being able to alter how it functions are essential elements in the path toward wellbeing. Part 1 is the more theoretical section of the book, where I explain the basic concepts, give readers an introduction to brain science, and offer working definitions of the mind and mental health. Since I know that my readers will come from a wide variety of backgrounds and interests, I realize that some of you may want to skim or even skip much of that material in order to move directly to part 2.

In part 2, we’ll dive deeply into stories from my practice that illustrate the steps involved in developing the skills of mindsight. This is the section of the book in which I share the knowledge and practical skills that will help people understand how to shape their own minds toward health. At the very end of the book is an appendix outlining the fundamental concepts and a set of endnotes with the scientific resources supporting these ideas.

Our exploration of mindsight begins with the story of a family that changed my own life and my entire approach to psychotherapy. Looking for ways to help them inspired me to search for new answers to some painful questions about what happens when mindsight is lost. It also led to my search for the techniques that can enable us to reclaim and recreate mindsight in ourselves, our children, and our communities. I hope you’ll join me on this journey into the inner sea. Within those depths awaits a vast world of possibility.

PART I

THE PATH TO WELLBEING: MINDSIGHT ILLUMINATED

1. A BROKEN BRAIN, A LOST SOUL The Triangle of WellBeing

Barbara’s family might never have come for therapy if seven-year-old Leanne hadn’t stopped talking in school. Leanne was Barbara’s middle child, between Amy, who was fourteen, and Tommy, who was three. They had all taken it hard when their mother was in a near-fatal car accident. But it wasn’t until Barbara returned home from the hospital and rehabilitation center that Leanne became “selectively mute.” Now she refused to speak with anyone outside the family, including me.

In our first weekly therapy sessions, we spent our time in silence, playing some games, doing pantomimes with puppets, drawing, and just being together. Leanne wore her dark hair in a single jumbled ponytail, and her sad brown eyes would quickly dart away whenever I looked directly at her. Our sessions felt stuck, her sadness unchanging, the games we played repetitive. But then one day when we were playing catch, the ball rolled to the side of the couch and Leanne discovered my video player and screen. She said nothing, but the sudden alertness of her expression told me her mind had clicked on to something.

The following week Leanne brought in a videotape, walked over to the video machine, and put it into the slot. I turned on the player and her smile lit up the room as we watched her mother gently lift a younger Leanne up into the air, again and again, and then pull her into a huge, enfolding hug, the two of them shaking with laughter from head to toe. Leanne’s father, Ben, had captured on film the dance of communication between parent and child that is the hallmark of love: We connect with each other through a give-and-take of signals that link us from the inside out. This is the joy-filled way in which we come to share each other’s minds.

Next the pair swirled around on the lawn, kicking the brilliant yellow and burnt-orange leaves of autumn. The mother-daughter duet approached the camera, pursed lips blowing kisses into the lens, and then burst out in laughter. Five-year-old Leanne shouted, “Happy birthday, Daddy!” at the top of her lungs, and you could see the camera shake as her father laughed along with the ladies in his life. In the background Leanne’s baby brother, Tommy, was napping in his stroller, snuggled under a blanket and surrounded by plush toys. Leanne’s older sister, Amy, was off to the side engrossed in a book.

“That’s how my mom used to be when we lived in Boston,” Leanne said suddenly, the smile dropping from her face. It was the first time she had spoken directly to me, but it felt more like I was overhearing her talk to herself. Why had Leanne stopped talking?

It had been two years since that birthday celebration, eighteen months since the family moved to Los Angeles, and twelve months since Barbara suffered a severe brain injury in her accident, a head-on collision. Barbara had not been wearing her seat belt that evening as she drove their old Mustang to the local store to get some milk for the kids. When the drunk driver plowed into her, her forehead was forced into the steering wheel. She had been in a coma for weeks following the accident.

After she came out of the coma, Barbara had changed in dramatic ways. On the videotape I saw the warm, connected, and caring person that Barbara had been. But now, Ben told me, she “was just not the same Barbara anymore.” Her physical body had come home, but Barbara herself, as they had known her, was gone.

During Leanne’s next visit I asked for some time alone with her parents. It was clear that what had been a close relationship between Barbara and Ben was now profoundly stressed and distant. Ben was patient and kind with Barbara and seemed to care for her deeply, but I could sense his despair. Barbara just stared off as we talked, made little eye contact with either of us, and seemed to lack interest in the conversation. The damage to her forehead had been repaired by plastic surgery, and although she had been left with motor skills that were somewhat slow and clumsy, she actually looked quite similar, in outward appearance, to her image on the videotape. Yet something huge had changed inside.

Wondering how she experienced her new way of being, I asked Barbara what she thought the difference was. I will never forget her reply: “Well, I guess if you had to put it into words, I suppose I’d say that I’ve lost my soul.”

Ben and I sat there, stunned. After a while, I gathered myself enough to ask Barbara what losing her soul felt like.

“I don’t know if I can say any more than that,” she said flatly. “It feels fine, I guess. No different. I mean, just the way things are. Just empty. Things are fine.”

We moved on to practical issues about care for the children, and the session ended.

A DAMAGED BRAIN

It wasn’t clear yet how much Barbara could or would recover. Given that only a year had passed since the accident, much neural repair was still possible. After an injury, the brain can regain some of its function and even grow new neurons and create new neural connections, but with extensive damage it may be difficult to retrieve the complex abilities and personality traits that were dependent on the now destroyed neural structures.

Neuroplasticity is the term used to describe this capacity for creating new neural connections and growing new neurons in response to experience. Neuroplasticity is not just available to us in youth: We now know that it can occur throughout the lifespan. Efforts at rehabilitation for Barbara would need to harness the power of neuroplasticity to grow the new connections that might be able to reestablish old mental functions. But we’d have to wait a while for the healing effects of time and rehabilitation to see how much neurological recovery would be possible.

My immediate task was to help Leanne and her family understand how someone could be alive and look the same yet have become so radically different in the way her mind functioned. Ben had told me earlier that he did not know how to help the children deal with how Barbara had changed; he said that he could barely understand it himself. He was on double duty, working, managing the kids’ schedules, and making up for what Barbara could no longer do. This was a mother who had delighted in making homemade Halloween costumes and Valentine’s Day cupcakes. Now she spent most of the day watching TV or wandering around the neighborhood. She could walk to the grocery store, but even with a list she would often come home empty-handed. Amy and Leanne didn’t mind so much that she cooked a few simple meals over and over again. But they were upset when she forgot their special requests, things they’d told her they liked or needed for school. It was as if nothing they said to her really registered.

As our therapy sessions continued, Barbara usually sat quietly, even when she was alone with me, although her speech was intact. Occasionally she’d suddenly become agitated at an innocent comment from Ben, or yell if Tommy fidgeted or Leanne twirled her ponytail around her finger. She might even erupt after a silence, as if some internal process was driving her. But most of the time her expression seemed frozen, more like emptiness than depression, more vacuous than sad. She seemed aloof and unconcerned, and I noticed that she never spontaneously touched either her husband or her children. Once, when three-year-old Tommy climbed onto her lap, she briefly put her hand on his leg as if repeating some earlier pattern of behavior, but the warmth had gone out of the gesture.

When I saw the children without their mother, they let me know how they felt. “She just doesn’t care about us like she used to,” Leanne said. “And she doesn’t ever ask us anything about ourselves,” Amy added with sadness and irritation. “She’s just plain selfish. She doesn’t want to talk to anyone anymore.” Tommy remained silent. He sat close to his father with a drawn look on his face.

Loss of someone we love cannot be adequately expressed with words. Grappling with loss, struggling with disconnection and despair, fills us with a sense of anguish and actual pain. Indeed, the parts of our brain that process physical pain overlap with the neural centers that record social ruptures and rejection. Loss rips us apart.

Grief allows you to let go of something you’ve lost only when you begin to accept what you now have in its place. As our mind clings to the familiar, to our established expectations, we can become trapped in feelings of disappointment, confusion, and anger that create our own internal worlds of suffering. But what were Ben and the kids actually letting go of? Could Barbara regain her connected way of being? How could the family learn to live with a person whose body was still alive, but whose personality and “soul,” at least as they had known her, were gone?

“YOU-MAPS” AND “ME-MAPS”

Nothing in my formal training, whether in medical school, pediatrics, or psychiatry, had prepared me for the situation I now faced in my treatment room. I’d had courses on brain anatomy and on brain and behavior, but when I was seeing Barbara’s family, in the early 1990s, relatively little was known about how to bring our knowledge of such subjects into the clinical practice of psychotherapy. Looking for some way to explain Barbara to her family, I trekked to the medical library and reviewed the recent clinical and scientific literature that dealt with the regions of the brain damaged by her accident.

Scans of Barbara’s brain revealed substantial trauma to the area just behind her forehead; the lesions followed the upper curve of the steering wheel. This area, I discovered, facilitates very important functions of our personality. It also links widely separated brain regions to one another; it is a profoundly integrative region of the brain.

The area behind the forehead is a part of the frontal lobe of the cerebral cortex, the outermost section of the brain. The frontal lobe is associated with most of our complex thinking and planning. Activity in this part of the brain fires neurons in patterns that enable us to form neural representations, “maps” of various aspects of our world. The maps resulting from these clusters of neuronal activity serve to create an image in our minds. For example, when we take in the light reflected from a bird sitting in a tree, our eyes send signals back into our brain, and the neurons there fire in certain patterns that permit us to have the visual picture of the bird.

Somehow, in ways still to be discovered, the physical property of neurons firing helps to create our subjective experience, the thoughts, feelings, and associations evoked by seeing that bird, for example. The sight of the bird may cause us to feel certain emotions, to hear or remember its song, and even to associate that song with ideas such as nature, hope, freedom, and peace. The more abstract and symbolic the representation, the higher in the nervous system it is created, and the more forward in the cortex.

The prefrontal cortex, the most damaged part of the frontal lobe of Barbara’s brain, makes complex representations that permit us to create concepts in the present, think of experiences in the past, and plan and make images about the future. The prefrontal cortex is also responsible for the neural representations that enable us to make images of the mind itself. I call these representations of our mental world “mindsight maps.” And I have identified several kinds of mindsight maps made by our brains.

The brain makes what I call a “me-map” that gives us insight into ourselves, and a “you-map” for insight into others. We also seem to create “we-maps,” representations of our relationships. Without such maps, we are unable to perceive the mind within ourselves or others. Without a me-map, for example, we can become swept up in our thoughts or flooded by our feelings. Without a you-map, we see only others’ behaviors, the physical aspect of reality, without sensing the subjective core, the inner mental sea of others. It is the you-map that permits us to have empathy. In essence, the injury to Barbara’s brain had created a world without mindsight. She had feelings and thoughts, but she could not represent them to herself as activities of her mind. Even when she said she’d “lost her soul,” her statement had a bland, factual quality, more like a scientific observation than a deeply felt expression of personal identity. (I was puzzled by that disconnect between observation and emotion until I learned from later studies that the parts of our brain that create maps of the mind are distinct from those that enable us to observe and comment on self-traits such as shyness or anxiety or, in Barbara’s case, the lack of a quality she called “soul.”)

In the years since I took Barbara’s brain scans to the library, much more has been discovered about the interlinked functions of the prefrontal cortex. For example, the side of this region is crucial for how we pay attention; it enables us to put things in the “front of our mind” and hold them in awareness. The middle portion of the prefrontal area, the part damaged in Barbara, coordinates an astonishing number of essential skills, including regulating the body, attuning to others, balancing emotions, being flexible in our responses, soothing fear, and creating empathy, insight, moral awareness, and intuition. These were the skills Barbara was no longer able to recruit in her interactions with her family.

I will be referring to, and expanding on, this list of nine middle prefrontal functions throughout our discussion of mindsight. But even at first glance, you can see that these functions are essential ingredients for wellbeing, ranging from bodily processes such as regulating our hearts to social functions such as empathy and moral reasoning.

After Barbara emerged from her coma, her impairments had seemed to settle into a new personality. Some of her habits, such as what she liked to eat and how she brushed her teeth, remained the same. There was nothing significantly changed in how her brain mapped out these basic behavioral functions. But the ways in which she thought, felt, behaved, and interacted with others were profoundly altered. This affected every detail of daily life, right down to Leanne’s crooked ponytail. Barbara still had the behavioral moves necessary to fix her daughter’s hair, but she no longer cared enough to get it right.

Above all, Barbara seemed to have lost the very map-making ability that would enable her to honor the reality and importance of her own or others’ subjective inner lives. Her mindsight maps were no longer forming amid the now jumbled middle prefrontal circuitry upon which they depended for their creation. This middle prefrontal trauma had also disrupted the communication between Barbara and her family: she could neither send nor receive the connecting signals enabling her to join minds with the people she had loved most.

Ben summed up the change: “She is gone. The person we live with is just not Barbara.”

A TRIANGLE OF WELLBEING: MIND, BRAIN, AND RELATIONSHIPS

The videotape of Ben’s birthday had revealed a vibrant dance of communication between Barbara and Leanne. But now there was no dance, no music keeping the rhythm of two minds flowing into a sense of a “we.” Such joining happens when we attune to the internal shifts in another person, as they attune to us, and our two worlds become linked as one. Through facial expressions and tones of voice, gestures and postures, some so fleeting they can be captured only on a slowed-down recording, we come to “resonate” with one another. The whole we create together is truly larger than our individual identities. We feel this resonance as a palpable sense of connection and aliveness. This is what happens when our minds meet.

A patient of mine once described this vital connection as “feeling felt” by another person: We sense that our internal world is shared, that our mind is inside the other. But Leanne no longer “felt felt” by her mom.

The way Barbara behaved with her family reminded me of a classic research tool used to study infant-parent communication and attachment. Called the “still-face” experiment, it is painful both to participate in and to watch.

A mother is asked to sit with her four-month-old infant facing her and, when signaled, to stop interacting with her child. This “still” phase, in which no verbal or nonverbal signals are to be shared with the child, is profoundly distressing. For up to three minutes, the child attempts to engage the now nonresponsive parent in a bid for connection. At first the child usually amps up her signals, increasing smiles, coos, and eye contact. But after a period of continuing nonresponse, she becomes agitated and distressed, her organized bids for connection melting into signs of anguish and outrage. She may then attempt to soothe herself by placing her hand in her mouth or pulling at her clothes. Sometimes researchers or parents call off the experiment at this point, but sometimes it goes on until the infant withdraws, giving up in a kind of despondent collapse that looks like melancholic depression. These stages of protest, self-soothing, and despair reveal how much the child depends upon the attuned responses of a parent to keep her own internal world in equilibrium.

We come into the world wired to make connections with one another, and the subsequent neural shaping of our brain, the very foundation of our sense of self, is built upon these intimate exchanges between the infant and her caregivers. In the early years this interpersonal regulation is essential for survival, but throughout our lives we continue to need such connections for a sense of vitality and well-being.

. . .

from

Mindsight: Change Your Brain and Your Life

by Daniel J. Siegel, MD

get it at Amazon.com

Can Treatment Resistant Depression Be Successfully Treated? – Robert J. Hedaya.

If you or a loved one suffers from depression that doesn’t seem to get better no matter what medication you take or psychotherapy you receive, read this article.

Between one-half and two-thirds of people with depression do not have a full recovery from their depression.
More than one-third fail to improve with the standard-of-care approach, which is medication and therapy.
The most common cause of treatment-resistant depression is a failure to identify and treat the underlying biological causes of depression.
A strong Functional Medicine program can effectively resolve treatment-resistant depression.
Addressing the cause or causes of inflammatory signaling, rather than just prescribing a drug to boost serotonin levels, is an example of a functional medicine approach to treating depression. In fact, it is highly unlikely that a drug will work in such a situation, for the simple reason that SSRI medications require enough serotonin to be present to work on. With inflammation, by definition, brain serotonin levels are deficient.

Basic science has now enlightened us as to the underlying biological causes of depression, and this information takes us way, way beyond the outdated and inadequate neurotransmitter, neuro-centric model of depression.

. . . Psychology Today

The Vagus Nerve, State and Story. The Polyvagal Theory in Therapy and the Autonomic Nervous System – Deb Dana * Stimulating the pathway connecting body and brain can change patients’ lives – Zoe Fisher and Andrew H Kemp.

“The mind narrates what the nervous system knows. Story follows state.”

“Our autonomic nervous system fires muscular tensions, triggered by feedback signals from the external and internal world at millisecond speeds below conscious awareness. These muscle tensions fire our thoughts.”

In her new book, The Polyvagal Theory in Therapy: Engaging the Rhythm of Regulation, Deb Dana offers a window into the inner life of a traumatized person and a way out of trauma and back to finding joy, connection, and safety through enlightening theory, rich experiential practice, and practical steps.

Developing present moment awareness and the ability to detect autonomic nervous system states opens the door for clients to experience state and story as separate experiences and ultimately reshape their nervous system.

The explanatory power of the Polyvagal Theory provides therapists with a language to help their clients reframe reactions to traumatic events. With the theory, clients are able to understand the adaptive functions of their reactions.

The hope polyvagal theory offers is that, in time, clients feel attuned to their autonomic nervous systems, develop a sense of self-compassion that allows them to see their responses as attempts at survival and not simply clinical diagnoses, and honor the innate wisdom of the autonomic nervous system to find their way back to safety, connection, and the rhythm of regulation.

In each of our relationships, the autonomic nervous system is “learning” about the world and being toned toward habits of connection or protection. Hopefulness lies in knowing that while early experiences shape the nervous system, ongoing experiences can reshape it.

The theory transforms the client’s narrative from a documentary to a pragmatic quest for safety with an implicit bodily drive to survive. Through the lens of Polyvagal Theory, we see the role of the autonomic nervous system as it shapes clients’ experiences of safety and affects their ability for connection.

Polyvagal Theory demonstrates that even before the brain makes meaning of an incident, the autonomic nervous system has assessed the environment and initiated an adaptive survival response. Neuroception precedes perception. Story follows state.

The clues to a client’s present-time suffering can be found in their autonomic response history.

“The autonomic nervous system,” Deb writes, “responds to challenges in daily life by telling us not what we are or who we are, but how we are.”

Informing, guiding, and regulating our experiences, the autonomic nervous system tells us when we are safe and can proceed forward and when we are under threat and should retreat.

However, when trauma disrupts our experience, it also disrupts the autonomic nervous system, and the result is dysregulation, the interruption of the ability to feel safe.

“Trauma compromises our ability to engage with others by replacing patterns of connection with patterns of protection,” Dana explains.

Our lived experience relies on the autonomic nervous system’s ability to detect safety, a capacity known as neuroception. When the autonomic nervous system becomes disrupted, it affects everything about how we move through the world, how we interact with those around us, and how we attune to ourselves and our surroundings.

Yet trauma survivors are often judged by their actions. Dana writes, “We still too often blame the victim if they didn’t fight or try to escape but instead collapsed into submission. We make a judgement about what someone did that leads to a belief about who they are.”

The polyvagal theory, however, sees every response as an action in service of survival. In trauma, safety has been threatened, and the system that helps to regain a sense of safety is no longer able to regulate, detect safety, or restore connection.

Dana writes, “If we think of trauma as Robert Macy (president of the International Trauma Center) defined it, ‘an overwhelming demand placed upon the physiological human system,’ then we immediately consider the autonomic nervous system.”

And because the autonomic nervous system is shaped over time through the experiences we have, we develop a habitual pattern known as a personal neural profile that then guides our actions and responses.

“We live a story that originates in our autonomic state, is sent through autonomic pathways from the body to the brain, and is then translated by the brain into the beliefs that guide our daily living. The mind narrates what the nervous system knows. Story follows state,” writes Dana.

Polyvagal Theory describes the neural experience as well as the expectations for reciprocal connection it holds. When those expectations are violated, the result is what is known as “biological rudeness” and an immediate feeling of threat.

The work of the therapist using polyvagal theory is to interrupt the traumatized client’s neural expectations in positive ways.

Dana writes, “Repeatedly violating neural expectations in this way within the therapist-client dyad influences a client’s autonomic assumptions. As a client’s nervous system begins to anticipate in different ways, the old story will no longer fit and a new story can be explored.”

Humans are social animals dependent on connection and coregulation for a sense of safety. Yet trauma makes connection dangerous and interrupts the process of coregulatory development.

While trauma can make clients feel as if they no longer need or want connection, their autonomic nervous system still relies on connection, and without it, it suffers.

“Chronic loneliness sends a persistent message of danger, and our autonomic nervous system remains locked in survival mode,” writes Dana.

Through mapping their autonomic states, clients begin to understand what triggers move them into a state of sympathetic activation and perception of danger, and what glimmers help restore them to a state of safety, hope, and growth.

The nervous system is relational in nature and Dana describes how therapists can help clients build their capacity for connection, reciprocity and repair: “When a rupture in the therapeutic relationship occurs, look for the moment when the work became too big of an autonomic challenge, name it for your clients, and take responsibility for the misattunement.”

Ruptures, much like trauma itself, can be opportunities for change, growth, and a deeper understanding. While the experience can feel uncertain and the path unknown, the ability to intertwine states and disrupt the all-or-nothing responses so common in trauma is crucial to experiencing play, intimacy, awe, and elevation.

Psych Central

A BEGINNER’S GUIDE TO POLYVAGAL THEORY

Deb Dana

We come into the world wired to connect. With our first breath, we embark on a lifelong quest to feel safe in our bodies, in our environments, and in our relationships with others. The autonomic nervous system is our personal surveillance system, always on guard, asking the question “Is this safe?” Its goal is to protect us by sensing safety and risk, listening moment by moment to what is happening in and around our bodies and in the connections we have to others.

This listening happens far below awareness and far away from our conscious control. Recognizing that this kind of detection is not the awareness that comes with conscious perception, Dr. Porges coined the term neuroception to describe the way our autonomic nervous system scans for cues of safety, danger, and life-threat without involving the thinking parts of our brain.

Because we humans are meaning-making beings, what begins as the wordless experiencing of neuroception drives the creation of a story that shapes our daily living.

The Autonomic Nervous System

The autonomic nervous system is made up of two main branches, the sympathetic and the parasympathetic, and responds to signals and sensations via three pathways, each with a characteristic pattern of response. Through each of these pathways, we react “in service of survival.”

The sympathetic branch is found in the middle part of the spinal cord and represents the pathway that prepares us for action. It responds to cues of danger and triggers the release of adrenaline, which fuels the fight-or-flight response.

In the parasympathetic branch, Polyvagal Theory focuses on two pathways traveling within a nerve called the vagus. Vagus, meaning “wanderer,” is aptly named. From the brain stem at the base of the skull, the vagus travels in two directions: downward through the lungs, heart, diaphragm, and stomach and upward to connect with nerves in the neck, throat, eyes, and ears.

The vagus is divided into two parts: the ventral vagal pathway and the dorsal vagal pathway. The ventral vagal pathway responds to cues of safety and supports feelings of being safely engaged and socially connected. In contrast, the dorsal vagal pathway responds to cues of extreme danger. It takes us out of connection, out of awareness, and into a protective state of collapse. When we feel frozen, numb, or “not here,” the dorsal vagus has taken control.

Dr. Porges identified a hierarchy of response built into our autonomic nervous system and anchored in the evolutionary development of our species. The dorsal vagal pathway of the parasympathetic branch, with its immobilization response, originated with our ancient vertebrate ancestors and is the oldest pathway. The sympathetic branch, with its pattern of mobilization, was next to develop. The most recent addition, the ventral vagal pathway of the parasympathetic branch, brings patterns of social engagement that are unique to mammals.

When we are firmly grounded in our ventral vagal pathway, we feel safe and connected, calm and social. A sense (neuroception) of danger can trigger us out of this state and backwards on the evolutionary timeline into the sympathetic branch. Here we are mobilized to respond and take action. Taking action can help us return to the safe and social state. It is when we feel as though we are trapped and can’t escape the danger that the dorsal vagal pathway pulls us all the way back to our evolutionary beginnings. In this state we are immobilized. We shut down to survive. From here, it is a long way back to feeling safe and social and a painful path to follow.

The Autonomic Ladder

Let’s translate our basic knowledge of the autonomic nervous system into everyday understanding by imagining the autonomic nervous system as a ladder. How do our experiences change as we move up and down the ladder?

The Top of the Ladder

What would it feel like to be safe and warm? Arms strong but gentle. Snuggled close, joined by tears and laughter. Free to share, to stay, to leave . . .

Safety and connection are guided by the evolutionarily newest part of the autonomic nervous system. Our social engagement system is active in the ventral vagal pathway of the parasympathetic branch. In this state, our heart rate is regulated, our breath is full, we take in the faces of friends, and we can tune in to conversations and tune out distracting noises.

We see the “big picture” and connect to the world and the people in it. I might describe myself as happy, active, and interested, and the world as safe, fun, and peaceful. From this ventral vagal place at the top of the autonomic ladder, I am connected to my experiences and can reach out to others. Some of the daily living experiences of this state include being organized, following through with plans, taking care of myself, taking time to play, doing things with others, feeling productive at work, and having a general feeling of regulation and a sense of management. Health benefits include a healthy heart, regulated blood pressure, a healthy immune system that decreases my vulnerability to illness, good digestion, quality sleep, and an overall sense of well-being.

Moving Down the Ladder

Fear is whispering to me and I feel the power of its message. Move, take action, escape. No one can be trusted. No place is safe . . .

The sympathetic branch of the autonomic nervous system activates when we feel a stirring of unease, when something triggers a neuroception of danger. We go into action. Fight or flight happens here. In this state, our heart rate speeds up, our breath is short and shallow, we scan our environment looking for danger, we are “on the move.” I might describe myself as anxious or angry and feel the rush of adrenaline that makes it hard for me to be still. I am listening for sounds of danger and don’t hear the sounds of friendly voices. The world may feel dangerous, chaotic, and unfriendly.

From this place of sympathetic mobilization, a step down the autonomic ladder and backward on the evolutionary timeline, I may believe, “The world is a dangerous place and I need to protect myself from harm.”

Some of the daily living problems can be anxiety, panic attacks, anger, inability to focus or follow through, and distress in relationships. Health consequences can include heart disease; high blood pressure; high cholesterol; sleep problems; weight gain; memory impairment; headache; chronic neck, shoulder, and back tension; stomach problems; and increased vulnerability to illness.

The Bottom of the Ladder

I’m far away in a dark and forbidding place. I make no sound. I am small and silent and barely breathing. Alone where no one will ever find me . . .

Our oldest pathway of response, the dorsal vagal pathway of the parasympathetic branch, is the path of last resort. When all else fails, when we are trapped and action taking doesn’t work, the “primitive vagus” takes us into shutdown, collapse, and dissociation.

Here at the very bottom of the autonomic ladder, I am alone with my despair and escape into not knowing, not feeling, almost a sense of not being. I might describe myself as hopeless, abandoned, foggy, too tired to think or act and the world as empty, dead, and dark.

From this earliest place on the evolutionary timeline, where my mind and body have moved into conservation mode, I may believe, “I am lost and no one will ever find me.”

Some of the daily living problems can be dissociation, problems with memory, depression, isolation, and no energy for the tasks of daily living. Health consequences of this state can include chronic fatigue, fibromyalgia, stomach problems, low blood pressure, type 2 diabetes, and weight gain.

The Polyvagal Theory in Therapy: Engaging the Rhythm of Regulation

Polyvagal Theory presented in client-friendly language.

Deb Dana

This book offers therapists an integrated approach to adding a polyvagal foundation to their work with clients. With clear explanations of the organizing principles of Polyvagal Theory, this complex theory is translated into clinician- and client-friendly language. Using a unique autonomic mapping process along with worksheets designed to effectively track autonomic response patterns, this book presents practical ways to work with clients’ experiences of connection. Through exercises that have been specifically created to engage the regulating capacities of the ventral vagal system, therapists are given tools to help clients reshape their autonomic nervous systems.

Adding a polyvagal perspective to clinical practice draws the autonomic nervous system directly into the work of therapy, helping clients re-pattern their nervous systems, build capacities for regulation, and create autonomic pathways of safety and connection. With chapters that build confidence in understanding Polyvagal Theory, chapters that introduce worksheets for mapping, tracking, and practices for repatterning, as well as a series of autonomic meditations, this book offers therapists a guide to practicing polyvagal-informed therapy.

The Polyvagal Theory in Therapy is essential reading for therapists who work with trauma and those who seek an easy and accessible way of understanding the significance that Polyvagal Theory has to clinical work.

FOREWORD

By Stephen W. Porges

Since Polyvagal Theory emerged in 1994, I have been on a personal journey expanding the clinical applications of the theory. The journey has moved Polyvagal concepts and constructs from the constraints of the laboratory to the clinic where therapists apply innovative interventions to enhance and optimize human experiences.

Initially, the explanatory power of the theory provided therapists with a language to help their clients reframe reactions to traumatic events. With the theory, clients were able to understand the adaptive functions of their reactions.

As insightful and compassionate therapists conveyed the elements of the theory to their clients, survivors of trauma began to reframe their experiences and their personal narratives shifted to feeling heroic and not victimized.

The theory had its foundation in laboratory science, moved into applied research to decipher the neurobiological mechanisms of psychiatric disorders, and now through the insights of Deb Dana and other therapists is informing clinical treatment.

The journey from laboratory to clinic started on October 8, 1994, in Atlanta, when Polyvagal Theory was unveiled to the scientific community in my presidential address to the Society for Psychophysiological Research. A few months later the theory was disseminated as a publication in the society’s journal, Psychophysiology (Porges, 1995). The article was titled “Orienting in a Defensive World: Mammalian Modifications of Our Evolutionary Heritage. A Polyvagal Theory.” The title, crafted to cryptically encode several features of the theory, was intended to emphasize that mammals had evolved in a hostile environment in which survival was dependent on their ability to down regulate states of defense with states of safety and trust, states that supported cooperative behavior and health.

In 1994 I was totally unaware that clinicians would embrace the theory. I did not anticipate its importance in understanding trauma-related experiences. Being a scientist, and not a clinician, my interests were focused on understanding how the autonomic nervous system influenced mental, behavioral, and physiological processes. My clinical interests were limited to obstetrics and neonatology with a focus on monitoring health risk during delivery and the first days of life. Consistent with the demands and rewards of being an academic researcher, my interests were directed at mechanisms.

In my most optimistic dreams of application, I thought my work might evolve into novel assessments of autonomic function. In the early 1990s I was not interested in emotion, social behavior, and the importance of social interactions on health and the regulation of the autonomic nervous system; I seldom thought of my research leading to strategies of intervention.

After the publication of the Polyvagal Theory, I became curious about the features of individuals with several psychiatric diagnoses. I noticed that research was reliably demonstrating depressed cardiac vagal tone (i.e., respiratory sinus arrhythmia and other measures of heart rate variability) and atypical vagal regulation of the heart in response to challenges. I also noticed that many psychiatric disorders seem to share symptoms that could be explained as a depressed or dysfunctional Social Engagement System with features expressed in auditory hypersensitivities, auditory processing difficulties, flat facial affect, poor gaze, and a lack of prosody.

This curiosity led to an expanded research program in which I conducted studies evaluating clinical groups (e.g., autism, selective mutism, HIV, PTSD, Fragile X syndrome, borderline personality disorder, women with abuse histories, children who stutter, preterm infants). In these studies Polyvagal Theory was used to explain the findings and confirm that many psychiatric disorders were manifest in a dysfunction of the ‘ventral’ vagal complex, which included lower cardiac vagal tone and the associated depressed function of the striated muscles of the face and head resulting in flat facial affect and lack of prosody.

In 2011 the studies investigating clinical populations were summarized in a book published by Norton, The Polyvagal Theory: Neurophysiological Foundations of Emotions, Attachment, Communication, and Self-Regulation.

The publication enabled Polyvagal Theory to become accessible to clinicians; the theory was no longer limited to the digital libraries linked to universities and research institutes. The publication of the book stimulated great interest within the clinical community and especially with traumatologists. I had not anticipated that the main impact of the theory would be to provide plausible neurophysiological explanations for experiences described by individuals who had experienced trauma. For these individuals, the theory provided an understanding of how, after experiencing life threat, their neural reactions were retuned towards a defensive bias and they lost the resilience to return to a state of safety.

This prompted invitations to talk at clinically oriented meetings and to conduct workshops on Polyvagal Theory for clinicians. During the past few years, there has been an expanding awareness of Polyvagal Theory across several clinical areas. This welcoming by the clinical community identified limitations in my knowledge. Although I could talk to clinicians and deconstruct their presentations of clinical cases into constructs described by the theory, I was not a clinician. I was limited in how I related the theory to clinical diagnosis, treatment, and outcome.

During this period, I met Deb Dana. Deb is a talented therapist with astute insights into trauma and a desire to integrate Polyvagal Theory into clinical treatment. For Deb, Polyvagal Theory provided a language of the body that paralleled her feelings and intuitive connectedness with her clients. The theory provided a syntax to label her and her client’s experiences, which were substantiated by documented neural mechanisms.

Functionally, the theory became a lens or a perspective in how she supported her clients and how she reacted to her clients.

The theory transformed the client’s narrative from a documentary to a pragmatic quest for safety with an implicit bodily drive to survive.

As the theory infused her clinical model, she began to develop a methodology to train other therapists. The product of this transition is the current book. In The Polyvagal Theory in Therapy, Deb Dana brilliantly transforms a neurobiologically based theory into clinical practice and Polyvagal Theory comes alive.

INTRODUCTION

Deb Dana

When I teach Polyvagal Theory to colleagues and clients, I tell them they are learning about the science of safety, the science of feeling safe enough to fall in love with life and take the risks of living. Polyvagal Theory provides a physiological and psychological understanding of how and why clients move through a continual cycle of mobilization, disconnection, and engagement.

Through the lens of Polyvagal Theory, we see the role of the autonomic nervous system as it shapes clients’ experiences of safety and affects their ability for connection.

The autonomic nervous system responds to the challenges of daily life by telling us not what we are or who we are but how we are. The autonomic nervous system manages risk and creates patterns of connection by changing our physiological state. These shifts are slight for many people, and, in the moments when large state changes happen, their system is resilient enough to help them return to a regulated state.

Trauma interrupts the process of building the autonomic circuitry of safe connection and sidetracks the development of regulation and resilience.

Clients with trauma histories often experience more intense, extreme autonomic responses, which affect their ability to regulate and feel safe in relationships. Polyvagal Theory helps therapists understand that the behaviors of their clients are autonomic actions in service of survival, adaptive responses ingrained in a survival story that is entered into automatically.

Trauma compromises our ability to engage with others by replacing patterns of connection with patterns of protection. If unresolved, these early adaptive survival responses become habitual autonomic patterns. Therapy through a polyvagal lens supports clients in repatterning the ways their autonomic nervous systems operate when the drive to survive competes with the longing to connect with others.

This book is designed to help you bring Polyvagal Theory into your therapy practice. It provides a comprehensive approach to intervention by presenting ways to map autonomic response and shape the autonomic nervous system for safety. With this book, you will learn Polyvagal Theory and use worksheets and experiential exercises to apply that knowledge to the nuts and bolts of practice.

Section I, “Befriending the Nervous System,” introduces the science of connection and creates basic fluency in the language of Polyvagal Theory. These chapters present the essential elements of Polyvagal Theory, building a solid foundation of knowledge and setting the stage for work with the clinical applications presented in the remainder of the book.

Section II, “Mapping the Nervous System,” focuses on learning to recognize patterns of response. The worksheets presented in these chapters create the ability to predictably identify individual placement along the autonomic hierarchy.

Section III, “Navigating the Nervous System,” builds on the newly gained expertise in identifying autonomic states and adds the next steps in the process: learning to track response patterns, recognize triggers, and identify regulating resources. A variety of “attending” practices are presented to support a new way of attuning to patterns of action, disconnection, and engagement.

Section IV, “Shaping the Nervous System,” explores the use of passive and active pathways to tone the autonomic nervous system and reshape it toward increased flexibility of response. These chapters offer ways to engage the regulating capacities of the ventral vagal system through both in-the-moment interventions and practices that begin to shift the system toward finding safety in connection.

Through the ideas presented in this book, you will discover how using Polyvagal Theory in therapy will increase the effectiveness of your clinical work with trauma survivors. In this process, not only will your therapy practice change, but also your way of seeing and being in the world will change.

My personal experience, and my experience teaching Polyvagal Theory to therapists and clients, is that there is a “before-and-after” quality to learning this theory. Once you understand the role of the autonomic nervous system in shaping our lives, you can never again not see the world through that lens.

SECTION I

BEFRIENDING THE NERVOUS SYSTEM

“The greatest thing then, in all education, is to make our nervous system our ally as opposed to our enemy.” WILLIAM JAMES

If you do a Google search for “Polyvagal Theory,” more than 500,000 results pop up, and if you search for “Stephen Porges,” more than 150,000 results appear. Polyvagal Theory has made a remarkable journey from a relatively unknown and controversial theory to its wide acceptance today in the field of psychotherapy.

Polyvagal Theory traces its origins to 1969 and Dr. Porges’s early work with heart rate variability and his “vision that monitoring physiological state would be a helpful guide to the therapist during the clinical interaction” (Porges, 2011a, p. 2). As Dr. Porges wrote, at that time he “looked forward to new discoveries applying these technologies to clinical populations. I had no intention of developing a theory” (p. 5). Polyvagal Theory was born out of the question of how one nerve, the vagus nerve, and its tone, which Dr. Porges was measuring, could be both a marker of resilience and a risk factor for newborns. Through solving this puzzle, now known as the vagal paradox, Dr. Porges created Polyvagal Theory.

Three organizing principles are at the heart of Polyvagal Theory.

Hierarchy: The autonomic nervous system responds to sensations in the body and signals from the environment through three pathways of response. These pathways work in a specified order and respond to challenges in predictable ways. The three pathways (and their patterns of response), in evolutionary order from oldest to newest, are the dorsal vagus (immobilization), the sympathetic nervous system (mobilization), and the ventral vagus (social engagement and connection).

Neuroception: This is the term coined by Dr. Porges to describe the ways our autonomic nervous system responds to cues of safety, danger, and life-threat from within our bodies, in the world around us, and in our connections to others. Different from perception, this is “detection without awareness” (Porges, n.d.), a subcortical experience happening far below the realm of conscious thought.

Co-regulation: Polyvagal Theory identifies co-regulation as a biological imperative: a need that must be met to sustain life. It is through reciprocal regulation of our autonomic states that we feel safe to move into connection and create trusting relationships.

We can think of the autonomic nervous system as the foundation upon which our lived experience is built. This biological resource (Kok et al., 2013) is the neural platform that is beneath every experience. How we move through the world, turning toward, backing away, sometimes connecting and other times isolating, is guided by the autonomic nervous system. Supported by co-regulating relationships, we become resilient. In relationships awash in experiences of misattunement, we become masters of survival. In each of our relationships, the autonomic nervous system is “learning” about the world and being toned toward habits of connection or protection.

Hopefulness lies in knowing that while early experiences shape the nervous system, ongoing experiences can reshape it. Just as the brain is continually changing in response to experiences and the environment, our autonomic nervous system is likewise engaged and can be intentionally influenced.

As individual nervous systems reach out for contact and co-regulation, incidents of resonance and misattunement are experienced as moments of connection or moments of protection. The signals conveyed, the cues of safety or danger sent from one autonomic nervous system to another, invite regulation or increase reactivity. In work with couples, it is easy to observe the increased reactivity that occurs when a disagreement quickly escalates and cues of danger communicated between the two nervous systems trigger each partner’s need for protection. In contrast, the attunement of the therapist-client relationship relays signals of safety and an autonomic invitation for connection.

Humans are driven to want to understand the “why” of behaviors. We attribute motivation and intent and assign blame. Society judges trauma survivors by their actions in times of crisis.

We still too often blame the victim if they didn’t fight or try to escape but instead collapsed into submission. We make a judgment about what someone did that leads to a belief about who they are. Trauma survivors themselves often think “It’s my fault” and have a harsh inner critic who mirrors society’s response.

In our daily interactions with family, friends, colleagues, and even the casual exchanges with strangers that define our days, we evaluate others by the ways they engage with us.

Polyvagal Theory gives therapists a neurophysiological framework to consider the reasons why people act in the ways they do. Through a polyvagal lens, we understand that actions are automatic and adaptive, generated by the autonomic nervous system well below the level of conscious awareness. This is not the brain making a cognitive choice. These are autonomic energies moving in patterns of protection. And with this new awareness, the door opens to compassion.

A working principle of the autonomic nervous system is “every response is an action in service of survival.” No matter how incongruous an action may look from the outside, from an autonomic perspective it is always an adaptive survival response. The autonomic nervous system doesn’t make a judgment about good and bad; it simply acts to manage risk and seek safety. Helping clients appreciate the protective intent of their autonomic responses begins to reduce the shame and self-blame that trauma survivors so often feel. When offered the lens of Polyvagal Theory, clients become curious about the cues of safety and danger their nervous systems are sensing and begin to understand their responses as courageous survival responses that can be held with compassion.

Trauma-trained therapists are taught that a foundation of effective work is understanding “perception is more important than reality.” Personal perception, not the actual facts of an experience, creates posttraumatic consequences.

Polyvagal Theory demonstrates that even before the brain makes meaning of an incident, the autonomic nervous system has assessed the environment and initiated an adaptive survival response. Neuroception precedes perception. Story follows state.

Through a polyvagal framework, the important question “What happened?” is explored not to document the details of an event but to learn about the autonomic response. The clues to a client’s present-time suffering can be found in their autonomic response history.

The goal of therapy is to engage the resources of the ventral vagus to recruit the circuits that support the prosocial behaviors of the Social Engagement System (Porges, 2009a, 2015a). The Social Engagement System is our “face-heart” connection, created from the linking of the ventral vagus (heart) and the striated muscles in our face and head that control how we look (facial expressions), how we listen (auditory), and how we speak (vocalization) (Porges, 2017a). In our interactions it is through the Social Engagement System that we send and search for cues of safety. In both the therapy setting and the therapy session, creating the conditions for a physiological state that supports an active Social Engagement System is a necessary element. “If we are not safe, we are chronically in a state of evaluation and defensiveness” (Porges, 2011b, p. 14). It is a ventral vagal state and a neuroception of safety that bring the possibility for connection, curiosity, and change. A polyvagal approach to therapy follows the four R’s:

– Recognize the autonomic state.

– Respect the adaptive survival response.

– Regulate or co-regulate into a ventral vagal state.

– Re-story.

. . .

*

from

The Polyvagal Theory in Therapy. Engaging the Rhythm of Regulation

by Deb Dana

get it at Amazon.com

See also:

The Vagus Nerve. Stimulating the pathway connecting body and brain can change patients’ lives – Zoe Fisher and Andrew H Kemp.

BEING DEAD BUT YET ALIVE. The psychological secrets of suicide – Britt Mann * A Very Human Ending: How Suicide Haunts Our Species – Jesse Bering.

There’s a tipping point where the agony of living becomes worse than the pain of dying. Many of us would rather go to our graves keeping up appearances than reveal we’re secretly coming undone. We are the only species on earth that deliberately ends its own life. Depression is a secret tomb that no one sees but you, being dead but yet alive.

Statistically we’re far more likely to perish intentionally by our own hand than to die of causes that are more obviously outside of our control. In fact, historically, suicide has accounted for more deaths than all wars and homicides combined.

“Never kill yourself while you are suicidal.” Edwin Shneidman, suicidologist

The suicidal mind is cognitively distorted, and unreliable when it comes to intelligent decision making. As such, waiting out a dark night of the soul, especially if you’re a teenager, a demographic more likely to kill themselves impulsively, can yield a brighter tomorrow.
Even if the act of killing oneself could be considered rational, the “tremendous urge” to do so rarely lasts longer than 24 hours.

Understanding suicidal urges, from a scientific perspective, can keep many people alive, at least in the short term. My hope is that knowing how it all works will help us to short-circuit the powerful impetus to die when things look calamitous.

It’s that everyday person dealing with suicidal thoughts, the suicidal person in all of us, who is the main subject of this book.

American writer and research psychologist Jesse Bering was considering taking his own life before he was offered a job in New Zealand.

Bering found himself fantasising about a tree near his house in upstate New York, which had a particular bough “crooked as an elbow” that seemed a perfect place from which to hang himself.
So goes the opening anecdote in his latest book, A Very Human Ending: How Suicide Haunts Our Species.

In New Zealand, his desire to die has subsided, but the spectre of suicide still emits a “low hum” in his life. His new book explores why people decide to kill themselves, born from a need to understand his own psyche, and prompt those on the edge to think twice before stepping off.

“The best predictor of future behaviour is past behaviour, and unfortunately that’s the case with suicidal thinking and especially suicide attempts. The likelihood of me being in that state again is pretty high… I think of the book as this is me having a conversation with my future self, to talk me out of this.”

Stuff.co.nz

A Very Human Ending: How Suicide Haunts Our Species

Jesse Bering

‘This book touches on some deep questions relevant to us all… A fascinating, thoughtful, unflinching meditation on one of the most intriguing and curious aspects of the human condition.‘ Dr Frank Tallis

Why do people want to kill themselves? Despite the prevalence of suicide in the developed world, it’s a question most of us fail to ask. On hearing news of a suicide we are devastated, but overwhelmingly we feel disbelief.

In A Very Human Ending, research psychologist Jesse Bering lifts the lid on this taboo subject, examining the suicidal mindset from the inside out to reveal the subtle tricks the mind can play when we’re easy emotional prey. In raising challenging questions Bering tests our contradictory superstitions about the act itself.

Combining cutting-edge research with investigative journalism and first-person testimony, Bering also addresses the history of suicide and its evolutionary inheritance to offer a personal, accessible, yet scientifically sound examination of why we are the only species on earth that deliberately ends its own life.

This penetrating analysis aims to demystify a subject that knows no cultural or demographic boundaries.

FOR THE SUICIDAL PERSON IN ALL OF US

And so far forth death’s terror doth affright,

He makes away himself, and hates the light

To make an end of fear and grief of heart,

He voluntarily dies to ease his smart.

Robert Burton, The Anatomy of Melancholy (1621)

Given the sensitive nature of the material in this book, I have not used any real names (unless otherwise stated), and I have changed physical descriptions, locations, and other features to ensure that no one is identifiable and their story is protected. This is because this is not a book about the individuals I have described, but about what we can learn from them and how they shape our lives.

1

the call to oblivion

“Just as life had been strange a few minutes before, so death was now as strange. The moth having righted himself now lay most decently and uncomplainingly composed. O yes, he seemed to say, death is stronger than I am.” Virginia Woolf, “The Death of the Moth” (1942)

Just behind my former home in upstate New York, in a small, dense pocket of woods, stood an imposing lichen-covered oak tree built by a century of sun and dampness and frost, its hardened veins crisscrossing on the forest floor. It was just one of many such specimens in this copse of dappled shadows, birds, and well-worn deer tracks, but this particular tree held out a single giant limb crooked as an elbow, a branch so deliberately poised that whenever I’d stroll past it while out with the dogs on our morning walks, it beckoned me.

It was the perfect place, I thought, to hang myself.

I’d had fleeting suicidal feelings since my late teenage years. But now I was being haunted day and night by what was, in fact, a not altogether displeasing image of my corpse spinning ever so slowly from a rope tied around this creaking, pain-relieving branch. It’s an absurd thought, that I could have observed my own dead body as if I’d casually stumbled upon it. And what good would my death serve if it meant having to view it through the eyes of the very same head that I so desperately wanted to escape from in the first place?

Nonetheless, I couldn’t help but fixate on this hypothetical scene of the lifeless, pirouetting dummy, this discarded sad sack whose long-suffering owner had been liberated from a world in which he didn’t truly belong.

Globally, a million people a year kill themselves, and many times that number try to do so. That’s probably a hugely conservative estimate, too; for reasons such as stigma and prohibitive insurance claims, suicides and attempts are notoriously underreported when it comes to the official statistics. Roughly, though, these figures translate to the fact that someone takes their own life every forty seconds. Between now and the time you finish reading the next paragraph, someone, somewhere, will decide that death is a more welcoming prospect than breathing another breath in this world and will permanently remove themselves from the population.

The specific issues leading any given person to become suicidal are as different, of course, as their DNA, involving chains of events that one expert calls “dizzying in their variety”, but that doesn’t mean there aren’t common currents pushing one toward this fatal act. We’re going to get a handle on those elusive themes in this book and, ultimately, begin to make sense of what remains one of the greatest riddles of all time: Why would an otherwise healthy person, someone even in the prime of their life, “go against nature” by hastening their death? After all, on the surface, suicide wouldn’t appear to be a very smart Darwinian tactic, given that being alive would seem to be the first order of business when it comes to survival of the fittest.

But like most scientific questions, it turns out it’s a little more complicated than that.

We won’t be dealing here with “doctor-assisted suicide” or medical euthanasia, what Derek Humphrey in Final Exit regarded as “not suicide [but] self-deliverance, thoughtful, accelerated death to avoid further suffering from a physical disease.” I consider such merciful instances of death almost always to be ethical and humane. Instead, we’ll be focusing in the present book on those self-killings precipitated by fleeting or ongoing mental distress, namely, those that aren’t the obvious result of physical pain or infirmity.

Our primary analysis will center on the suicides of otherwise normal folks battling periodic depression or who suddenly find themselves in unexpected and overwhelming social circumstances. Plenty of suicides are linked to major psychiatric conditions (in which the person has a tenuous grasp of reality, such as in schizophrenia), but plenty aren’t. And it’s that everyday person dealing with suicidal thoughts, the suicidal person in all of us, who is the main subject of this book.

Benjamin Franklin famously quipped that “nine men in ten are would-be suicides.” Maybe so, but some of us will lapse into this state more readily. It’s now believed that around 43 percent of the variability in suicidal behavior among the general population can be explained by genetics, while the remaining 57 percent is attributable to environmental factors. When people who have a genetic predisposition for suicidality find themselves assaulted by a barrage of challenging life events, they are particularly vulnerable.

The catchall mental illness explanation only takes us so far. The vast majority of those who die by suicide, with some estimates as high as 90 percent, have underlying psychiatric conditions, especially mood disorders such as depressive illness and bipolar disorder. (I have frequently battled the former, coupled with social anxiety.) But it’s also true that not everyone with depression is suicidal, nor, believe it or not, is everyone who commits suicide depressed. According to one estimate, around 5 percent of depressed people will die by suicide, but about half a percent of the nondepressed population will end up taking their own lives too.

As for my own recurring compulsion to end my life, which flares up like a sore tooth at the whims of bad fortune, subsides for a while, yet always threatens to throb again, the types of problems that trigger these dangerous desires change over time. Edwin Shneidman, the famous suicidologist, yes, that’s an actual occupation, had an apt term for this acute, intolerable feeling that makes people want to die: “psychache,” he called it. It’s like what Winona Ryder’s character in the film Girl, Interrupted said after throwing back a fistful of aspirin in a botched suicide attempt: she just wanted “to make the shit stop.” And like a toothache, which can be set off by any number of packaged treats at our fingertips, psychache can be caused by an almost unlimited number of things in our modern world.

What made me suicidal as a teenager, the ever-looming prospect of being outed as gay in an intolerant small midwestern town, isn’t what pushes those despairing buttons in me now. I’ve been out of the closet for twenty years and with my partner, Juan, for over a decade. I do sometimes still wince at the memory of my adolescent fear regarding my sexual orientation, but the constant worry and anxiety about being forced prematurely out of the closet are gone now.

Still, other seemingly unsolvable problems continue to crop up as a matter of course.

“Psychache”

Psychache is a term first used by pioneer suicidologist Edwin Shneidman to refer to psychological pain that has become unbearable. The pain is deeper and more vicious than depression, although depression may be present as well.

What drew me to those woods behind my house not so long ago was my unemployment. I was sorely unprepared for it. Not long before, I’d enjoyed a fairly high status in the academic world. Frankly, I was spoiled. And lucky. That part I didn’t realize until much later. I’d gotten my first faculty position at the University of Arkansas straight out of grad school. Then, at the age of thirty, I moved to Northern Ireland, where I ran my own research center for several years at the Queen’s University Belfast.

Somewhere along the way, though, my scholarly ambitions began to wear thin.

It was a classic case of career burnout. By the time I was thirty-five, I’d already done most of what I’d set out to do: I was publishing in the best journals, speaking at conferences all over the world, scoring big grants, and writing about my research (in religion and psychology) for popular outlets. If I were smart, I’d have kept my nose to the grindstone. Instead, I grew restless. “Now what?” I asked myself.

The prospect of doing slight iterations of the same studies over and over became a nightmare, the academic’s equivalent of being stuck in a never-ending time loop. Besides, although controversial issues like religion are never definitively settled, I’d already answered my main research question, at least to my own satisfaction. (Question: “What are the odds that religious ideas are a product of the human mind?” Answer: “Pretty darn high.”)

With my professorial aspirations languishing, I began devoting more and more time to writing popular science essays for outfits such as Scientific American, Slate, Playboy, and a few others. My shtick was covering the salacious science beat. If you’d ever wondered about the relationship between gorilla fur, crab lice, and human pubic hair, about the mysterious psychopharmacological properties of semen, or why our species’ peculiar penis is shaped like it is, I was your man. In fact, I wrote that very book: Why Is the Penis Shaped Like That?

The next book I was to write had an even more squirm-inducing title: Perv: The Sexual Deviant in All of Us. Ever wonder why amputees turn on some folks, others can’t keep from having an orgasm when an attractive passerby lapses into a sneezing fit, or why women are generally kinkier than men? Again, I was your clickable go-to source.

Now, perhaps I should have thought more about how, in a conservative and unforgiving academic world, such subject matter would link my name inexorably with unspeakable things. Sure, my articles got page clicks. My books made you blush at Barnes & Noble. But these titles aren’t exactly ones that university deans and provosts like to boast about to donors. Once you go public with the story of how you masturbated as a teenager to a wax statue of an anatomically correct Neanderthal (I swear it made sense in context), there is no going back. You can pretty much forget about ever getting inducted into the Royal Society. “Oh good riddance,” I thought. Being finally free to write in a manner that suited me, and with my very own soapbox to say the things I’d long wanted to say about society’s soul-crushing hypocrisy, was incredibly appealing.

There was also the money. I wasn’t getting rich, but I’d earned large enough advances with my book deals to quit my academic job, book a one-way ticket from Belfast back to the U.S., and put a deposit down on an idyllic little cottage next to a babbling brook just outside of Ithaca. Back then, the dark patch of forest behind the house didn’t seem so sinister; it was just a great place to walk our two border terriers, Gulliver and Uma, our rambunctious Irish imports. The whole domestic setting seemed the perfect little place to build the perfect little writing life, a fairy tale built on the foundations of other people’s “deviant” sexualities.

You can probably see where this is heading. Juan, the more practical of us, raised his eyebrows early on over such an impulsive and drastic career move. By that I mean he was resolutely set against it. “What are you going to do after you finish the book?” he’d ask, sensing doom on the horizon.

“Write another book I guess. Maybe do freelance. I can always go back to teaching, right? C’mon, don’t be such a pessimist!”

“I don’t know,” Juan would say worriedly. But he also realized how unhappy I was in Northern Ireland, so he went along, grudgingly, with my loosely laid plans.

I wouldn’t say my fall from grace was spectacular. But it was close. If nothing else, it was deeply embarrassing. It’s hard to talk about it even now that I’m, literally, out of the woods.

That’s the thing. Much of what makes people suicidal is hard to talk about. Shame plays a major role. Even suicide notes, as we’ll learn, don’t always clue us in to the real reason someone opts out of existence. (Forgive the glib euphemisms; there are only so many times one can write the word “suicide” without expecting readers’ eyes to glaze over.) If I’ll be asking others in this book to be honest about their feelings, though, it would be unfair for me to hide the reasons for my own self-loathing and sense of irredeemable failure during this dark period.

It’s often at our very lowest that we cling most desperately to our points of pride, as though we’re trying to convince not only others, but also ourselves, that we still have value.

Once, long ago, when I was about twenty, I met an old man of about ninety who carried around with him an ancient yellowed letter everywhere he went. People called him “the Judge.”

“I want to show you something, young man,” he said to me after a dinner party, reaching a shaky hand into his vest pocket to retrieve the letter. “See that?” he asked, beaming. A twisted arthritic finger was pointing to a typewritten line from the Prohibition era. As I tried to make sense of the words on the page, he studied my gaze under his watery pink lids to be sure it was really sinking in. “It’s a commendation from Franklin D. Roosevelt, the governor of New York back then. Says here, see, says right here I was the youngest Supreme Court Justice in the state. Twenty. Eight. Years. Old.” With each punctuated word, he gave the paper a firm tap. “Whaddaya think of that?”

“That’s incredibly impressive,” I said.

And it was. In fact, I remember being envious of him. Not because of his accomplished legal career, but because, as I so often have been in my life, I was suicidal at the time; and unlike me, he hadn’t long to go before slipping gently off into that good night.

One of the cruelest tricks played on the genuinely suicidal mind is that time slows to a crawl. When each new dawn welcomes what feels like an eternity of mental anguish, the yawning expanse between youth and old age might as well be interminable Hell itself.

But the point is that when we’re thrown against our wishes into a liminal state, that reluctant space between activity and senescence, employed and unemployed, married and single, closeted and out, citizen and prisoner, wife and widow, healthy person and patient, wealthy and broke, celebrity and has-been, and so on, it’s natural to take refuge in the glorified past of our previous selves. And to try to remind others of this eclipsed identity as well.

Alas, it’s a lost cause. Deep down, we know there’s no going back. Our identities have changed permanently in the minds of others. In the real world (the one whose axis doesn’t turn on cheap clichés and self-help canons about other people’s opinions of us not mattering), we’re inextricably woven into the fabric of society.

For better or worse, our well-being is hugely dependent on what others think we are.

Social psychologist Roy Baumeister, whom we’ll meet again later on, argues that idealistic life conditions actually heighten suicide risk because they create unreasonable standards for personal happiness. When things get a bit messy, people who have led mostly privileged lives, those seen by society as having it made, have a harder time coping with failures. “A reverse of fortune, as society is constituted,” wrote the eighteenth-century thinker Madame de Staël, “produces a most acute unhappiness, which multiplies itself in a thousand different ways. The most cruel of all, however, is the loss of the rank we occupied in the world. Imagination has as much to do with the past, as with the future, and we form with our possessions an alliance, whose rupture is most grievous.”

Like the Judge, I was dangerously proud of my earlier status. The precipitous drop between my past and my present job footing was discombobulating. I wouldn’t have admitted it then, or even known I was guilty of such a cognitive crime, but I also harbored an unspoken sense of entitlement. Now, I felt like Jean-Baptiste Clamence in The Fall by Albert Camus. In the face of a series of unsettling events, the successful Parisian defense attorney watches as his career, and his entire sense of meaning, goes up in smoke. Only when sifting through the ashes are his biases made clear. “As a result of being showered with blessings,” Clamence observes of his worldview till then,

“I felt, I hesitate to admit, marked out. Personally marked out, among all, for that long uninterrupted success. I refused to attribute that success to my own merits and could not believe that the conjunction in a single person of such different and such extreme virtues was the result of chance alone. This is why in my happy life I felt somehow that that happiness was authorized by some higher decree. When I add that I had no religion you can see even better how extraordinary that conviction was.”

Similarly, what I had long failed to fully appreciate were the many subtle and incalculable forces behind my earlier success, forces that had always been beyond my control. I felt somehow, what is the word, charmed is too strong, more like fatalistic. The reality was that I was like everyone else, simply held upright by the brittle bones of chance. And now, they threatened to give way. I’d worked hard, sure, but again, I’d been lucky. Back when I’d earned my doctoral degree, the economy wasn’t so gloomy and there were actually opportunities. I was also doing research on a hot new topic, my PhD dissertation was on children’s reasoning about the afterlife, and I was eager to make a name for myself in a burgeoning field. Now, eleven years later, having turned my back on the academy, fresh out of book ideas, along with a name pretty much synonymous with penises and pervs, it was a very different story. Career burnout? Please. That’s a luxury for the employed.

I just needed a steady paycheck.

The rational part of my brain assured me that my present dilemma was not the end of the world. Still, the little that remained of my book advance was drying up quickly, and my freelance writing gigs, feverishly busy as they kept me, didn’t pay enough to live on. Juan, who’d been earning his master’s degree in library science, was forced to take on a minimum-wage cashier job at the grocery store. He never said “I told you so.” He didn’t have to.

I knew going in that the grass wouldn’t necessarily be greener on the other side of a staid career, but never did I think it could be scorched earth. That perfect little cottage? It came with a mortgage. We didn’t have kids, but we did have two bright-eyed terriers and a cat named Tommy to feed and care for. Student loans. Taxes. Fuel. Credit cards. Electricity. Did I mention I was an uninsured Type I diabetic on an insulin pump? My blinkered pursuit of freedom to write at any cost was starting to have potentially fatal consequences.

Doing what you love for a living is great. But you know what’s even more fun? Food.

The irrational part of my brain couldn’t see how this state of affairs, which I’d stupidly, selfishly put us into, could possibly turn out well. Things were only going to get worse. Cue visions of foreclosure; confused, sad-faced, whimpering pets torn asunder and kenneled (or worse); loving family members, stretched to the limit already themselves, arguing with each other behind closed doors over how to handle the “situation with Jesse.” Everyone, including me, would be better off without me; I just needed to get the animals placed in a loving home and Juan to start a fresh, unimpeded life back in Santa Fe, where he’d been living when we first met.

“You’re such a loser,” I’d scold myself. “You had it made. Now look at you.”

Asshole though this internal voice could be, it did make some good points. What if that was the rational part of my brain, I began to wonder, and the more optimistic side, the one telling me it was all going to be okay, was delusional? After all, in the fast-moving world of science, I was now a dinosaur. I hadn’t taught or done research for years. I’d also burned a lot of bridges due to my, er, penchant for sensationalism. An air of Schadenfreude, which I’m sure I’d rightfully earned from some of my critics, would soon be palpable.

Overall, I felt like persona non grata among all the proper citizens surrounding me, all those deeply rooted trees that so obviously belonged to this world. Even the weeds had their place. But me? I didn’t belong. I was, in point of fact, simultaneously over- and under-qualified for everything I could think of, saddled with an obscure advanced degree and absolutely no practical skills. And of course I might as well be a registered sex offender with the titles of my books and articles (among the ones I was working on at the time, “The Masturbatory Habits of Priests” and “Erotic Vomiting”). I envied the mailman, the store clerk, the landscaper, anyone with a clear purpose.

Meanwhile, the stark contrast between my private and public life only exacerbated my despondency. From a distance, it would appear that my star was rising. I was giving talks at the Sydney Opera House, being interviewed regularly by NPR and the BBC, and getting profiled in the Guardian and the New York Times. Morgan Freeman featured my earlier work on religion for his show Through the Wormhole. Meanwhile, over in the UK, the British illusionist Derren Brown did the same on his televised specials. My blog at Scientific American was nominated for a Webby Award. Dan Savage, the famous sex advice columnist, tapped me to be his substitute columnist when he went away on vacation for a week. I even did the late-night talk show circuit. Chelsea Handler brazenly asked me, on national television, if I’d have anal sex with her. (I said yes, by the way, but I was just being polite.) A big Hollywood producer acquired the film option rights to one of my Slate articles.

With such exciting things happening in my life, how could I possibly complain, let alone be suicidal? After all, most writers would kill (no pun intended) to attract the sort of publicity I was getting.

“Oh, boo-hoo,” I told myself. “You’ve sure got it rough. Let’s ask one of those new Syrian refugees how they feel about your dire straits, shall we? How about that nice old woman up the road vomiting her guts out from chemo?” A close friend from my childhood had just had a stroke and was posting inspirational status updates on his Twitter account as he learned how to walk again, #trulyblessed. What right did I have to be so unhappy?

This kind of internal self-flagellation, like reading a never-ending scroll of excoriating social media comments projected onto my mind’s eye, only made being me more insufferable. I ambled along for months this way, miserable, smiling like an idiot and popping Prozac, hoping the constant gray drizzle in my brain would lift before the dam finally flooded and I got washed up into the trees behind the house.

No one knew it. At least, not the full extent of it.

From the outside looking in, even to the few close friends I had, things were going swimmingly. “When are you going to be on TV again?” they’d ask. “Where to next on your book tour?” Or “Hey, um, interesting article on the history of autofellatio.”

All was illusion. The truth is these experiences offered little in the way of remuneration. The press didn’t pay. The public speaking didn’t amount to much. And the film still hasn’t been made.

My outward successes only made me feel like an impostor. Less than a week after I appeared as a guest on Conan, I was racking my head trying to think of someone, anyone, who could get me a gun to blow it off. Yet look hard as you might at a recording of that interview from October 16, 2013, and you won’t see a trace of my crippling worry and despair. What does a suicidal person look like? Me, in that Conan interview.

Here’s the trouble. We’re not all ragingly mad, violently unstable, or even obviously depressed. Sometimes, a suicide seems like it comes out of nowhere. But that’s only because so many of us would rather go to our graves keeping up appearances than reveal we’re secretly coming undone.

In response to an article in Scientific American in which I’d shared my personal experiences as a suicidal gay teenager (while keeping my current mental health issues carefully under wraps), one woman wrote to me about the torturous divide between her own public persona and private inner life. “It’s difficult to admit that at age 34,” she explained, “with a young daughter, a graduate degree in history, divorced, and remarried to my high school love, that I’m Googling suicide. But what the world doesn’t see is years of fertility issues, childhood rape, post-traumatic stress disorder, a failing marriage, a custody battle, nonexistent career, mounds of debt, and a general hatred of myself. Depression is a secret tomb that no one sees but you, being dead but yet alive.”

She’s far from alone. There are more people walking around this way, “dead but yet alive,” than anyone realizes.

In my case, being open about my persistent suicidal thoughts just wasn’t something I was willing to do at a time when readers’ perception of me as a good, clearheaded thinker meant the difference between a respectable middle age and moving into my elderly father’s basement to live off cans of SpaghettiOs. Who’d buy a book by an author with a mood disorder, a has-been academic, and a self-confessed sensationalist who can’t stop thinking about killing himself, and take him seriously as an authoritative voice of reason?

I don’t blame anyone for missing the signs. What signs? Anyway, regrettably, I’ve done the same. The man who’d designed my website, a sweet, introverted IT guy also struggling to find a job, overdosed while lying on his couch around this time. His landlord found him three days later with his two cats standing on his chest, meowing. I was unnerved to realize that despite our mutual email pleasantries, we’d both in fact wanted to die.

We’re more intuitive than we give ourselves credit for, but people aren’t mind readers. We come to trust appearances; we forget that others are self-contained universes just like us, and the deep rifts forming at the edges go unnoticed, until another unreachable cosmos “suddenly” collapses. In the semiautobiographical The Book of Disquiet, Fernando Pessoa describes being surprised upon learning that a young shop assistant at the tobacco store had killed himself. “Poor lad,” writes Pessoa, “so he existed too!”

“We had all forgotten that, all of us; we who knew him only about as well as those who didn’t know him at all …. But what is certain is that he had a soul, enough soul to kill himself. Passions? Worries? Of course. But for me, and for the rest of humanity, all that remains is the memory of a foolish smile above a grubby woollen jacket that didn’t fit properly at the shoulders. That is all that remains to me of someone who felt deeply enough to kill himself, because, after all[,] there’s no other reason to kill oneself.”

These dark feelings are inherently social in nature. In the vast majority of cases, people kill themselves because of other people. Social problems, especially a hypervigilant concern with what others think, or will think of us if only they knew what we perceive to be some unpalatable truth, stoke a deadly fire.

Fortunately, suicide isn’t inevitable. As for me, it’s funny how things turned out. (And I mean “funny” in the way a lunatic giggles into his hand, because this entire wayward career experience must have knocked about five years off my life.) Just as things looked most grim, I was offered a job in one of the most beautiful places on the planet: the verdant wild bottom of the South Island in New Zealand. In July 2014 Juan, Gulliver, Uma, Tommy, and I, the whole hairy, harried family, packed up all of our earthly possessions, drove cross-country in a rented van, and flew from Los Angeles to Dunedin, where I’d been hired as the writing coordinator in a new Science Communication department at the University of Otago.

Ironically, I wouldn’t have been much of a candidate had I not devoted a few solid nail-biting years to freelancing. I’ll never disentangle myself from my reputation as a purveyor of pervy knowledge, but the Kiwis took my frank approach to sex with good humor.

Outside our small home on the Otago Peninsula, I’m serenaded by tuis and bellbirds; just up the road, penguins waddle from the shores of an endless ocean each dusk to nest in cliff-side dens, octopuses bobble at the harbor’s edge, while dolphins frolic and giant albatrosses the size of small aircraft soar overhead. At night the Milky Way is so dense and bright against the inky black sky, I can almost reach up and stir it, and every once in a while, the aurora australis, otherwise known as the southern lights, puts on a spectacular multicolored display. The dogs are thriving. The cat is purring. Juan has a great new job.

I therefore whisper this to you as though the cortical gods might conspire against me still: I’m currently “happy” with life.

I use that word happy with trepidation. It defines not a permanent state of being but slippery moments of non-worry. All we can do, really, is try to maximize the occurrence of such anxiety-free moments throughout the course of our lives; a worrisome mind is a place where suicide’s natural breeding ground, depression, spreads like black mold.

Personally, I’m all too conscious of the fact that had things gone this way or that but by a hairbreadth, my own story might just as well have ended years ago at the end of a rope on a tree that grows 8,000 miles away. Whether I’d have gone through with it is hard to say. I don’t enjoy pain, but I certainly wanted to die, and there’s a tipping point where the agony of living becomes worse than the pain of dying. It would be naive of me to assume that just because I called the universe’s bluff back then, my suicidal feelings have been banished for good.

As I write this, I’m forty-two years of age, and so there’s likely plenty of time for those dark impulses to return. Perhaps they’re merely lying in wait for the next unmitigated crisis and will come back with a vengeance. Also, according to some of the science we’ll be examining, I possess almost a full complement of traits that make certain types of people more prone to suicide than others. Impulsive. Check. Perfectionist. Check. Sensitive. Shame-prone. Mood-disordered. Sexual minority. Self-blaming. Check.

Check. Check. Check. Check.

We’re used to safeguarding ourselves against external threats and preparing for unexpected emergencies. We diligently strap on our seat belts every time we get in a car. We lock our doors before bed. Some of us even carry weapons in case we’re attacked by a stranger. Ironic, then, that statistically we’re far more likely to perish intentionally by our own hand than to die of causes that are more obviously outside of our control. In fact, historically, suicide has accounted for more deaths than all wars and homicides combined.

When I get suicidal again, not if, but when, I want to be armed with an up-to-date scientific understanding that allows me to critically analyze my own doomsday thoughts or, at the very least, to be an informed consumer of my own oblivion. I want you to have that same advantage. That’s largely why I have written this book to reveal the psychological secrets of suicide, the tricks our minds play on us when we’re easy emotional prey. It’s also about leaving our own preconceptions aside and instead considering the many different experiences of those who’ve found themselves affected somehow, whether that means getting into the headspaces of people who killed themselves or are actively suicidal, those bereaved by the suicide death of a loved one, researchers who must quarantine their own emotions to study suicide objectively, or those on the grueling front lines of prevention campaigns.

Finally, we’ll be exploring some challenging, but fundamental, questions about how we wrestle with the ethical questions surrounding suicide, and how our intellect is often at odds with our emotions when it comes to weighing the “rationality” of other people’s fatal decisions.

Unlike most books on the subject, this one doesn’t necessarily aim to prevent all suicides. My own position, for lack of a better word, is nuanced. In fact, I tend to agree with the Austrian scholar Josef Popper-Lynkeus, who remarked in his book The Right to Live and the Duty to Die (1878) that, for him, “the knowledge of always being free to determine when or whether to give up one’s life inspires me with the feeling of a new power and gives me a composure comparable to the consciousness of the soldier on the battlefield.”

The trouble is, being emotionally fraught with despair can also distort human decision making in ways that undermine a person’s ability to decide intelligently “when or whether” to act. Because despite our firm conviction that there’s absolutely no escape from that seemingly unsolvable, hopeless situation we may currently find ourselves in, we’re often, as I was, dead wrong in retrospect.

“Never kill yourself while you are suicidal” was one of Shneidman’s favorite maxims. Intellectualizing a personal problem is a well-known defense mechanism, and it’s basically what I’ll be doing in this book. Some might see this coldly scientific approach as a sort of evasion tactic for avoiding unpleasant emotions. Yet with suicide, I’m convinced that understanding suicidal urges, from a scientific perspective, can keep many people alive, at least in the short term. My hope is that knowing how it all works will help us to short-circuit the powerful impetus to die when things look calamitous. I want people to be able to recognize when they’re under suicide’s hypnotic spell and to wait it out long enough for that spell to wear off. Acute episodes of suicidal ideation rarely last longer than twenty-four hours.

Education may not always lead to prevention, but it certainly makes for good preparation. And for those of you trying to understand how someone you loved or cared about could have done such an inexplicable thing as to take their own life, my hope is that you’ll benefit, too, from this examination of the self-destructive mind and how we, as a society, think about suicide.

*

from

A Very Human Ending: How Suicide Haunts Our Species

by Jesse Bering

get it at Amazon.com

MIT Creates AI that Predicts Depression from Speech – Cami Rosso.

Depression is one of the most common disorders globally, affecting the lives of over 300 million people and contributing to nearly 800,000 suicides annually.

For a mental health professional, asking the right questions and interpreting the answers is a key factor in the diagnosis. But what if a diagnosis could be achieved through natural conversation, versus requiring context from question and answer?

An innovative Massachusetts Institute of Technology (MIT) research team has discovered a way for AI to detect depression in individuals through identifying patterns in natural conversation.

Psychology Today

Lost Connections. Uncovering the Real Causes of Depression and the Unexpected Solutions – Johann Hari.

“Even when the tears didn’t come, I had an almost constant anxious monologue thrumming through my mind. Then I would chide myself: It’s all in your head. Get over it. Stop being so weak.”

As she was speaking, I started to experience something strange. Her voice seemed to be coming from very far away, and the room appeared to be moving around me uncontrollably. Then, quite unexpectedly, I started to explode, all over her hut, like a bomb of vomit and faeces. When, some time later, I became aware of my surroundings again, the old woman was looking at me with what seemed to be sad eyes. “This boy needs to go to a hospital,” she said. “He is very sick.”

Although I couldn’t understand why, all through the time I was working on this book, I kept thinking of something that doctor said to me that day, during my unglamorous hour of poisoning.

“You need your nausea. It is a message. It will tell us what is wrong with you.”

It only became clear to me why in a very different place, thousands of miles away, at the end of my journey into what really causes depression and anxiety, and how we can find our way back.
*
In every book about depression or severe anxiety by someone who has been through it, there is a long stretch of pain-porn in which the author describes, in ever more heightened language, the depth of the distress they felt. We needed that once, when other people didn’t know what depression or severe anxiety felt like. Thanks to the people who have been breaking this taboo for decades now, I don’t have to write that book all over again. That is not what I am going to write about here. Take it from me, though: it hurts.

Prologue: The Apple

One evening in the spring of 2014, I was walking down a small side street in central Hanoi when, on a stall by the side of the road, I saw an apple. It was freakishly large and red and inviting. I’m terrible at haggling, so I paid three dollars for this single piece of fruit, and carried it into my room in the Very Charming Hanoi Hotel. Like any good foreigner who’s read his health warnings, I washed the apple diligently with bottled water, but as I bit into it, I felt a bitter, chemical taste fill my mouth. It was the flavor I imagined, back when I was a kid, that all food was going to have after a nuclear war. I knew I should stop, but I was too tired to go out for any other food, so I ate half, and then set it aside, repelled.

Two hours later, the stomach pains began. For two days, I sat in my room as it began to spin around me faster and faster, but I wasn’t worried: I had been through food poisoning before. I knew the script. You just have to drink water and let it pass through you.

On the third day, I realized my time in Vietnam was slipping away in this sickness-blur. I was there to track down some survivors of the war for another book project I’m working on, so I called my translator, Dang Hoang Linh, and told him we should drive deep into the countryside in the south as we had planned all along. As we traveled around, a trashed hamlet here, an Agent Orange victim there, I was starting to feel steadier on my feet.

The next morning, he took me to the hut of a tiny eighty-seven-year-old woman. Her lips were dyed bright red from the herb she was chewing, and she pulled herself toward me across the floor on a wooden plank that somebody had managed to attach some wheels to. Throughout the war, she explained, she had spent nine years wandering from bomb to bomb, trying to keep her kids alive. They were the only survivors from her village.

As she was speaking, I started to experience something strange. Her voice seemed to be coming from very far away, and the room appeared to be moving around me uncontrollably. Then, quite unexpectedly, I started to explode, all over her hut, like a bomb of vomit and faeces. When, some time later, I became aware of my surroundings again, the old woman was looking at me with what seemed to be sad eyes. “This boy needs to go to a hospital,” she said. “He is very sick.”

No, no, I insisted. I had lived in East London on a staple diet of fried chicken for years, so this wasn’t my first time at the E.coli rodeo. I told Dang to drive me back to Hanoi so I could recover in my hotel room in front of CNN and the contents of my own stomach for a few more days.

“No,” the old woman said firmly. “The hospital.”

“Look, Johann,” Dang said to me, “this is the only person, with her kids, who survived nine years of American bombs in her village. I am going to listen to her health advice over yours.” He dragged me into his car, and I heaved and convulsed all the way to a sparse building that I learned later had been built by the Soviets decades before. I was the first foreigner ever to be treated there. From inside, a group of nurses, half excited, half baffled, rushed to me and carried me to a table, where they immediately started shouting. Dang was yelling back at the nurses, and they were shrieking now, in a language that had no words I could recognize. I noticed then that they had put something tight around my arm.

I also noticed that in the corner, there was a little girl with her nose in plaster, alone. She looked at me. I looked back. We were the only patients in the room.

As soon as they got the results of my blood pressure (dangerously low, the nurse said, as Dang translated), they started jabbing needles into me. Later, Dang told me that he had falsely said that I was a Very Important Person from the West, and that if I died there, it would be a source of shame for the people of Vietnam. This went on for ten minutes, as my arm got heavy with tubes and track marks. Then they started to shout questions at me about my symptoms through Dang. It was a seemingly endless list about the nature of my pain.

As all this was unfolding, I felt strangely split. Part of me was consumed with nausea, everything was spinning so fast, and I kept thinking: stop moving, stop moving, stop moving. But another part of me, below or beneath or beyond this, was conducting a quite rational little monologue. Oh. You are close to death. Felled by a poisoned apple. You are like Eve, or Snow White, or Alan Turing.

Then I thought, is your last thought really going to be that pretentious?

Then I thought, if eating half an apple did this to you, what do these chemicals do to the farmers who work in the fields with them day in, day out, for years? That’d be a good story, some day.

Then I thought, you shouldn’t be thinking like this if you are on the brink of death. You should be thinking of profound moments in your life. You should be having flashbacks. When have you been truly happy? I pictured myself as a small boy, lying on the bed in our old house with my grandmother, cuddling up to her and watching the British soap opera Coronation Street. I pictured myself years later when I was looking after my little nephew, and he woke me up at seven in the morning and lay next to me on the bed and asked me long and serious questions about life. I pictured myself lying on another bed, when I was seventeen, with the first person I ever fell in love with. It wasn’t a sexual memory, just lying there, being held.

Wait, I thought. Have you only ever been happy lying in bed? What does this reveal about you? Then this internal monologue was eclipsed by a heave. I begged the doctors to give me something that would switch off this extreme nausea. Dang talked animatedly with the doctors. Then he told me finally: “The doctor says you need your nausea. It is a message, and we must listen to the message. It will tell us what is wrong with you.”

And with that, I began to vomit again.

Many hours later, a doctor, a man in his forties, came into my field of vision and said: “We have learned that your kidneys have stopped working. You are extremely dehydrated. Because of the vomiting and diarrhea, you have not absorbed any water for a very long time, so you are like a man who has been wandering in the desert for days.” Dang interjected: “He says if we had driven you back to Hanoi, you would have died on the journey.”

The doctor told me to list everything I had eaten for three days. It was a short list. An apple. He looked at me quizzically. “Was it a clean apple?” Yes, I said, I washed it in bottled water. Everybody burst out laughing, as if I had served up a killer Chris Rock punch line. It turns out that you can’t just wash an apple in Vietnam. They are covered in pesticides so they can stand for months without rotting. You need to cut off the peel entirely, or this can happen to you.

Although I couldn’t understand why, all through the time I was working on this book, I kept thinking of something that doctor said to me that day, during my unglamorous hour of poisoning.

“You need your nausea. It is a message. It will tell us what is wrong with you.”

It only became clear to me why in a very different place, thousands of miles away, at the end of my journey into what really causes depression and anxiety, and how we can find our way back.

“When I flushed away my final packs of Paxil, I found these mysteries waiting for me, like children on a train platform, waiting to be collected, trying to catch my eye. Why was I still depressed? Why were there so many people like me?”

Introduction: A Mystery

I was eighteen years old when I swallowed my first antidepressant. I was standing in the weak English sunshine, outside a pharmacy in a shopping center in London. The tablet was white and small, and as I swallowed, it felt like a chemical kiss.

That morning I had gone to see my doctor. I struggled, I explained to him, to remember a day when I hadn’t felt a long crying jag judder its way out of me. Ever since I was a small child, at school, at college, at home, with friends, I would often have to absent myself, shut myself away, and cry. They were not a few tears. They were proper sobs. And even when the tears didn’t come, I had an almost constant anxious monologue thrumming through my mind. Then I would chide myself: It’s all in your head. Get over it. Stop being so weak.

I was embarrassed to say it then; I am embarrassed to type it now.

In every book about depression or severe anxiety by someone who has been through it, there is a long stretch of pain-porn in which the author describes, in ever more heightened language, the depth of the distress they felt. We needed that once, when other people didn’t know what depression or severe anxiety felt like. Thanks to the people who have been breaking this taboo for decades now, I don’t have to write that book all over again. That is not what I am going to write about here. Take it from me, though: it hurts.

A month before I walked into that doctor’s office, I found myself on a beach in Barcelona, crying as the waves washed into me, when, quite suddenly, the explanation for why this was happening, and how to find my way back, came to me. I was in the middle of traveling across Europe with a friend, in the summer before I became the first person in my family to go to a fancy university. We had bought cheap student rail passes, which meant for a month we could travel on any train in Europe for free, staying in youth hostels along the way. I had visions of yellow beaches and high culture, the Louvre, a spliff, hot Italians. But just before we left, I had been rejected by the first person I had ever really been in love with, and I felt emotion leaking out of me, even more than usual, like an embarrassing smell.

The trip did not go as I planned. I burst into tears on a gondola in Venice. I howled on the Matterhorn. I started to shake in Kafka’s house in Prague.

For me, it was unusual, but not that unusual. I’d had periods in my life like this before, when pain seemed unmanageable and I wanted to excuse myself from the world. But then in Barcelona, when I couldn’t stop crying, my friend said to me, “You realize most people don’t do this, don’t you?”

And then I experienced one of the very few epiphanies of my life. I turned to her and said: “I am depressed! It’s not all in my head! I’m not unhappy, I’m not weak, I’m depressed!”

This will sound odd, but what I experienced at that moment was a happy jolt, like unexpectedly finding a pile of money down the back of your sofa.

There is a term for feeling like this! It is a medical condition, like diabetes or irritable bowel syndrome! I had been hearing this, as a message bouncing through the culture, for years, of course, but now it clicked into place. They meant me! And there is, I suddenly recalled in that moment, a solution to depression: antidepressants. So that’s what I need! As soon as I get home, I will get these tablets, and I will be normal, and all the parts of me that are not depressed will be unshackled. I had always had drives that have nothing to do with depression, to meet people, to learn, to understand the world. They will be set free, I said, and soon.

The next day, we went to the Park Güell, in the center of Barcelona. It’s a park designed by the architect Antoni Gaudí to be profoundly strange, everything is out of perspective, as if you have stepped into a funhouse mirror. At one point you walk through a tunnel in which everything is at a rippling angle, as though it has been hit by a wave. At another point, dragons rise close to buildings made of ripped iron that almost appears to be in motion. Nothing looks like the world should. As I stumbled around it, I thought, this is what my head is like: misshapen, wrong. And soon it’s going to be fixed.

Like all epiphanies, it seemed to come in a flash, but it had in fact been a long time coming. I knew what depression was. I had seen it play out in soap operas, and had read about it in books. I had heard my own mother talking about depression and anxiety, and seen her swallowing pills for it. And I knew about the cure, because it had been announced by the global media just a few years before. My teenage years coincided with the Age of Prozac, the dawn of new drugs that promised, for the first time, to be able to cure depression without crippling side effects. One of the bestselling books of the decade explained that these drugs actually make you “better than well”: they make you stronger and healthier than ordinary people.

I had soaked all this up, without ever really stopping to think about it. There was a lot of talk like that in the late 1990s; it was everywhere. And now I saw, at last, that it applied to me.

My doctor, it was clear on the afternoon when I went to see him, had absorbed all this, too. In his little office, he explained patiently to me why I felt this way. There are some people who naturally have depleted levels of a chemical named serotonin in their brains, he said, and this is what causes depression, that weird, persistent, misfiring unhappiness that won’t go away. Fortunately, just in time for my adulthood, there was a new generation of drugs, Selective Serotonin Reuptake Inhibitors (SSRIs), that restore your serotonin to the level of a normal person’s. Depression is a brain disease, he said, and this is the cure. He took out a picture of a brain and talked to me about it.

He was saying that depression was indeed all in my head, but in a very different way. It’s not imaginary. It’s very real, and it’s a brain malfunction.

He didn’t have to push. It was a story I was already sold on. I left within ten minutes with my script for Seroxat (or Paxil, as it’s known in the United States).

It was only years later, in the course of writing this book, that somebody pointed out to me all the questions my doctor didn’t ask that day. Like: Is there any reason you might feel so distressed? What’s been happening in your life? Is there anything hurting you that we might want to change? Even if he had asked, I don’t think I would have been able to answer him. I suspect I would have looked at him blankly. My life, I would have said, was good. Sure, I’d had some problems; but I had no reason to be unhappy, certainly not this unhappy.

In any case, he didn’t ask, and I didn’t wonder why. Over the next thirteen years, doctors kept writing me prescriptions for this drug, and none of them asked either. If they had, I suspect I would have been indignant, and said, If you have a broken brain that can’t generate the right happiness-producing chemicals, what’s the point of asking such questions?

Isn’t it cruel? You don’t ask a dementia patient why they can’t remember where they left their keys. What a stupid thing to ask me. Haven’t you been to medical school?

The doctor had told me it would take two weeks for me to feel the effect of the drugs, but that night, after collecting my prescription, I felt a warm surge running through me, a light thrumming that I was sure consisted of my brain synapses groaning and creaking into the correct configuration. I lay on my bed listening to a worn-out mix tape, and I knew I wasn’t going to be crying again for a long time.

I left for the university a few weeks later. With my new chemical armor, I wasn’t afraid. There, I became an evangelist for antidepressants. Whenever a friend was sad, I would offer them some of my pills to try, and I’d tell them to get some from the doctor. I became convinced that I was not merely nondepressed, but in some better state, I thought of it as “antidepression.” I was, I told myself, unusually resilient and energetic. I could feel some physical side effects from the drug, it was true, I was putting on a lot of weight, and I would find myself sweating unexpectedly. But that was a small price to pay to stop hemorrhaging sadness on the people around me. And, look! I could do anything now.

Within a few months, I started to notice that there were moments of welling sadness that would come back to me unexpectedly. They seemed inexplicable, and manifestly irrational. I returned to my doctor, and we agreed that I needed a higher dose. So my 20 milligrams a day was upped to 30 milligrams a day; my white pills became blue pills.

And so it continued, all through my late teens, and all through my twenties. I would preach the benefits of these drugs; after a while, the sadness would return; so I would be given a higher dose; 30 milligrams became 40; 40 became 50; until finally I was taking two big blue pills a day, at 60 milligrams. Every time, I got fatter; every time, I sweated more; every time, I knew it was a price worth paying.

I explained to anyone who asked that depression is a disease of the brain, and SSRIs are the cure. When I became a journalist, I wrote articles in newspapers explaining this patiently to the public. I described the sadness returning to me as a medical process, clearly there was a running down of chemicals in my brain, beyond my control or comprehension. Thank God these drugs are remarkably powerful, I explained, and they work. Look at me. I’m the proof. Every now and then, I would hear a doubt in my head, but I would swiftly dismiss it by swallowing an extra pill or two that day.

I had my story. In fact, I realize now, it came in two parts. The first was about what causes depression: it’s a malfunction in the brain, caused by serotonin deficiency or some other glitch in your mental hardware. The second was about what solves depression: drugs, which repair your brain chemistry.

I liked this story. It made sense to me. It guided me through life.

I only ever heard one other possible explanation for why I might feel this way. It didn’t come from my doctor, but I read it in books and saw it discussed on TV. It said depression and anxiety were carried in your genes. I knew my mother had been depressed and highly anxious before I was born (and after), and that we had these problems in my family running further back than that. They seemed to me to be parallel stories. They both said, it’s something innate, in your flesh.

I started work on this book three years ago because I was puzzled by some mysteries, weird things that I couldn’t explain with the stories I had preached for so long, and that I wanted to find answers to.

Here’s the first mystery. One day, years after I started taking these drugs, I was sitting in my therapist’s office talking about how grateful I was that antidepressants exist and were making me better. “That’s strange,” he said. “Because to me, it seems you are still really quite depressed.” I was perplexed. What could he possibly mean? “Well,” he said, “you are emotionally distressed a lot of the time. And it doesn’t sound very different, to me, from how you describe being before you took the drugs.”

I explained to him, patiently, that he didn’t understand: depression is caused by low levels of serotonin, and I was having my serotonin levels boosted. What sort of training, I wondered, do these therapists get?

Every now and then, as the years passed, he would gently make this point again. He would point out that my belief that an increased dose of the drugs was solving my problem didn’t seem to match the facts, since I remained down and depressed and anxious a lot of the time. I would recoil, with a mixture of anger and prissy superiority.

“No matter how high a dose I jacked up my antidepressants to, the sadness would always outrun it.”

It was years before I finally heard what he was saying. By the time I was in my early thirties, I had a kind of negative epiphany, the opposite of the one I had that day on a beach in Barcelona so many years before. No matter how high a dose I jacked up my antidepressants to, the sadness would always outrun it. There would be a bubble of apparently chemical relief, and then that sense of prickling unhappiness would return. I would start once again to have strong recurring thoughts that said: life is pointless; everything you’re doing is pointless; this whole thing is a fucking waste of time. It would be a thrum of unending anxiety.

So the first mystery I wanted to understand was: How could I still be depressed when I was taking antidepressants? I was doing everything right, and yet something was still wrong. Why?

“Addictions to legal and illegal drugs are now so widespread that the life expectancy of white men is declining for the first time in the entire peacetime history of the United States.”

A curious thing has happened to my family over the past few decades.

From when I was a little kid, I have memories of bottles of pills laid out on the kitchen table, waiting, with inscrutable white medical labels on them. I’ve written before about the drug addiction in my family, and how one of my earliest memories was of trying to wake up one of my relatives and not being able to. But when I was very young, it wasn’t the banned drugs that were dominant in our lives, it was the ones handed out by doctors: old-style antidepressants and tranquilizers like Valium, the chemical tweaks and alterations that got us through the day.

That’s not the curious thing that happened to us. The curious thing is that as I grew up, Western civilization caught up with my family. When I was small and I stayed with friends, I noticed that nobody in their families swallowed pills with their breakfast, lunch, or dinner. Nobody was sedated or amped up or antidepressed. My family was, I realized, unusual.

And then gradually, as the years passed, I noticed the pills appearing in more and more people’s lives, prescribed, approved, recommended. Today they are all around us. Some one in five U.S. adults is taking at least one drug for a psychiatric problem; nearly one in four middle-aged women in the United States is taking antidepressants at any given time; around one in ten boys at American high schools is being given a powerful stimulant to make them focus; and addictions to legal and illegal drugs are now so widespread that the life expectancy of white men is declining for the first time in the entire peacetime history of the United States.

These effects have radiated out across the Western world: for example, as you read this, one in three French people is taking a legal psychotropic drug such as an antidepressant, while the UK has almost the highest use in all of Europe. You can’t escape it: when scientists test the water supply of Western countries, they always find it is laced with antidepressants, because so many of us are taking them and excreting them that they simply can’t be filtered out of the water we drink every day. We are literally awash in these drugs.

What once seemed startling has become normal. Without talking about it much, we’ve accepted that a huge number of the people around us are so distressed that they feel they need to take a powerful chemical every day to keep themselves together.

So the second mystery that puzzled me was: Why were so many more people apparently feeling depressed and severely anxious? What changed?

“We’ve accepted that a huge number of the people around us are so distressed that they feel they need to take a powerful chemical every day to keep themselves together.”

Then, when I was thirty-one years old, I found myself chemically naked for the first time in my adult life. For almost a decade, I had been ignoring my therapist’s gentle reminders that I was still depressed despite my drugs. It was only after a crisis in my life, when I felt unequivocally terrible and couldn’t shake it off, that I decided to listen to him. What I had been trying for so long wasn’t, it seemed, working. And so, when I flushed away my final packs of Paxil, I found these mysteries waiting for me, like children on a train platform, waiting to be collected, trying to catch my eye. Why was I still depressed? Why were there so many people like me?

And I realized there was a third mystery, hanging over all of it. Could something other than bad brain chemistry have been causing depression and anxiety in me, and in so many people all around me? If so, what could it be?

Still, I put off looking into it. Once you settle into a story about your pain, you are extremely reluctant to challenge it. It was like a leash I had put on my distress to keep it under some control. I feared that if I messed with the story I had lived with for so long, the pain would be like an unchained animal, and would savage me.

Over a period of several years, I fell into a pattern. I would begin to research these mysteries, by reading scientific papers, and talking to some of the scientists who wrote them, but I always backed away, because what they said made me feel disoriented, and more anxious than I had been at the start. I focused on the work for another book, Chasing the Scream: The First and Last Days of the War on Drugs, instead. It sounds ridiculous to say I found it easier to interview hit men for the Mexican drug cartels than to look into what causes depression and anxiety, but messing with my story about my emotions, what I felt, and why I felt it, seemed more dangerous, to me, than that.

And then, finally, I decided I couldn’t ignore it any longer. So, over a period of three years, I went on a journey of over forty thousand miles. I conducted more than two hundred interviews across the world, with some of the most important social scientists in the world, with people who had been through the depths of depression and anxiety, and with people who had recovered. I ended up in all sorts of places I couldn’t have guessed at in the beginning, an Amish village in Indiana, a Berlin housing project rising up in rebellion, a Brazilian city that had banned advertising, a Baltimore laboratory taking people back through their traumas in a totally unexpected way. What I learned forced me to radically revise my story, about myself, and about the distress spreading like tar over our culture.

“Everything that causes an increase in depression also causes an increase in anxiety, and the other way around. They rise and fall together.”

I want to flag up, right at the start, two things that shape the language I am going to use all through the book. Both were surprising to me.

I was told by my doctor that I was suffering from both depression and acute anxiety. I had believed that those were separate problems, and that is how they were discussed for the thirteen years I received medical care for them. But I noticed something odd as I did my research. Everything that causes an increase in depression also causes an increase in anxiety, and the other way around. They rise and fall together.

It seemed curious, and I began to understand it only when, in Canada, I sat down with Robert Kohlenberg, a professor of psychology. He, too, once thought that depression and anxiety were different things. But as he studied them over more than twenty years, he discovered, he says, that “the data are indicating they’re not that distinct.” In practice, “the diagnoses, particularly depression and anxiety, overlap.” Sometimes one part is more pronounced than the other: you might have panic attacks this month and be crying a lot the next month. But the idea that they are separate in the way that (say) having pneumonia and having a broken leg are separate isn’t borne out by the evidence. It’s “messy,” he has found.

Robert’s side of the argument has been prevailing in the scientific debate. In the past few years, the National Institutes of Health, the main body funding medical research in the United States, has stopped funding studies that present depression and anxiety as different diagnoses. “They want something more realistic that corresponds to the way people are in actual clinical practice,” he explains.

I started to see depression and anxiety as like cover versions of the same song by different bands. Depression is a cover version by a downbeat emo band, and anxiety is a cover version by a screaming heavy metal group, but the underlying sheet music is the same. They’re not identical, but they are twinned.

*

from

Lost Connections: Uncovering the Real Causes of Depression and the Unexpected Solutions

by Johann Hari

get it at Amazon.com

Anatomy of a teenage suicide: Leo’s death will count – Virginia Fallon.

In the past year, 668 people took their own lives in New Zealand: the highest number since records began and the fourth year in a row the number increased.


In 2016, some time after the magnitude 7.8 Kaikōura earthquake rattled the capital, 18-year-old Leo took his own life.

Stuff.co.nz

get help

24/7 Lifeline – 0800 543 354

World in mental health crisis of ‘monumental suffering’, say experts – Sarah Boseley.

“Mental health problems kill more young people than any other cause around the world.” Prof. Vikram Patel, Harvard Medical School

Lancet report says 13.5 million lives could be saved every year if mental illness addressed.

Every country in the world is facing and failing to tackle a mental health crisis, from epidemics of anxiety and depression to conditions caused by violence and trauma, according to a review by experts that estimates the rising cost will hit $16tn (£12tn) by 2030.

A team of 28 global experts assembled by the Lancet medical journal says there is a “collective failure to respond to this global health crisis” which “results in monumental loss of human capabilities and avoidable suffering.”

The burden of mental ill-health is rising everywhere, says the Lancet Commission, in spite of advances in the understanding of the causes and options for treatment. “The quality of mental health services is routinely worse than the quality of those for physical health,” says their report, launched at a global ministerial mental health summit in London.

The Guardian

Towards a New Era for Mental Health

Prabha S Chandra, Prabhat Chand

The new Lancet Commission on global mental health and sustainable development raises important issues at a time when many countries in the Global South are re-examining their national priorities in mental health. With its broad vision, the Commission shows why mental health is a public good that is a crucial part of the Sustainable Development Goals (SDGs). The Commission’s report emphasises the need to take a dimensional approach to mental health problems and their treatment; to allocate resources where they will be most cost-effective; to consider a life-course approach; and to build on existing research that will pave the way for better understanding of the causes, prevention, and treatment of mental health problems.

The Lancet

‘Living hell’: Inside one man’s battle with anxiety and depression – Bruce Munro.

It is this fear that the panic will overwhelm and expose you that is the deep demon of anxiety disorders.

Anxiety and depression are a plague on Western society, especially New Zealand. It is only getting worse, becoming ever more common.

About 17% of New Zealanders have been diagnosed with depression, anxiety, bipolar disorder or a bitter cocktail of the above, at some point in their lives. During the next 12 months, 228,000 Kiwis are predicted to experience a major depressive disorder.

Globally, the World Health Organisation believes mental illness will become the second leading cause of disability within two years.

A fear process had established itself in my brain.

For several years, my brain had been building an extensive back catalogue of experiences it interpreted as fearful. A mind loop was set up. The amygdala, the almond-sized primal brain, detected a threat. A flood of adrenaline and cortisol was released, creating a hyper-attentive state. The neocortex scanned memories for explanations of this arousal. If what was going on, no matter how mundane – a phone ringing, having a conversation, driving across a bridge – had been labelled “fearful” by a past experience, then fear was offered to my conscious brain as the appropriate emotion.

Deep ruts were created that ran directly from any stimulus, past, present or future, to a fear response.

By simple, tragic repetition, I had trained my thinking to be scared of virtually everything. …

New Zealand Herald

Need Help? Want to Talk?

24/7 LIFELINE: 0800 543 354

Thinking, Fast And Slow – Daniel Kahneman.

This book presents my current understanding of judgment and decision making, which has been shaped by psychological discoveries of recent decades.

The idea that our minds are susceptible to systematic errors is now generally accepted. Our research on judgment had far more effect on social science than we thought possible when we were working on it.

We can be blind to the obvious, and we are also blind to our blindness.

Daniel Kahneman is a Senior Scholar at Princeton University, and Emeritus Professor of Public Affairs, Woodrow Wilson School of Public and International Affairs. He was awarded the Nobel Prize in Economics in 2002.

*

Every author, I suppose, has in mind a setting in which readers of his or her work could benefit from having read it. Mine is the proverbial office water-cooler, where opinions are shared and gossip is exchanged. I hope to enrich the vocabulary that people use when they talk about the judgments and choices of others, the company’s new policies, or a colleague’s investment decisions.

Why be concerned with gossip? Because it is much easier, as well as far more enjoyable, to identify and label the mistakes of others than to recognize our own. Questioning what we believe and want is difficult at the best of times, and especially difficult when we most need to do it, but we can benefit from the informed opinions of others. Many of us spontaneously anticipate how friends and colleagues will evaluate our choices; the quality and content of these anticipated judgments therefore matters. The expectation of intelligent gossip is a powerful motive for serious self-criticism, more powerful than New Year resolutions to improve one’s decision making at work and at home.

To be a good diagnostician, a physician needs to acquire a large set of labels for diseases, each of which binds an idea of the illness and its symptoms, possible antecedents and causes, possible developments and consequences, and possible interventions to cure or mitigate the illness. Learning medicine consists in part of learning the language of medicine. A deeper understanding of judgments and choices also requires a richer vocabulary than is available in everyday language.

The hope for informed gossip is that there are distinctive patterns in the errors people make. Systematic errors are known as biases, and they recur predictably in particular circumstances. When the handsome and confident speaker bounds onto the stage, for example, you can anticipate that the audience will judge his comments more favorably than he deserves. The availability of a diagnostic label for this bias, the halo effect, makes it easier to anticipate, recognize, and understand.

When you are asked what you are thinking about, you can normally answer. You believe you know what goes on in your mind, which often consists of one conscious thought leading in an orderly way to another. But that is not the only way the mind works, nor indeed is that the typical way. Most impressions and thoughts arise in your conscious experience without your knowing how they got there. You cannot trace how you came to the belief that there is a lamp on the desk in front of you, or how you detected a hint of irritation in your spouse’s voice on the telephone, or how you managed to avoid a threat on the road before you became consciously aware of it. The mental work that produces impressions, intuitions, and many decisions goes on in silence in our mind.

Much of the discussion in this book is about biases of intuition. However, the focus on error does not denigrate human intelligence, any more than the attention to diseases in medical texts denies good health. Most of us are healthy most of the time, and most of our judgments and actions are appropriate most of the time. As we navigate our lives, we normally allow ourselves to be guided by impressions and feelings, and the confidence we have in our intuitive beliefs and preferences is usually justified. But not always. We are often confident even when we are wrong, and an objective observer is more likely to detect our errors than we are.

So this is my aim for watercooler conversations: improve the ability to identify and understand errors of judgment and choice, in others and eventually in ourselves, by providing a richer and more precise language to discuss them. In at least some cases, an accurate diagnosis may suggest an intervention to limit the damage that bad judgments and choices often cause.

ORIGINS

This book presents my current understanding of judgment and decision making, which has been shaped by psychological discoveries of recent decades. However, I trace the central ideas to the lucky day in 1969 when I asked a colleague to speak as a guest to a seminar I was teaching in the Department of Psychology at the Hebrew University of Jerusalem. Amos Tversky was considered a rising star in the field of decision research, indeed, in anything he did, so I knew we would have an interesting time. Many people who knew Amos thought he was the most intelligent person they had ever met. He was brilliant, voluble, and charismatic. He was also blessed with a perfect memory for jokes and an exceptional ability to use them to make a point. There was never a dull moment when Amos was around. He was then thirty-two; I was thirty-five.

Amos told the class about an ongoing program of research at the University of Michigan that sought to answer this question: Are people good intuitive statisticians? We already knew that people are good intuitive grammarians: at age four a child effortlessly conforms to the rules of grammar as she speaks, although she has no idea that such rules exist. Do people have a similar intuitive feel for the basic principles of statistics? Amos reported that the answer was a qualified yes. We had a lively debate in the seminar and ultimately concluded that a qualified no was a better answer.

Amos and I enjoyed the exchange and concluded that intuitive statistics was an interesting topic and that it would be fun to explore it together. That Friday we met for lunch at Café Rimon, the favorite hangout of bohemians and professors in Jerusalem, and planned a study of the statistical intuitions of sophisticated researchers. We had concluded in the seminar that our own intuitions were deficient. In spite of years of teaching and using statistics, we had not developed an intuitive sense of the reliability of statistical results observed in small samples. Our subjective judgments were biased: we were far too willing to believe research findings based on inadequate evidence and prone to collect too few observations in our own research. The goal of our study was to examine whether other researchers suffered from the same affliction.

We prepared a survey that included realistic scenarios of statistical issues that arise in research. Amos collected the responses of a group of expert participants in a meeting of the Society of Mathematical Psychology, including the authors of two statistical textbooks. As expected, we found that our expert colleagues, like us, greatly exaggerated the likelihood that the original result of an experiment would be successfully replicated even with a small sample. They also gave very poor advice to a fictitious graduate student about the number of observations she needed to collect. Even statisticians were not good intuitive statisticians.

While writing the article that reported these findings, Amos and I discovered that we enjoyed working together. Amos was always very funny, and in his presence I became funny as well, so we spent hours of solid work in continuous amusement. The pleasure we found in working together made us exceptionally patient; it is much easier to strive for perfection when you are never bored. Perhaps most important, we checked our critical weapons at the door. Both Amos and I were critical and argumentative, he even more than I, but during the years of our collaboration neither of us ever rejected out of hand anything the other said. Indeed, one of the great joys I found in the collaboration was that Amos frequently saw the point of my vague ideas much more clearly than I did. Amos was the more logical thinker, with an orientation to theory and an unfailing sense of direction. I was more intuitive and rooted in the psychology of perception, from which we borrowed many ideas. We were sufficiently similar to understand each other easily, and sufficiently different to surprise each other. We developed a routine in which we spent much of our working days together, often on long walks. For the next fourteen years our collaboration was the focus of our lives, and the work we did together during those years was the best either of us ever did.

We quickly adopted a practice that we maintained for many years. Our research was a conversation, in which we invented questions and jointly examined our intuitive answers. Each question was a small experiment, and we carried out many experiments in a single day. We were not seriously looking for the correct answer to the statistical questions we posed. Our aim was to identify and analyze the intuitive answer, the first one that came to mind, the one we were tempted to make even when we knew it to be wrong. We believed, correctly, as it happened, that any intuition that the two of us shared would be shared by many other people as well, and that it would be easy to demonstrate its effects on judgments.

We once discovered with great delight that we had identical silly ideas about the future professions of several toddlers we both knew. We could identify the argumentative three-year-old lawyer, the nerdy professor, the empathetic and mildly intrusive psychotherapist. Of course these predictions were absurd, but we still found them appealing. It was also clear that our intuitions were governed by the resemblance of each child to the cultural stereotype of a profession. The amusing exercise helped us develop a theory that was emerging in our minds at the time, about the role of resemblance in predictions. We went on to test and elaborate that theory in dozens of experiments, as in the following example.

As you consider the next question, please assume that Steve was selected at random from a representative sample:

An individual has been described by a neighbor as follows: “Steve is very shy and withdrawn, invariably helpful but with little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail.” Is Steve more likely to be a librarian or a farmer?

The resemblance of Steve’s personality to that of a stereotypical librarian strikes everyone immediately, but equally relevant statistical considerations are almost always ignored. Did it occur to you that there are more than 20 male farmers for each male librarian in the United States? Because there are so many more farmers, it is almost certain that more “meek and tidy” souls will be found on tractors than at library information desks. However, we found that participants in our experiments ignored the relevant statistical facts and relied exclusively on resemblance. We proposed that they used resemblance as a simplifying heuristic (roughly, a rule of thumb) to make a difficult judgment. The reliance on the heuristic caused predictable biases (systematic errors) in their predictions.
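The base-rate argument can be made concrete with a quick Bayesian calculation. The 20-to-1 ratio of male farmers to male librarians comes from the text; the likelihoods below (how probable a "meek and tidy" description is for each profession) are invented purely for illustration, deliberately skewed in the librarian's favor.

```python
# Bayesian sketch of the Steve problem: base rates vs. resemblance.
# The 20:1 farmer-to-librarian ratio is from the text; the likelihood
# values are hypothetical, chosen to strongly favor "librarian".

def posterior_librarian(ratio_farmers_to_librarians,
                        p_desc_given_librarian,
                        p_desc_given_farmer):
    """P(librarian | description) via Bayes' rule in odds form."""
    prior_odds = 1 / ratio_farmers_to_librarians          # librarian : farmer
    likelihood_ratio = p_desc_given_librarian / p_desc_given_farmer
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Even if the description is 10x more likely for a librarian,
# the 20:1 base rate still makes "farmer" the better guess.
p = posterior_librarian(20, 0.50, 0.05)
print(round(p, 3))  # 0.333: only a one-in-three chance Steve is a librarian
```

The point of the sketch is exactly Kahneman's: resemblance moves the likelihood ratio, but the neglected base rate dominates the answer.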

On another occasion, Amos and I wondered about the rate of divorce among professors in our university. We noticed that the question triggered a search of memory for divorced professors we knew or knew about, and that we judged the size of categories by the ease with which instances came to mind. We called this reliance on the ease of memory search the availability heuristic. In one of our studies, we asked participants to answer a simple question about words in a typical English text:

Consider the letter K. Is K more likely to appear as the first letter in a word OR as the third letter?

As any Scrabble player knows, it is much easier to come up with words that begin with a particular letter than to find words that have the same letter in the third position. This is true for every letter of the alphabet. We therefore expected respondents to exaggerate the frequency of letters appearing in the first position, even those letters (such as K, L, N, R, V) which in fact occur more frequently in the third position. Here again, the reliance on a heuristic produces a predictable bias in judgments. For example, I recently came to doubt my long-held impression that adultery is more common among politicians than among physicians or lawyers. I had even come up with explanations for that “fact,” including the aphrodisiac effect of power and the temptations of life away from home. I eventually realized that the transgressions of politicians are much more likely to be reported than the transgressions of lawyers and doctors. My intuitive impression could be due entirely to journalists’ choices of topics and to my reliance on the availability heuristic.
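Unlike memory search, a mechanical count has no availability bias: it checks each position with equal ease. The word list below is a toy sample invented for illustration; reproducing the actual English-text frequencies Kahneman refers to would require a real corpus.

```python
# Count how often a letter appears in the first vs. third position
# across a word list. The list here is a toy sample, not a corpus:
# it only demonstrates the mechanical count that memory search cannot do.

def position_counts(words, letter):
    first = sum(1 for w in words if len(w) >= 1 and w[0] == letter)
    third = sum(1 for w in words if len(w) >= 3 and w[2] == letter)
    return first, third

words = ["kitchen", "like", "take", "ask", "bike", "keep", "lake", "work"]
print(position_counts(words, "k"))  # (2, 5): more third-position Ks here
```

Run over a large corpus, the same count is what exposes the gap between the easy-to-recall answer and the true frequencies.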

Amos and I spent several years studying and documenting biases of intuitive thinking in various tasks, assigning probabilities to events, forecasting the future, assessing hypotheses, and estimating frequencies. In the fifth year of our collaboration, we presented our main findings in Science magazine, a publication read by scholars in many disciplines. The article (which is reproduced in full at the end of this book) was titled “Judgment Under Uncertainty: Heuristics and Biases.” It described the simplifying shortcuts of intuitive thinking and explained some 20 biases as manifestations of these heuristics, and also as demonstrations of the role of heuristics in judgment.

Historians of science have often noted that at any given time scholars in a particular field tend to share basic assumptions about their subject. Social scientists are no exception; they rely on a view of human nature that provides the background of most discussions of specific behaviors but is rarely questioned. Social scientists in the 1970s broadly accepted two ideas about human nature. First, people are generally rational, and their thinking is normally sound. Second, emotions such as fear, affection, and hatred explain most of the occasions on which people depart from rationality. Our article challenged both assumptions without discussing them directly. We documented systematic errors in the thinking of normal people, and we traced these errors to the design of the machinery of cognition rather than to the corruption of thought by emotion.

Our article attracted much more attention than we had expected, and it remains one of the most highly cited works in social science (more than three hundred scholarly articles referred to it in 2010). Scholars in other disciplines found it useful, and the ideas of heuristics and biases have been used productively in many fields, including medical diagnosis, legal judgment, intelligence analysis, philosophy, finance, statistics, and military strategy.

For example, students of policy have noted that the availability heuristic helps explain why some issues are highly salient in the public’s mind while others are neglected. People tend to assess the relative importance of issues by the ease with which they are retrieved from memory, and this is largely determined by the extent of coverage in the media. Frequently mentioned topics populate the mind even as others slip away from awareness. In turn, what the media choose to report corresponds to their view of what is currently on the public’s mind. It is no accident that authoritarian regimes exert substantial pressure on independent media. Because public interest is most easily aroused by dramatic events and by celebrities, media feeding frenzies are common. For several weeks after Michael Jackson’s death, for example, it was virtually impossible to find a television channel reporting on another topic. In contrast, there is little coverage of critical but unexciting issues that provide less drama, such as declining educational standards or overinvestment of medical resources in the last year of life. (As I write this, I notice that my choice of “little-covered” examples was guided by availability. The topics I chose as examples are mentioned often; equally important issues that are less available did not come to my mind.)

We did not fully realize it at the time, but a key reason for the broad appeal of “heuristics and biases” outside psychology was an incidental feature of our work: we almost always included in our articles the full text of the questions we had asked ourselves and our respondents. These questions served as demonstrations for the reader, allowing him to recognize how his own thinking was tripped up by cognitive biases. I hope you had such an experience as you read the question about Steve the librarian, which was intended to help you appreciate the power of resemblance as a cue to probability and to see how easy it is to ignore relevant statistical facts.

The use of demonstrations provided scholars from diverse disciplines, notably philosophers and economists, an unusual opportunity to observe possible flaws in their own thinking. Having seen themselves fail, they became more likely to question the dogmatic assumption, prevalent at the time, that the human mind is rational and logical. The choice of method was crucial: if we had reported results of only conventional experiments, the article would have been less noteworthy and less memorable. Furthermore, skeptical readers would have distanced themselves from the results by attributing judgment errors to the familiar fecklessness of undergraduates, the typical participants in psychological studies. Of course, we did not choose demonstrations over standard experiments because we wanted to influence philosophers and economists. We preferred demonstrations because they were more fun, and we were lucky in our choice of method as well as in many other ways.

A recurrent theme of this book is that luck plays a large role in every story of success; it is almost always easy to identify a small change in the story that would have turned a remarkable achievement into a mediocre outcome. Our story was no exception.

The reaction to our work was not uniformly positive. In particular, our focus on biases was criticized as suggesting an unfairly negative view of the mind. As expected in normal science, some investigators refined our ideas and others offered plausible alternatives. By and large, though, the idea that our minds are susceptible to systematic errors is now generally accepted. Our research on judgment had far more effect on social science than we thought possible when we were working on it.

Immediately after completing our review of judgment, we switched our attention to decision making under uncertainty. Our goal was to develop a psychological theory of how people make decisions about simple gambles. For example: Would you accept a bet on the toss of a coin where you win $130 if the coin shows heads and lose $100 if it shows tails? These elementary choices had long been used to examine broad questions about decision making, such as the relative weight that people assign to sure things and to uncertain outcomes. Our method did not change: we spent many days making up choice problems and examining whether our intuitive preferences conformed to the logic of choice. Here again, as in judgment, we observed systematic biases in our own decisions, intuitive preferences that consistently violated the rules of rational choice. Five years after the Science article, we published “Prospect Theory: An Analysis of Decision Under Risk,” a theory of choice that is by some counts more influential than our work on judgment, and is one of the foundations of behavioral economics.
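The coin-toss bet in the text has a positive expected value, yet most people refuse it. A loss-aversion weighting in the spirit of prospect theory shows why; the coefficient 2.25 below is the commonly cited Tversky-Kahneman estimate, used here as an assumption rather than anything stated in this excerpt.

```python
# The bet from the text: win $130 on heads, lose $100 on tails.

def expected_value(win, lose, p_win=0.5):
    """Objective expected value of the gamble."""
    return p_win * win - (1 - p_win) * lose

def subjective_value(win, lose, p_win=0.5, loss_aversion=2.25):
    """Prospect-theory-style value: losses loom larger than gains.
    loss_aversion = 2.25 is an assumed, commonly cited estimate."""
    return p_win * win - (1 - p_win) * loss_aversion * lose

print(expected_value(130, 100))    # 15.0: objectively a good bet
print(subjective_value(130, 100))  # -47.5: subjectively it feels like a loss
```

The sign flip between the two numbers is the whole phenomenon: a rational-choice model says take the bet, while the felt asymmetry between gains and losses says refuse it.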

Until geographical separation made it too difficult to go on, Amos and I enjoyed the extraordinary good fortune of a shared mind that was superior to our individual minds and of a relationship that made our work fun as well as productive. Our collaboration on judgment and decision making was the reason for the Nobel Prize that I received in 2002, which Amos would have shared had he not died, aged fifty-nine, in 1996.

WHERE WE ARE NOW

This book is not intended as an exposition of the early research that Amos and I conducted together, a task that has been ably carried out by many authors over the years. My main aim here is to present a view of how the mind works that draws on recent developments in cognitive and social psychology. One of the more important developments is that we now understand the marvels as well as the flaws of intuitive thought.

Amos and I did not address accurate intuitions beyond the casual statement that judgment heuristics “are quite useful, but sometimes lead to severe and systematic errors.” We focused on biases, both because we found them interesting in their own right and because they provided evidence for the heuristics of judgment. We did not ask ourselves whether all intuitive judgments under uncertainty are produced by the heuristics we studied; it is now clear that they are not. In particular, the accurate intuitions of experts are better explained by the effects of prolonged practice than by heuristics. We can now draw a richer and more balanced picture, in which skill and heuristics are alternative sources of intuitive judgments and choices.

The psychologist Gary Klein tells the story of a team of firefighters that entered a house in which the kitchen was on fire. Soon after they started hosing down the kitchen, the commander heard himself shout, “Let’s get out of here!” without realizing why. The floor collapsed almost immediately after the firefighters escaped. Only after the fact did the commander realize that the fire had been unusually quiet and that his ears had been unusually hot. Together, these impressions prompted what he called a “sixth sense of danger.” He had no idea what was wrong, but he knew something was wrong. It turned out that the heart of the fire had not been in the kitchen but in the basement beneath where the men had stood.

We have all heard such stories of expert intuition: the chess master who walks past a street game and announces “White mates in three” without stopping, or the physician who makes a complex diagnosis after a single glance at a patient. Expert intuition strikes us as magical, but it is not. Indeed, each of us performs feats of intuitive expertise many times each day. Most of us are pitch-perfect in detecting anger in the first word of a telephone call, recognize as we enter a room that we were the subject of the conversation, and quickly react to subtle signs that the driver of the car in the next lane is dangerous. Our everyday intuitive abilities are no less marvelous than the striking insights of an experienced firefighter or physician, only more common.

The psychology of accurate intuition involves no magic. Perhaps the best short statement of it is by the great Herbert Simon, who studied chess masters and showed that after thousands of hours of practice they come to see the pieces on the board differently from the rest of us. You can feel Simon’s impatience with the mythologizing of expert intuition when he writes: “The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition.”

We are not surprised when a two-year-old looks at a dog and says “doggie!” because we are used to the miracle of children learning to recognize and name things. Simon’s point is that the miracles of expert intuition have the same character. Valid intuitions develop when experts have learned to recognize familiar elements in a new situation and to act in a manner that is appropriate to it. Good intuitive judgments come to mind with the same immediacy as “doggie!”

Unfortunately, professionals’ intuitions do not all arise from true expertise. Many years ago I visited the chief investment officer of a large financial firm, who told me that he had just invested some tens of millions of dollars in the stock of Ford Motor Company. When I asked how he had made that decision, he replied that he had recently attended an automobile show and had been impressed. “Boy, do they know how to make a car!” was his explanation. He made it very clear that he trusted his gut feeling and was satisfied with himself and with his decision. I found it remarkable that he had apparently not considered the one question that an economist would call relevant: Is Ford stock currently underpriced? Instead, he had listened to his intuition; he liked the cars, he liked the company, and he liked the idea of owning its stock. From what we know about the accuracy of stock picking, it is reasonable to believe that he did not know what he was doing.

The specific heuristics that Amos and I studied provide little help in understanding how the executive came to invest in Ford stock, but a broader conception of heuristics now exists, which offers a good account. An important advance is that emotion now looms much larger in our understanding of intuitive judgments and choices than it did in the past. The executive’s decision would today be described as an example of the affect heuristic, where judgments and decisions are guided directly by feelings of liking and disliking, with little deliberation or reasoning.

When confronted with a problem, choosing a chess move or deciding whether to invest in a stock, the machinery of intuitive thought does the best it can. If the individual has relevant expertise, she will recognize the situation, and the intuitive solution that comes to her mind is likely to be correct. This is what happens when a chess master looks at a complex position: the few moves that immediately occur to him are all strong. When the question is difficult and a skilled solution is not available, intuition still has a shot: an answer may come to mind quickly, but it is not an answer to the original question. The question that the executive faced (should I invest in Ford stock?) was difficult, but the answer to an easier and related question (do I like Ford cars?) came readily to his mind and determined his choice. This is the essence of intuitive heuristics: when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.

The spontaneous search for an intuitive solution sometimes fails: neither an expert solution nor a heuristic answer comes to mind. In such cases we often find ourselves switching to a slower, more deliberate and effortful form of thinking. This is the slow thinking of the title. Fast thinking includes both variants of intuitive thought, the expert and the heuristic, as well as the entirely automatic mental activities of perception and memory, the operations that enable you to know there is a lamp on your desk or retrieve the name of the capital of Russia.

The distinction between fast and slow thinking has been explored by many psychologists over the last twenty-five years. For reasons that I explain more fully in the next chapter, I describe mental life by the metaphor of two agents, called System 1 and System 2, which respectively produce fast and slow thinking. I speak of the features of intuitive and deliberate thought as if they were traits and dispositions of two characters in your mind. In the picture that emerges from recent research, the intuitive System 1 is more influential than your experience tells you, and it is the secret author of many of the choices and judgments you make. Most of this book is about the workings of System 1 and the mutual influences between it and System 2.

WHAT COMES NEXT

The book is divided into five parts. Part 1 presents the basic elements of a two-systems approach to judgment and choice. It elaborates the distinction between the automatic operations of System 1 and the controlled operations of System 2, and shows how associative memory, the core of System 1, continually constructs a coherent interpretation of what is going on in our world at any instant. I attempt to give a sense of the complexity and richness of the automatic and often unconscious processes that underlie intuitive thinking, and of how these automatic processes explain the heuristics of judgment. A goal is to introduce a language for thinking and talking about the mind.

Part 2 updates the study of judgment heuristics and explores a major puzzle: Why is it so difficult for us to think statistically? We easily think associatively, we think metaphorically, we think causally, but statistics requires thinking about many things at once, which is something that System 1 is not designed to do.

The difficulties of statistical thinking contribute to the main theme of Part 3, which describes a puzzling limitation of our mind: our excessive confidence in what we believe we know, and our apparent inability to acknowledge the full extent of our ignorance and the uncertainty of the world we live in. We are prone to overestimate how much we understand about the world and to underestimate the role of chance in events. Overconfidence is fed by the illusory certainty of hindsight. My views on this topic have been influenced by Nassim Taleb, the author of The Black Swan. I hope for watercooler conversations that intelligently explore the lessons that can be learned from the past while resisting the lure of hindsight and the illusion of certainty.

The focus of Part 4 is a conversation with the discipline of economics on the nature of decision making and on the assumption that economic agents are rational. This section of the book provides a current view, informed by the two-system model, of the key concepts of prospect theory, the model of choice that Amos and I published in 1979. Subsequent chapters address several ways human choices deviate from the rules of rationality. I deal with the unfortunate tendency to treat problems in isolation, and with framing effects, where decisions are shaped by inconsequential features of choice problems. These observations, which are readily explained by the features of System 1, present a deep challenge to the rationality assumption favored in standard economics.

Part 5 describes recent research that has introduced a distinction between two selves, the experiencing self and the remembering self, which do not have the same interests. For example, we can expose people to two painful experiences. One of these experiences is strictly worse than the other, because it is longer. But the automatic formation of memories, a feature of System 1, has its rules, which we can exploit so that the worse episode leaves a better memory. When people later choose which episode to repeat, they are, naturally, guided by their remembering self and expose themselves (their experiencing self) to unnecessary pain. The distinction between two selves is applied to the measurement of well-being, where we find again that what makes the experiencing self happy is not quite the same as what satisfies the remembering self. How two selves within a single body can pursue happiness raises some difficult questions, both for individuals and for societies that view the well-being of the population as a policy objective.
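One such memory rule, which Kahneman elsewhere calls the peak-end rule (remembered discomfort tracks the average of the worst moment and the final moment, largely ignoring duration), makes the paradox easy to simulate. The pain values below are invented for illustration, and the rule itself is an assumption imported from Kahneman's other work, not stated in this excerpt.

```python
# Peak-end sketch: remembered pain = mean of the worst moment and the
# final moment (duration is neglected). Pain values are hypothetical.

def remembered_pain(episode):
    return (max(episode) + episode[-1]) / 2

short = [8, 8]          # short episode, ends at peak pain
long_ = [8, 8, 5, 3]    # strictly worse in total, but tapers off gently

print(sum(short), sum(long_))                          # 16 vs 24: long is objectively worse
print(remembered_pain(short), remembered_pain(long_))  # 8.0 vs 5.5: long is remembered as better
```

The remembering self, choosing by the second pair of numbers, would repeat the episode that the experiencing self, measured by the first pair, should avoid.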

A concluding chapter explores, in reverse order, the implications of three distinctions drawn in the book: between the experiencing and the remembering selves, between the conception of agents in classical economics and in behavioral economics (which borrows from psychology), and between the automatic System 1 and the effortful System 2. I return to the virtues of educating gossip and to what organizations might do to improve the quality of judgments and decisions that are made on their behalf.

Two articles I wrote with Amos are reproduced as appendixes to the book. The first is the review of judgment under uncertainty that I described earlier. The second, published in 1984, summarizes prospect theory as well as our studies of framing effects. The articles present the contributions that were cited by the Nobel committee and you may be surprised by how simple they are. Reading them will give you a sense of how much we knew a long time ago, and also of how much we have learned in recent decades.

PART ONE, TWO SYSTEMS

1: The Characters of the Story

To observe your mind in automatic mode, glance at the image below.

Figure 1


Your experience as you look at the woman’s face seamlessly combines what we normally call seeing and intuitive thinking. As surely and quickly as you saw that the young woman’s hair is dark, you knew she is angry.

Furthermore, what you saw extended into the future. You sensed that this woman is about to say some very unkind words, probably in a loud and strident voice. A premonition of what she was going to do next came to mind automatically and effortlessly. You did not intend to assess her mood or to anticipate what she might do, and your reaction to the picture did not have the feel of something you did. It just happened to you. It was an instance of fast thinking.

Now look at the following problem:

17×24

You knew immediately that this is a multiplication problem, and probably knew that you could solve it, with paper and pencil, if not without. You also had some vague intuitive knowledge of the range of possible results. You would be quick to recognize that both 12,609 and 123 are implausible. Without spending some time on the problem, however, you would not be certain that the answer is not 568. A precise solution did not come to mind, and you felt that you could choose whether or not to engage in the computation. If you have not done so yet, you should attempt the multiplication problem now, completing at least part of it.

You experienced slow thinking as you proceeded through a sequence of steps. You first retrieved from memory the cognitive program for multiplication that you learned in school, then you implemented it. Carrying out the computation was a strain. You felt the burden of holding much material in memory, as you needed to keep track of where you were and of where you were going, while holding on to the intermediate result. The process was mental work: deliberate, effortful, and orderly, a prototype of slow thinking. The computation was not only an event in your mind; your body was also involved. Your muscles tensed up, your blood pressure rose, and your heart rate increased. Someone looking closely at your eyes while you tackled this problem would have seen your pupils dilate. Your pupils contracted back to normal size as soon as you ended your work, when you found the answer (which is 408, by the way) or when you gave up.
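As an illustration only (not from the book), the "cognitive program for multiplication" described above can be sketched as explicit partial products, the same orderly sequence of steps System 2 walks through; the function name `schoolbook_multiply` is my own:

```python
def schoolbook_multiply(a: int, b: int) -> int:
    """Multiply by summing partial products, one digit of b at a time,
    mirroring the step-by-step procedure taught in school."""
    total = 0
    for place, digit in enumerate(reversed(str(b))):
        # partial product for this digit, shifted to its decimal place
        total += a * int(digit) * 10 ** place
    return total

print(schoolbook_multiply(17, 24))  # prints 408, matching the text
```

Each loop iteration corresponds to one intermediate result you must hold in memory, which is exactly where the strain of the mental version comes from.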

TWO SYSTEMS

Psychologists have been intensely interested for several decades in the two modes of thinking evoked by the picture of the angry woman and by the multiplication problem, and have offered many labels for them. I adopt terms originally proposed by the psychologists Keith Stanovich and Richard West, and will refer to two systems in the mind, System 1 and System 2.

– System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control.

– System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.

The labels of System 1 and System 2 are widely used in psychology, but I go further than most in this book, which you can read as a psychodrama with two characters.

When we think of ourselves, we identify with System 2, the conscious, reasoning self that has beliefs, makes choices, and decides what to think about and what to do. Although System 2 believes itself to be where the action is, the automatic System 1 is the hero of the book. I describe System 1 as effortlessly originating impressions and feelings that are the main sources of the explicit beliefs and deliberate choices of System 2. The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps. I also describe circumstances in which System 2 takes over, overruling the freewheeling impulses and associations of System 1. You will be invited to think of the two systems as agents with their individual abilities, limitations, and functions.

In rough order of complexity, here are some examples of the automatic activities that are attributed to System 1:

– Detect that one object is more distant than another.

– Orient to the source of a sudden sound.

– Complete the phrase “bread and …”

– Make a “disgust face” when shown a horrible picture.

– Detect hostility in a voice.

– Answer 2 + 2 = ?

– Read words on large billboards.

– Drive a car on an empty road.

– Find a strong move in chess (if you are a chess master).

– Understand simple sentences.

– Recognize that a “meek and tidy soul with a passion for detail” resembles an occupational stereotype.

All these mental events belong with the angry woman: they occur automatically and require little or no effort. The capabilities of System 1 include innate skills that we share with other animals. We are born prepared to perceive the world around us, recognize objects, orient attention, avoid losses, and fear spiders. Other mental activities become fast and automatic through prolonged practice. System 1 has learned associations between ideas (the capital of France?); it has also learned skills such as reading and understanding nuances of social situations. Some skills, such as finding strong chess moves, are acquired only by specialized experts. Others are widely shared. Detecting the similarity of a personality sketch to an occupational stereotype requires broad knowledge of the language and the culture, which most of us possess. The knowledge is stored in memory and accessed without intention and without effort.

Several of the mental actions in the list are completely involuntary. You cannot refrain from understanding simple sentences in your own language or from orienting to a loud unexpected sound, nor can you prevent yourself from knowing that 2 + 2 = 4 or from thinking of Paris when the capital of France is mentioned. Other activities, such as chewing, are susceptible to voluntary control but normally run on automatic pilot. The control of attention is shared by the two systems. Orienting to a loud sound is normally an involuntary operation of System 1, which immediately mobilizes the voluntary attention of System 2. You may be able to resist turning toward the source of a loud and offensive comment at a crowded party, but even if your head does not move, your attention is initially directed to it, at least for a while. However, attention can be moved away from an unwanted focus, primarily by focusing intently on another target.

The highly diverse operations of System 2 have one feature in common: they require attention and are disrupted when attention is drawn away. Here are some examples:

– Brace for the starter gun in a race.

– Focus attention on the clowns in the circus.

– Focus on the voice of a particular person in a crowded and noisy room.

– Look for a woman with white hair.

– Search memory to identify a surprising sound.

– Maintain a faster walking speed than is natural for you.

– Monitor the appropriateness of your behavior in a social situation.

– Count the occurrences of the letter a in a page of text.

– Tell someone your phone number.

– Park in a narrow space (for most people except garage attendants).

– Compare two washing machines for overall value.

– Fill out a tax form.

– Check the validity of a complex logical argument.

In all these situations you must pay attention, and you will perform less well, or not at all, if you are not ready or if your attention is directed inappropriately. System 2 has some ability to change the way System 1 works, by programming the normally automatic functions of attention and memory. When waiting for a relative at a busy train station, for example, you can set yourself at will to look for a white-haired woman or a bearded man, and thereby increase the likelihood of detecting your relative from a distance. You can set your memory to search for capital cities that start with N or for French existentialist novels. And when you rent a car at London’s Heathrow Airport, the attendant will probably remind you that “we drive on the left side of the road over here.” In all these cases, you are asked to do something that does not come naturally, and you will find that the consistent maintenance of a set requires continuous exertion of at least some effort.

The often-used phrase “pay attention” is apt: you dispose of a limited budget of attention that you can allocate to activities, and if you try to go beyond your budget, you will fail. It is the mark of effortful activities that they interfere with each other, which is why it is difficult or impossible to conduct several at once. You could not compute the product of 17 x 24 while making a left turn into dense traffic, and you certainly should not try. You can do several things at once, but only if they are easy and undemanding. You are probably safe carrying on a conversation with a passenger while driving on an empty highway, and many parents have discovered, perhaps with some guilt, that they can read a story to a child while thinking of something else.

Everyone has some awareness of the limited capacity of attention, and our social behavior makes allowances for these limitations. When the driver of a car is overtaking a truck on a narrow road, for example, adult passengers quite sensibly stop talking. They know that distracting the driver is not a good idea, and they also suspect that he is temporarily deaf and will not hear what they say.

Intense focusing on a task can make people effectively blind, even to stimuli that normally attract attention.

The most dramatic demonstration was offered by Christopher Chabris and Daniel Simons in their book The Invisible Gorilla. They constructed a short film of two teams passing basketballs, one team wearing white shirts, the other wearing black. The viewers of the film are instructed to count the number of passes made by the white team, ignoring the black players. This task is difficult and completely absorbing. Halfway through the video, a woman wearing a gorilla suit appears, crosses the court, thumps her chest, and moves on. The gorilla is in view for 9 seconds. Many thousands of people have seen the video, and about half of them do not notice anything unusual. It is the counting task, and especially the instruction to ignore one of the teams, that causes the blindness. No one who watches the video without that task would miss the gorilla. Seeing and orienting are automatic functions of System 1, but they depend on the allocation of some attention to the relevant stimulus. The authors note that the most remarkable observation of their study is that people find its results very surprising. Indeed, the viewers who fail to see the gorilla are initially sure that it was not there; they cannot imagine missing such a striking event. The gorilla study illustrates two important facts about our minds: we can be blind to the obvious, and we are also blind to our blindness.

PLOT SYNOPSIS

The interaction of the two systems is a recurrent theme of the book, and a brief synopsis of the plot is in order.

In the story I will tell, Systems 1 and 2 are both active whenever we are awake. System 1 runs automatically and System 2 is normally in a comfortable low-effort mode, in which only a fraction of its capacity is engaged. System 1 continuously generates suggestions for System 2: impressions, intuitions, intentions, and feelings. If endorsed by System 2, impressions and intuitions turn into beliefs, and impulses turn into voluntary actions. When all goes smoothly, which is most of the time, System 2 adopts the suggestions of System 1 with little or no modification. You generally believe your impressions and act on your desires, and that is fine, usually.

When System 1 runs into difficulty, it calls on System 2 to support more detailed and specific processing that may solve the problem of the moment. System 2 is mobilized when a question arises for which System 1 does not offer an answer, as probably happened to you when you encountered the multiplication problem 17 x 24. You can also feel a surge of conscious attention whenever you are surprised. System 2 is activated when an event is detected that violates the model of the world that System 1 maintains. In that world, lamps do not jump, cats do not bark, and gorillas do not cross basketball courts. The gorilla experiment demonstrates that some attention is needed for the surprising stimulus to be detected. Surprise then activates and orients your attention: you will stare, and you will search your memory for a story that makes sense of the surprising event.

System 2 is also credited with the continuous monitoring of your own behavior, the control that keeps you polite when you are angry, and alert when you are driving at night. System 2 is mobilized to increased effort when it detects an error about to be made. Remember a time when you almost blurted out an offensive remark and note how hard you worked to restore control. In summary, most of what you (your System 2) think and do originates in your System 1, but System 2 takes over when things get difficult, and it normally has the last word.

The division of labor between System 1 and System 2 is highly efficient: it minimizes effort and optimizes performance. The arrangement works well most of the time because System 1 is generally very good at what it does: its models of familiar situations are accurate, its short-term predictions are usually accurate as well, and its initial reactions to challenges are swift and generally appropriate. System 1 has biases, however, systematic errors that it is prone to make in specified circumstances. As we shall see, it sometimes answers easier questions than the one it was asked, and it has little understanding of logic and statistics. One further limitation of System 1 is that it cannot be turned off. If you are shown a word on the screen in a language you know, you will read it, unless your attention is totally focused elsewhere.

CONFLICT

Figure 2 is a variant of a classic experiment that produces a conflict between the two systems. You should try the exercise before reading on.

Figure 2


You were almost certainly successful in saying the correct words in both tasks, and you surely discovered that some parts of each task were much easier than others. When you identified upper and lowercase, the left-hand column was easy and the right-hand column caused you to slow down and perhaps to stammer or stumble. When you named the position of words, the left-hand column was difficult and the right-hand column was much easier.

These tasks engage System 2, because saying “upper/lower” or “right/left” is not what you routinely do when looking down a column of words. One of the things you did to set yourself for the task was to program your memory so that the relevant words (upper and lower for the first task) were “on the tip of your tongue.” The prioritizing of the chosen words is effective and the mild temptation to read other words was fairly easy to resist when you went through the first column. But the second column was different, because it contained words for which you were set, and you could not ignore them. You were mostly able to respond correctly, but overcoming the competing response was a strain, and it slowed you down. You experienced a conflict between a task that you intended to carry out and an automatic response that interfered with it.

Conflict between an automatic reaction and an intention to control it is common in our lives. We are all familiar with the experience of trying not to stare at the oddly dressed couple at the neighboring table in a restaurant. We also know what it is like to force our attention on a boring book, when we constantly find ourselves returning to the point at which the reading lost its meaning. Where winters are hard, many drivers have memories of their car skidding out of control on the ice and of the struggle to follow well-rehearsed instructions that negate what they would naturally do: “Steer into the skid, and whatever you do, do not touch the brakes!” And every human being has had the experience of not telling someone to go to hell. One of the tasks of System 2 is to overcome the impulses of System 1. In other words, System 2 is in charge of self-control.

ILLUSIONS

To appreciate the autonomy of System 1, as well as the distinction between impressions and beliefs, take a good look at figure 3.

Figure 3


This picture is unremarkable: two horizontal lines of different lengths, with fins appended, pointing in different directions. The bottom line is obviously longer than the one above it. That is what we all see, and we naturally believe what we see. If you have already encountered this image, however, you recognize it as the famous Müller-Lyer illusion. As you can easily confirm by measuring them with a ruler, the horizontal lines are in fact identical in length.

Now that you have measured the lines, you, your System 2, the conscious being you call “I”, have a new belief: you know that the lines are equally long. If asked about their length, you will say what you know. But you still see the bottom line as longer. You have chosen to believe the measurement, but you cannot prevent System 1 from doing its thing; you cannot decide to see the lines as equal, although you know they are. To resist the illusion, there is only one thing you can do: you must learn to mistrust your impressions of the length of lines when fins are attached to them. To implement that rule, you must be able to recognize the illusory pattern and recall what you know about it. If you can do this, you will never again be fooled by the Müller-Lyer illusion. But you will still see one line as longer than the other.

Not all illusions are visual. There are illusions of thought, which we call cognitive illusions. As a graduate student, I attended some courses on the art and science of psychotherapy. During one of these lectures, our teacher imparted a morsel of clinical wisdom. This is what he told us:

“You will from time to time meet a patient who shares a disturbing tale of multiple mistakes in his previous treatment. He has been seen by several clinicians, and all failed him. The patient can lucidly describe how his therapists misunderstood him, but he has quickly perceived that you are different. You share the same feeling, are convinced that you understand him, and will be able to help.” At this point my teacher raised his voice as he said, “Do not even think of taking on this patient! Throw him out of the office! He is most likely a psychopath and you will not be able to help him.”

Many years later I learned that the teacher had warned us against psychopathic charm, and the leading authority in the study of psychopathy confirmed that the teacher’s advice was sound. The analogy to the Müller-Lyer illusion is close. What we were being taught was not how to feel about that patient. Our teacher took it for granted that the sympathy we would feel for the patient would not be under our control; it would arise from System 1. Furthermore, we were not being taught to be generally suspicious of our feelings about patients. We were told that a strong attraction to a patient with a repeated history of failed treatment is a danger sign, like the fins on the parallel lines. It is an illusion, a cognitive illusion, and I (System 2) was taught how to recognize it and advised not to believe it or act on it.

The question that is most often asked about cognitive illusions is whether they can be overcome. The message of these examples is not encouraging. Because System 1 operates automatically and cannot be turned off at will, errors of intuitive thought are often difficult to prevent. Biases cannot always be avoided, because System 2 may have no clue to the error. Even when cues to likely errors are available, errors can be prevented only by the enhanced monitoring and effortful activity of System 2.

As a way to live your life, however, continuous vigilance is not necessarily good, and it is certainly impractical. Constantly questioning our own thinking would be impossibly tedious, and System 2 is much too slow and inefficient to serve as a substitute for System 1 in making routine decisions. The best we can do is a compromise: learn to recognize situations in which mistakes are likely and try harder to avoid significant mistakes when the stakes are high. The premise of this book is that it is easier to recognize other people’s mistakes than our own.

USEFUL FICTIONS

You have been invited to think of the two systems as agents within the mind, with their individual personalities, abilities, and limitations. I will often use sentences in which the systems are the subjects, such as, “System 2 calculates products.”

The use of such language is considered a sin in the professional circles in which I travel, because it seems to explain the thoughts and actions of a person by the thoughts and actions of little people inside the person’s head. Grammatically the sentence about System 2 is similar to “The butler steals the petty cash.” My colleagues would point out that the butler’s action actually explains the disappearance of the cash, and they rightly question whether the sentence about System 2 explains how products are calculated. My answer is that the brief active sentence that attributes calculation to System 2 is intended as a description, not an explanation. It is meaningful only because of what you already know about System 2. It is shorthand for the following: “Mental arithmetic is a voluntary activity that requires effort, should not be performed while making a left turn, and is associated with dilated pupils and an accelerated heart rate.”

Similarly, the statement that “highway driving under routine conditions is left to System 1” means that steering the car around a bend is automatic and almost effortless. It also implies that an experienced driver can drive on an empty highway while conducting a conversation. Finally, “System 2 prevented James from reacting foolishly to the insult” means that James would have been more aggressive in his response if his capacity for effortful control had been disrupted (for example, if he had been drunk).

System 1 and System 2 are so central to the story I tell in this book that I must make it absolutely clear that they are fictitious characters. Systems 1 and 2 are not systems in the standard sense of entities with interacting aspects or parts. And there is no one part of the brain that either of the systems would call home. You may well ask: What is the point of introducing fictitious characters with ugly names into a serious book? The answer is that the characters are useful because of some quirks of our minds, yours and mine. A sentence is understood more easily if it describes what an agent (System 2) does than if it describes what something is, what properties it has. In other words, “System 2” is a better subject for a sentence than “mental arithmetic.” The mind, especially System 1, appears to have a special aptitude for the construction and interpretation of stories about active agents, who have personalities, habits, and abilities. You quickly formed a bad opinion of the thieving butler, you expect more bad behavior from him, and you will remember him for a while. This is also my hope for the language of systems.

Why call them System 1 and System 2 rather than the more descriptive “automatic system” and “effortful system”? The reason is simple: “Automatic system” takes longer to say than “System 1” and therefore takes more space in our working memory. This matters, because anything that occupies your working memory reduces your ability to think. You should treat “System 1” and “System 2” as nicknames, like Bob and Joe, identifying characters that you will get to know over the course of this book. The fictitious systems make it easier for me to think about judgment and choice, and will make it easier for you to understand what I say.

SPEAKING OF SYSTEM 1 AND SYSTEM 2

“He had an impression, but some of his impressions are illusions.”

“This was a pure System 1 response. She reacted to the threat before she recognized it.”

“This is your System 1 talking. Slow down and let your System 2 take control.”

2: Attention and Effort

In the unlikely event of this book being made into a film, System 2 would be a supporting character who believes herself to be the hero. The defining feature of System 2, in this story, is that its operations are effortful, and one of its main characteristics is laziness, a reluctance to invest more effort than is strictly necessary. As a consequence, the thoughts and actions that System 2 believes it has chosen are often guided by the figure at the center of the story, System 1. However, there are vital tasks that only System 2 can perform because they require effort and acts of self-control in which the intuitions and impulses of System 1 are overcome.

MENTAL EFFORT

If you wish to experience your System 2 working at full tilt, the following exercise will do; it should bring you to the limits of your cognitive abilities within 5 seconds. To start, make up several strings of 4 digits, all different, and write each string on an index card. Place a blank card on top of the deck. The task that you will perform is called Add-1. Here is how it goes:

Start beating a steady rhythm (or better yet, set a metronome at 1/sec). Remove the blank card and read the four digits aloud. Wait for two beats, then report a string in which each of the original digits is incremented by 1. If the digits on the card are 5294, the correct response is 6305. Keeping the rhythm is important.

Few people can cope with more than four digits in the Add-1 task, but if you want a harder challenge, please try Add-3.
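For concreteness, the transformation the task demands can be sketched in code. It operates digit by digit, with no carrying between positions, so under Add-1 a 9 wraps around to 0, which is how 5294 becomes 6305 in the example above. The function name `add_n` is my own, not the book's:

```python
def add_n(digits: str, n: int = 1) -> str:
    """Transform a digit string as in the Add-1/Add-3 tasks:
    each digit is incremented by n independently, wrapping modulo 10
    (so 9 becomes 0 under Add-1). There is no carrying between digits."""
    return "".join(str((int(d) + n) % 10) for d in digits)

print(add_n("5294"))      # the book's Add-1 example: prints 6305
print(add_n("5294", 3))   # the Add-3 variant: prints 8527
```

The point of the exercise, of course, is that you must run this loop in working memory while keeping a rhythm, which is what makes it a System 2 sprint.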

If you would like to know what your body is doing while your mind is hard at work, set up two piles of books on a sturdy table, place a video camera on one and lean your chin on the other, get the video going, and stare at the camera lens while you work on Add-1 or Add-3 exercises. Later, you will find in the changing size of your pupils a faithful record of how hard you worked.

I have a long personal history with the Add-1 task. Early in my career I spent a year at the University of Michigan, as a visitor in a laboratory that studied hypnosis. Casting about for a useful topic of research, I found an article in Scientific American in which the psychologist Eckhard Hess described the pupil of the eye as a window to the soul. I reread it recently and again found it inspiring.

It begins with Hess reporting that his wife had noticed his pupils widening as he watched beautiful nature pictures, and it ends with two striking pictures of the same good-looking woman, who somehow appears much more attractive in one than in the other. There is only one difference: the pupils of the eyes appear dilated in the attractive picture and constricted in the other.

Hess also wrote of belladonna, a pupil-dilating substance that was used as a cosmetic, and of bazaar shoppers who wear dark glasses in order to hide their level of interest from merchants.

One of Hess’s findings especially captured my attention. He had noticed that the pupils are sensitive indicators of mental effort: they dilate substantially when people multiply two-digit numbers, and they dilate more if the problems are hard than if they are easy. His observations indicated that the response to mental effort is distinct from emotional arousal. Hess’s work did not have much to do with hypnosis, but I concluded that the idea of a visible indication of mental effort had promise as a research topic. A graduate student in the lab, Jackson Beatty, shared my enthusiasm and we got to work.

Beatty and I developed a setup similar to an optician’s examination room, in which the experimental participant leaned her head on a chin-and-forehead rest and stared at a camera while listening to prerecorded information and answering questions on the recorded beats of a metronome. The beats triggered an infrared flash every second, causing a picture to be taken. At the end of each experimental session, we would rush to have the film developed, project the images of the pupil on a screen, and go to work with a ruler. The method was a perfect fit for young and impatient researchers: we knew our results almost immediately, and they always told a clear story.

Beatty and I focused on paced tasks, such as Add-1, in which we knew precisely what was on the subject’s mind at any time. We recorded strings of digits on beats of the metronome and instructed the subject to repeat or transform the digits one by one, maintaining the same rhythm. We soon discovered that the size of the pupil varied second by second, reflecting the changing demands of the task. The shape of the response was an inverted V. As you experienced it if you tried Add-1 or Add-3, effort builds up with every added digit that you hear, reaches an almost intolerable peak as you rush to produce a transformed string during and immediately after the pause, and relaxes gradually as you “unload” your short-term memory.

The pupil data corresponded precisely to subjective experience: longer strings reliably caused larger dilations, the transformation task compounded the effort, and the peak of pupil size coincided with maximum effort. Add-1 with four digits caused a larger dilation than the task of holding seven digits for immediate recall. Add-3, which is much more difficult, is the most demanding that I ever observed. In the first 5 seconds, the pupil dilates by about 50% of its original area and heart rate increases by about 7 beats per minute. This is as hard as people can work; they give up if more is asked of them. When we exposed our subjects to more digits than they could remember, their pupils stopped dilating or actually shrank.

We worked for some months in a spacious basement suite in which we had set up a closed-circuit system that projected an image of the subject’s pupil on a screen in the corridor; we also could hear what was happening in the laboratory. The diameter of the projected pupil was about a foot; watching it dilate and contract when the participant was at work was a fascinating sight, quite an attraction for visitors in our lab. We amused ourselves and impressed our guests by our ability to divine when the participant gave up on a task. During a mental multiplication, the pupil normally dilated to a large size within a few seconds and stayed large as long as the individual kept working on the problem; it contracted immediately when she found a solution or gave up.

As we watched from the corridor, we would sometimes surprise both the owner of the pupil and our guests by asking, “Why did you stop working just now?” The answer from inside the lab was often, “How did you know?” to which we would reply, “We have a window to your soul.”

The casual observations we made from the corridor were sometimes as informative as the formal experiments. I made a significant discovery as I was idly watching a woman’s pupil during a break between two tasks. She had kept her position on the chin rest, so I could see the image of her eye while she engaged in routine conversation with the experimenter. I was surprised to see that the pupil remained small and did not noticeably dilate as she talked and listened. Unlike the tasks that we were studying, the mundane conversation apparently demanded little or no effort, no more than retaining two or three digits. This was a eureka moment: I realized that the tasks we had chosen for study were exceptionally effortful. An image came to mind: mental life, today I would speak of the life of System 2, is normally conducted at the pace of a comfortable walk, sometimes interrupted by episodes of jogging and on rare occasions by a frantic sprint. The Add-1 and Add-3 exercises are sprints, and casual chatting is a stroll.

We found that people, when engaged in a mental sprint, may become effectively blind. The authors of The Invisible Gorilla had made the gorilla “invisible” by keeping the observers intensely busy counting passes. We reported a rather less dramatic example of blindness during Add-1. Our subjects were exposed to a series of randomly flashing letters while they worked. They were told to give the task complete priority, but they were also asked to report, at the end of the digit task, whether the letter K had appeared at any time during the trial. The main finding was that the ability to detect and report the target letter changed in the course of the 10 seconds of the exercise. The observers almost never missed a K that was shown at the beginning or near the end of the Add-1 task, but they missed the target almost half the time when mental effort was at its peak, although we had pictures of their wide-open eye staring straight at it. Failures of detection followed the same inverted-V pattern as the dilating pupil. The similarity was reassuring: the pupil was a good measure of the physical arousal that accompanies mental effort, and we could go ahead and use it to understand how the mind works.

Much like the electricity meter outside your house or apartment, the pupils offer an index of the current rate at which mental energy is used. The analogy goes deep. Your use of electricity depends on what you choose to do, whether to light a room or toast a piece of bread. When you turn on a bulb or a toaster, it draws the energy it needs but no more. Similarly, we decide what to do, but we have limited control over the effort of doing it. Suppose you are shown four digits, say, 9462, and told that your life depends on holding them in memory for 10 seconds. However much you want to live, you cannot exert as much effort in this task as you would be forced to invest to complete an Add-3 transformation on the same digits.
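The gap in demand between merely holding the digits and transforming them is easiest to see by spelling the rule out. Here is a minimal sketch of the Add-k transformation; the mod-10 wrap (a digit past 9 loses 10) is my reading of the task as the book describes it earlier, and the function name `add_k` is mine:

```python
def add_k(digits: str, k: int = 3) -> str:
    """Add-k transformation: add k to each digit, wrapping past 9.

    The mod-10 wrap is an assumption based on the task as described
    earlier in the book (Add-1 turns 5294 into 6305).
    """
    return "".join(str((int(d) + k) % 10) for d in digits)

# The four digits from the text, transformed rather than merely held:
print(add_k("9462"))        # Add-3 result
print(add_k("9462", k=1))   # the easier Add-1 variant
```

A machine does the transformation effortlessly; for a person, producing the Add-3 string while the original digits decay from memory is precisely what pushes effort toward its limit.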

System 2 and the electrical circuits in your home both have limited capacity, but they respond differently to threatened overload. A breaker trips when the demand for current is excessive, causing all devices on that circuit to lose power at once. In contrast, the response to mental overload is selective and precise: System 2 protects the most important activity, so it receives the attention it needs; “spare capacity” is allocated second by second to other tasks.

In our version of the gorilla experiment, we instructed the participants to assign priority to the digit task. We know that they followed that instruction, because the timing of the visual target had no effect on the main task. If the critical letter was presented at a time of high demand, the subjects simply did not see it. When the transformation task was less demanding, detection performance was better.

The sophisticated allocation of attention has been honed by a long evolutionary history. Orienting and responding quickly to the gravest threats or most promising opportunities improved the chance of survival, and this capability is certainly not restricted to humans. Even in modern humans, System 1 takes over in emergencies and assigns total priority to self-protective actions. Imagine yourself at the wheel of a car that unexpectedly skids on a large oil slick. You will find that you have responded to the threat before you became fully conscious of it.

Beatty and I worked together for only a year, but our collaboration had a large effect on our subsequent careers. He eventually became the leading authority on “cognitive pupillometry,” and I wrote a book titled Attention and Effort, which was based in large part on what we learned together and on follow-up research I did at Harvard the following year. We learned a great deal about the working mind, which I now think of as System 2, from measuring pupils in a wide variety of tasks.

As you become skilled in a task, its demand for energy diminishes. Studies of the brain have shown that the pattern of activity associated with an action changes as skill increases, with fewer brain regions involved. Talent has similar effects. Highly intelligent individuals need less effort to solve the same problems, as indicated by both pupil size and brain activity. A general “law of least effort” applies to cognitive as well as physical exertion. The law asserts that if there are several ways of achieving the same goal, people will eventually gravitate to the least demanding course of action. In the economy of action, effort is a cost, and the acquisition of skill is driven by the balance of benefits and costs. Laziness is built deep into our nature.

The tasks that we studied varied considerably in their effects on the pupil. At baseline, our subjects were awake, aware, and ready to engage in a task, probably at a higher level of arousal and cognitive readiness than usual. Holding one or two digits in memory or learning to associate a word with a digit (3 = door) produced reliable effects on momentary arousal above that baseline, but the effects were minuscule, only 5% of the increase in pupil diameter associated with Add-3. A task that required discriminating between the pitch of two tones yielded significantly larger dilations. Recent research has shown that inhibiting the tendency to read distracting words (as in figure 2 of the preceding chapter) also induces moderate effort. Tests of short-term memory for six or seven digits were more effortful. As you can experience, the request to retrieve and say aloud your phone number or your spouse’s birthday also requires a brief but significant effort, because the entire string must be held in memory as a response is organized. Mental multiplication of two-digit numbers and the Add-3 task are near the limit of what most people can do.

What makes some cognitive operations more demanding and effortful than others? What outcomes must we purchase in the currency of attention? What can System 2 do that System 1 cannot? We now have tentative answers to these questions.

Effort is required to maintain simultaneously in memory several ideas that require separate actions, or that need to be combined according to a rule: rehearsing your shopping list as you enter the supermarket, choosing between the fish and the veal at a restaurant, or combining a surprising result from a survey with the information that the sample was small, for example. System 2 is the only one that can follow rules, compare objects on several attributes, and make deliberate choices between options. The automatic System 1 does not have these capabilities. System 1 detects simple relations (“they are all alike,” “the son is much taller than the father”) and excels at integrating information about one thing, but it does not deal with multiple distinct topics at once, nor is it adept at using purely statistical information. System 1 will detect that a person described as “a meek and tidy soul, with a need for order and structure, and a passion for detail” resembles a caricature librarian, but combining this intuition with knowledge about the small number of librarians is a task that only System 2 can perform, if System 2 knows how to do so, which is true of few people.

A crucial capability of System 2 is the adoption of “task sets”: it can program memory to obey an instruction that overrides habitual responses. Consider the following: Count all occurrences of the letter f in this page. This is not a task you have ever performed before and it will not come naturally to you, but your System 2 can take it on. It will be effortful to set yourself up for this exercise, and effortful to carry it out, though you will surely improve with practice. Psychologists speak of “executive control” to describe the adoption and termination of task sets, and neuroscientists have identified the main regions of the brain that serve the executive function. One of these regions is involved whenever a conflict must be resolved. Another is the prefrontal area of the brain, a region that is substantially more developed in humans than in other primates, and is involved in operations that we associate with intelligence.

Now suppose that at the end of the page you get another instruction: count all the commas in the next page. This will be harder, because you will have to overcome the newly acquired tendency to focus attention on the letter f. One of the significant discoveries of cognitive psychologists in recent decades is that switching from one task to another is effortful, especially under time pressure. The need for rapid switching is one of the reasons that Add-3 and mental multiplication are so difficult. To perform the Add-3 task, you must hold several digits in your working memory at the same time, associating each with a particular operation: some digits are in the queue to be transformed, one is in the process of transformation, and others, already transformed, are retained for reporting. Modern tests of working memory require the individual to switch repeatedly between two demanding tasks, retaining the results of one operation while performing the other. People who do well on these tests tend to do well on tests of general intelligence. However, the ability to control attention is not simply a measure of intelligence; measures of efficiency in the control of attention predict performance of air traffic controllers and of Israeli Air Force pilots beyond the effects of intelligence.

Time pressure is another driver of effort. As you carried out the Add-3 exercise, the rush was imposed in part by the metronome and in part by the load on memory. Like a juggler with several balls in the air, you cannot afford to slow down; the rate at which material decays in memory forces the pace, driving you to refresh and rehearse information before it is lost. Any task that requires you to keep several ideas in mind at the same time has the same hurried character. Unless you have the good fortune of a capacious working memory, you may be forced to work uncomfortably hard. The most effortful forms of slow thinking are those that require you to think fast.

You surely observed as you performed Add-3 how unusual it is for your mind to work so hard. Even if you think for a living, few of the mental tasks in which you engage in the course of a working day are as demanding as Add-3, or even as demanding as storing six digits for immediate recall. We normally avoid mental overload by dividing our tasks into multiple easy steps, committing intermediate results to long-term memory or to paper rather than to an easily overloaded working memory. We cover long distances by taking our time and conduct our mental lives by the law of least effort.

SPEAKING OF ATTENTION AND EFFORT

“I won’t try to solve this while driving. This is a pupil-dilating task. It requires mental effort!”

“The law of least effort is operating here. He will think as little as possible.”

“She did not forget about the meeting. She was completely focused on something else when the meeting was set and she just didn’t hear you.”

“What came quickly to my mind was an intuition from System 1. I’ll have to start over and search my memory deliberately.”

Three

The Lazy Controller

I spend a few months each year in Berkeley, and one of my great pleasures there is a daily four-mile walk on a marked path in the hills, with a fine view of San Francisco Bay. I usually keep track of my time and have learned a fair amount about effort from doing so. I have found a speed, about 17 minutes for a mile, which I experience as a stroll. I certainly exert physical effort and burn more calories at that speed than if I sat in a recliner, but I experience no strain, no conflict, and no need to push myself. I am also able to think and work while walking at that rate. Indeed, I suspect that the mild physical arousal of the walk may spill over into greater mental alertness.

System 2 also has a natural speed. You expend some mental energy in random thoughts and in monitoring what goes on around you even when your mind does nothing in particular, but there is little strain. Unless you are in a situation that makes you unusually wary or self-conscious, monitoring what happens in the environment or inside your own head demands little effort.

The psychologist who has done this remarkable research, Kathleen Vohs, has been laudably restrained in discussing the implications of her findings, leaving the task to her readers. Her experiments are profound; her findings suggest that living in a culture that surrounds us with reminders of money may shape our behavior and our attitudes in ways that we do not know about and of which we may not be proud. Some cultures provide frequent reminders of respect, others constantly remind their members of God, and some societies prime obedience by large images of the Dear Leader. Can there be any doubt that the ubiquitous portraits of the national leader in dictatorial societies not only convey the feeling that “Big Brother Is Watching” but also lead to an actual reduction in spontaneous thought and independent action?

The evidence of priming studies suggests that reminding people of their mortality increases the appeal of authoritarian ideas, which may become reassuring in the context of the terror of death. Other experiments have confirmed Freudian insights about the role of symbols and metaphors in unconscious associations. For example, consider the ambiguous word fragments W_ _ H and S_ _ P. People who were recently asked to think of an action of which they are ashamed are more likely to complete those fragments as WASH and SOAP and less likely to see WISH and SOUP. Furthermore, merely thinking about stabbing a coworker in the back leaves people more inclined to buy soap, disinfectant, or detergent than batteries, juice, or candy bars. Feeling that one’s soul is stained appears to trigger a desire to cleanse one’s body, an impulse that has been dubbed the “Lady Macbeth effect.”

“The world makes much less sense than you think. The coherence comes mostly from the way your mind works.”

“They were primed to find flaws, and this is exactly what they found.”

“His System 1 constructed a story, and his System 2 believed it. It happens to all of us.”

“I made myself smile and I’m actually feeling better!”

. . .

from

Thinking, Fast And Slow

by Daniel Kahneman

get it at Amazon.com

“Which of the me’s is me?” An Unquiet Mind. A Memoir of Moods and Madness – Kay Redfield Jamison.

Kay Jamison’s story is not of someone who has succeeded despite having a severe disorder, but of someone whose particular triumphs are a consequence of her disorder. She would not have observed what she has if she had not experienced what she did. The fact that she has endured such battles helps her to understand them in others.

“The disease that has, on several occasions, nearly killed me does kill tens of thousands of people every year: most are young, most die unnecessarily, and many are among the most imaginative and gifted that we as a society have. The major clinical problem in treating manic-depressive illness is not that there are not effective medications, there are, but that patients so often refuse to take them. Freedom from the control imposed by medication loses its meaning when the only alternatives are death and insanity.”

Her remarkable achievements are a beacon of hope to those who imagine that they cannot survive their condition, much less thrive with it.

I doubt sometimes whether a quiet & unagitated life would have suited me, yet I sometimes long for it. – Byron.

For centuries, the prevailing wisdom had been that having a mental illness would prevent a doctor from providing competent care, would indicate a vulnerability that would undermine the requisite aura of medical authority, and would kill any patient’s trust. In the face of this intense stigmatization, many bright and compassionate people with mental illness avoided the field of medicine, and many physicians with mental illness lived in secrecy. Kay Redfield Jamison herself led a closeted life for many years, even as she coauthored the standard medical textbook on bipolar illness. She suffered from the anguish inherent in her illness and from the pain that comes of living a lie.

With the publication of An Unquiet Mind, she left that lie behind, revealing her condition not only to her immediate colleagues and patients, but also to the world, and becoming the first clinician ever to describe travails with bipolar illness in a memoir. It was an act of extraordinary courage, a grand risk taking infused with a touch of manic exuberance, and it broke through a firewall of prejudice. You can have bipolar illness and be a brilliant clinician; you can have bipolar illness and be a leading authority on the condition, informed by your experiences rather than blinded by them. You can have bipolar illness and still have a joyful, fruitful, and dignified life.

Kay Jamison’s story is not of someone who has succeeded despite having a severe disorder, but of someone whose particular triumphs are a consequence of her disorder. Ovid said, “The wounded doctor heals best,” and Jamison’s open-hearted declarations have been a salve for the wounded psyches of untold thousands of people; her unquiet mind has often soothed the minds of others. Her discernments come from a rare combination of observation and experience: she would not have observed what she has if she had not experienced what she did. The fact that she has endured such battles helps her to understand them in others, and her frankness about them offers an antidote to the pervasive shame that cloisters so many mentally ill people in fretful isolation.

Her remarkable achievements are a beacon of hope to those who imagine that they cannot survive their condition, much less thrive with it. Those who address mental illnesses tend to do so with either rigor or empathy; Jamison attains a rare marriage of the two. Just as her clinical work has been strengthened by her personal experience, her personal experience has been informed by her academic insights.

It is different to go through manic and depressive episodes when you know everything there is to know about your condition than it is to go through them in ignorance, constantly ambushed by the apparently inexplicable.

Like many people with mental illness, Jamison has had to reckon with the impossibility of separating her personality from her condition. “Which of the me’s is me?” she asks rhetorically in these pages. She kept up a nearly willful self-ignorance for years before she succumbed to knowledge; she resisted remedy at first because she feared she might lose some of her essential self to it. It took repeated descents and ascents into torment to instigate a kind of acquiescence. She has become glad of that surrender; it has saved a life that turns out to be well worth living. As this book bleached away her erstwhile denial, it has mediated her readers’ denial, too. As a professor of psychiatry at Johns Hopkins University and in her frequent lectures around the globe, Kay Jamison has taught a younger generation of doctors how to make sense of their patients: not merely how to treat them, but how to help them.

Though An Unquiet Mind does not provide diagnostic criteria or propose specific courses of treatment, it remains very much a book about medicine, with a touchingly fond portrait of science. Jamison expresses enormous gratitude to the doctors who have treated her and to the researchers who established the modes of treatment that have kept her alive. She engages medicine’s resonant clarities, and she tolerates the relative primitivism of our understanding of the brain.

Appreciating the biology of her illness and the mechanisms of its therapies allowed her to achieve a truce with her bipolar illness, and science informed her choice to speak openly about her skirmishes with it. That peace has not entirely precluded further episodes, but it makes them easier to tolerate when they come. Equally, it has given her the courage to stay on medication and the resilience to sustain other forms of self-care.

You can feel in Jamison’s writing a bracing honesty unmarred by self-pity. It seems clear Jamison is not by nature an exhibitionist, and making so much of her private life into public property cannot have been easy for her. On every page, you sense the resolve it has required. Her book differs from much confessional writing in that, although she describes certain experiences in agonizing detail, she maintains a vocabulary of discretion. An Unquiet Mind may have been intended as a book about an illness, not about a life, but it is both. There is satisfaction in making your affliction useful to other people; it redeems what seemed in the instance to be useless experiences. That insistence on making something good out of something bad is the vital force in her writing.

I met Kay Jamison in 1995, shortly after the publication of An Unquiet Mind, when I had first decided to write about depression. I contacted her to request an interview, and she suggested we have lunch; she then invited me to my first serious scientific conference, a suicide symposium she had organized, attended by the leading figures in the field. Her kindness to me in the early stages of my research points to a personal generosity that mirrors the brave generosity of her books. The forbearance that has made her a good clinician and a good writer also makes her a good friend.

In the years since then, Jamison has produced a corpus of work that, in a very different kind of bipolarity, limns the glittering revelations of psychosis only to return to its perilous ordeals. Touched with Fire (1993) had already chronicled the artistic achievements of people with bipolar illness; Night Falls Fast (1999) tackles the impossible subject of suicide; Exuberance (2004) tells us how unipolar mania has generated many intellectual and artistic breakthroughs; and Nothing Was the Same (2009) is a closely observed and deeply personal account of losing her second husband to cancer, a journey complicated by her unreliable moods. Her illness runs through these books even when it is not her explicit topic. But that recurrent theme does not narrow the books into ego studies; instead, it makes them startlingly, powerfully intimate.

Jamison consistently evinces a romantic attachment to language itself. Her sentences flow out in an often poetic rapture, and she displays a sustaining love for the poetry of others, quoting it by the apposite yard. Few doctors know poetry so well, and few poets understand so much biology, and Jamison serves as a translator between humanism and science, which are so often disparate vocabularies for the same phenomena. While poetry inflects her literary voice, it sits comfortably beside a sense of humor. Irony is among her best defenses against gloom, and the zing of her comic asides makes reading about unbearable things a great deal more bearable. The crossing point of precision, luminosity, and hilarity may be the safest domain for an inconsistent mind, a nexus of relief for someone whose stoicism cannot fully assuage her distress.

Two decades after its publication, An Unquiet Mind remains fresh. There’s been a bit more science in the field and a great deal of social change regarding mental illness, change this book helped to create: a society in which what was relentlessly shameful is more easily and frequently acknowledged. The book delineates not how to treat the condition, but how to live with the condition and its treatments, and that remains relevant even as actual treatments evolve.

Jamison does not stint on her own despair, but she has constructed meaning and built an identity from it. While she might not have opted for this illness, neither does she entirely regret it; she prefers, as she writes so movingly, a life of passionate turbulence to one of tedious calm. Learning to appreciate the things you also regret makes for a good way forward. If you have bipolar illness, this book will help you to forgive yourself for everything that has gone awry; if you do not, it will perhaps show how a steely tenacity can imbue disasters with value, a capacity that stands to enrich any and every life.

Andrew Solomon

Kay Redfield Jamison

Prologue

When it’s two o’clock in the morning, and you’re manic, even the UCLA Medical Center has a certain appeal. The hospital, ordinarily a cold clotting of uninteresting buildings, became for me, that fall morning not quite twenty years ago, a focus of my finely wired, exquisitely alert nervous system. With vibrissae twinging, antennae perked, eyes fast forwarding and fly faceted, I took in everything around me. I was on the run. Not just on the run but fast and furious on the run, darting back and forth across the hospital parking lot trying to use up a boundless, restless, manic energy. I was running fast, but slowly going mad.

The man I was with, a colleague from the medical school, had stopped running an hour earlier and was, he said impatiently, exhausted. This, to a saner mind, would not have been surprising: the usual distinction between day and night had long since disappeared for the two of us, and the endless hours of scotch, brawling, and fallings about in laughter had taken an obvious, if not final, toll. We should have been sleeping or working, publishing not perishing, reading journals, writing in charts, or drawing tedious scientific graphs that no one would read.

Suddenly a police car pulled up. Even in my less than totally lucid state of mind I could see that the officer had his hand on his gun as he got out of the car. “What in the hell are you doing running around the parking lot at this hour?” he asked. A not unreasonable question. My few remaining islets of judgment reached out to one another and linked up long enough to conclude that this particular situation was going to be hard to explain. My colleague, fortunately, was thinking far better than I was and managed to reach down into some deeply intuitive part of his own and the world’s collective unconscious and said, “We’re both on the faculty in the psychiatry department.” The policeman looked at us, smiled, went back to his squad car, and drove away. Being professors of psychiatry explained everything.

Within a month of signing my appointment papers to become an assistant professor of psychiatry at the University of California, Los Angeles, I was well on my way to madness; it was 1974, and I was twenty-eight years old. Within three months I was manic beyond recognition and just beginning a long, costly personal war against a medication that I would, in a few years’ time, be strongly encouraging others to take. My illness, and my struggles against the drug that ultimately saved my life and restored my sanity, had been years in the making.

For as long as I can remember I was frighteningly, although often wonderfully, beholden to moods. Intensely emotional as a child, mercurial as a young girl, first severely depressed as an adolescent, and then unrelentingly caught up in the cycles of manic-depressive illness by the time I began my professional life, I became, both by necessity and intellectual inclination, a student of moods. It has been the only way I know to understand, indeed to accept, the illness I have; it also has been the only way I know to try and make a difference in the lives of others who also suffer from mood disorders.

The disease that has, on several occasions, nearly killed me does kill tens of thousands of people every year: most are young, most die unnecessarily, and many are among the most imaginative and gifted that we as a society have.

The Chinese believe that before you can conquer a beast you first must make it beautiful. In some strange way, I have tried to do that with manic-depressive illness. It has been a fascinating, albeit deadly, enemy and companion; I have found it to be seductively complicated, a distillation both of what is finest in our natures, and of what is most dangerous. In order to contend with it, I first had to know it in all of its moods and infinite disguises, understand its real and imagined powers. Because my illness seemed at first simply to be an extension of myself, that is to say, of my ordinarily changeable moods, energies, and enthusiasms, I perhaps gave it at times too much quarter. And, because I thought I ought to be able to handle my increasingly violent mood swings by myself, for the first ten years I did not seek any kind of treatment. Even after my condition became a medical emergency, I still intermittently resisted the medications that both my training and clinical research expertise told me were the only sensible way to deal with the illness I had.

My manias, at least in their early and mild forms, were absolutely intoxicating states that gave rise to great personal pleasure, an incomparable flow of thoughts, and a ceaseless energy that allowed the translation of new ideas into papers and projects. Medications not only cut into these fast-flowing, high-flying times, they also brought with them seemingly intolerable side effects. It took me far too long to realize that lost years and relationships cannot be recovered, that damage done to oneself and others cannot always be put right again, and that freedom from the control imposed by medication loses its meaning when the only alternatives are death and insanity.

The war that I waged against myself is not an uncommon one. The major clinical problem in treating manic-depressive illness is not that there are not effective medications, there are, but that patients so often refuse to take them. Worse yet, because of a lack of information, poor medical advice, stigma, or fear of personal and professional reprisals, they do not seek treatment at all.

Manic-depression distorts moods and thoughts, incites dreadful behaviors, destroys the basis of rational thought, and too often erodes the desire and will to live. It is an illness that is biological in its origins, yet one that feels psychological in the experience of it; an illness that is unique in conferring advantage and pleasure, yet one that brings in its wake almost unendurable suffering and, not infrequently, suicide.

I am fortunate that I have not died from my illness, fortunate in having received the best medical care available, and fortunate in having the friends, colleagues, and family that I do. Because of this, I have in turn tried, as best I could, to use my own experiences of the disease to inform my research, teaching, clinical practice, and advocacy work.

Through writing and teaching I have hoped to persuade my colleagues of the paradoxical core of this quicksilver illness that can both kill and create; and, along with many others, have tried to change public attitudes about psychiatric illnesses in general and manic depressive illness in particular. It has been difficult at times to weave together the scientific discipline of my intellectual field with the more compelling realities of my own emotional experiences. And yet it has been from this binding of raw emotion to the more distanced eye of clinical science that I feel I have obtained the freedom to live the kind of life I want, and the human experiences necessary to try and make a difference in public awareness and clinical practice.

I have had many concerns about writing a book that so explicitly describes my own attacks of mania, depression, and psychosis, as well as my problems acknowledging the need for ongoing medication. Clinicians have been, for obvious reasons of licensing and hospital privileges, reluctant to make their psychiatric problems known to others. These concerns are often well warranted.

I have no idea what the long-term effects of discussing such issues so openly will be on my personal and professional life, but, whatever the consequences, they are bound to be better than continuing to be silent. I am tired of hiding, tired of misspent and knotted energies, tired of the hypocrisy, and tired of acting as though I have something to hide.

One is what one is, and the dishonesty of hiding behind a degree, or a title, or any manner and collection of words, is still exactly that: dishonest. Necessary, perhaps, but dishonest. I continue to have concerns about my decision to be public about my illness, but one of the advantages of having had manic-depressive illness for more than thirty years is that very little seems insurmountably difficult. Much like crossing the Bay Bridge when there is a storm over the Chesapeake, one may be terrified to go forward, but there is no question of going back. I find myself somewhat inevitably taking a certain solace in Robert Lowell's essential question, Yet why not say what happened?

Part One

THE WILD BLUE YONDER

Into the Sun

I was standing with my head back, one pigtail caught between my teeth, listening to the jet overhead. The noise was loud, unusually so, which meant that it was close. My elementary school was near Andrews Air Force Base, just outside Washington; many of us were pilots’ kids, so the sound was a matter of routine. Being routine, however, didn’t take away from the magic, and I instinctively looked up from the playground to wave. I knew, of course, that the pilot couldn’t see me, I always knew that, just as I knew that even if he could see me the odds were that it wasn’t actually my father. But it was one of those things one did, and anyway I loved any and all excuses just to stare up into the skies. My father, a career Air Force officer, was first and foremost a scientist and only secondarily a pilot. But he loved to fly, and, because he was a meteorologist, both his mind and his soul ended up being in the skies. Like my father, I looked up rather more than I looked out.

When I would say to him that the Navy and the Army were so much older than the Air Force, had so much more tradition and legend, he would say, Yes, that’s true, but the Air Force is the future. Then he would always add: And we can fly. This statement of creed would occasionally be followed by an enthusiastic rendering of the Air Force song, fragments of which remain with me to this day, nested together, somewhat improbably, with phrases from Christmas carols, early poems, and bits and pieces of the Book of Common Prayer: all having great mood and meaning from childhood, and all still retaining the power to quicken the pulses.

So I would listen and believe and, when I would hear the words “Off we go into the wild blue yonder,” I would think that “wild” and “yonder” were among the most wonderful words I had ever heard; likewise, I would feel the total exhilaration of the phrase “Climbing high, into the sun” and know instinctively that I was a part of those who loved the vastness of the sky.

The noise of the jet had become louder, and I saw the other children in my second grade class suddenly dart their heads upward. The plane was coming in very low, then it streaked past us, scarcely missing the playground. As we stood there clumped together and absolutely terrified, it flew into the trees, exploding directly in front of us. The ferocity of the crash could be felt and heard in the plane’s awful impact; it also could be seen in the frightening yet terrible lingering loveliness of the flames that followed. Within minutes, it seemed, mothers were pouring onto the playground to reassure children that it was not their fathers; fortunately for my brother and sister and myself, it was not ours either. Over the next few days it became clear, from the release of the young pilot’s final message to the control tower before he died, that he knew he could save his own life by bailing out. He also knew, however, that by doing so he risked that his unaccompanied plane would fall onto the playground and kill those of us who were there.

The dead pilot became a hero, transformed into a scorchingly vivid, completely impossible ideal for what was meant by the concept of duty. It was an impossible ideal, but all the more compelling and haunting because of its very unobtainability. The memory of the crash came back to me many times over the years, as a reminder both of how one aspires after and needs such ideals, and of how killingly difficult it is to achieve them. I never again looked at the sky and saw only vastness and beauty. From that afternoon on I saw that death was also and always there.

Although, like all military families, we moved a lot (by the fifth grade my older brother, sister, and I had attended four different elementary schools, and we had lived in Florida, Puerto Rico, California, Tokyo, and Washington, twice), our parents, especially my mother, kept life as secure, warm, and constant as possible. My brother was the eldest and the steadiest of the three of us children and my staunch ally, despite the three-year difference in our ages. I idolized him growing up and often trailed along after him, trying very hard to be inconspicuous, when he and his friends would wander off to play baseball or cruise the neighborhood. He was smart, fair, and self-confident, and I always felt that there was a bit of extra protection coming my way whenever he was around. My relationship with my sister, who was only thirteen months older than me, was more complicated. She was the truly beautiful one in the family, with dark hair and wonderful eyes, who from the earliest times was almost painfully aware of everything around her. She had a charismatic way, a fierce temper, very black and passing moods, and little tolerance for the conservative military lifestyle that she felt imprisoned us all. She led her own life, defiant, and broke out with abandon whenever and wherever she could. She hated high school and, when we were living in Washington, frequently skipped classes to go to the Smithsonian or the Army Medical Museum or just to smoke and drink beer with her friends.

She resented me, feeling that I was, as she mockingly put it, “the fair-haired one”, a sister, she thought, to whom friends and schoolwork came too easily, passing far too effortlessly through life, protected from reality by an absurdly optimistic view of people and life. Sandwiched between my brother, who was a natural athlete and who never seemed to see less-than-perfect marks on his college and graduate admission examinations, and me, who basically loved school and was vigorously involved in sports and friends and class activities, she stood out as the member of the family who fought back and rebelled against what she saw as a harsh and difficult world. She hated military life, hated the constant upheaval and the need to make new friends, and felt the family politeness was hypocrisy.

Perhaps because my own violent struggles with black moods did not occur until I was older, I was given a longer time to inhabit a more benign, less threatening, and, indeed to me, a quite wonderful world of high adventure. This world, I think, was one my sister had never known. The long and important years of childhood and early adolescence were, for the most part, very happy ones for me, and they afforded me a solid base of warmth, friendship, and confidence. They were to be an extremely powerful amulet, a potent and positive countervailing force against future unhappiness. My sister had no such years, no such amulets. Not surprisingly, perhaps, when both she and I had to deal with our respective demons, my sister saw the darkness as being within and part of herself, the family, and the world. I, instead, saw it as a stranger; however lodged within my mind and soul the darkness became, it almost always seemed an outside force that was at war with my natural self.

My sister, like my father, could be vastly charming: fresh, original, and devastatingly witty, she also was blessed with an extraordinary sense of aesthetic design. She was not an easy or untroubled person, and as she grew older her troubles grew with her, but she had an enormous artistic imagination and soul. She also could break your heart and then provoke your temper beyond any reasonable level of endurance. Still, I always felt a bit like pieces of earth to my sister’s fire and flames.

For his part, my father, when involved, was often magically involved: ebullient, funny, curious about almost everything, and able to describe with delight and originality the beauties and phenomena of the natural world. A snowflake was never just a snowflake, nor a cloud just a cloud. They became events and characters, and part of a lively and oddly ordered universe. When times were good and his moods were at high tide, his infectious enthusiasm would touch everything. Music would fill the house, wonderful new pieces of jewelry would appear, a moonstone ring, a delicate bracelet of cabochon rubies, a pendant fashioned from a moody sea-green stone set in a swirl of gold, and we'd all settle into our listening mode, for we knew that soon we would be hearing a very great deal about whatever new enthusiasm had taken him over. Sometimes it would be a discourse based on a passionate conviction that the future and salvation of the world was to be found in windmills; sometimes it was that the three of us children simply had to take Russian lessons because Russian poetry was so inexpressibly beautiful in the original.

*

from

An Unquiet Mind. A Memoir of Moods and Madness

by Kay Redfield Jamison

get it at Amazon.com

The Great God of Depression. How mental illness stopped being a terrible dark secret – Pagan Kennedy * DARKNESS VISIBLE. A MEMOIR of MADNESS – William Styron.

The pain of severe depression is quite unimaginable to those who have not suffered it, and it kills in many instances because its anguish can no longer be borne.

The most honest authorities face up squarely to the fact that serious depression is not readily treatable. Failure of alleviation is one of the most distressing factors of the disorder as it reveals itself to the victim, and one that helps situate it squarely in the category of grave diseases.

One by one, the normal brain circuits begin to drown, causing some of the functions of the body and nearly all of those of instinct and intellect to slowly disconnect.

Inadvertently I had helped unlock a closet from which many souls were eager to come out. It is possible to emerge from even the deepest abyss of despair and “once again behold the stars.”

Nearly 30 years ago, the author William Styron outed himself as mentally ill. "My days were pervaded by a gray drizzle of unrelenting horror," he wrote in a New York Times op-ed article, describing the deep depression that had landed him in the psych ward. He compared the agony of mental illness to that of a heart attack. Pain is pain, whether it's in the mind or the body. So why, he asked, were depressed people treated as pariahs?

A confession of mental illness might not seem like a big deal now, but it was back then. In the 1980s, “if you were depressed, it was a terrible dark secret that you hid from the world,” according to Andrew Solomon, a historian of mental illness and author of “The Noonday Demon.” “People with depression were seen as pathetic and even dangerous. You didn’t let them near your kids.”

From William Styron’s Op-Ed on Depression. “In the popular mind, suicide is usually the work of a coward or sometimes, paradoxically, a deed of great courage, but it is neither; the torment that precipitates the act makes it often one of blind necessity.”

The response to Mr. Styron’s op-ed was immediate. Letters flooded into The New York Times. The readers thanked him, blurted out their stories and begged him for more. “Inadvertently I had helped unlock a closet from which many souls were eager to come out,” Mr. Styron wrote later.

“It was like the #MeToo movement,” Alexandra Styron, the author’s daughter, told me. “Somebody comes out and says: ‘This happened. This is real. This is what it feels like.’ And it just unleashed the floodgates.”

Readers were electrified by Mr. Styron’s confession in part because he inhabited a storybook world of glamour. After his novel “Sophie’s Choice” was adapted into a blockbuster movie in 1982, Mr. Styron rocketed from mere literary success to Hollywood fame. Meryl Streep, who won an Oscar for playing Sophie, became a lifelong friend, adding to Mr. Styron’s roster of illustrious buddies, from “Jimmy” Baldwin to Arthur Miller. He appeared at gala events with his silver hair upswept in a genius-y pompadour and his face ruddy from summers on Martha’s Vineyard. And yet he had been so depressed that he had eyed the knives in his kitchen with suicide-lust.

William Styron

James L.W. West, Mr. Styron’s friend and biographer, told me that Mr. Styron had never wanted to become “the guru of depression.” But after his article, he felt he had a duty to take on that role.

His famous memoir of depression, “Darkness Visible,” came out in October 1990. It was Mr. Styron’s curiosity about his own mind, and his determination to use himself as a case study to understand a mysterious disease, that gave the book its political power. “Darkness Visible” demonstrated that patients could be the owners and describers of their mental disorders, upending centuries of medical tradition in which the mentally ill were discredited and shamed. The brain scientist Alice Flaherty, who was Mr. Styron’s close friend and doctor, has called him “the great god of depression” because his influence on her field was so profound. His book became required reading in some medical schools, where physicians were finally being trained to listen to their patients.

Mr. Styron also helped to popularize a new way of looking at the brain. In his telling, suicidal depression is a physical ailment, as unconnected to the patient's moral character as cancer. The book includes a cursory discussion of the chemistry of the brain: neurotransmitters, serotonin and so forth. For many readers, it was a first introduction to scientific ideas that are now widely accepted.

For people with severe mood disorders, “Darkness Visible” became a guidebook. “I got depressed and everyone said to me: ‘You have to read the Bill Styron book. You have to read the Bill Styron book. Have you read the Bill Styron book? Let me give you a copy of the Bill Styron book,”’ Mr. Solomon told me. “On the one hand an absolutely harrowing read, and on the other hand one very much rooted in hope.”

The book benefited from perfect timing. It appeared contemporaneously with the introduction of Prozac and other mood disorder medications with fewer side effects than older psychiatric drugs. Relentlessly advertised on TV and in magazines, they seemed to promise protection. And though Mr. Styron himself probably did not take Prozac and was rather skeptical about drugs, his book became the bible of that era.

He also inspired dozens of writers including Mr. Solomon and Dr. Flaherty to chronicle their own struggles. In the 1990s, bookstores were crowded with mental-illness memoirs, Kay Redfield Jamison’s “An Unquiet Mind,” Susanna Kaysen’s “Girl, Interrupted” and Elizabeth Wurtzel’s “Prozac Nation,” to name a few. You read; you wrote; you survived.

It was an optimistic time. In 1999, with “Darkness Visible” in its 25th printing, Mr. Styron told Diane Rehm in an NPR interview: “I’m in very good shape, if I may be so bold as to say that.” He continued, “It’s as if I had purged myself of this pack of demons.”

It wouldn’t last. In the summer of 2000, he crashed again. In the last six years of his life, he would check into mental hospitals and endure two rounds of electroshock therapy.

Mr. Styron’s story mirrors the larger trends in American mental health over the past few decades. During the exuberance of the 1990s, it seemed possible that drugs would one day wipe out depression, making suicide a rare occurrence. But that turned out to be an illusion. In fact, the American suicide rate has continued to climb since the beginning of the 21st century.

We don’t know why this is happening, though we do have a few clues. Easy access to guns is probably contributing to the epidemic: Studies show that when people are able to reach for a firearm, a momentary urge to self-destruct is more likely to turn fatal. Oddly enough, climate change may also be to blame: A new study shows that rising temperatures can make people more prone to suicide.

With suicidal depression so widespread, we find ourselves needing new ways to talk about it, name its depredations and help families cope with it. Mr. Styron’s mission was to invent this new language of survival, but he did so at high cost to his own mental health.

When he revealed his history of depression, he inadvertently set a trap for himself. He became an icon of recovery. His widow, Rose Styron, told me that readers would call the house at all hours when they felt suicidal, and Mr. Styron would counsel them. He always took those calls, even when they woke him at 3 in the morning.

When he plunged into depression again in 2000, Mr. Styron worried about disappointing his fans. “When he crashed, he felt so guilty because he thought he’d let down all the people he had encouraged in ‘Darkness Visible,’” Ms. Styron told me. And he became painfully aware that if he ever did commit suicide, that private act would ripple out all over the world. The consequences would be devastating for his readers, some of whom might even decide to imitate him.

And so, one dark day in the summer of 2000, he wrote up a statement to be released in the event of his suicide. "I hope that readers of 'Darkness Visible' past, present and future will not be discouraged by the manner of my dying," his message began. It was an attempt to inoculate his fans from the downstream effects of his own self-destruction.

Mr. Styron's family described his sense that succumbing to depression a second time had made him a fraud.

DARKNESS VISIBLE.

A MEMOIR of MADNESS

William Styron

For the thing which I greatly feared is come upon me, and that which I was afraid of is come unto me. I was not in safety, neither had I rest, neither was I quiet; yet trouble came. -Job

One

IN PARIS ON A CHILLY EVENING LATE IN OCTOBER OF 1985 I first became fully aware that the struggle with the disorder in my mind, a struggle which had engaged me for several months, might have a fatal outcome. The moment of revelation came as the car in which I was riding moved down a rain-slick street not far from the Champs-Élysées and slid past a dully glowing neon sign that read HOTEL WASHINGTON. I had not seen that hotel in nearly thirty-five years, since the spring of 1952, when for several nights it had become my initial Parisian roosting place.

In the first few months of my Wanderjahr, I had come down to Paris by train from Copenhagen, and landed at the Hotel Washington through the whimsical determination of a New York travel agent. In those days the hotel was one of the many damp, plain hostelries made for tourists, chiefly American, of very modest means who, if they were like me, colliding nervously for the first time with the French and their droll kinks, would always remember how the exotic bidet, positioned solidly in the drab bedroom, along with the toilet far down the ill-lit hallway, virtually defined the chasm between Gallic and Anglo-Saxon cultures.

But I stayed at the Washington for only a short time. Within days I had been urged out of the place by some newly found young American friends who got me installed in an even seedier but more colorful hotel in Montparnasse, hard by Le Dome and other suitably literary hangouts. (In my mid-twenties, I had just published a first novel and was a celebrity, though one of very low rank since few of the Americans in Paris had heard of my book, let alone read it.) And over the years the Hotel Washington gradually disappeared from my consciousness.

It reappeared, however, that October night when I passed the gray stone facade in a drizzle, and the recollection of my arrival so many years before started flooding back, causing me to feel that I had come fatally full circle. I recall saying to myself that when I left Paris for New York the next morning it would be a matter of forever. I was shaken by the certainty with which I accepted the idea that I would never see France again, just as I would never recapture a lucidity that was slipping away from me with terrifying speed.

Only days before I had concluded that I was suffering from a serious depressive illness, and was floundering helplessly in my efforts to deal with it. I wasn’t cheered by the festive occasion that had brought me to France. Of the many dreadful manifestations of the disease, both physical and psychological, a sense of self-hatred, or, put less categorically, a failure of self-esteem, is one of the most universally experienced symptoms, and I had suffered more and more from a general feeling of worthlessness as the malady had progressed.

My dank joylessness was therefore all the more ironic because I had flown on a rushed four-day trip to Paris in order to accept an award which should have sparklingly restored my ego. Earlier that summer I had received word that I had been chosen to receive the Prix Mondial Cino del Duca, given annually to an artist or scientist whose work reflects themes or principles of a certain "humanism." The prize was established in memory of Cino del Duca, an immigrant from Italy who amassed a fortune just before and after World War II by printing and distributing cheap magazines, principally comic books, though later branching out into publications of quality; he became proprietor of the newspaper Paris-Jour.

He also produced movies and was a prominent racehorse owner, enjoying the pleasure of having many winners in France and abroad. Aiming for nobler cultural satisfactions, he evolved into a renowned philanthropist and along the way established a book publishing firm that began to produce works of literary merit (by chance, my first novel, Lie Down in Darkness, was one of del Duca's offerings, in a translation entitled Un Lit de Ténèbres); by the time of his death in 1967 this house, Editions Mondiales, became an important entity of a multifold empire that was rich yet prestigious enough for there to be scant memory of its comic book origins when del Duca's widow, Simone, created a foundation whose chief function was the annual bestowal of the eponymous award.

The Prix Mondial Cino del Duca has become greatly respected in France, a nation pleasantly besotted with cultural prize giving, not only for its eclecticism and the distinction shown in the choice of its recipients but for the openhandedness of the prize itself, which that year amounted to approximately $25,000. Among the winners during the past twenty years have been Konrad Lorenz, Alejo Carpentier, Jean Anouilh, Ignazio Silone, Andrei Sakharov, Jorge Luis Borges and one American, Lewis Mumford. (No women as yet, feminists take note.)

As an American, I found it especially hard not to feel honored by inclusion in their company. While the giving and receiving of prizes usually induce from all sources an unhealthy uprising of false modesty, backbiting, self-torture and envy, my own view is that certain awards, though not necessary, can be very nice to receive. The Prix del Duca was to me so straightforwardly nice that any extensive self-examination seemed silly, and so I accepted gratefully, writing in reply that I would honor the reasonable requirement that I be present for the ceremony. At that time I looked forward to a leisurely trip, not a hasty turnaround. Had I been able to foresee my state of mind as the date of the award approached, I would not have accepted at all.

Depression is a disorder of mood, so mysteriously painful and elusive in the way it becomes known to the self, to the mediating intellect, as to verge close to being beyond description.

It thus remains nearly incomprehensible to those who have not experienced it in its extreme mode, although the gloom, “the blues” which people go through occasionally and associate with the general hassle of everyday existence are of such prevalence that they do give many individuals a hint of the illness in its catastrophic form. But at the time of which I write I had descended far past those familiar, manageable doldrums. In Paris, I am able to see now, I was at a critical stage in the development of the disease, situated at an ominous way station between its unfocused stirrings earlier that summer and the near violent denouement of December, which sent me into the hospital. I will later attempt to describe the evolution of this malady, from its earliest origins to my eventual hospitalization and recovery, but the Paris trip has retained a notable meaning for me.

On the day of the award ceremony, which was to take place at noon and be followed by a formal luncheon, I woke up at midmorning in my room at the Hôtel Pont Royal commenting to myself that I felt reasonably sound, and I passed the good word along to my wife, Rose. Aided by the minor tranquilizer Halcion, I had managed to defeat my insomnia and get a few hours' sleep. Thus I was in fair spirits.

But such wan cheer was an habitual pretense which I knew meant very little, for I was certain to feel ghastly before nightfall. I had come to a point where I was carefully monitoring each phase of my deteriorating condition. My acceptance of the illness followed several months of denial during which, at first, I had ascribed the malaise and restlessness and sudden fits of anxiety to withdrawal from alcohol; I had abruptly abandoned whiskey and all other intoxicants that June.

During the course of my worsening emotional climate I had read a certain amount on the subject of depression, both in books tailored for the layman and in weightier professional works including the psychiatrists’ bible, DSM (The Diagnostic and Statistical Manual of the American Psychiatric Association). Throughout much of my life I have been compelled, perhaps unwisely, to become an autodidact in medicine, and have accumulated a better than average amateur’s knowledge about medical matters (to which many of my friends, surely unwisely, have often deferred), and so it came as an astonishment to me that I was close to a total ignoramus about depression, which can be as serious a medical affair as diabetes or cancer. Most likely, as an incipient depressive, I had always subconsciously rejected or ignored the proper knowledge; it cut too close to the psychic bone, and I shoved it aside as an unwelcome addition to my store of information.

At any rate, during the few hours when the depressive state itself eased off long enough to permit the luxury of concentration, I had recently filled this vacuum with fairly extensive reading and I had absorbed many fascinating and troubling facts, which, however, I could not put to practical use.

The most honest authorities face up squarely to the fact that serious depression is not readily treatable. Unlike, let us say, diabetes, where immediate measures taken to rearrange the body’s adaptation to glucose can dramatically reverse a dangerous process and bring it under control, depression in its major stages possesses no quickly available remedy: failure of alleviation is one of the most distressing factors of the disorder as it reveals itself to the victim, and one that helps situate it squarely in the category of grave diseases.

Except in those maladies strictly designated as malignant or degenerative, we expect some kind of treatment and eventual amelioration, by pills or physical therapy or diet or surgery, with a logical progression from the initial relief of symptoms to final cure. Frighteningly, the layman sufferer from major depression, taking a peek into some of the many books currently on the market, will find much in the way of theory and symptomatology and very little that legitimately suggests the possibility of quick rescue. Those that do claim an easy way out are glib and most likely fraudulent. There are decent popular works which intelligently point the way toward treatment and cure, demonstrating how certain therapies, psychotherapy or pharmacology, or a combination of these, can indeed restore people to health in all but the most persistent and devastating cases; but the wisest books among them underscore the hard truth that serious depressions do not disappear overnight.

All of this emphasizes an essential though difficult reality which I think needs stating at the outset of my own chronicle: the disease of depression remains a great mystery. It has yielded its secrets to science far more reluctantly than many of the other major ills besetting us. The intense and sometimes comically strident factionalism that exists in present-day psychiatry, the schism between the believers in psychotherapy and the adherents of pharmacology, resembles the medical quarrels of the eighteenth century (to bleed or not to bleed) and almost defines in itself the inexplicable nature of depression and the difficulty of its treatment. As a clinician in the field told me honestly and, I think, with a striking deftness of analogy: "If you compare our knowledge with Columbus's discovery of America, America is yet unknown; we are still down on that little island in the Bahamas."

In my reading I had learned, for example, that in at least one interesting respect my own case was atypical. Most people who begin to suffer from the illness are laid low in the morning, with such malefic effect that they are unable to get out of bed. They feel better only as the day wears on. But my situation was just the reverse. While I was able to rise and function almost normally during the earlier part of the day, I began to sense the onset of the symptoms at midafternoon or a little later, gloom crowding in on me, a sense of dread and alienation and, above all, stifling anxiety. I suspect that it is basically a matter of indifference whether one suffers the most in the morning or the evening: if these states of excruciating near paralysis are similar, as they probably are, the question of timing would seem to be academic. But it was no doubt the turnabout of the usual daily onset of symptoms that allowed me that morning in Paris to proceed without mishap, feeling more or less self-possessed, to the gloriously ornate palace on the Right Bank that houses the Fondation Cino del Duca. There, in a rococo salon, I was presented with the award before a small crowd of French cultural figures, and made my speech of acceptance with what I felt was passable aplomb, stating that while I was donating the bulk of my prize money to various organizations fostering French-American goodwill, including the American Hospital in Neuilly, there was a limit to altruism (this spoken jokingly) and so I hoped it would not be taken amiss if I held back a small portion for myself.

What I did not say, and which was no joke, was that the amount I was withholding was to pay for two tickets the next day on the Concorde, so that I might return speedily with Rose to the United States, where just a few days before I had made an appointment to see a psychiatrist. For reasons that I’m sure had to do with a reluctance to accept the reality that my mind was dissolving, I had avoided seeking psychiatric aid during the past weeks, as my distress intensified. But I knew I couldn’t delay the confrontation indefinitely, and when I did finally make contact by telephone with a highly recommended therapist, he encouraged me to make the Paris trip, telling me that he would see me as soon as I returned. I very much needed to get back, and fast.

Despite the evidence that I was in serious difficulty, I wanted to maintain the rosy view. A lot of the literature available concerning depression is, as I say, breezily optimistic, spreading assurances that nearly all depressive states will be stabilized or reversed if only the suitable antidepressant can be found; the reader is of course easily swayed by promises of quick remedy. In Paris, even as I delivered my remarks, I had a need for the day to be over, felt a consuming urgency to fly to America and the office of the doctor, who would whisk my malaise away with his miraculous medications. I recollect that moment clearly now, and am hardly able to believe that I possessed such ingenuous hope, or that I could have been so unaware of the trouble and peril that lay ahead.

Simone del Duca, a large dark-haired woman of queenly manner, was understandably incredulous at first, and then enraged, when after the presentation ceremony I told her that I could not join her at lunch upstairs in the great mansion, along with a dozen or so members of the Académie Française, who had chosen me for the prize. My refusal was both emphatic and simpleminded; I told her point-blank that I had arranged instead to have lunch at a restaurant with my French publisher, Françoise Gallimard. Of course this decision on my part was outrageous; it had been announced months before to me and everyone else concerned that a luncheon, moreover, a luncheon in my honor, was part of the day’s pageantry. But my behavior was really the result of the illness, which had progressed far enough to produce some of its most famous and sinister hallmarks: confusion, failure of mental focus and lapse of memory. At a later stage my entire mind would be dominated by anarchic disconnections; as I have said, there was now something that resembled bifurcation of mood: lucidity of sorts in the early hours of the day, gathering murk in the afternoon and evening. It must have been during the previous evening’s murky distractedness that I made the luncheon date with Françoise Gallimard, forgetting my del Duca obligations. That decision continued to completely master my thinking, creating in me such obstinate determination that now I was able to blandly insult the worthy Simone del Duca. “Alors!” she exclaimed to me, and her face flushed angrily as she whirled in a stately volte-face, “au revoir!”

Suddenly I was flabbergasted, stunned with horror at what I had done. I fantasized a table at which sat the hostess and the Académie Française, the guest of honor at La Coupole. I implored Madame’s assistant, a bespectacled woman with a clipboard and an ashen, mortified expression, to try to reinstate me: it was all a terrible mistake, a mix-up, a malentendu. And then I blurted some words that a lifetime of general equilibrium, and a smug belief in the impregnability of my psychic health, had prevented me from believing I could ever utter; I was chilled as I heard myself speak them to this perfect stranger.

“I’m sick,” I said, “un problème psychiatrique.”

Madame del Duca was magnanimous in accepting my apology and the lunch went off without further strain, although I couldn’t completely rid myself of the suspicion, as we chatted somewhat stiffly, that my benefactress was still disturbed by my conduct and thought me a weird number. The lunch was a long one, and when it was over I felt myself entering the afternoon shadows with their encroaching anxiety and dread. A television crew from one of the national channels was waiting (I had forgotten about them, too), ready to take me to the newly opened Picasso Museum, where I was supposed to be filmed looking at the exhibits and exchanging comments with Rose.

This turned out to be, as I knew it would, not a captivating promenade but a demanding struggle, a major ordeal. By the time we arrived at the museum, having dealt with heavy traffic, it was past four o’clock and my brain had begun to endure its familiar siege: panic and dislocation, and a sense that my thought processes were being engulfed by a toxic and unnameable tide that obliterated any enjoyable response to the living world. This is to say more specifically that instead of pleasure, certainly instead of the pleasure I should be having in this sumptuous showcase of bright genius, I was feeling in my mind a sensation close to, but indescribably different from, actual pain.

This leads me to touch again on the elusive nature of such distress. That the word “indescribable” should present itself is not fortuitous, since it has to be emphasized that if the pain were readily describable most of the countless sufferers from this ancient affliction would have been able to confidently depict for their friends and loved ones (even their physicians) some of the actual dimensions of their torment, and perhaps elicit a comprehension that has been generally lacking; such incomprehension has usually been due not to a failure of sympathy but to the basic inability of healthy people to imagine a form of torment so alien to everyday experience.

For myself, the pain is most closely connected to drowning or suffocation, but even these images are off the mark. William James, who battled depression for many years, gave up the search for an adequate portrayal, implying its near-impossibility when he wrote in The Varieties of Religious Experience:

“It is a positive and active anguish, a sort of psychical neuralgia wholly unknown to normal life.”

The pain persisted during my museum tour and reached a crescendo in the next few hours when, back at the hotel, I fell onto the bed and lay gazing at the ceiling, nearly immobilized and in a trance of supreme discomfort. Rational thought was usually absent from my mind at such times, hence trance.

I can think of no more apposite word for this state of being, a condition of helpless stupor in which cognition was replaced by that “positive and active anguish.”

And one of the most unendurable aspects of such an interlude was the inability to sleep. It had been my custom of a near lifetime, like that of vast numbers of people, to settle myself into a soothing nap in the late afternoon, but the disruption of normal sleep patterns is a notoriously devastating feature of depression; to the injurious sleeplessness with which I had been afflicted each night was added the insult of this afternoon insomnia, diminutive by comparison but all the more horrendous because it struck during the hours of the most intense misery. It had become clear that I would never be granted even a few minutes’ relief from my full-time exhaustion. I clearly recall thinking, as I lay there while Rose sat nearby reading, that my afternoons and evenings were becoming almost measurably worse, and that this episode was the worst to date. But I somehow managed to reassemble myself for dinner with (who else?) Françoise Gallimard, co-victim along with Simone del Duca of the frightful lunchtime contretemps.

The night was blustery and raw, with a chill wet wind blowing down the avenues, and when Rose and I met Françoise and her son and a friend at La Lorraine, a glittering brasserie not far from L’Etoile, rain was descending from the heavens in torrents. Someone in the group, sensing my state of mind, apologized for the evil night, but I recall thinking that even if this were one of those warmly scented and passionate evenings for which Paris is celebrated I would respond like the zombie I had become. The weather of depression is unmodulated, its light a brownout.

And zombielike, halfway through the dinner, I lost the del Duca prize check for $25,000. Having tucked the check in the inside breast pocket of my jacket, I let my hand stray idly to that place and realized that it was gone. Did I “intend” to lose the money? Recently I had been deeply bothered that I was not deserving of the prize. I believe in the reality of the accidents we subconsciously perpetrate on ourselves, and so how easy it was for this loss to be not loss but a form of repudiation, offshoot of that self-loathing (depression’s premier badge) by which I was persuaded that I could not be worthy of the prize, that I was in fact not worthy of any of the recognition that had come my way in the past few years. Whatever the reason for its disappearance, the check was gone, and its loss dovetailed well with the other failures of the dinner: my failure to have an appetite for the grand plateau de fruits de mer placed before me, failure of even forced laughter and, at last, virtually total failure of speech.

At this point the ferocious inwardness of the pain produced an immense distraction that prevented my articulating words beyond a hoarse murmur; I sensed myself turning walleyed, monosyllabic, and also I sensed my French friends becoming uneasily aware of my predicament. It was a scene from a bad operetta by now: all of us near the floor, searching for the vanished money. Just as I signaled that it was time to go, Françoise’s son discovered the check, which had somehow slipped out of my pocket and fluttered under an adjoining table, and we went forth into the rainy night. Then, while I was riding in the car, I thought of Albert Camus and Romain Gary.

Two

WHEN I WAS A YOUNG WRITER THERE HAD BEEN A stage where Camus, almost more than any other contemporary literary figure, radically set the tone for my own view of life and history. I read his novel The Stranger somewhat later than I should have, I was in my early thirties, but after finishing it I received the stab of recognition that proceeds from reading the work of a writer who has wedded moral passion to a style of great beauty and whose unblinking vision is capable of frightening the soul to its marrow.

The cosmic loneliness of Meursault, the hero of that novel, so haunted me that when I set out to write The Confessions of Nat Turner I was impelled to use Camus’s device of having the story flow from the point of view of a narrator isolated in his jail cell during the hours before his execution. For me there was a spiritual connection between Meursault’s frigid solitude and the plight of Nat Turner, his rebel predecessor in history by a hundred years, likewise condemned and abandoned by man and God.

Camus’s essay “Reflections on the Guillotine” is a virtually unique document, freighted with terrible and fiery logic; it is difficult to conceive of the most vengeful supporter of the death penalty retaining the same attitude after exposure to scathing truths expressed with such ardor and precision. I know my thinking was forever altered by that work, not only turning me around completely, convincing me of the essential barbarism of capital punishment, but establishing substantial claims on my conscience in regard to matters of responsibility at large. Camus was a great cleanser of my intellect, ridding me of countless sluggish ideas, and through some of the most unsettling pessimism I had ever encountered causing me to be aroused anew by life’s enigmatic promise.

The disappointment I always felt at never meeting Camus was compounded by that failure having been such a near miss. I had planned to see him in 1960, when I was traveling to France and had been told in a letter by the writer Romain Gary that he was going to arrange a dinner in Paris where I would meet Camus. The enormously gifted Gary, whom I knew slightly at the time and who later became a cherished friend, had informed me that Camus, whom he saw frequently, had read my Un Lit de Ténèbres and had admired it; I was of course greatly flattered and felt that a get-together would be a splendid happening. But before I arrived in France there came the appalling news: Camus had been in an automobile crash, and was dead at the cruelly young age of forty-six. I have almost never felt so intensely the loss of someone I didn’t know. I pondered his death endlessly. Although Camus had not been driving he supposedly knew the driver, who was the son of his publisher, to be a speed demon; so there was an element of recklessness in the accident that bore overtones of the near-suicidal, at least of a death flirtation, and it was inevitable that conjectures concerning the event should revert back to the theme of suicide in the writer’s work.

One of the century’s most famous intellectual pronouncements comes at the beginning of The Myth of Sisyphus: “There is but one truly serious philosophical problem, and that is suicide. Judging whether life is or is not worth living amounts to answering the fundamental question of philosophy.” Reading this for the first time I was puzzled and continued to be throughout much of the essay, since despite the work’s persuasive logic and eloquence there was a lot that eluded me, and I always came back to grapple vainly with the initial hypothesis, unable to deal with the premise that anyone should come close to wishing to kill himself in the first place.

A later short novel, The Fall, I admired with reservations; the guilt and self-condemnation of the lawyer-narrator, gloomily spinning out his monologue in an Amsterdam bar, seemed a touch clamorous and excessive, but at the time of my reading I was unable to perceive that the lawyer was behaving very much like a man in the throes of clinical depression. Such was my innocence of the very existence of this disease. Camus, Romain told me, occasionally hinted at his own deep despondency and had spoken of suicide. Sometimes he spoke in jest, but the jest had the quality of sour wine, upsetting Romain. Yet apparently he made no attempts and so perhaps it was not coincidental that, despite its abiding tone of melancholy, a sense of the triumph of life over death is at the core of The Myth of Sisyphus with its austere message: in the absence of hope we must still struggle to survive, and so we do, by the skin of our teeth.

It was only after the passing of some years that it seemed credible to me that Camus’s statement about suicide, and his general preoccupation with the subject, might have sprung at least as strongly from some persistent disturbance of mood as from his concerns with ethics and epistemology. Gary again discussed at length his assumptions about Camus’s depression during August of 1978, when I had lent him my guest cottage in Connecticut, and I came down from my summer home on Martha’s Vineyard to pay him a weekend visit. As we talked I felt that some of Romain’s suppositions about the seriousness of Camus’s recurring despair gained weight from the fact that he, too, had begun to suffer from depression, and he freely admitted as much. It was not incapacitating, he insisted, and he had it under control, but he felt it from time to time, this leaden and poisonous mood the color of verdigris, so incongruous in the midst of the lush New England summer. A Russian Jew born in Lithuania, Romain had always seemed possessed of an Eastern European melancholy, so it was hard to tell the difference. Nonetheless, he was hurting. He said that he was able to perceive a flicker of the desperate state of mind which had been described to him by Camus.

Gary’s situation was hardly lightened by the presence of Jean Seberg, his Iowa-born actress wife, from whom he had been divorced and, I thought, long estranged. I learned that she was there because their son, Diego, was at a nearby tennis camp. Their presumed estrangement made me surprised to see her living with Romain, surprised too, no, shocked and saddened, by her appearance: all her once fragile and luminous blond beauty had disappeared into a puffy mask. She moved like a sleepwalker, said little, and had the blank gaze of someone tranquilized (or drugged, or both) nearly to the point of catalepsy. I understood how devoted they still were, and was touched by his solicitude, both tender and paternal. Romain told me that Jean was being treated for the disorder that afflicted him, and mentioned something about antidepressant medications, but none of this registered very strongly, and also meant little.

This memory of my relative indifference is important because such indifference demonstrates powerfully the outsider’s inability to grasp the essence of the illness. Camus’s depression and now Romain Gary’s, and certainly Jean’s, were abstract ailments to me, in spite of my sympathy, and I hadn’t an inkling of its true contours or the nature of the pain so many victims experience as the mind continues in its insidious meltdown.

In Paris that October night I knew that I, too, was in the process of meltdown. And on the way to the hotel in the car I had a clear revelation. A disruption of the circadian cycle, the metabolic and glandular rhythms that are central to our workaday life, seems to be involved in many, if not most, cases of depression; this is why brutal insomnia so often occurs and is most likely why each day’s pattern of distress exhibits fairly predictable alternating periods of intensity and relief. The evening’s relief for me, an incomplete but noticeable letup, like the change from a torrential downpour to a steady shower, came in the hours after dinner time and before midnight, when the pain lifted a little and my mind would become lucid enough to focus on matters beyond the immediate upheaval convulsing my system. Naturally I looked forward to this period, for sometimes I felt close to being reasonably sane, and that night in the car I was aware of a semblance of clarity returning, along with the ability to think rational thoughts. Having been able to reminisce about Camus and Romain Gary, however, I found that my continuing thoughts were not very consoling.

The memory of Jean Seberg gripped me with sadness. A little over a year after our encounter in Connecticut she took an overdose of pills and was found dead in a car parked in a cul-de-sac off a Paris avenue, where her body had lain for many days. The following year I sat with Romain at the Brasserie Lipp during a long lunch while he told me that, despite their difficulties, his loss of Jean had so deepened his depression that from time to time he had been rendered nearly helpless. But even then I was unable to comprehend the nature of his anguish. I remembered that his hands trembled and, though he could hardly be called superannuated, he was in his mid-sixties, his voice had the wheezy sound of very old age that I now realized was, or could be, the voice of depression; in the vortex of my severest pain I had begun to develop that ancient voice myself. I never saw Romain again. Claude Gallimard, Françoise’s father, had recollected to me how, in 1980, only a few hours after another lunch where the talk between the two old friends had been composed and casual, even lighthearted, certainly anything but somber, Romain Gary, twice winner of the Prix Goncourt (one of these awards pseudonymous, the result of his having gleefully tricked the critics), hero of the Republic, valorous recipient of the Croix de Guerre, diplomat, bon vivant, womanizer par excellence, went home to his apartment on the rue du Bac and put a bullet through his brain.

It was at some point during the course of these musings that the sign HOTEL WASHINGTON swam across my vision, bringing back memories of my long ago arrival in the city, along with the fierce and sudden realization that I would never see Paris again. This certitude astonished me and filled me with a new fright, for while thoughts of death had long been common during my siege, blowing through my mind like icy gusts of wind, they were the formless shapes of doom that I suppose are dreamed of by people in the grip of any severe affliction. The difference now was in the sure understanding that tomorrow, when the pain descended once more, or the tomorrow after that, certainly on some not too distant tomorrow, I would be forced to judge that life was not worth living and thereby answer, for myself at least, the fundamental question of philosophy.

Three

TO MANY OF US WHO KNEW ABBIE HOFFMAN EVEN slightly, as I did, his death in the spring of 1989 was a sorrowful happening. Just past the age of fifty, he had been too young and apparently too vital for such an ending; a feeling of chagrin and dreadfulness attends the news of nearly anyone’s suicide, and Abbie’s death seemed to me especially cruel.

I had first met him during the wild days and nights of the 1968 Democratic Convention in Chicago, where I had gone to write a piece for The New York Review of Books, and I later was one of those who testified on behalf of him and his fellow defendants at the trial, also in Chicago, in 1970. Amid the pious follies and morbid perversions of American life, his antic style was exhilarating, and it was hard not to admire the hellraising and the brio, the anarchic individualism.

I wish I had seen more of him in recent years; his sudden death left me with a particular emptiness, as suicides usually do to everyone. But the event was given a further dimension of poignancy by what one must begin to regard as a predictable reaction from many: the denial, the refusal to accept the fact of the suicide itself, as if the voluntary act, as opposed to an accident, or death from natural causes, were tinged with a delinquency that somehow lessened the man and his character.

Abbie’s brother appeared on television, grief-ravaged and distraught; one could not help feeling compassion as he sought to deflect the idea of suicide, insisting that Abbie, after all, had always been careless with pills and would never have left his family bereft. However, the coroner confirmed that Hoffman had taken the equivalent of 150 phenobarbitals.

It’s quite natural that the people closest to suicide victims so frequently and feverishly hasten to disclaim the truth; the sense of implication, of personal guilt, the idea that one might have prevented the act if one had taken certain precautions, had somehow behaved differently, is perhaps inevitable. Even so, the sufferer, whether he has actually killed himself or attempted to do so, or merely expressed threats, is often, through denial on the part of others, unjustly made to appear a wrongdoer.

A similar case is that of Randall Jarrell, one of the fine poets and critics of his generation, who on a night in 1965, near Chapel Hill, North Carolina, was struck by a car and killed. Jarrell’s presence on that particular stretch of road, at an odd hour of the evening, was puzzling, and since some of the indications were that he had deliberately let the car strike him, the early conclusion was that his death was suicide. Newsweek, among other publications, said as much, but Jarrell’s widow protested in a letter to that magazine; there was a hue and cry from many of his friends and supporters, and a coroner’s jury eventually ruled the death to be accidental. Jarrell had been suffering from extreme depression and had been hospitalized; only a few months before his misadventure on the highway and while in the hospital, he had slashed his wrists.

Anyone who is acquainted with some of the jagged contours of Jarrell’s life, including his violent fluctuations of mood, his fits of black despondency, and who, in addition, has acquired a basic knowledge of the danger signals of depression, would seriously question the verdict of the coroner’s jury. But the stigma of self-inflicted death is for some people a hateful blot that demands erasure at all costs. (More than two decades after his death, in the Summer 1986 issue of The American Scholar, a onetime student of Jarrell’s, reviewing a collection of the poet’s letters, made the review less a literary or biographical appraisal than an occasion for continuing to try to exorcise the vile phantom of suicide.)

Randall Jarrell almost certainly killed himself. He did so not because he was a coward, nor out of any moral feebleness, but because he was afflicted with a depression that was so devastating that he could no longer endure the pain of it.

This general unawareness of what depression is really like was apparent most recently in the matter of Primo Levi, the remarkable Italian writer and survivor of Auschwitz who, at the age of sixty-seven, hurled himself down a stairwell in Turin in 1987. Since my own involvement with the illness, I had been more than ordinarily interested in Levi’s death, and so, late in 1988, when I read an account in The New York Times about a symposium on the writer and his work held at New York University, was fascinated but, finally, appalled. For, according to the article, many of the participants, worldly writers and scholars, seemed mystified by Levi’s suicide, mystified and disappointed. It was as if this man whom they had all so greatly admired, and who had endured so much at the hands of the Nazis, a man of exemplary resilience and courage, had by his suicide demonstrated a frailty, a crumbling of character they were loath to accept. In the face of a terrible absolute self-destruction, their reaction was helplessness and (the reader could not avoid it) a touch of shame.

My annoyance over all this was so intense that I was prompted to write a short piece for the op-ed page of the Times. The argument I put forth was fairly straightforward:

The pain of severe depression is quite unimaginable to those who have not suffered it, and it kills in many instances because its anguish can no longer be borne.

The prevention of many suicides will continue to be hindered until there is a general awareness of the nature of this pain. Through the healing process of time, and through medical intervention or hospitalization in many cases, most people survive depression, which may be its only blessing; but to the tragic legion who are compelled to destroy themselves there should be no more reproof attached than to the victims of terminal cancer.

I had set down my thoughts in this Times piece rather hurriedly and spontaneously, but the response was equally spontaneous, and enormous. It had taken, I speculated, no particular originality or boldness on my part to speak out frankly about suicide and the impulse toward it, but I had apparently underestimated the number of people for whom the subject had been taboo, a matter of secrecy and shame. The overwhelming reaction made me feel that inadvertently I had helped unlock a closet from which many souls were eager to come out and proclaim that they, too, had experienced the feelings I had described. It is the only time in my life I have felt it worthwhile to have invaded my own privacy, and to make that privacy public. And I thought that, given such momentum, and with my experience in Paris as a detailed example of what occurs during depression, it would be useful to try to chronicle some of my own experiences with the illness and in the process perhaps establish a frame of reference out of which one or more valuable conclusions might be drawn.

Such conclusions, it has to be emphasized, must still be based on the events that happened to one man. In setting these reflections down I don’t intend my ordeal to stand as a representation of what happens, or might happen, to others. Depression is much too complex in its cause, its symptoms and its treatment for unqualified conclusions to be drawn from the experience of a single individual. Although as an illness depression manifests certain unvarying characteristics, it also allows for many idiosyncrasies; I’ve been amazed at some of the freakish phenomena, not reported by other patients, that it has wrought amid the twistings of my mind’s labyrinth.

Depression afflicts millions directly, and millions more who are relatives or friends of victims. It has been estimated that as many as one in ten Americans will suffer from the illness. As assertively democratic as a Norman Rockwell poster, it strikes indiscriminately at all ages, races, creeds and classes, though women are at considerably higher risk than men. The occupational list (dressmakers, barge captains, sushi chefs, cabinet members) of its patients is too long and tedious to give here; it is enough to say that very few people escape being a potential victim of the disease, at least in its milder form. Despite depression’s eclectic reach, it has been demonstrated with fair convincingness that artistic types (especially poets) are particularly vulnerable to the disorder, which, in its graver, clinical manifestation takes upward of twenty percent of its victims by way of suicide.

Just a few of these fallen artists, all modern, make up a sad but scintillant roll call: Hart Crane, Vincent van Gogh, Virginia Woolf, Arshile Gorky, Cesare Pavese, Romain Gary, Vachel Lindsay, Sylvia Plath, Henry de Montherlant, Mark Rothko, John Berryman, Jack London, Ernest Hemingway, William Inge, Diane Arbus, Tadeusz Borowski, Paul Celan, Anne Sexton, Sergei Esenin, Vladimir Mayakovsky, the list goes on. (The Russian poet Mayakovsky was harshly critical of his great contemporary Esenin’s suicide a few years before, which should stand as a caveat for all who are judgmental about self-destruction.)

When one thinks of these doomed and splendidly creative men and women, one is drawn to contemplate their childhoods, where, to the best of anyone’s knowledge, the seeds of the illness take strong root; could any of them have had a hint, then, of the psyche’s perishability, its exquisite fragility? And why were they destroyed, while others, similarly stricken, struggled through?

Four

WHEN I WAS FIRST AWARE THAT I HAD BEEN LAID low by the disease, I felt a need, among other things, to register a strong protest against the word “depression.” Depression, most people know, used to be termed “melancholia,” a word which appears in English as early as the year 1303 and crops up more than once in Chaucer, who in his usage seemed to be aware of its pathological nuances. “Melancholia” would still appear to be a far more apt and evocative word for the blacker forms of the disorder, but it was usurped by a noun with a bland tonality and lacking any magisterial presence, used indifferently to describe an economic decline or a rut in the ground, a true wimp of a word for such a major illness. It may be that the scientist generally held responsible for its currency in modern times, a Johns Hopkins Medical School faculty member justly venerated, the Swiss-born psychiatrist Adolf Meyer, had a tin ear for the finer rhythms of English and therefore was unaware of the semantic damage he had inflicted by offering “depression” as a descriptive noun for such a dreadful and raging disease. Nonetheless, for over seventy-five years the word has slithered innocuously through the language like a slug, leaving little trace of its intrinsic malevolence and preventing, by its very insipidity, a general awareness of the horrible intensity of the disease when out of control.

As one who has suffered from the malady in extremis yet returned to tell the tale, I would lobby for a truly arresting designation. “Brainstorm,” for instance, has unfortunately been preempted to describe, somewhat jocularly, intellectual inspiration. But something along these lines is needed. Told that someone’s mood disorder has evolved into a storm, a veritable howling tempest in the brain, which is indeed what a clinical depression resembles like nothing else, even the uninformed layman might display sympathy rather than the standard reaction that “depression” evokes, something akin to “So what?” or “You’ll pull out of it” or “We all have bad days.” The phrase “nervous breakdown” seems to be on its way out, certainly deservedly so, owing to its insinuation of a vague spinelessness, but we still seem destined to be saddled with “depression” until a better, sturdier name is created.

The depression that engulfed me was not of the manic type, the one accompanied by euphoric highs, which would have most probably presented itself earlier in my life. I was sixty when the illness struck for the first time, in the “unipolar” form, which leads straight down. I shall never learn what “caused” my depression, as no one will ever learn about their own. To be able to do so will likely forever prove to be an impossibility, so complex are the intermingled factors of abnormal chemistry, behavior and genetics. Plainly, multiple components are involved, perhaps three or four, most probably more, in fathomless permutations.

That is why the greatest fallacy about suicide lies in the belief that there is a single immediate answer, or perhaps combined answers, as to why the deed was done.

The inevitable question “Why did he, or she do it?” usually leads to odd speculations, for the most part fallacies themselves. Reasons were quickly advanced for Abbie Hoffman’s death: his reaction to an auto accident he had suffered, the failure of his most recent book, his mother’s serious illness. With Randall Jarrell it was a declining career cruelly epitomized by a vicious book review and his consequent anguish. Primo Levi, it was rumored, had been burdened by caring for his paralytic mother, which was more onerous to his spirit than even his experience at Auschwitz.

Any one of these factors may have lodged like a thorn in the sides of the three men, and been a torment. Such aggravations may be crucial and cannot be ignored. But most people quietly endure the equivalent of injuries, declining careers, nasty book reviews, family illnesses. A vast majority of the survivors of Auschwitz have borne up fairly well. Bloody and bowed by the outrages of life, most human beings still stagger on down the road, unscathed by real depression.

To discover why some people plunge into the downward spiral of depression, one must search beyond the manifest crisis, and then still fail to come up with anything beyond wise conjecture.

The storm which swept me into a hospital in December began as a cloud no bigger than a wine goblet the previous June. And the cloud, the manifest crisis, involved alcohol, a substance I had been abusing for forty years. Like a great many American writers, whose sometimes lethal addiction to alcohol has become so legendary as to provide in itself a stream of studies and books, I used alcohol as the magical conduit to fantasy and euphoria, and to the enhancement of the imagination. There is no need to either rue or apologize for my use of this soothing, often sublime agent, which had contributed greatly to my writing; although I never set down a line while under its influence, I did use it, often in conjunction with music, as a means to let my mind conceive visions that the unaltered, sober brain has no access to. Alcohol was an invaluable senior partner of my intellect, besides being a friend whose ministrations I sought daily, sought also, I now see, as a means to calm the anxiety and incipient dread that I had hidden away for so long somewhere in the dungeons of my spirit.

The trouble was, at the beginning of this particular summer, that I was betrayed. It struck me quite suddenly, almost overnight: I could no longer drink. It was as if my body had risen up in protest, along with my mind, and had conspired to reject this daily mood bath which it had so long welcomed and, who knows? perhaps even come to need. Many drinkers have experienced this intolerance as they have grown older. I suspect that the crisis was at least partly metabolic, the liver rebelling, as if to say, “No more, no more”, but at any rate I discovered that alcohol in minuscule amounts, even a mouthful of wine, caused me nausea, a desperate and unpleasant wooziness, a sinking sensation and ultimately a distinct revulsion. The comforting friend had abandoned me not gradually and reluctantly, as a true friend might do, but like a shot, and I was left high and certainly dry, and unhelmed.

Neither by will nor by choice had I become an abstainer; the situation was puzzling to me, but it was also traumatic, and I date the onset of my depressive mood from the beginning of this deprivation. Logically, one would be overjoyed that the body had so summarily dismissed a substance that was undermining its health; it was as if my system had generated a form of Antabuse, which should have allowed me to happily go my way, satisfied that a trick of nature had shut me off from a harmful dependence. But, instead, I began to experience a vaguely troubling malaise, a sense of something having gone cockeyed in the domestic universe I’d dwelt in so long, so comfortably. While depression is by no means unknown when people stop drinking, it is usually on a scale that is not menacing. But it should be kept in mind how idiosyncratic the faces of depression can be.

It was not really alarming at first, since the change was subtle, but I did notice that my surroundings took on a different tone at certain times: the shadows of nightfall seemed more somber, my mornings were less buoyant, walks in the woods became less zestful, and there was a moment during my working hours in the late afternoon when a kind of panic and anxiety overtook me, just for a few minutes, accompanied by a visceral queasiness; such a seizure was at least slightly alarming, after all. As I set down these recollections, I realize that it should have been plain to me that I was already in the grip of the beginning of a mood disorder, but I was ignorant of such a condition at that time.

When I reflected on this curious alteration of my consciousness, and I was baffled enough from time to time to do so, I assumed that it all had to do somehow with my enforced withdrawal from alcohol. And, of course, to a certain extent this was true. But it is my conviction now that alcohol played a perverse trick on me when we said farewell to each other: although, as everyone should know, it is a major depressant, it had never truly depressed me during my drinking career, acting instead as a shield against anxiety.

Suddenly vanished, the great ally which for so long had kept my demons at bay was no longer there to prevent those demons from beginning to swarm through the subconscious, and I was emotionally naked, vulnerable as I had never been before.

Doubtless depression had hovered near me for years, waiting to swoop down. Now I was in the first stage, premonitory, like a flicker of sheet lightning barely perceived, of depression’s black tempest.

I was on Martha’s Vineyard, where I’ve spent a good part of each year since the 1960s, during that exceptionally beautiful summer. But I had begun to respond indifferently to the island’s pleasures. I felt a kind of numbness, an enervation, but more particularly an odd fragility, as if my body had actually become frail, hypersensitive and somehow disjointed and clumsy, lacking normal coordination. And soon I was in the throes of a pervasive hypochondria. Nothing felt quite right with my corporeal self; there were twitches and pains, sometimes intermittent, often seemingly constant, that seemed to presage all sorts of dire infirmities. (Given these signs, one can understand how, as far back as the seventeenth century, in the notes of contemporary physicians, and in the perceptions of John Dryden and others, a connection is made between melancholia and hypochondria; the words are often interchangeable, and were so used until the nineteenth century by writers as various as Sir Walter Scott and the Brontës, who also linked melancholy to a preoccupation with bodily ills.) It is easy to see how this condition is part of the psyche’s apparatus of defense: unwilling to accept its own gathering deterioration, the mind announces to its indwelling consciousness that it is the body with its perhaps correctable defects, not the precious and irreplaceable mind, that is going haywire.

In my case, the overall effect was immensely disturbing, augmenting the anxiety that was by now never quite absent from my waking hours and fueling still another strange behavior pattern, a fidgety restlessness that kept me on the move, somewhat to the perplexity of my family and friends. Once, in late summer, on an airplane trip to New York, I made the reckless mistake of downing a scotch and soda, my first alcohol in months, which promptly sent me into a tailspin, causing me such a horrified sense of disease and interior doom that the very next day I rushed to a Manhattan internist, who inaugurated a long series of tests. Normally I would have been satisfied, indeed elated, when, after three weeks of high-tech and extremely expensive evaluation, the doctor pronounced me totally fit; and I was happy, for a day or two, until there once again began the rhythmic daily erosion of my mood, anxiety, agitation, unfocused dread.

By now I had moved back to my house in Connecticut. It was October, and one of the unforgettable features of this stage of my disorder was the way in which my old farmhouse, my beloved home for thirty years, took on for me at that point when my spirits regularly sank to their nadir an almost palpable quality of ominousness. The fading evening light, akin to that famous “slant of light” of Emily Dickinson’s, which spoke to her of death, of chill extinction, had none of its familiar autumnal loveliness, but ensnared me in a suffocating gloom. I wondered how this friendly place, teeming with such memories of (again in her words) “Lads and Girls,” of “laughter and ability and sighing, and Frocks and Curls,” could almost perceptibly seem so hostile and forbidding. Physically, I was not alone. As always Rose was present and listened with unflagging patience to my complaints. But I felt an immense and aching solitude. I could no longer concentrate during those afternoon hours, which for years had been my working time, and the act of writing itself, becoming more and more difficult and exhausting, stalled, then finally ceased.

There were also dreadful, pouncing seizures of anxiety. One bright day on a walk through the woods with my dog I heard a flock of Canada geese honking high above trees ablaze with foliage; ordinarily a sight and sound that would have exhilarated me, the flight of birds caused me to stop, riveted with fear, and I stood stranded there, helpless, shivering, aware for the first time that I had been stricken by no mere pangs of withdrawal but by a serious illness whose name and actuality I was able finally to acknowledge. Going home, I couldn’t rid my mind of the line of Baudelaire’s, dredged up from the distant past, that for several days had been skittering around at the edge of my consciousness: “I have felt the wind of the wing of madness.”

Our perhaps understandable modern need to dull the sawtooth edges of so many of the afflictions we are heir to has led us to banish the harsh old-fashioned words: madhouse, asylum, insanity, melancholia, lunatic, madness.

But never let it be doubted that depression, in its extreme form, is madness. The madness results from an aberrant biochemical process. It has been established with reasonable certainty (after strong resistance from many psychiatrists, and not all that long ago) that such madness is chemically induced amid the neurotransmitters of the brain, probably as the result of systemic stress, which for unknown reasons causes a depletion of the chemicals norepinephrine and serotonin, and the increase of a hormone, cortisol.

With all of this upheaval in the brain tissues, the alternate drenching and deprivation, it is no wonder that the mind begins to feel aggrieved, stricken, and the muddied thought processes register the distress of an organ in convulsion. Sometimes, though not very often, such a disturbed mind will turn to violent thoughts regarding others. But with their minds turned agonizingly inward, people with depression are usually dangerous only to themselves. The madness of depression is, generally speaking, the antithesis of violence. It is a storm indeed, but a storm of murk. Soon evident are the slowed-down responses, near paralysis, psychic energy throttled back close to zero. Ultimately, the body is affected and feels sapped, drained.

That fall, as the disorder gradually took full possession of my system, I began to conceive that my mind itself was like one of those outmoded small town telephone exchanges, being gradually inundated by flood waters: one by one, the normal circuits began to drown, causing some of the functions of the body and nearly all of those of instinct and intellect to slowly disconnect.

There is a well known checklist of some of these functions and their failures. Mine conked out fairly close to schedule, many of them following the pattern of depressive seizures. I particularly remember the lamentable near disappearance of my voice. It underwent a strange transformation, becoming at times quite faint, wheezy and spasmodic; a friend observed later that it was the voice of a ninety-year-old. The libido also made an early exit, as it does in most major illnesses; it is the superfluous need of a body in beleaguered emergency. Many people lose all appetite; mine was relatively normal, but I found myself eating only for subsistence: food, like everything else within the scope of sensation, was utterly without savor. Most distressing of all the instinctual disruptions was that of sleep, along with a complete absence of dreams.

Exhaustion combined with sleeplessness is a rare torture. The two or three hours of sleep I was able to get at night were always at the behest of the Halcion, a matter which deserves particular notice. For some time now many experts in psychopharmacology have warned that the benzodiazepine family of tranquilizers, of which Halcion is one (Valium and Ativan are others), is capable of depressing mood and even precipitating a major depression. Over two years before my siege, an insouciant doctor had prescribed Ativan as a bedtime aid, telling me airily that I could take it as casually as aspirin. The Physicians’ Desk Reference, the pharmacological bible, reveals that the medicine I had been ingesting was (a) three times the normally prescribed strength, (b) not advisable as a medication for more than a month or so, and (c) to be used with special caution by people of my age. At the time of which I am speaking I was no longer taking Ativan but had become addicted to Halcion and was consuming large doses. It seems reasonable to think that this was still another contributory factor to the trouble that had come upon me. Certainly, it should be a caution to others.

At any rate, my few hours of sleep were usually terminated at three or four in the morning, when I stared up into yawning darkness, wondering and writhing at the devastation taking place in my mind, and awaiting the dawn, which usually permitted me a feverish, dreamless nap. I’m fairly certain that it was during one of these insomniac trances that there came over me the knowledge, a weird and shocking revelation, like that of some long beshrouded metaphysical truth, that this condition would cost me my life if it continued on such a course. This must have been just before my trip to Paris.

Death, as I have said, was now a daily presence, blowing over me in cold gusts. I had not conceived precisely how my end would come. In short, I was still keeping the idea of suicide at bay. But plainly the possibility was around the corner, and I would soon meet it face to face.

What I had begun to discover is that, mysteriously and in ways that are totally remote from normal experience, the gray drizzle of horror induced by depression takes on the quality of physical pain. But it is not an immediately identifiable pain, like that of a broken limb. It may be more accurate to say that despair, owing to some evil trick played upon the sick brain by the inhabiting psyche, comes to resemble the diabolical discomfort of being imprisoned in a fiercely overheated room. And because no breeze stirs this caldron, because there is no escape from this smothering confinement, it is entirely natural that the victim begins to think ceaselessly of oblivion.

*

from

DARKNESS VISIBLE. A MEMOIR of MADNESS

by William Styron
