The Last Girl: My Story of Captivity, and My Fight Against the Islamic State – Nadia Murad.

This book is written for every Yazidi.

Nadia Murad is not just my client; she is my friend. When we were introduced in London, she asked if I would act as her lawyer. She explained that she would not be able to provide funds and that the case would likely be long and unsuccessful. But before you decide, she said, hear my story.

In 2014, ISIS attacked Nadia’s village in Iraq, and her life as a twenty-one-year-old student was shattered. She was forced to watch her mother and brothers be marched off to their deaths. And Nadia herself was traded from one ISIS fighter to another. She was forced to pray, forced to dress up and put makeup on in preparation for rape, and one night was brutally abused by a group of men until she was unconscious. She showed me her scars from cigarette burns and beatings. And she told me that throughout her ordeal ISIS militants would call her a “dirty unbeliever” and brag about conquering Yazidi women and wiping their religion from the earth.

Nadia was one of thousands of Yazidis taken by ISIS to be sold in markets and on Facebook, sometimes for as little as twenty dollars. Nadia’s mother was one of eighty older women who were executed and buried in an unmarked grave. Six of her brothers were among the hundreds of men who were murdered in a single day.

What Nadia was telling me about is genocide. And genocide doesn’t happen by accident. You have to plan it. Before the genocide began, the ISIS “Research and Fatwa Department” studied the Yazidis and concluded that, as a Kurdish speaking group that did not have a holy book, Yazidis were nonbelievers whose enslavement was a “firmly established aspect of the Shariah.” This is why, according to ISIS’s warped morality, Yazidis, unlike Christians, Shias, and others, can be systematically raped. Indeed, this was to be one of the most effective ways to destroy them.

What followed was the establishment of a bureaucracy of evil on an industrial scale. ISIS even released a pamphlet entitled Questions and Answers on Taking Captives and Slaves to provide more guidelines. “Question: Is it permissible to have intercourse with a female slave who has not reached puberty? Answer: It is permissible to have intercourse with the female slave who hasn’t reached puberty if she is fit for intercourse. Question: Is it permissible to sell a female captive? Answer: It is permissible to buy, sell, or gift female captives and slaves, for they are merely property.”

When Nadia told me her story in London, it had been almost two years since ISIS’s genocide against the Yazidis had begun. Thousands of Yazidi women and children were still held captive by ISIS, but no member of ISIS had been prosecuted in a court anywhere in the world for these crimes. Evidence was being lost or destroyed. And prospects for justice looked bleak.

Of course, I took the case. And Nadia and I spent more than a year campaigning together for justice. We met repeatedly with the Iraqi government, United Nations representatives, members of the UN Security Council, and ISIS victims. I prepared reports, provided drafts and legal analysis, and gave speeches imploring the UN to act. Most of our interlocutors told us it would be impossible: the Security Council had not taken action on international justice in years.

But just as I write this foreword, the UN Security Council has adopted a landmark resolution creating an investigation team that will collect evidence of the crimes committed by ISIS in Iraq. This is a major victory for Nadia and all the victims of ISIS, because it means that evidence will be preserved and that individual ISIS members can be put on trial. I sat next to Nadia in the Security Council when the resolution was adopted unanimously. And as we watched fifteen hands go up, Nadia and I looked at each other and smiled.

As a human-rights lawyer, my job is often to be the voice of those who have been silenced: the journalist behind bars or the victims of war crimes fighting for their day in court. There is no doubt ISIS tried to silence Nadia when they kidnapped and enslaved her, raped and tortured her, and killed seven members of her family in a single day.

But Nadia refused to be silenced. She has defied all the labels that life has given her: Orphan. Rape victim. Slave. Refugee. She has instead created new ones: Survivor. Yazidi leader. Women’s advocate. Nobel Peace Prize nominee. United Nations Goodwill Ambassador. And now author.

Over the time I have known her, Nadia has not only found her voice, she has become the voice of every Yazidi who is a victim of genocide, every woman who has been abused, every refugee who has been left behind.

Those who thought that by their cruelty they could silence her were wrong. Nadia Murad’s spirit is not broken, and her voice will not be muted.

Instead, through this book, her voice is louder than ever.

Amal Clooney, Barrister, September 2017

Amal Clooney and Nadia Murad at the United Nations

Chapter One

Early in the summer of 2014, while I was busy preparing for my last year of high school, two farmers disappeared from their fields just outside Kocho, the small Yazidi village in northern Iraq where I was born and where, until recently, I thought I would live for the rest of my life. One moment the men were lounging peacefully in the shade of scratchy homemade tarps, and the next they were captive in a small room in a nearby village, home mostly to Sunni Arabs. Along with the farmers, the kidnappers took a hen and a handful of her chicks, which confused us. “Maybe they were just hungry,” we said to one another, although that did nothing to calm us down.

Kocho, for as long as I have been alive, has been a Yazidi village, settled by the nomadic farmers and shepherds who first arrived in the middle of nowhere and decided to build homes to protect their wives from the desert-like heat while they walked their sheep to better grass. They chose land that would be good for farming, but it was a risky location, on the southern edge of Iraq’s Sinjar region, where most of the country’s Yazidis live, and very close to non-Yazidi Iraq.

When the first Yazidi families arrived in the mid-1950s, Kocho was inhabited by Sunni Arab farmers working for landlords in Mosul. But those Yazidi families had hired a lawyer to buy the land; the lawyer, himself a Muslim, is still considered a hero. By the time I was born, Kocho had grown to about two hundred families, all of them Yazidi and as close as if we were one big family, which we nearly were.

The land that made us special also made us vulnerable. Yazidis have been persecuted for centuries because of our religious beliefs, and, compared to most Yazidi towns and villages, Kocho is far from Mount Sinjar, the high, narrow mountain that has sheltered us for generations. For a long time we had been pulled between the competing forces of Iraq’s Sunni Arabs and Sunni Kurds, asked to deny our Yazidi heritage and conform to Kurdish or Arab identities. Until 2013, when the road between Kocho and the mountain was finally paved, it would take us almost an hour to drive our white Datsun pickup across the dusty roads through Sinjar City to the base of the mountain. I grew up closer to Syria than to our holiest temples, closer to strangers than to safety.

A drive in the direction of the mountain was joyful. In Sinjar City we could find candy and a particular kind of lamb sandwich we didn’t have in Kocho, and my father almost always stopped to let us buy what we wanted. Our truck kicked up clouds of dust as we moved, but I still preferred to ride in the open air, lying flat in the truck bed until we were outside the village and away from our curious neighbors, then popping up to feel the wind whip through my hair and watch the blur of livestock feeding along the road. I easily got carried away, standing more and more upright in the back of the truck until my father or my eldest brother, Elias, shouted at me that if I wasn’t careful, I would go flying over the side.

In the opposite direction, away from those lamb sandwiches and the comfort of the mountain, was the rest of Iraq. In peacetime, and if he wasn’t in a hurry, it might take a Yazidi merchant fifteen minutes to drive from Kocho to the nearest Sunni village to sell his grain or milk. We had friends in those villages: girls I met at weddings, teachers who spent the term sleeping in Kocho’s school, men who were invited to hold our baby boys during their ritual circumcision and were from then on bonded to that Yazidi family as a kiriv, something like a godparent. Muslim doctors traveled to Kocho or to Sinjar City to treat us when we were sick, and Muslim merchants drove through town selling dresses and candies, things you couldn’t find in Kocho’s few shops, which carried mostly necessities. Growing up, my brothers often traveled to non-Yazidi villages to make a little money doing odd jobs. The relationships were burdened by centuries of distrust; it was hard not to feel bad when a Muslim wedding guest refused to eat our food, no matter how politely, but still, there was genuine friendship.

These connections went back generations, lasting through Ottoman control, British colonization, Saddam Hussein, and the American occupation. In Kocho, we were particularly known for our close relationships with Sunni villages.

But when there was fighting in Iraq, and there always seemed to be fighting in Iraq, those villages loomed over us, their smaller Yazidi neighbor, and old prejudice hardened easily into hatred. Often, from that hatred, came violence. For at least the past ten years, since Iraqis had been thrust into a war with the Americans that began in 2003, then spiraled into more vicious local fights and eventually into full-fledged terrorism, the distance between our homes had grown enormous. Neighboring villages began to shelter extremists who denounced Christians and non-Sunni Muslims and, even worse, who considered Yazidis to be kuffar, unbelievers worthy of killing (kafir is singular).

In 2007 a few of those extremists drove a fuel tanker and three cars into the busy centers of two Yazidi towns about ten miles northwest of Kocho, then blew up the vehicles, killing the hundreds of people who had rushed to them, many thinking they were bringing goods to sell at the market.

Yazidism is an ancient monotheistic religion, spread orally by holy men entrusted with our stories. Although it has elements in common with the many religions of the Middle East, from Mithraism and Zoroastrianism to Islam and Judaism, it is truly unique and can be difficult even for the holy men who memorize our stories to explain. I think of my religion as being an ancient tree with thousands of rings, each telling a story in the long history of Yazidis. Many of those stories, sadly, are tragedies.

Today there are only about one million Yazidis in the world. For as long as I have been alive, and, I know, for a long time before I was born, our religion has been what defined us and held us together as a community. But it also made us targets of persecution by larger groups, from the Ottomans to Saddam’s Baathists, who attacked us or tried to coerce us into pledging our loyalty to them. They degraded our religion, saying that we worshipped the devil or that we were dirty, and demanded that we renounce our faith.

Yazidis survived generations of attacks that were intended to wipe us out, whether by killing us, forcing us to convert, or simply pushing us from our land and taking everything we owned. Before 2014, outside powers had tried to destroy us seventy-three times. We used to call the attacks against Yazidis firman, an Ottoman word, before we learned the word genocide.

When we heard about the ransom demands for the two farmers, the whole village went into a panic. “Forty thousand dollars,” the kidnappers told the farmers’ wives over the phone. “Or come here with your children so you can convert to Islam as families.” Otherwise, they said, the men would be killed. It wasn’t the money that made their wives collapse in tears in front of our mukhtar, or village leader, Ahmed Jasso; forty thousand dollars was an otherworldly sum, but it was just money. We all knew that the farmers would sooner die than convert, so the villagers wept in relief when, late one night, the men escaped through a broken window, ran through the barley fields, and showed up at home, alive, dust up to their knees and panting with fear. But the kidnappings didn’t stop.

Soon afterward Dishan, a man employed by my family, the Tahas, was abducted from a field near Mount Sinjar where he watched our sheep. It had taken my mother and brothers years to buy and breed our sheep, and each one was a victory. We were proud of our animals, keeping them in our courtyard when they weren’t roaming outside the village, treating them almost like pets. The annual shearing was a celebration in itself. I loved the ritual of it, the way the soft wool fell to the ground in cloudlike piles, the musky smell that took over our house, how the sheep bleated quietly, passively. I loved sleeping beneath the thick comforters my mother, Shami, would make from the wool, stuffing it between colorful pieces of fabric. Sometimes I got so attached to a lamb that I had to leave the house when it came time to slaughter it. By the time Dishan was kidnapped, we had over a hundred sheep, for us, a small fortune.

Remembering the hen and chicks that had been taken along with the farmers, my brother Saeed raced in our family’s pickup truck to the base of Mount Sinjar, about twenty minutes away now that the road was paved, to check on our sheep. “Surely, they took them,” we groaned. “Those sheep are all we have.”

Later, when Saeed called my mother, he sounded confused. “Only two were taken,” he reported, an old, slow-moving ram and a young female lamb. The rest were grazing contentedly on the brownish-green grass and would follow my brother home. We laughed, we were so relieved. But Elias, my eldest brother, was worried. “I don’t get it,” he said. “Those villagers aren’t rich. Why did they leave the sheep behind?” He thought it had to mean something.

The day after Dishan was taken, Kocho was in chaos. Villagers huddled in front of their doors, and along with men who took turns manning a new checkpoint just beyond our village walls, they watched for any unfamiliar cars coming through Kocho. Hezni, one of my brothers, came home from his job as a policeman in Sinjar City and joined the other village men who loudly argued about what to do. Dishan’s uncle wanted to get revenge and decided to lead a mission to a village east of Kocho that was headed by a conservative Sunni tribe. “We’ll take two of their shepherds,” he declared, in a rage. “Then they’ll have to give Dishan back!”

It was a risky plan, and not everyone supported Dishan’s uncle. Even my brothers, who had all inherited bravery and a quickness to fight from our father, were split on what to do. Saeed, who was only a couple of years older than me, spent a lot of his time fantasizing about the day he would finally prove his heroism. He was in favor of revenge, while Hezni, who was over a decade older and the most empathetic of us all, thought it was too dangerous. Still, Dishan’s uncle took what allies he could find and snatched two Sunni Arab shepherds, then drove them back to Kocho, where he locked them in his house and waited.

MOST VILLAGE DISPUTES were solved by Ahmed Jasso, our practical and diplomatic mukhtar, and he sided with Hezni. “Our relationship with our Sunni neighbors is already strained,” he said. “Who knows what they will do if we try to fight with them.” Besides, he warned, the situation outside Kocho was far worse and more complicated than we imagined. A group calling itself the Islamic State, or ISIS, which had largely been born here in Iraq, then grown in Syria over the past few years, had taken over villages so close to us, we could count the black-clad figures in their trucks when they drove by. They were holding our shepherd, our mukhtar told us. “You’ll only make things worse,” Ahmed Jasso said to Dishan’s uncle, and barely half a day after the Sunni shepherds had been kidnapped, they were set free. Dishan, however, remained a captive.

Ahmed Jasso was a smart man, and the Jasso family had decades of experience negotiating with the Sunni Arab tribes. Everyone in the village turned to them with their problems, and outside Kocho they were known for being skilled diplomats. Still, some of us wondered if this time he was being too cooperative, sending the message to the terrorists that Yazidis would not protect themselves. As it was, all that stood between us and ISIS were Iraqi Kurdish fighters, called peshmerga, who had been sent from the Kurdish autonomous region to guard Kocho when Mosul fell almost two months earlier. We treated the peshmerga like honored guests. They slept on pallets in our school, and each week a different family slaughtered a lamb to feed them, a huge sacrifice for the poor villagers. I also looked up to the fighters. I had heard about female Kurds from Syria and Turkey who fought against terrorists and carried weapons, and the thought made me feel brave.

Some people, including a few of my brothers, thought we should be allowed to protect ourselves. They wanted to man the checkpoints, and Ahmed Jasso’s brother Naif tried to convince Kurdish authorities to let him form a Yazidi peshmerga unit, but he was ignored. No one offered to train the Yazidi men or encourage them to join the fight against the terrorists. The peshmerga assured us that as long as they were there, we had nothing to worry about, and that they were as determined to protect Yazidis as they were the capital of Iraqi Kurdistan. “We will sooner let Erbil fall than Sinjar,” they said. We were told to trust them, and so we did.

Still, most families in Kocho kept weapons at home, clunky Kalashnikov rifles, a big knife or two usually used to slaughter animals on holidays. Many Yazidi men, including those of my brothers who were old enough, had taken jobs in the border patrol or police force after 2003, when those jobs became available, and we felt sure that as long as the professionals watched Kocho’s borders, our men could protect their families. After all, it was those men, not the peshmerga, who built a dirt barrier with their own hands around the village after the 2007 attacks, and it was Kocho’s men who patrolled that barrier day and night for a full year, stopping cars at makeshift checkpoints and watching for strangers, until we felt safe enough to go back to a normal life.

Dishan’s kidnapping made us all panic. But the peshmerga didn’t do anything to help. Maybe they thought it was just a petty squabble between villages, not the reason Masoud Barzani, the president of the Kurdistan Regional Government, had sent them out of the safety of Kurdistan and into the unprotected areas of Iraq. Maybe they were frightened like we were. A few of the soldiers looked like they couldn’t be that much older than Saeed, my mother’s youngest son. But war changed people, especially men. It wasn’t that long ago that Saeed would play with me and our niece, Kathrine, in our courtyard, not yet old enough to know that boys were not supposed to like dolls.

Lately, though, Saeed had become obsessed with the violence sweeping through Iraq and Syria. The other day I had caught him watching videos of Islamic State beheadings on his cell phone, the images shaking in his hand, and was surprised that he held up the phone so I could watch, too. When our older brother Massoud walked into the room, he was furious. “How could you let Nadia watch!” he yelled at Saeed, who cowered. He was sorry, but I understood. It was hard to turn away from the gruesome scenes unfolding so close to our home.

The image from the video popped back into my head when I thought about our poor shepherd being held captive. If the peshmerga won’t help us get Dishan back, I will have to do something, I thought, and ran into our house. I was the baby of the family, the youngest of eleven, and a girl. Still, I was outspoken and used to being heard, and I felt giant in my anger.

Our house was close to the northern edge of the village, a one-story row of mud-brick rooms lined up like beads on a necklace and connected by doorways with no doors, all leading out to a large courtyard with a vegetable garden, a bread oven called a tandoor, and, often, sheep and chickens. I lived there with my mother, six of my eight brothers, and my two sisters, plus two sisters-in-law and the children they had between them, and within walking distance of my other brothers, half brothers, and half sisters and most of my aunts, uncles, and cousins. The roof leaked in the winter when it rained, and the inside could feel like an oven in the Iraqi summertime, pushing us up a staircase onto the roof to sleep. When one part of the roof caved in, we patched it with pieces of metal we scavenged from Massoud’s mechanic shop, and when we needed more space, we built it. We were saving money for a new home, a more permanent one made of cement blocks, and we were getting closer every day.

I entered our house through the front door and ran to a room I shared with the other girls, where there was a mirror. Wrapping a pale scarf around my head, one I normally wore to keep my hair from getting in my eyes when bending over rows of vegetables, I tried to imagine what a fighter might do to prepare for battle. Years of labor on the farm made me stronger than my appearance let on. Still, I had no idea what I would do if I saw the kidnappers or people from their village drive through Kocho. What would I say to them? “Terrorists took our shepherd and went to your village,” I practiced in the mirror, scowling. “You could have stopped them. At least you can tell us where he was taken.” From the corner of our courtyard, I grabbed a wooden stick, like the ones used by a shepherd, and made for the front door again, where a few of my brothers stood with my mother, deep in conversation. They barely noticed when I joined them.

A few minutes later a white pickup truck from the kidnappers’ village came down the main road, two men in the front and two in the back. They were Arabs I vaguely recognized from the Sunni tribe that had taken Dishan. We watched as their truck crept down the main dirt road that snaked through the village, slowly, as though totally without fear. They had no reason to drive through Kocho; the roads around the village connected cities like Sinjar and Mosul, and their presence seemed like a taunt. Breaking away from my family, I ran into the middle of the road and stood in the path of the truck. “Stop!” I shouted, waving the stick over my head, trying to make myself look bigger. “Tell us where Dishan is!”

It took half my family to restrain me. “What did you think you were going to do?” Elias scolded. “Attack them? Break their windshield?” He and a few of my other siblings had just come from the fields and were exhausted and stinking from the onions they were harvesting. To them, my attempt to avenge Dishan seemed like nothing more than a child’s outburst. My mother was also furious with me for running into the road. Under normal circumstances she tolerated my temper and was even amused by it, but in those days everyone was on edge. It seemed dangerous to draw attention to yourself, particularly if you were a young, unmarried woman. “Come here and sit,” she said sternly. “It’s shameful for you to do that, Nadia, it’s not your business. The men will take care of it.”

Life went on. Iraqis, particularly Yazidis and other minorities, are good at adjusting to new threats. You have to be if you want to try to live something close to a normal life in a country that seems to be coming apart. Sometimes the adjustments were relatively small. We scaled down our dreams of finishing school, of giving up farmwork for something less backbreaking, of a wedding taking place on time, and it wasn’t hard to convince ourselves that those dreams had been unreachable in the first place. Sometimes the adjustments would happen gradually, without anyone noticing. We would stop talking to the Muslim students at school, or be drawn inside in fear if a stranger came through the village. We watched news of attacks on TV and started to worry more about politics. Or we shut out politics completely, feeling it was safest to stay silent. After each attack, men added to the dirt barrier outside Kocho, beginning on the western side, facing Syria, until one day we woke up to see that it surrounded us completely. Then, because we still felt unsafe, the men dug a ditch around the village as well.

We would, over generations, get used to a small pain or injustice until it became normal enough to ignore. I imagine this must be why we had come to accept certain insults, like our food being refused, that probably felt like a crime to whoever first noticed it. Even the threat of another firman was something Yazidis had gotten used to, although that adjustment was more like a contortion. It hurt.

With Dishan still captive, I returned with my siblings to the onion fields. There nothing had changed. The vegetables we planted months before were now grown; if we didn’t pick them, no one would. If we didn’t sell them, we wouldn’t have money. So we all knelt in a line beside the tangles of green sprouts, tugging bulbs out of the soil a few at a time, collecting them in woven plastic bags where they would be left to ripen until it was time to take them to market. Will we take them to the Muslim villages this year? we wondered but could not answer. When one of us pulled up the black, poisonous-smelling sludge of a rotten onion, we groaned, plugged our noses, and kept going.

Because it was what we normally did, we gossiped and teased one another, telling stories each had heard a million times before. Adkee, my sister and the joker of the family, recalled the image of me that day trying to chase the car, a skinny farm girl, my scarf falling in front of my eyes, waving the stick over my head, and we all nearly tipped over into the dirt laughing. We made a game of the work, racing to see who could pick the most onions just as, months before, we had raced to see who could plant the most seeds. When the sun started to go down, we joined my mother at home for dinner in our courtyard and then slept shoulder to shoulder on mattresses on the roof of our house, watching the moon and whispering until exhaustion brought the whole family to complete silence.

We wouldn’t find out why the kidnappers stole the animals, the hen, the chicks, and our two sheep, until almost two weeks later, after ISIS had taken over Kocho and most of Sinjar. A militant, who had helped round up all of Kocho’s residents into the village’s secondary school, later explained the kidnappings to a few of the village’s women. “You say we came out of nowhere, but we sent you messages,” he said, his rifle swinging at his side. “When we took the hen and the chicks, it was to tell you we were going to take your women and children. When we took the ram, it was like taking your tribal leaders, and when we killed the ram, it meant we planned on killing those leaders. And the young lamb, she was your girls.”

Chapter Two

My mother loved me, but she didn’t want to have me. For months before I was conceived, she saved money whenever she could, a spare dinar here and there, change from a trip to the market or a pound of tomatoes sold on the sly, to spend on the birth control she didn’t dare ask my father for. Yazidis don’t marry outside the religion or allow conversion into Yazidism, and large families were the best way to guarantee that we didn’t die out completely. Plus, the more children you had, the more help you had on the farm. My mother managed to buy the pills for three months until she ran out of money, and then, almost immediately, she was pregnant with me, her eleventh and last child.

She was my father’s second wife. His first had died young, leaving him with four children who needed a woman to help raise them. My mother was beautiful, born to a poor and deeply religious family in Kocho, and her father happily gave her to my father as a wife. He already had some land and animals and, compared to the rest of Kocho, was well-off. So before her twentieth birthday, before she had even learned how to cook, my mother became a wife and stepmother to four children, and then quickly she became pregnant herself.

She never went to school and didn’t know how to read or write. Like many Yazidis, whose mother tongue is Kurdish, she didn’t speak much Arabic and could barely communicate with Arab villagers who came to town for weddings or as merchants. Even our religious stories were a mystery to her. But she worked hard, taking on the many tasks that came with being a farmer’s wife. It wasn’t enough to give birth eleven times, each time at home except for the dangerous labor with my twin brothers, Saoud and Massoud; a pregnant Yazidi woman was also expected to lug firewood, plant crops, and drive tractors until the moment she went into labor, and afterward to carry the baby with her while she worked.

My father was known around Kocho for being a very traditional, devout Yazidi man. He wore his hair in long braids and covered his head with a white cloth. When the qawwals, traveling religious teachers who play the flute and drums and recite hymns, visited Kocho, my father was among the men who would greet them. He was a prominent voice in the jevat, or meeting house, where male villagers could gather to discuss issues facing the community with our mukhtar.

Injustice hurt my father more than any physical injury, and his pride fed his strength. The villagers who were close to him loved to tell stories of his heroism, like the time he rescued Ahmed Jasso from a neighboring tribe who were determined to kill our mukhtar, or the time the expensive Arabian horses belonging to a Sunni Arab tribal leader escaped from their stables and my father used his pistol to defend Khalaf, a poor farmer from Kocho, when he was discovered riding one in nearby fields.

“Your father always wanted to do what was right,” his friends would tell us after he passed away. “Once he let a Kurdish rebel who was running away from the Iraqi Army sleep in his house, even though the rebel led the police right to his doorstep.” The story goes, when the rebel was discovered, the police wanted to imprison both men, but my father talked his way out of it. “I didn’t help him because of politics,” he told the police. “I helped him because he is a man and I am a man,” and they let him go. “And that rebel turned out to be a friend of Masoud Barzani!” his friends recall, still amazed all these years later.

My father wasn’t a bully, but he fought if he had to. He had lost an eye in a farm accident, and what was left in the socket, a small milky ball that looked like the marbles I played with as a kid, could make him look menacing. I’ve often thought since then that if my father had been alive when ISIS came to Kocho, he would have led an armed uprising against the terrorists.

By 1993, the year I was born, my parents’ relationship was falling apart, and my mother was suffering. The eldest son born to my father’s first wife had died a few years earlier in the Iran-Iraq War, and after that, my mother told me, nothing was ever good again. My father had also brought home another woman, Sara, whom he married and who now lived with their children on one end of the house my mother had long considered her own. Polygamy isn’t outlawed in Yazidism, but not everyone in Kocho would have gotten away with it. No one questioned my father, though. By the time he married Sara, he owned a great deal of land and sheep and, in a time when sanctions and war with Iran made it hard for anyone to survive in Iraq, he needed a big family to help him, bigger than my mother could provide.

I still find it hard to criticize my father for marrying Sara. Anyone whose survival is directly linked to the number of tomatoes grown in one year or the amount of time spent walking their sheep to better grass can understand why he wanted another wife and more children. These things weren’t personal. Later on, though, when he officially left my mother and sent us all to live in a small building behind our house with barely any money and land, I understood that his taking a second wife hadn’t been completely practical. He loved Sara more than he loved my mother. I accepted that, just as I accepted that my mother’s heart must have been broken when he first brought home a new wife. After he left us, she would say to me and my two sisters, Dimal and Adkee, “God willing, what happened to me won’t happen to you.” I wanted to be like her in every way, except I didn’t want to be abandoned.

My brothers weren’t all as understanding. “God will make you pay for this!” Massoud shouted at our father once, in a rage. But even they admitted that life got a little easier when my mother and Sara weren’t living together and competing for my father’s attention, and after a few years we learned how to coexist. Kocho was small, and we often saw him and Sara. I passed by their house, the house I was born in, every day on my way to elementary school; theirs was the only dog along that walk that knew me well enough not to bark. We spent holidays together, and my father would sometimes drive us to Sinjar City or to the mountain. In 2003 he had a heart attack, and we all watched as my strong father instantly became an ill, elderly man, confined to a wheelchair in the hospital. When he died a few days later, it seemed just as likely that it was out of shame over his frailty as it was because of his bad heart. Massoud regretted having yelled at him. He had assumed his father was strong enough to take anything.

My mother was a deeply religious woman, believing in the signs and dreams that many Yazidis use to interpret the present or predict the future. When the moon first appeared in the sky as a crescent, I would find her in the courtyard, lighting candles. “This is the time when children are most vulnerable to illness and accidents,” she explained. “I am praying that nothing happens to any of you.”

I often got sick to my stomach, and when I did, my mother took me to Yazidi healers who gave me herbs and teas, which she urged me to drink even though I hated the taste, and when someone died, she visited a kochek, a Yazidi mystic, who would help confirm that the deceased had made it into the afterlife. Many Yazidi pilgrims take a bit of soil before they leave Lalish, a valley in northern Iraq where our holiest temples are, and wrap it up in a small cloth folded into a triangle, which they keep in their pocket or wallet as a talisman. My mother was never without some of that holy soil, particularly after my brothers started leaving home to work with the army. “They need all the protection they can get, Nadia,” she would say. “It’s dangerous, what they are doing.”

She was also practical and hardworking, trying against great odds to make our lives better. Yazidis are among the poorest communities in Iraq, and my family was poor even by Kocho’s standards, particularly after my parents separated. For years, my brothers dug wells by hand, lowering themselves delicately into the wet, sulfurous ground inch by inch, careful not to break a bone. They also, along with my mother and sisters, farmed other people’s land, taking only a small percentage of the profit for the tomatoes and onions they harvested. The first ten years of my life, we rarely had meat for dinner, living on boiled greens, and my brothers used to say they bought new pants only when they could see their legs through the old ones.

Gradually, thanks to my mother’s hard work and the economic growth in northern Iraq after 2003, our situation, and that of most Yazidis, improved. My brothers took jobs as border guards and policemen when the central and Kurdish governments opened up positions to Yazidis. It was dangerous work; my brother Jalo joined a police unit guarding Tal Afar airport that lost a lot of its men in combat in the first year, but it paid well. Eventually we were able to move from my father’s land into our own house.

People who knew my mother only for her deep religious beliefs and work ethic were surprised by how funny she could be, and how she turned her hardship into humor. She had a teasing way of joking, and nothing, not even the reality that she would almost certainly never marry again, was off limits. One day, a few years after she and my father separated, a man visited Kocho hopeful for my mother’s attention. When she heard he was at the door, she grabbed a stick and ran after him, telling him to go away, that she would never marry again. When she came back inside, she was laughing. “You should have seen how scared he was!” she told us, imitating him until we were all laughing too. “If I was going to marry, it wouldn’t be to a man who ran away from an old lady with a stick!”

She joked about everything, about being abandoned by my father, about my fascination with hair and makeup, about her own failures. She had been going to adult literacy classes since before I was born, and when I became old enough, I started tutoring her. She was a fast learner, in part, I thought, because she was able to laugh off her mistakes.

When she talked about that scramble for birth control before I was conceived, it was as if she were telling a story from a book she had read long ago and liked only for its punch lines. Her reluctance to get pregnant with me was funny because now she couldn’t imagine life without me. She laughed because of how she had loved me the moment I was born, and because I would spend each morning warming myself by our clay oven while she baked bread, talking to her. We laughed because I would get jealous whenever she doted on my sisters or nieces instead of me, because I vowed never to leave home, and because we slept in the same bed from the day I was born until ISIS came to Kocho and tore us all apart. She was our mother and our father at the same time, and we loved her even more when we became old enough to understand how much she must have suffered.

I grew up attached to my home and never imagined living anywhere else. To outsiders, Kocho may seem too poor to be happy, and too isolated and barren to ever be anything but desperately poor. American soldiers must have gotten that impression, given the way kids would swarm them when they came to visit, begging for pens and candy. I was one of those kids, asking for things.

Kurdish politicians occasionally came to Kocho, although only in recent years and mostly before elections. One of the Kurdish parties, Barzani’s Kurdistan Democratic Party (KDP), opened a small two-room office in Kocho after 2003, but it seemed to exist mostly as a clubhouse for the village men who belonged to the party. A lot of people complained privately that the KDP pressured them into supporting the party and into saying Yazidis were Kurds and Sinjar was part of Kurdistan. Iraqi politicians ignored us, and Saddam had tried to force us to say we were Arab, as though we could all be threatened into giving up our identities and, once we did, would never rebel.

Just living in Kocho was, in a way, defiant. In the mid-1970s Saddam began forcibly moving minorities, including Kurds and Yazidis, from their villages and towns into cinder-block houses in planned communities, where they could be more easily controlled, a campaign people call the “Arabization” of the north. But Kocho was far enough away from the mountain that we were spared. Yazidi traditions that became old-fashioned in these new communities thrived in my village. Women wore the gauzy white dresses and headscarves of their grandmothers; elaborate weddings featured classic Yazidi music and dance; and we fasted in atonement for our sins when many Yazidis had given up that custom. It was safe and close-knit, and even fights over land or marriage ended up feeling minor. At least none of it had an impact on how much we loved one another. Villagers went to one another’s houses late into the night and walked the streets without fear. I heard visitors say that at night, from afar, Kocho glowed in the darkness. Adkee swore she once heard someone describe it as “the Paris of Sinjar.”

Kocho was a young village, full of children. There were few people living there who were old enough to have witnessed firmans firsthand, and so a lot of us lived thinking those days were in the past, that the world was too modern and too civilized to be the kind of place where an entire group could be killed just because of their religion. I know that I felt that way. We grew up hearing about past massacres like folktales that helped bond us together. In one story, a friend of my mother’s described fleeing oppression in Turkey, where many Yazidis once lived, with her mother and her sister. Trapped for days in a cave with nothing to eat, her mother boiled leather to keep them alive. I heard this story many times, and it made my stomach turn. I didn’t think I could eat leather, even if I were starving. But it was just a story.

Admittedly, life in Kocho could be very hard. All those children, no matter how much they were loved, were a burden on their parents, who had to work day and night to feed their families. When we were sick, and the sickness couldn’t be healed with herbs, we would have to be taken to Sinjar City or to Mosul to see a doctor. When we needed clothes, those clothes were sewn by hand by my mother or, after we got a little wealthier, purchased once a year in a city market. During the years of United Nations sanctions on Iraq, intended to force Saddam from power, we cried when it became impossible to find sugar. When schools were finally built in the village, first a primary school and then, many years later, a secondary school, parents had to weigh the benefits of their kids getting an education against keeping them at home to work. Average Yazidis had long been denied an education, not just by the Iraqi government but also by religious leaders who worried that a state education would encourage intermarriage and, therefore, conversion and loss of Yazidi identity. For the parents, giving up the free labor was a great sacrifice. And for what kind of future, the parents wondered, for what jobs, and where? There was no work in Kocho, and a permanent life outside the village, away from other Yazidis, attracted only the very desperate or the very ambitious.

A parent’s love could easily become a source of pain. Life on the farm was dangerous, and accidents happened. My mother pinpoints the moment she grew from a girl into an adult to when her older sister was killed, thrown from a speeding tractor and then run over right there in the middle of the family wheat field. Illnesses were sometimes too expensive to treat. My brother Jalo and his wife Jenan lost baby after baby to a disease that was inherited from Jenan’s side of the family. They were too poor to buy medication or take the babies to a doctor, and out of eight births, four children died.

Divorce took my sister Dimal’s children away. In Yazidi society, as in the rest of Iraq, women have few rights when a marriage ends, no matter what happened to end it. Other children died in wars. I was born just two years after the first Gulf War and five years after the end of the Iran-Iraq War, a pointless eight-year conflict that seemed to fulfill Saddam’s desire to torture his people more than anything else. The memories of these children, whom we would never see again, lived like ghosts in our house. My father cut off his braids when his eldest son was killed, and although one of my brothers was named after this son, my father could only bear to call him by a nickname, Hezni, which means “sadness.”

We measured our lives by harvests and by Yazidi holidays. Seasons could be brutal. In the wintertime Kocho’s alleyways filled with a cementlike mud that sucked the shoes off your feet, and in the summertime the heat was so intense, we had to drag ourselves to the farm at night rather than risk collapsing under the sun during the day. Sometimes harvests would disappoint, and when that happened, the gloom would stretch on for months, at least until we planted the next round of seeds. Other times, no matter how much we harvested, we didn’t make enough money. We learned the hard way, by lugging bags of produce to market and then having customers turn the vegetables over in their hands and walk away, what sold and what didn’t. Wheat and barley were the most profitable. Onions sold, but not for much. Many years we fed overripe tomatoes to our livestock, just to get rid of the excess.

Still, no matter the hardship, I never wanted to live anywhere other than Kocho. The alleyways may have filled with mud in the winter, but no one had to go far to see the people they loved most. In the summer, the heat was stifling, but that meant we all slept on the roof, side by side, talking and laughing with neighbors on their own roofs. Working on the farm was hard, but we made enough money to live a happy, simple life. I loved my village so much that when I was a child, my favorite game involved creating a miniature Kocho out of discarded boxes and bits of trash. Kathrine and I filled those model homes with handmade wooden dolls and then married the dolls to one another. Of course, before every wedding, the girl dolls would visit the elaborate house I made out of a plastic tomato crate, where I ran a hair salon.

Most importantly, I would never have left Kocho because my family was there. We were a little village ourselves. I had my eight brothers: Elias, the eldest, was like a father. Khairy was the first to risk his life as a border guard to help feed us. Pise was stubborn and loyal and would never let anything happen to us. There was Massoud, who grew up to be the best mechanic (and one of the best soccer players) in Kocho, and his twin Saoud, who ran a small convenience store in the village. Jalo opened his heart to everyone, even strangers. Saeed was full of life and mischief and longed to be a hero, and it was Hezni, the dreamer, whose affection we all competed for. My two sisters, the mothering, quiet Dimal, and Adkee, who one day would fight with our brothers to let her, a woman, drive our pickup truck and the next weep over a lamb who collapsed dead in the courtyard, still lived at home, and my half brothers, Khaled, Walid, Hajji, and Nawaf, and my half sisters, Halam and Haiam, were all nearby.

Kocho was where my mother, Shami, like good mothers everywhere, devoted her life to making sure we were fed and hopeful. It’s not the last place I saw her, but it’s where she is when I think about her, which I do every day. Even during the worst years of the sanctions, she made sure we had what we needed. When there was no money for treats, she gave us barley to trade for gum at the local store. When a merchant came through Kocho selling a dress we couldn’t afford, she badgered him into taking credit. “At least now our house is the first one they visit when they come to Kocho,” she joked if one of my brothers complained about the debt.

She had grown up poor, and she never wanted us to appear needy, but villagers wanted to help us and gave us small amounts of flour or couscous when they could. Once when I was very young, my mother was walking home from the mill with only a little flour in her bag and was stopped by her uncle Sulaiman. “I know you need help. Why don’t you ever come to me?” he asked.

At first, she shook her head. “We’re fine, uncle,” she said. “We have everything we need.” But Sulaiman insisted, “I have so much extra wheat, you have to take some,” and the next thing we knew, four big oilcans full of wheat had been delivered to our house, enough for us to make bread for two months. My mother was so ashamed that she needed help that when she told us what happened, her eyes filled with tears, and she vowed that she would make our lives better. Day by day she did. Her presence was a reassurance even with terrorists nearby. “God will protect the Yazidis,” she told us every day.

There are so many things that remind me of my mother. The color white. A good and perhaps inappropriate joke. A peacock, which Yazidis consider a holy symbol, and the short prayers I say in my head when I see a picture of the bird. For twenty-one years, my mother was at the center of each day. Every morning she woke up early to make bread, sitting on a low stool in front of the tandoor oven we kept in the courtyard, flattening balls of dough and slapping them against the sides of the oven until they were puffy and blistered, ready to be dipped into bowls of golden melted sheep’s butter.

Every morning for twenty-one years I woke up to the slow slap, slap, slap of the dough against the oven walls and the grassy smell of the butter, letting me know my mother was close by. Half asleep, I would join her in front of the tandoor, in the winter warming my hands by the fire, and talk to her about everything: school, weddings, fights with siblings. For years, I was convinced that snakes were hatching babies on the tin roof of our outdoor shower.

“I heard them!” I insisted to her, making slithering sounds. But she just smiled at me, her youngest child. “Nadia is too scared to shower alone!” my siblings mocked me, and even when a baby snake fell on my head, prompting us to finally rebuild the shower, I had to admit they were sort of right. I never wanted to be alone.

I would pick burned edges off the fresh bread, updating my life plan for her. No longer would I simply do hair in the salon I planned to open in our house. We had enough money now to afford the kohl and eye shadow popular in cities outside Kocho, so I would also do makeup after I got home from a day teaching history at the secondary school. My mother nodded her approval. “Just as long as you never leave me, Nadia,” she would say, wrapping the hot bread in fabric. “Of course,” I always replied. “I will never leave you.”

Chapter Three

Yazidis believe that before God made man, he created seven divine beings, often called angels, who were manifestations of himself. After forming the universe from the pieces of a broken pearl-like sphere, God sent his chief Angel, Tawusi Melek, to earth, where he took the form of a peacock and painted the world the bright colors of his feathers. The story goes that on earth, Tawusi Melek sees Adam, the first man, whom God has made immortal and perfect, and the Angel challenges God’s decision. If Adam is to reproduce, Tawusi Melek suggests, he can’t be immortal, and he can’t be perfect. He has to eat wheat, which God has forbidden him to do. God tells his Angel that the decision is his, putting the fate of the world in Tawusi Melek’s hands. Adam eats wheat, is expelled from paradise, and the second generation of Yazidis is born into the world.

Proving his worthiness to God, the Peacock Angel became God’s connection to earth and man’s link to the heavens. When we pray, we often pray to Tawusi Melek, and our New Year celebrates the day he descended to earth. Colorful images of the peacock decorate many Yazidi houses, to remind us that it is because of his divine wisdom that we exist at all. Yazidis love Tawusi Melek for his unending devotion to God and because he connects us to our one God. But Muslim Iraqis, for reasons that have no real roots in our stories, scorn the Peacock Angel and slander us for praying to him.

It hurts to say it, and Yazidis aren’t even supposed to utter the words, but many people in Iraq hear the story of the Peacock Angel and call us devil worshippers. Tawusi Melek, they say, is God’s chief Angel, like Iblis, the devil figure of the Koran. They claim that our Angel defied Adam and therefore God. Some cite texts, usually written by outside scholars in the early twentieth century who were unfamiliar with the Yazidi oral tradition, that say that Tawusi Melek was sent to Hell for refusing to bow to Adam, which is not true. This is a misinterpretation, and it has had terrible consequences. The story we use to explain the core of our faith and everything we think of as good about the Yazidi religion is the same story others use to justify genocide against us.

This is the worst lie told about Yazidis, but it is not the only one. People say that Yazidism isn’t a “real” religion because we have no official book like the Bible or the Koran. Because some of us don’t shower on Wednesdays, the day that Tawusi Melek first came to earth, and our day of rest and prayer, they say we are dirty. Because we pray toward the sun, we are called pagans. Our belief in reincarnation, which helps us cope with death, is dismissed by Muslims because none of the Abrahamic faiths believe in it. Some Yazidis avoid certain foods, like lettuce, and are mocked for their strange habits. Others don’t wear blue because they see it as the color of Tawusi Melek and too holy for a human, and even that choice is ridiculed.

Growing up in Kocho, I didn’t know a lot about my own religion. Only a small part of the Yazidi population are born into the religious castes, the sheikhs and elders who teach all other Yazidis about the religion. I was a teenager before my family had enough money to take me to Lalish to be baptized, and it was not possible for me to make that trip regularly enough to learn from the sheikhs who lived there. Attacks and persecution scattered us and decreased our numbers, making it even harder for our stories to be spread orally, as they are supposed to be. Still, we were happy that our religious leaders guarded Yazidism; it was clear that in the wrong hands, our religion could be easily used against us.

There are certain things all Yazidis are taught at a young age. I knew about the Yazidi holidays, although more about how we celebrate them than about the theology behind them. I knew that on Yazidi New Year, we color eggs, visit the graves of family, and light candles in our temples. I knew that October was the best month to go to Lalish, a holy valley in the Sheikhan district where the Baba Sheikh, our most important spiritual leader, and Baba Chawish, the custodian of the shrines there, greet pilgrims. In December we fast for three days to atone for our sins. Marriage outside the faith is not allowed; nor is conversion. We were taught about the seventy-three past firmans against Yazidis, and these stories of persecution were so intertwined with who we were that they might as well have been holy stories. I knew that the religion lived in the men and women who had been born to preserve it, and that I was one of them.

My mother taught us how to pray, toward the sun in the morning, Lalish during the day, and the moon at night. There are rules, but most are flexible. Prayer is meant to be a personal expression, not a chore or an empty ritual. You can pray silently by yourself or out loud, and you can pray alone or in a group, as long as everyone in that group is also Yazidi. Prayers are accompanied by a few gestures, like kissing the red and white bracelet that many Yazidi women and men wear around their wrist or, for a man, kissing the collar of his traditional white undershirt.

Most Yazidis I grew up with prayed three times a day, and prayers can be made anywhere. More often than I’ve prayed in temples, I’ve prayed in the fields, on our rooftop, even in the kitchen, helping my mother cook. After reciting a few standard lines in praise of God and Tawusi Melek, you can say anything you want. “Tell Tawusi Melek what is bothering you,” my mother told us, demonstrating the gestures. “If you are worried about someone you love, tell him that, or if you are scared of something. These are the things that Tawusi Melek can help you with.” I used to pray for my own future, to finish school and open my salon, and the futures of my siblings and my mother. Now I pray for the survival of my religion and my people.

Yazidis lived like this for a long time, proud of our religion and content to be removed from other communities. We had no ambition for more land or power, and nothing in the religion commands us to conquer non-Yazidis and spread our faith. No one can convert to Yazidism anyway. But during my childhood, our community was changing. Villagers bought televisions, first settling for state-run TV before satellite dishes allowed us to watch Turkish soap operas and Kurdish news. We bought our first electric clothes washer, which seemed almost like magic, although my mother still hand-washed her traditional white veils and dresses. Many Yazidis emigrated to the United States, Germany, or Canada, creating connections to the West. And of course, my generation was able to do something our parents hadn’t even dreamed of. We went to school.

Kocho’s first school was built in the 1970s, under Saddam. It went only through the fifth grade, and the lessons were in Arabic, not Kurdish, and were deeply nationalistic. The state curriculum was clear about who was important in Iraq and what religion they followed. Yazidis didn’t exist in the Iraqi history books I read in school, and Kurds were depicted as threats against the state. I read the history of Iraq as it unfolded in a sequence of battles, pitting Arab Iraqi soldiers against people who would take their country away from them. It was a bloody history, meant to make us proud of our country and the strong leaders who had kicked out the British colonists and overthrown the king, but it had the opposite effect on me. I later thought that those books must be one reason why our neighbors joined ISIS or did nothing while the terrorists attacked Yazidis.

No one who had been through an Iraqi school would think that we deserved to have our religion protected, or that there was anything bad or even strange about endless war. We had been taught about violence since our very first day of school.

As a child, my country bewildered me. It could seem like its own planet, made up of many different lands, where decades of sanctions, war, bad politics, and occupation pulled neighbors apart. In the far north of Iraq were Kurds, who longed for independence. The south was home mostly to Shiite Muslims, the country’s religious and now political majority. And lodged in the middle were Sunni Arabs, who, with Saddam Hussein as president, once dominated the state they now fight against.

That’s the simple map, one with three solid color-coded stripes painted more or less horizontally across the country. It leaves out Yazidis or labels them as “other.” The reality of Iraq is harder to illustrate and can be overwhelming even for people who were born here. When I was growing up, the villagers in Kocho didn’t talk a lot about politics. We were concerned with the cycle of the crops, who was getting married, whether a sheep was producing milk, the kind of things that anyone from a small rural town will understand. The central government, apart from campaigns to recruit Yazidis to fight in their wars and to join the Baath Party, seemed just as uninterested in us. But we did think a lot about what it meant to be a minority in Iraq, among all the other groups in that “other” category with Yazidis that, if included on the map, would swirl those three horizontal stripes into colorful marble.

To the northeast of Kocho, a line of dots near the southern edge of Iraqi Kurdistan shows the places where Turkmens, both Shiite and Sunni Muslim, live. Christians, among them Assyrians, Chaldeans, and Armenians, have many communities scattered throughout the country, especially in the Nineveh Plain. Elsewhere, flecks indicate the homes of small groups like Kaka’i, Shabak, Roma, and Mandaeans, not to mention Africans and Marsh Arabs. I have heard that somewhere near Baghdad there is still a tiny community of Iraqi Jews. Religion blends into ethnicity. Most Kurds, for instance, are Sunni Muslim, but for them, their Kurdish identity comes first. Many Yazidis consider Yazidism both an ethnic and a religious identity. Most Iraqi Arabs are Shiite or Sunni Muslims, and that division has caused a lot of fighting over the years. Few of these details appeared in our Iraqi history books.

To get from my house to the school, I had to walk along the dusty road that ringed the edge of the town, past Bashar’s house, whose father was killed by Al Qaeda; past the house I was born in, where my father and Sara still lived; and finally past my friend Walaa’s house. Walaa was beautiful, with a pale, round face, and her quiet demeanor balanced my rowdiness. Every morning she would run out to join me on my walk to the school. It was worse to walk alone. Many of the families kept sheepdogs in their yards, and the enormous animals would stand in the gardens, barking and snarling at whoever passed by. If the gate was open, the dogs lunged after us, snapping their jaws. They weren’t pets; they were big and dangerous, and Walaa and I would sprint away from them, arriving at school panting and sweating. Only my father’s dog, who knew me, left us alone.

Our school was a dull structure made of sand-colored concrete, decorated with faded posters and surrounded by a low wall and a small, dry schoolyard garden. No matter what it looked like, it felt like a miracle to be able to go and study and meet friends. In the school garden, Walaa, Kathrine, and I would play a game with a few of the other girls called bin akhy, which in Kurdish means “in the dirt.” All at once we would each hide something, a marble, a coin, even just a soda cap, in the ground; then we would run around like crazy people, digging holes in the garden until the teacher yelled at us, caking our fingernails with dirt that was sure to upset our mothers. You kept whatever you found, which almost always ended with tears. It was an old game; even my mother remembered playing it.

History, in spite of all the gaps and injustices in the lessons, was my favorite subject and the one I excelled at. English was my worst. I tried hard to be a good student, knowing that while I studied, my siblings worked on our farm. My mother was too poor to buy me a backpack like most of the other students carried, but I wouldn’t complain. I didn’t like to ask her for things.

When she couldn’t pay the taxi fee to send me to a secondary school a few villages away while ours was being built, I started working on the farm again, and waited and prayed for the school to be finished soon. There was no point in complaining; the money wouldn’t just appear, and I was far from the only kid in Kocho whose parents couldn’t afford to send them away.

After Saddam invaded Kuwait in 1990, the United Nations put sanctions on Iraq, hoping that they would limit the president’s power. While I was growing up, I didn’t know why the sanctions existed. The only people who talked about Saddam in my house were my brothers Massoud and Hezni, and that was just to shush anyone who complained during televised speeches or rolled their eyes at the propaganda on state TV. Saddam had tried to get loyalty from Yazidis so that they would side with him against the Kurds and fight in his wars, but he did so by demanding that we join his Baath Party and call ourselves Arab, not Yazidi.

Sometimes all that was on TV was Saddam himself, seated behind a desk smoking and telling stories about Iran, with a mustached guard beside him, going on about battles and his own brilliance. “What is he talking about?” we would ask one another, and everyone shrugged. There was no mention of Yazidis in the constitution, and any sign of rebellion was quickly punished. Sometimes I felt like laughing at what I saw on TV, the dictator in his funny hat, but my brothers cautioned me not to. “They are watching us,” Massoud said. “Be careful what you say.” Saddam’s enormous intelligence ministry had eyes and ears everywhere.

I knew during that time that it was ordinary Iraqis, not the political elite and certainly not Saddam himself, who suffered the most under the sanctions. Our hospitals and markets collapsed. Medicine became more expensive, and flour was cut with gypsum, which is more often used to make cement. The deterioration was most clear to me in the schools. Once, Iraq’s education system had attracted students from all over the Middle East, but under the sanctions it crumbled. Teachers’ salaries were reduced to nothing, and so teachers became hard to find, even though nearly 50 percent of Iraqi men were unemployed. The few teachers who came to Kocho when I started, Arab Muslims who lived in the school, joining the Yazidi teachers, were heroes as far as I was concerned, and I worked hard to impress them.

When Saddam was in power, school had one obvious purpose: by offering us a state education, he hoped to take away our identity as Yazidis. This was clear in every lesson and every textbook that made no mention of us, our families, our religion, or the firmans against us. Although most Yazidis grew up speaking Kurdish, our lessons were in Arabic. Kurdish was the language of rebellion, and Kurdish spoken by Yazidis could be seen as even more threatening to the state. Still, I eagerly went to school every day that I could, and I learned Arabic quickly. I didn’t feel like I was submitting to Saddam or betraying Yazidis by learning Arabic or studying the incomplete Iraqi history; I felt empowered and smart. I would still speak Kurdish at home and pray in Kurdish. When I wrote notes to Walaa or Kathrine, my two best friends, they would be in Kurdish, and I would never call myself anything other than Yazidi. I could tell that no matter what we were learning, going to school was important. With all the children in Kocho getting an education, our connections to our country and the outside world were already changing, and our society was opening up.

Young Yazidis loved our religion but also wanted to be part of the world, and when we grew up into adults, I was sure we would become teachers ourselves, writing Yazidis into the history lessons or even running for parliament and fighting for Yazidi rights in Baghdad. I had a feeling back then that Saddam’s plan to make us disappear would backfire.

Chapter Four


In 2003, a few months after my father died, the Americans invaded Baghdad. We didn’t have satellite television to watch the battle unfold, or cell phones connecting us to the rest of the country, and so we learned slowly, over time, how quickly Saddam fell. Coalition forces flew noisily over Kocho on the way to the capital, jerking us from sleep; it was the first time I had ever seen an airplane. We had no idea at the time just how long the war would go on and how much of an impact it would have on Iraq, but in the simplest terms, we hoped that after Saddam, it would be easier to buy cooking gas.

What I remember most from those early months after the invasion was the loss of my father and little else. In Yazidi culture, when someone dies, particularly if that death is sudden and comes too soon, mourning lasts for a long time and sweeps up the entire village. Neighbors retreat from normal life along with the family and friends of the dead. Grief takes over every house and shop and spreads through the streets, as though everyone has been made sick on the same batch of sour milk. Weddings are canceled, holiday celebrations are moved indoors, and women switch out their white clothes for black. We treat happiness like a thief we have to guard against, knowing how easily it could wipe away the memory of our lost loved ones or leave us exposed in a moment of joy when we should be sad, so we limit our distractions. Televisions and radios are kept off, no matter what might be happening in Baghdad.

A few years before he died, my father took Kathrine and me to Mount Sinjar to celebrate the Yazidi New Year. It was my last time with him at the mountain. Our New Year is in April, just as the hills in northern Iraq glow with a light green fuzz and the sharp cold eases into a pleasant cool, but before the summer heat sneaks up on you like a speeding bus. April is the month that holds the promise of a big profitable harvest and leads us into months spent outdoors, sleeping on rooftops, freed from our cold, overcrowded houses. Yazidis are connected to nature. It feeds us and shelters us, and when we die, our bodies become the earth. Our New Year reminds us of this.

On the New Year we visited whomever in the family had been working as shepherds that year, driving our sheep closer to the mountain and walking them from field to field in order to keep them fed. Parts of the job were fun. Shepherds slept outside underneath handwoven blankets and lived simply, with lots of time to think and little to worry about. But it was also grueling work, far from home and family, and while they grew homesick, we missed them back in Kocho. The year my mother left to take care of the sheep, I was in middle school, and I was so distraught that I failed every one of my classes. “I am blind without you,” I told her when she returned.

That last New Year with my father, Kathrine and I rode in the back of the truck while my father and Elias sat in front, watching us in the rearview mirror to make sure we didn’t do anything reckless. The landscape whipped by, a blur of wet spring grass and yellow wheat. We held hands and gossiped, concocting overblown versions of the day’s events that we would later use to taunt the kids who had to wait at home. As far as they were concerned, it would be the most fun we ever had, away from the fields and school and work. Kathrine and I would nearly bounce over the side of the truck as it raced down the road, and the lamb tied up in the back near us was the biggest lamb we had ever seen. “We ate so much candy,” we would tell them back home, watching for the envy in their faces. “We danced all night, it was light outside by the time we went to sleep. You should have seen it.”

The true story was only slightly less exciting. My father could hardly say no to the candy we longed for, and, at the base of the mountain, the reunion with the shepherds was always joyful. The lamb, which had in fact ridden in the back of the truck along with us and was then slaughtered by my father and cooked by the women, was tender and delicious, and we all danced Yazidi dances, holding hands and spinning in a wide circle. After the best parts of the lamb were eaten and the music turned off, we slept in tents surrounded by low fences made of reeds to keep out the wind. When the weather was mild, we took down those fences and slept in the open air. It was a simple, hidden life. All you had to worry about were the things and the people around you, and they were close enough to touch.

I don’t know how my father would have felt about the Americans invading Iraq and taking Saddam out of power, but I wished he had lived long enough to see Iraq change. Kurds welcomed the U.S. soldiers, helping them enter Iraq, and they were ecstatic at the idea of Saddam being deposed. The dictator had targeted Kurds for decades, and in the late 1980s his air force had tried to exterminate them with chemical weapons in what he called the Anfal campaign. That genocide shaped the Kurds, who wanted to protect themselves from the government in Baghdad in any way they could. Because of Anfal, the Americans, British, and French established a no-fly zone over the Kurdish north, as well as the Shia areas in the south, and Kurds had been their willing allies ever since. To this day, Kurds call the 2003 U.S. invasion a “liberation,” and they consider it the beginning of their transformation from small vulnerable villages into big modern towns full of hotels and the offices of oil companies.

In general, Yazidis welcomed the Americans but were less certain than Kurds about what our lives would be like after Saddam. Sanctions had made our life hard, as they had for other Iraqis, and we knew that Saddam was a dictator who ruled Iraq with fear. We were poor, cut off from education, and made to do the most difficult, dangerous, and lowest-paying jobs in Iraq. But at the same time, with the Baathists in power, we in Kocho had been able to practice our religion, farm our land, and start families. We had close ties with Sunni Arab families, particularly the kiriv, whom we considered bonded to our families, and our isolation taught us to treasure these connections while our poverty told us to be practical above all else. Baghdad and the Kurdish capital, Erbil, seemed worlds away from Kocho. The only decision the rich, connected Kurds and Arabs made that mattered to us was the decision to leave us alone.

Still, the promises Americans made about work, freedom, and security quickly brought Yazidis fully to their side. The Americans trusted us because we didn’t have any reason to be loyal to anyone they considered an enemy, and many of our men became translators or took jobs with the Iraqi or American armies. Saddam was pushed into hiding, then found and hanged, and his Baathist institutions dismantled. Sunni Arabs, including those close to Kocho, lost authority in the country, and in the Yazidi parts of Sinjar, Sunni Arab policemen and politicians were replaced with Kurdish ones.

Sinjar is a disputed territory, claimed by both Baghdad and Kurdistan, strategically close to Mosul and Syria and potentially rich with natural gas. Like Kirkuk, another disputed territory in eastern Iraq, Kurdish political parties consider Sinjar to be part of their greater Kurdish homeland. According to them, without Sinjar, the Kurdish nation, if there ever is one, would be born incomplete. After 2003, with American support, and with the Sunni Arabs steadily losing wealth and power, Kurds who were aligned with the KDP happily came to fill the void in Sinjar. They established political offices and staffed those offices with party members. With the Sunni insurgency mounting, they manned checkpoints along our roads. They told us that Saddam was wrong to call us Arabs; we had always been Kurds.

In Kocho the changes after 2003 were huge. Within a couple of years, the Kurds started building a cell phone tower, and after school I would go with friends just outside the village to watch the giant metal structure grow out of our farmland like a skyscraper. “Finally Kocho will be connected to the rest of the world!” my brothers said, delighted, and soon enough, most of the men and some of the women had cell phones. Satellite dishes installed on the roofs of houses meant we were no longer limited to Syrian films and Iraqi state TV, and Saddam’s marches and speeches disappeared from our living room. My uncle was among the first to get a satellite dish, and as soon as he did, we all crowded into his sitting room to see what was on. My brothers looked for the news, particularly on Kurdish channels, and I became addicted to a Turkish soap opera where the characters constantly fell in and out of love.

We had resisted calling ourselves Arab, but being told that we were Kurdish was easier for some to accept. Many Yazidis feel close to a Kurdish identity; we share a language and ethnic heritage, and it was impossible to ignore the improvements in Sinjar after the Kurds came in, even if it had more to do with the United States than with Barzani.

Jobs in the military and security forces were suddenly open to Yazidis, and some of my brothers and cousins traveled to Erbil to work in the hotels and restaurants; a new one seemed to be built every day. They quickly filled with oil workers or tourists from other parts of Iraq looking for a cooler climate, reliable electricity, or a break from the violence plaguing the rest of the country. My brother Saoud worked construction jobs near Duhok, in the west of Kurdistan, operating a cement mixer. He would come home with stories of Kurds who, like Arabs, looked down on Yazidis. Still, we needed the money.

Khairy began working as a border guard, and soon afterward Hezni became a policeman in Sinjar City. Their salaries gave our family our first steady income, and we started to live what felt like real lives, thinking about the future and not just the next day. We bought our own land to farm and our own sheep to herd and didn’t have to work for landlords anymore. The paved roads outside Kocho made it much quicker to drive to the mountain. We picnicked in the fields near the village, eating plates of meat and chopped vegetables, the men drinking Turkish beer and then tea so sweet it made my lips pucker. Our weddings grew even more elaborate; women sometimes made two trips to Sinjar City for clothes, and men slaughtered more lambs, and if they were very well-off, a cow, to share with the guests.

Some Yazidis envisioned a future Sinjar with a strong local government that was still in Iraq, but others thought we would eventually be part of an independent Kurdistan. With the KDP office in Kocho and the peshmerga in Sinjar, I grew up thinking that was our destiny. We became more distant from our Sunni Arab neighbors. While travel to Kurdistan got easier, it became harder to get to the Sunni villages where insurgents, and the extremist theology that guided them, were gaining ground. Sunni Arabs, meanwhile, didn’t like the Kurdish presence in Sinjar. It reminded them of the power they had lost, and they said that with the Kurds in control, they didn’t feel welcome in Sinjar and could no longer visit Yazidi villages, even the ones where their kiriv lived. Kurdish peshmerga interrogated them at checkpoints that were once manned by Baathists, and many lost their salaries and jobs when the Americans came and dismantled Saddam’s institutions. Only recently they had been the richest and best-connected people in the country, but with a Shiite government supported by the occupying Americans in power, Sunni Arabs suddenly found themselves pushed aside. Isolated in their villages, they would soon decide to fight back. Within years that fight became fueled by a religious intolerance that made Yazidis, even though we had never had any power in Iraq, their target.

I didn’t know then that the Kurdish government was content to distance Yazidis from our Arab neighbors because it helped them in their campaign to take over Sinjar, or how disruptive the American occupation was for ordinary Sunnis. I was unaware that, while I went to school, an unnamed insurgency was paving the way for Al Qaeda, and eventually ISIS, to flourish in our neighboring villages. Sunni tribes across Iraq tried, and mostly failed, to rebel against the Shiite authority in Baghdad and the Americans. They became accustomed to violence and harsh rule, which went on for so long that many Sunnis my age and younger grew up knowing nothing but war and the fundamentalist interpretation of Islam that became part of that war.

ISIS built up slowly in those villages just beyond our borders, a spark that I didn’t notice until it became a bonfire. For a young Yazidi girl, life only got better after the Americans and the Kurds took over. Kocho was expanding, I was going to school, and we were gradually lifting ourselves out of poverty. A new constitution gave more power to the Kurds and demanded that minorities be part of the government. I knew that my country was at war, but it didn’t seem like it was our fight.

IN THE BEGINNING, American soldiers visited Kocho almost once a week to hand out food and supplies and talk to the village leaders. Did we need schools? Paved roads? Running water so that we no longer had to buy tanks off of trucks? The answer to all of it, of course, was yes. Ahmed Jasso invited the soldiers over for large, elaborate meals, and our men glowed with pride when the Americans said they felt so safe in Kocho they could lean their weapons against the walls and relax. “They know the Yazidis will protect them,” Ahmed Jasso said.

Kids ran to the American soldiers when they pulled into Kocho, their armored cars kicking up dust and drowning out the village noises with their loud motors. They gave us gum and candy and took photos of us smiling with the presents. We marveled at their crisp uniforms and the friendly, conversational way they approached us, so unlike the Iraqi soldiers before them. They raved to our parents about Kocho hospitality, how comfortable and clean our village was, and how well we understood that America had liberated us from Saddam. “Americans love the Yazidis,” they told us. “And Kocho especially. We feel at home here.” Even when their visits slowed to a trickle and then stopped completely, we held on to the American praise like a badge of honor.

In 2006, when I was thirteen, one of the American soldiers gave me a ring as a present. It was a simple band with a small red stone, the first piece of jewelry I’d ever owned. It instantly became my most valued possession. I wore it everywhere, to school, digging on the farm, at home watching my mother bake bread, even to sleep at night. After a year, it had become too small for my ring finger, and I moved it onto my pinky so I wouldn’t have to leave it at home. But it slid up and down on that finger, barely catching on my knuckle, and I worried about losing it. I glanced at it constantly to make sure it was still there, curling my hand into a fist to feel it pressing against my finger.

Then one day I was out with my siblings planting rows of onion seedlings when I looked down and noticed that the ring was gone. I already hated planting the onions; each one had to be laid carefully into the cold dirt, and even the seedlings made your fingers and hands stink. Now I was furious at the tiny plants, digging frantically through them, trying to find my present. My siblings, noticing my panic, asked me what had happened. “I lost the ring!” I told them, and they stopped working to help look. They knew how important it was to me.

We walked our entire field, searching in the dark dirt for a little glimpse of gold and red, but no matter how hard we looked and how much I cried, we couldn’t find the ring. When the sun started to set, we had no choice but to give up and go home for dinner. “Nadia, it’s no big deal,” Elias said as we walked home. “It’s just a little thing. You’ll have more jewelry in your life.” But I cried for days. I was sure that I would never have anything as nice again and I worried that the American soldier, if he ever came back, would be angry with me for losing his present.

A year later a miracle happened. Picking the new onions that had sprouted from those seedlings, Khairy saw a small gold band poking out of the dirt. “Nadia, your ring!” My brother beamed, presenting it to me, and I ran to him, grabbing it out of his hand and hugging him, my hero. When I tried to slip it on, though, I found that, no matter how hard I tried, the ring was now too small even for my pinky. Later my mother saw it lying on my dresser and urged me to sell it. “It doesn’t fit you anymore, Nadia,” she said. “There’s no point in keeping it if you can’t wear it.” For her, poverty was just one wrong move away. Because I always did what she said, I went to a jewelry seller in the Sinjar City bazaar, who bought the ring from me.

Afterward I felt heavy with guilt. The ring had been a gift, and it didn’t seem right for me to sell it. I worried what the soldier would say if he returned and asked about his present. Would he think that I had betrayed him? That I didn’t love the ring? The armored cars were already pulling up to Kocho much less frequently, fighting had grown worse in the rest of the country, and the Americans were stretched thin, and I hadn’t seen that particular soldier in months.

Some of my neighbors complained that the Americans had forgotten about us, and they worried that without contact with them, Yazidis would be unprotected. But I was relieved that I wouldn’t have to explain what happened to the ring. Maybe the soldier who gave it to me, even though he was kind, would be upset that I had sold his present to the jewelry merchant in Sinjar City. Coming from America, he might not understand what even that small amount of money meant to us.

Chapter Five






In November 2015, a year and three months after ISIS came to Kocho, I left Germany for Switzerland to speak to a United Nations forum on minority issues. It was the first time I would tell my story in front of a large audience. I had been up most of the night before with Nisreen, the activist who had organized the trip, thinking about what to say. I wanted to talk about everything, the children who died of dehydration fleeing ISIS, the families still stranded on the mountain, the thousands of women and children who remained in captivity, and what my brothers saw at the site of the massacre. I was only one of hundreds of thousands of Yazidi victims. My community was scattered, living as refugees inside and outside of Iraq, and Kocho was still occupied by ISIS. There was so much the world needed to hear about what was happening to Yazidis.

The first part of the journey was by train through the dark German woods. The trees passed by in a blur close to my window. I was frightened by the forest, which is so different from the valleys and fields of Sinjar, and glad that I was riding by it, not wandering between the trees. Still, it was beautiful, and I was starting to like my new home. Germans had welcomed us to their country; I heard stories of ordinary citizens greeting the trains and airplanes carrying fleeing Syrians and Iraqis. In Germany we were hopeful that we could become a part of society and not just live on the edge of it.

It was harder for Yazidis in other countries. Some refugees had arrived in places where it was clear they weren’t wanted, no matter what kind of horrors they were escaping. Other Yazidis were trapped in Iraq, desperate for the opportunity to leave, and that waiting was another kind of suffering. Some countries decided to keep refugees out altogether, which made me furious. There was no good reason to deny innocent people a safe place to live. I wanted to say all this to the UN that day.

I wanted to tell them that so much more needed to be done. We needed to establish a safe zone for religious minorities in Iraq; to prosecute ISIS, from the leaders down to the citizens who had supported their atrocities, for genocide and crimes against humanity; and to liberate all of Sinjar. Women and girls who escaped from ISIS needed help to rejoin and rebuild society, and their abuse needed to be added to the list of Islamic State war crimes. Yazidism should be taught in schools from Iraq to the United States, so that people understood the value of preserving an ancient religion and protecting the people who follow it, no matter how small the community. Yazidis, along with other religious and ethnic minorities, are what once made Iraq a great country.

They had only given me three minutes to talk, though, and Nisreen urged me to keep it simple. “Tell your own story,” she said, sipping tea in my apartment. That was a terrifying idea. I knew that if my story were to have any impact, I would have to be as honest as I could stand to be. I would have to tell the audience about Hajji Salman and the times he raped me, the terrifying night at the Mosul checkpoint, and all the abuse I witnessed. Deciding to be honest was one of the hardest decisions I have ever made, and also the most important.

I shook as I read my speech. As calmly as I could, I talked about how Kocho had been taken over and girls like me had been taken as sabaya. I told them about how I had been raped and beaten repeatedly and how I eventually escaped. I told them about my brothers who had been killed. They listened quietly, and afterward a Turkish woman came up to me. She was crying. “My brother Ali was killed,” she told me. “Our whole family is in shock because of it. I don’t know how someone can handle losing six brothers all at once.”

“It is very hard,” I said. “But there are families who lost even more than us.”

When I returned to Germany I told Nisreen that any time they needed me, I would go anywhere and do anything I could to help. I had no idea that soon I would partner with the Yazidi activists running Yazda, and begin a new life. I know now that I was born in the heart of the crimes committed against me.

AT FIRST OUR new lives in Germany felt insignificant compared to those of the people living through war in Iraq. Dimal and I moved into a small two-bedroom apartment with two of our cousins, decorating it with photos of the people we had lost or left behind. At night I slept beneath large color photos of my mother and Kathrine. We wore necklaces that spelled out the names of the dead and each day came together to weep for them and to pray to Tawusi Melek for the safe return of the missing. Every night I dreamed about Kocho, and every morning I woke up and remembered that Kocho, as I knew it, no longer existed. It’s a strange, hollow feeling. Longing for a lost place makes you feel like you have also disappeared. I have seen many beautiful countries in my travels as an activist, but there was nowhere I wanted to live more than Iraq.

We went to German classes and to the hospital to make sure we were healthy. Some of us tried the therapy sessions they offered, which were almost impossible to endure. We cooked our food and did the chores we had grown up doing, cleaning and baking bread, this time in a small portable metal oven that Dimal set up in the living room. But without the truly time-consuming tasks like milking sheep or farming, or the social lives that come with living in a small, tight-knit village or school, we had too many empty hours. When I first got to Germany, I begged Hezni all the time to let me come back, but he told me to give Germany a chance. He said I had to stay, that eventually I would have a life there, but I wasn’t sure I believed him.

Soon enough, I met Murad Ismael. Along with a group of Yazidis living around the world, including Hadi Pir, Ahmed Khudida, Abid Shamdeen, and Haider Elias, the former translator for the U.S. military who had stayed on the phone with my brother Jalo almost until the moment of his death, Murad had cofounded Yazda, a group fighting tirelessly for Yazidis. When I first met him I was still uncertain about what my new life would be like. I wanted to help, and to feel useful, but I didn’t know how. But when Murad told me about Yazda and the work they were doing, particularly helping to free and then advocate for women and girls who had been enslaved by ISIS, I could see my future more clearly.

As soon as these Yazidis heard that ISIS had come into Sinjar they left their normal lives to help us back in Iraq. Murad had been studying geophysics in Houston when the genocide started; others were teachers or social workers who dropped everything to help us. He told me about a sleepless two weeks spent in a small hotel room near Washington, D.C., where he and a group including Haider and Hadi spent every moment fielding calls from Yazidis in Iraq, trying to help them to safety.

Often, they succeeded. Sometimes they didn’t. They had tried to save Kocho, he told me. They had called everyone they could think of in Erbil and Baghdad. They made suggestions based on their time working with the American military (Murad and Hadi had also been translators during the occupation) and tracked ISIS on every road and through every village. When they failed to save us, they vowed to do whatever they could to help anyone who survived and to get us justice. They wore their sorrows on their bodies: Haider’s back aches constantly, and Murad’s face is lined with exhaustion. In spite of that, I wanted to be just like them. After I met Murad, I started to become the person I am today. Although the mourning never stopped, our lives in Germany began to feel significant again.

When I was with ISIS, I felt powerless. If I had possessed any strength at all when my mother was torn from me, I would have protected her. If I could have stopped the terrorists from selling me or raping me, I would have. When I think back to my own escape, the unlocked door, the quiet yard, Nasser and his family in the neighborhood full of Islamic State sympathizers, I shiver at how easily it could have gone wrong. I think there was a reason God helped me escape, and a reason I met the activists with Yazda, and I don’t take my freedom for granted.

The terrorists didn’t think that Yazidi girls would be able to leave them, or that we would have the courage to tell the world every detail of what they did to us. We defy them by not letting their crimes go unanswered. Every time I tell my story, I feel that I am taking some power away from the terrorists.

Since that first trip to Geneva, I have told my story to thousands of people, politicians and diplomats, filmmakers and journalists, and countless ordinary people who became interested in Iraq after ISIS took over. I have begged Sunni leaders to more strongly denounce ISIS publicly; they have so much power to stop the violence. I have worked alongside all the men and women with Yazda to help survivors like me who have to live every day with what we have been through, as well as to convince the world to recognize what happened to the Yazidis as a genocide and to bring ISIS to justice.

Other Yazidis have done the same with the same mission: to ease our suffering and keep what is left of our community alive. Our stories, as hard as they are to hear, have made a difference. Over the past few years, Canada has decided to let in more Yazidi refugees; the UN officially recognized what ISIS did to the Yazidis as a genocide; governments have begun discussing whether to establish a safe zone for religious minorities in Iraq; and most important, we have lawyers determined to help us. Justice is all Yazidis have now, and every Yazidi is part of the struggle.

Back in Iraq, Adkee, Hezni, Saoud, and Saeed fight in their own ways. They stayed in the camp; Adkee refused to go to Germany with the other women, and when I talk to them, I miss them so much I can barely stand it. Every day is a struggle for the Yazidis in the camps, and still they do whatever they can to help the whole community. They hold demonstrations against ISIS and petition the Kurds and Baghdad to do more. When a mass grave is uncovered or a girl dies trying to escape, it is the refugees in the camp who bear the burden of the news first and arrange the funeral. Each container home is full of people praying for loved ones to be returned to them.

Every Yazidi refugee tries to cope with the mental and physical trauma of what they have been through and works to keep our community intact. People who, just a few years ago, were farmers, students, merchants, and housewives have become religious scholars determined to spread knowledge about Yazidism, teachers working in the small container homes used as camp classrooms, and human rights activists like me. All we want is to keep our culture and religion alive and to bring ISIS to justice for their crimes. I am proud of all we have done as a community to fight back. I have always been proud to be Yazidi.

As lucky as I am to be safe in Germany, I can’t help but envy those who stayed behind in Iraq. My siblings are closer to home, eating the Iraqi food I miss so much and living next to people they know, not strangers. If they go to town, they can speak to shopkeepers and minivan drivers in Kurdish. When the peshmerga allow us into Solagh, they will be able to visit my mother’s grave. We call one another on the phone and leave messages all day. Hezni tells me about his work helping girls escape, and Adkee tells me about life in the camp. Most of the stories are bitter and sad, but sometimes my lively sister makes me laugh so hard that I roll off my couch. I ache for Iraq.

In late May 2017, I received news from the camp that Kocho had been liberated from ISIS. Saeed had been among the members of the Yazidi unit of the Hashd al-Shaabi, a group of Iraqi armed militias, who had gone in, and I was happy for him that he had gotten his wish and become a fighter. Kocho was not safe; there were still Islamic State militants there, fighting, and those who left had planted IEDs everywhere before they ran, but I was determined to go back. Hezni agreed, and I flew from Germany to Erbil and then traveled to the camp.

I didn’t know what it would feel like to see Kocho, the place where we were separated and where my brothers were killed. I was with some family, including Dimal, and Murad (by now, he and others from Yazda were like family), and when it was safe enough to go, we traveled as a group, taking a long route to avoid the fighting. The village was empty. The windows in the school had been broken and, inside, we saw what was left of a dead body. My house had been looted, even the wood had been stripped off the roof, and anything left behind was burned. The album of bridal photos was a pile of ashes. We cried so hard we fell onto the floor.

Still, in spite of the destruction, the moment I walked through my front door I knew it was my home. For a moment I felt the way I had before ISIS came, and when they told me it was time to leave I begged them to let me stay just an hour more. I vowed to myself that no matter what, when December comes and it is time for Yazidis to fast in order to draw closer to God and Tawusi Melek, who gave us all life, I will be in Kocho.

A LITTLE LESS than a year after giving that first speech in Geneva, and about a year before returning to Kocho, I went to New York with some members of Yazda, including Abid, Murad, Ahmed, Haider, Hadi, and Maher Ghanem, where the United Nations named me a Goodwill Ambassador for the Dignity of Survivors of Human Trafficking. Again, I would be expected to talk about what happened to me in front of a large group of people. It never gets easier to tell your story. Each time you speak it, you relive it. When I tell someone about the checkpoint where the men raped me, or the feeling of Hajji Salman’s whip across the blanket as I lay under it, or the darkening Mosul sky while I searched the neighborhood for some sign of help, I am transported back to those moments and all their terror. Other Yazidis are pulled back into these memories, too. Sometimes even the Yazda members who have listened to my story countless times weep when I tell it; it’s their story, too.

Still, I have become used to giving speeches, and large audiences no longer intimidate me. My story, told honestly and matter-of-factly, is the best weapon I have against terrorism, and I plan on using it until those terrorists are put on trial. There is still so much that needs to be done. World leaders and particularly Muslim religious leaders need to stand up and protect the oppressed.

I gave my brief address. When I finished telling my story, I continued to talk. I told them I wasn’t raised to give speeches. I told them that every Yazidi wants ISIS prosecuted for genocide, and that it was in their power to help protect vulnerable people all over the world. I told them that I wanted to look the men who raped me in the eye and see them brought to justice. More than anything else, I said, I want to be the last girl in the world with a story like mine.

It’s Time to Panic Now! – Fred Kaplan.

John Bolton’s appointment as national security adviser puts us on a path to war.


It’s time to push the panic button.

John Bolton’s appointment as national security adviser, a post that requires no Senate confirmation, puts the United States on a path to war. And it’s fair to say President Donald Trump wants us on that path.

After all, Trump gave Bolton the job after the two held several conversations (despite White House chief of staff John Kelly’s orders barring Bolton from the building). And there was this remark that Trump made after firing Rex Tillerson and nominating the more hawkish Mike Pompeo to take his place:

“We’re getting very close to having the Cabinet and other things I want.”

Bolton has repeatedly called for launching a first strike on North Korea, scuttling the nuclear arms deal with Iran, and then bombing that country too. He says and writes these things not as part of some clever “madman theory” to bring Kim Jong-un and the mullahs of Tehran to the bargaining table, but rather because he simply wants to destroy them and America’s other enemies too.

His agenda is not “peace through strength,” the motto of more conventional Republican hawks that Trump included in a tweet on Wednesday, but rather regime change through war. He is a neocon without the moral fervor of some who wear that label, i.e., he is keen to topple oppressive regimes not in order to spread democracy but rather to expand American power.

In the early days of the George W. Bush administration, Vice President Dick Cheney finagled Bolton a job as undersecretary of state for arms control, an inside joke, since Bolton has never read an arms-control treaty that he liked. But his real assignment was to serve as Cheney’s spy in Foggy Bottom, monitoring and, when possible, obstructing any attempts at peaceful diplomacy mounted by Secretary of State Colin Powell.

When Powell got the boot, Cheney wanted to make Bolton deputy secretary of state, replacing Richard Armitage, who resigned along with his best friend Powell. But Powell’s replacement, Condoleezza Rice, who had been Bush’s national security adviser, blocked the move, fully aware of Bolton’s obstructionist ideology.

As a compromise, Bush nominated Bolton to be United Nations ambassador, but that move proved unbearable to even the Republican-controlled Senate at the time. It was one thing to be critical of the U.N. (it’s a body deserving of criticism), but Bolton opposed its very existence. “There is no such thing as the United Nations,” he once said in a speech, adding, “If the U.N. Secretariat building in New York lost 10 stories, it wouldn’t make a lot of difference.”

More than that, he was hostile to the idea of international law, having once declared, “It is a big mistake for us to grant any validity to international law even when it may seem in our short-term interest to do so, because over the long term, the goal of those who think that international law really means anything are those who want to constrain the United States.”

These might be quaint notions for some eccentric midlevel aide to espouse, but the United Nations is founded on international law, Security Council resolutions are drafted to enforce international law, and, as even Bush was beginning to realize by the start of his second term, around the time of Bolton’s nomination, some of those resolutions were proving useful for expressing, and sometimes enforcing, U.S. national security interests. How could someone with these views serve as the U.S. ambassador to the U.N.?

In his confirmation hearings before the Senate Foreign Relations Committee, Bolton put on a dreadful show, grumbling and scowling through his walrus mustache. Finally, in a tie vote, the committee sent Bolton’s nomination to the full Senate “without recommendation.” Properly fearing that this foretold a rejection on the floor, Bush gave Bolton the job as a “recess appointment” after Congress went on holiday. But the law allowing this evasion gave the Senate a chance to take a vote 18 months later. In the second round of hearings, Bolton behaved even more obnoxiously than in the first. When one Republican senator asked him whether his year and a half in the U.N. had altered his ideas about the place, Bolton, rather than seizing the chance to mollify skeptics, replied, “Not really.” The head counters in the White House withdrew the nomination, and Bolton headed back to neocon central at the American Enterprise Institute.

During Trump’s presidential transition, Bolton made the short list of candidates for deputy secretary of state, but Tillerson, who would soon get the nod for secretary, expressed misgivings about working with the guy. (Trump might have recalled that conversation earlier this month, when he decided to fire Tillerson.) After Michael Flynn flamed out as national security adviser, Bolton was also on the short list to replace him. Gen. H.R. McMaster got the nod, but Trump publicly said he liked Bolton and that he too would soon be working for the White House “in some capacity.” And now, here he is.

In his one year and one month on the job, McMaster, who is still an active-duty Army three-star general, proved a deep disappointment to his friends and erstwhile admirers. He’d made his reputation 20 years ago, as the author of a dissertation turned book, Dereliction of Duty, which lambasted the top generals of the Vietnam era for failing to give their honest military advice to President Lyndon Johnson. And now, in his only tour as a policy adviser in Washington, McMaster has wrecked that reputation, committing his own derelictions by pandering to Trump’s proclivities and tolerating his falsehoods.

But at least McMaster assembled, and often listened to, a professional staff at the National Security Council and insisted on ousting amateur ideologues, several of them acolytes of Flynn.

Bolton is not likely to put up with a professional staff, and the flood of White House exiles will soon intensify.

One subject of discussion at Bolton’s Senate hearings, back in 2005, was his intolerance of any views that differed from his own. He displayed this trait most harshly when, as undersecretary of state, he tried to fire two intelligence analysts who challenged his (erroneous) view that Cuba was developing biological weapons and supplying the weapons to rogue regimes.

Nor is Bolton at all suited to perform one of a national security adviser’s main responsibilities, assembling the Cabinet secretaries to debate various options in foreign and military policy, mediating their differences, and either hammering out a compromise or presenting the choices to the president.

Then again, there may now no longer be many differences to mediate in this administration. The last of the grown-ups is Secretary of Defense James Mattis, the retired Marine four-star general, who got that job mainly because Trump had heard his nickname was “mad dog.” He didn’t know that Mattis regularly consulted a personal library of some 7,000 volumes on history and strategy, that (like most generals) he’s not too keen to go to war unless he really has to, and that (also like most generals) he takes the Geneva Conventions seriously and opposes torture.

In recent weeks, Trump was said to be tiring of aides who kept telling him no. He might soon tire of Bolton, who, whatever else he is, can’t be pegged as a yes man. But in the short term, Bolton may be just the man to excite Trump’s darker instincts, to actualize the frustrated hitman who raged about pelting Kim Jong-un with “fire and fury like the world has never seen” or fomenting “the total collapse of the Iranian regime,” which he somehow believes was about to happen, if only Obama hadn’t signed the nuclear deal and lifted sanctions.

With Tillerson out, Bolton in, and Pompeo waiting in the wings for confirmation, Trump is feeling his oats, coming into his own; at last, Trump is free to be Trump. Finding out just who that is may make the rest of us duck and cover.

THE DARK SIDE OF MEDITATION – Dr Miguel Farias and Dr Catherine Wikholm.

Aaron Alexis was in search of something.

He started attending a Buddhist temple and learned to meditate; he hoped it would bring him wisdom and peace. ‘I want to be a Buddhist monk,’ he once told a friend from the temple. His friend advised him to keep studying. Aaron did. He learned Thai and kept going to the temple, chanting and meditating. But other things got in the way.

On 16 September 2013 Aaron drove into Washington’s Navy Yard. It was eight o’clock in the morning. He’d been working there not long before, and security let him in. He walked away from the car with a large bag and briefly disappeared into a toilet. Minutes later the security cameras caught him holding a shotgun.

Aaron walked briskly and hid behind a wall for a few seconds before advancing through the building. Within 30 minutes 12 people were dead. He killed randomly, first using his shotgun and then, after running out of ammunition, using the handgun belonging to a guard he’d just killed. He died after an exchange of gunfire with the police.

It took only 24 hours for a journalist to notice that Aaron had been a Buddhist, prompting her to write an article that asked, ‘Can there be a less positive side to meditation?’ Western Buddhists immediately reacted: ‘This man represented the Dharma teachings no more than 9/11 terrorists represented the teachings of Islam,’ wrote one. Others explained that he had a history of mental health problems. However, some noted that Buddhism, like other religions, has a history that links it to violence.

And meditation, for all its de-stressing and self-development potential, can take you deeper into the darkest recesses of your own mind than you may have wished for.

This chapter asks difficult questions that are seldom given a voice. They are questions I have wrestled with, both as a psychologist and in my own spiritual practice. Do I have unrealistic positive expectations about what meditation can do? Can it also have adverse effects, finding its way to non-spiritual, even non-peaceful ends?

When something goes wrong, the way it did with Aaron Alexis, we can’t look the other way; rationalizing that he wasn’t a true Buddhist or meditator isn’t enough. We need relentlessly to examine the less familiar, hidden facets of meditation, a technique that for centuries has been used to cultivate wisdom, clarity of mind, and selflessness. We need to ask ourselves if meditation has a dark side.


I’d come across the idea that, without the guidance of an expert teacher, meditation can have adverse effects, but I’d thought that this was a metaphor for the difficulties we might encounter as we venture deep into ourselves.

I hadn’t considered that the adverse effects might be literal ones. Then, one day I heard a first-hand account that opened my eyes to my naivety.

At the time I was teaching an open course on the psychology of spirituality. There were a few twenty-year-olds, but the majority of students were in their late fifties and early sixties and represented a combination of retired lawyers, Anglican priests, psychiatrists, and three or four yoga and meditation teachers. Louise was one of them.

In her late fifties and lean with dark, short hair, Louise was a quiet member of the group, who in general spoke up only when she felt she had something important to say. She had taught yoga for more than twenty years, stopping only when something unexpected happened that changed her life for ever. During one meditation retreat (she’d been on many), her sense of self changed dramatically. ‘Good,’ she thought initially, ‘it must be part of the dissolving experience.’ But she couldn’t help feeling anxious and frightened.

‘Don’t worry, just keep meditating and it will go away,’ the meditation teacher told her.

It didn’t. She couldn’t get back to her usual self. It felt like something was messing with her sense of identity, how she felt in her body, the very way she looked at the world and at other people. The last day of the retreat was excruciating: her body shook, she cried and panicked. The following day, back at home, she was in pieces: her body was numb, and she didn’t want to get out of bed. Louise’s husband took her to the GP and, within hours, she was being seen by a psychiatrist.

She spent the next 15 years being treated for psychotic depression; for part of this time, she had to be hospitalized.

Louise had chosen to give a presentation on the psychology of spiritual experience, as part of her assessment on the course. She talked lucidly about her illness and its possible origins, including a genetic predisposition to mental health problems. She explained that she had gradually taken up yoga practice again, but had never returned to meditation retreats. ‘I had to have electro-convulsive therapy,’ she told the class. That means strong electric shocks going through your skull, a treatment that is not only painful, but leads to memory loss in the short term.

I was stunned. I couldn’t know for sure; perhaps her mental illness could have developed in some other way, but, as it happened, those three days of intense meditation are likely to have triggered it. I mentioned this to a friend who, in the 1970s, had taught meditation to 13- to 14-year-olds.

‘Oh yes,’ he said, ‘I once had two boys who were becoming quite emotionally disturbed; the meditation practice was unleashing emotional material that they couldn’t deal with.’ ‘So what happened?’

‘I told them to stop doing it,’ my friend told me. ‘I had twenty other children to look after. And as soon as they did, they were fine.’

Two in twenty: that’s a 10 per cent probability that meditation could have an adverse effect on young adolescents. But this was anecdotal evidence taken from a single meditation class that happened forty years earlier; indeed, if the hundreds of scientific articles I’d read on the effects of meditation were anything to go by, there seemed to be only good news. So, are cases like Louise’s and the boys in my friend’s class the exception?

I looked through the medical and psychological databases in search of articles on the possible adverse effects of meditation. There were some, most of them case studies.

One of the most striking, written in 2001 by a British psychiatrist, told the story of a 25-year-old woman who, like Louise, had a serious mental health problem following meditation retreats. The first time she was admitted to hospital her symptoms included: ‘thought disorder with flight of ideas, her mood was elevated and there were grandiose delusions including the belief that she had some special mission for the world: she had to offer “undying, unconditional love” to everyone. She had no [critical] insight.’

This woman, referred to as Miss X, was diagnosed with mania. After six weeks of medication her symptoms were controlled. A psychiatrist saw her regularly for two years and she started twice-weekly psychotherapy. Then, she took part in a Zen Buddhist retreat and was hospitalized again. She couldn’t sleep for five days and, according to a psychiatrist who saw her, displayed a number of unrestrained behaviours: she was irritable, sexually disinhibited, restless, made repeated praying gestures, and attacked a member of staff. Miss X had to be transferred to an intensive psychiatric care unit for three days.

Interesting, I thought, but I was still unconvinced. All these examples could be individuals with a strong predisposition to mental illness. As I looked further into the scientific literature, though, I found other kinds of evidence.

In 1992 David Shapiro, a professor of psychiatry and human behaviour at the University of California, Irvine, published an article about the effects of meditation retreats. Shapiro examined 27 people with different levels of meditation experience. He found that 63 per cent of them had at least one negative effect and 7 per cent suffered profoundly adverse effects.

The negative effects included anxiety, panic, depression, increased negativity, pain, feeling spaced out, confusion and disorientation.

Perhaps only the least experienced felt these negative experiences. Several days of meditation might overwhelm those who were relatively new to the practice. Was that the case? The answer was no. When Shapiro divided the larger group into those with lesser and greater experience, there were no differences: all the meditators had an equal number of adverse experiences. An earlier study had arrived at a similar, but even more surprising conclusion.

Not only did those with more meditation experience report negative symptoms (particularly anxiety, confusion and restlessness), they also had considerably more adverse effects than the beginners.

Amid the small pile of articles on the adverse effects of meditation, I was surprised to find two by Arnold Lazarus and Albert Ellis, co-founders of CBT.

In a 1976 article Lazarus reported that a few of his own patients had had serious disturbances after meditating; these included depression, ongoing tension and a serious suicide attempt. Lazarus strongly criticized the idea that ‘meditation is for everyone’. Instead, he argued that ‘one man’s meat is another man’s poison’, and that researchers and therapists need to know both the benefits and the risks of meditation for different kinds of people.

Albert Ellis shared Lazarus’s misgivings about meditation. He believed it could be used as a therapeutic tool, but not with everyone. ‘A few of my own clients,’ he writes, ‘have gone into dissociative semi-trance states and upset themselves considerably by meditating.’ Overall, he believed meditation could be used only in moderation as a ‘thought-distracting’ or ‘relaxing’ technique:

‘Like tranquilizers, it may have both good and bad effects, especially the harmful result of encouraging people to look away from some of their central problems, and to refrain from actually disputing and surrendering their disturbance-creating beliefs. It may also be perniciously used to enhance self-rating or “ego-strength”, so that people end up by believing “I am a great meditator and therefore am a good and noble person!” I therefore recommend meditation… as a palliative, a distraction method, and advise most of my clients to use it with discretion and not to take it too seriously or view it as a generally therapeutic method.’


I felt like an archaeologist digging up long-forgotten artefacts. How could this literature on the adverse effects of meditation, including short but sharp comments from founding cognitive psychotherapists, be completely absent from the recent research on meditation? It was conceivable that clinicians and researchers simply did not report the negative consequences of meditation in their articles, but it was more likely that the meditators themselves did not talk about it.

Many who encounter difficulties during or after their practice may feel they’re doing something wrong, or even that their distress is part of the process and will eventually pass. That was the case with Miss X, who had two manic episodes following meditation retreats, but eventually refused continuous treatment, explaining that her mania was nothing more than a release of blocked energy from years of not dealing with her emotions adequately. Many meditators thinking like Miss X could, to a certain extent, explain why negative reports didn’t make it into scientific journals: adverse effects could be regarded as mere stones on the road to peace or spiritual attainment.

I was thinking about this when Jo Lal, our publisher, emailed to ask how the book was going. I told her what I had found.

‘Have you heard of Dr Russell Razzaque?’ Jo asked. I hadn’t.

‘We’re about to publish his book Breaking Down is Waking Up. You may find it helpful.’

Razzaque is a London-based psychiatrist whose own Buddhist meditation practice has led him to re-evaluate the meaning of mental illness. He argues that many of the psychotic experiences his patients describe resemble mystical experiences of ego-dissolution that are known to occur after years of meditation practice. Razzaque suggests that mental breakdowns are part of a spiritual growth process, in which we learn to see the self for what it is: an illusion. He describes his own mystical experience in the book:

‘I found myself descending into a deeply meditative state; I somehow travelled through the sensations of my body and the thoughts in my mind to a space of sheer nothingness that felt, at the same time, like it was somehow the womb of everything. I felt a sense of pure power and profound energy as I came upon a sudden brilliant light and a profound feeling of all-pervading joy. I was everything and nothing at the same time.’

In the days that followed, however, life wasn’t so blissful. Razzaque found that he couldn’t contain his joyful experience and there was something deep within pulling him in the opposite direction. ‘I could sense the powerful currents in my whirling mind: the self-doubts and the dents in self-esteem sucking me towards a ball of depression, the anxieties and fears threatening to balloon into full-blown panic, obsessions or defensive compulsions, and the speed of it all that risked pushing me into a manic state.’

Razzaque managed to stay grounded and, as a result of his difficult experience, felt greater sympathy towards his psychotic patients. Wait a minute, I thought; here we have a trained psychiatrist who can identify his symptoms and fight them off, but the majority of people who meditate know next to nothing about psychiatric diagnosis, nor are they familiar with seeing patients experiencing unusual states of mind. Can these difficult emotional experiences arising from meditation really be a sign of spiritual awakening?

Others before Razzaque have trodden a similar path and pointed out similarities between the symptoms of psychotic people and spiritual experiences. In the late 1980s Stan Grof and his wife edited a book on spiritual emergencies. They caution clinical psychologists and psychiatrists to be aware of and respect what on the surface may look like mental illness but is, in fact, the expression of spiritual experiences that are having a profound, though momentarily stressful, effect. The Grofs mention shamanism and near-death experiences, as well as meditation and other spiritual practices, in association with spiritual emergencies.

Their pioneering work came to fruition when a new category, that of Religious or Spiritual Problem, was added to the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), used by psychiatrists worldwide. This category acknowledges that some mental health problems, such as depersonalization, may arise as a temporary result of spiritual practices.

If you have ever felt a strong state of depersonalization, you won’t forget it easily. The Cambridge Depersonalisation Scale, a questionnaire that measures its symptoms, includes unusual experiences such as: ‘Part of my body feels as if it didn’t belong to me’ or ‘I have the feeling of not having any thoughts at all, so that when I speak it feels as if my words were being uttered by an automaton.’ But other statements, such as ‘I feel so detached from my thoughts that they seem to have a life of their own’, might be quite familiar to mindfulness meditators.

With a category of religious and spiritual problems, clinicians are potentially able to recognize which are genuine manic, depressive or psychotic episodes and which are the non-pathological, although sometimes difficult, effects of meditation. But it’s far from a straightforward distinction. David Lukoff, the clinical psychologist who co-authored this new category, admits that his interest in the topic arose in 1971 when he spent two months experiencing his own spiritual crisis, fully convinced that he was the reincarnation of Buddha and Christ with a mission to save the world.

But how many clinicians worldwide, I wonder, even those with a spiritual faith, would not deem someone whose life was dominated by delusion for two entire months to be seriously mentally ill? The problem centres on how we define mental illness as distinct from a spiritual emergency; clearly, not all spirituality-related experiences are benign. The late Michael Thalbourne, an Australian psychological scientist, suffered from bipolar disorder, wherein periods of mania would trigger messianic delusions that had a spiritual element:

‘I sometimes get into this very focused state of mind that I can’t shake, where I believe I am Christ,’ he told me once, opening wide his eyes and gazing intensely. ‘I don’t just believe I am in communion with Christ, but that I actually am the Christ.’

Michael Thalbourne had a deep personal interest in spirituality, but didn’t look at his mental suffering as a benign stone on the winding road to spiritual growth. It affected both his personal life and his academic career. ‘My university has never given me a proper academic post; they see me as unreliable and potentially dangerous,’ he explained.

The Grofs cautioned that not all difficult experiences associated with spiritual practices are necessarily ‘spiritual’. A psychotherapist or an expert spiritual teacher may have the power to help to turn a difficult experience into a meaningful one, but not always.

With the growing number of people interested in meditation in the West, many will walk away from their weekend meditation retreat or eight-week mindfulness course without expert guidance.

How many of them, I thought, in their search for a moment of peace and quiet, might end up having a bumpy ride, not to mention facing the real danger of a journey into the hell of mental illness?


A number of Western Buddhists are aware that not all is plain sailing with meditation; they have even named the emotional difficulties that arise from their meditative practice, calling them the ‘dark night’.

The concept of a spiritual dark night isn’t originally Buddhist. Coined by the 16th-century Christian mystic St John of the Cross, the phrase originally described an advanced stage of prayer and contemplation characterized by an emotional dryness, in which the subject feels abandoned by God.

Buddhists, in principle, ought not to feel abandoned by God, but their accounts of the dark night associated with meditation are riddled with emotional and physical turmoil. A Buddhist blog sharing experiences of the dark night features a number of testimonies: ‘Nine years on and off of periods of deep depression, angst, anxiety and misery’; ‘there was a nausea that kept coming up, terrible sadness, aches and pain’; ‘I’ve had one pretty intense dark night, it lasted for nine months, included misery, despair, panic attacks… loneliness, auditory hallucinations, mild paranoia, treating my friends and family badly, long episodes of nostalgia and regret, obsessive thoughts (usually about death).’

Willoughby Britton, a neuroscientist and psychiatrist at Brown University who has conducted studies on the positive effects of mindfulness, is now trying to map these more difficult experiences, which she calls ‘The Dark Side of Dharma’.

Her interest arose from witnessing two people being hospitalized after intense meditation practice, together with her own experience after a retreat in which she felt an unimaginable terror. Reading through the classical Buddhist literature to try to understand what was happening to her, she realized that these negative experiences are mentioned as common stages of meditation.

‘I was woefully uninformed,’ she admits in an interview.

Meditation retreats easily lead people to sense the world differently: hearing gets sharper; time moves more slowly. But the most radical change that can occur is in what Britton calls ‘the narrative of the self’.

Try this out: focus on the present moment, nothing but the present moment. You may be able to do it easily for a very short time. However, if you try extending this ‘presentness’ for one or two hours, and keep trying for some days, your usual sense of self, that which has one foot in the past and the other in the future, collapses. The practice may feel great for some, but for others it is like being continuously tossed around on a roller coaster. Vertigo, rather than blissful realization of the emptiness of the self, may be the end result.

Other unpleasant things happen, too, as Britton discovered through interviews with numerous individuals: arms flap, people twitch and have convulsions; others go through euphoria or depression, or report not feeling anything at all; their physical senses go numb.

Unpleasant as they are, if these symptoms were confined to a retreat there wouldn’t be much to worry about; but they’re not. Sometimes they linger, affecting work, childcare and relationships. They can become a clinical health problem, which, on average, lasts for more than three years. Some people ‘seemed to go through these experiences fairly quickly, like under a year, and in other people can last a decade’, Britton reveals.

Britton hasn’t yet published her research, but it confirms the case studies, earlier findings with groups of meditators, and Lazarus’s and Albert Ellis’s comments on the adverse effects of meditation.

These negative effects may very well turn out to be a stage in our spiritual journey, but if we don’t address them properly they can be destructive and harmful. Meditation teachers know about it, Britton says, but meditation researchers are usually sceptical; they ask about the prior psychiatric history of meditators who develop mental health problems, as if meditation itself had little or nothing to do with it.

I thought the same before starting the research for this book. Its title was originally going to be From Monster to Buddha, intending to highlight the astonishing possibility of personal change arising from meditation. I haven’t stopped believing in meditation’s ability to fuel change, but I am concerned that the science of meditation is promoting a skewed view: meditation wasn’t developed so we could lead less stressful lives or improve our wellbeing. Its primary purpose was much more radical: to rupture your idea of who you are, to shake your sense of self to the core, so that you realize there is ‘nothing there’.

But that’s not how we see meditation courses promoted in the West. Here, meditation has been revamped as a natural pill that will quieten your mind and make you happier.

I recently asked students in a class I was teaching on the psychology of contemplative techniques what they thought the similarities and differences were between meditation and psychotherapy. A student who was a regular meditator argued that psychotherapy was all about past wounds and relationships, while meditation, she said, was ‘free from all that crap; it’s all about being in the present’.

But it’s not. Repressed and traumatic material can easily resurface during intense meditation.

From the moment I accepted this and started talking to regular meditators, I kept finding more and more evidence. I discovered even more online and sometimes in the least expected places. Take Deepak Chopra’s website, for example. There is a correspondence section where readers post their questions or experiences and Chopra answers. A number of these posts concern physical or emotional symptoms that arise from meditation. On 11 April 2014 an individual who had been meditating for one year and finding in it ‘true bliss’ describes having twice experienced a deep emotional sensation, ‘like something is being ripped from me’, that left her wanting to cry and yell. Chopra’s reply is optimistic:

‘It’s both normal and okay. It just means there is some deep emotional trauma from your past that is now ready to come to the surface and be healed. After meditation I would recommend you take a few minutes and sing out loud. Find a song you love that resonates with the emotional tone of your pain. Listen to it at above normal volume so that you can really feel the sonic effect of the song and music. When you feel it has engaged your emotions, start to sing so that your voice translates your feelings into sound. If you do this every time you feel some unresolved residue of emotion after your meditation, it will facilitate the release and healing process.’

What if someone like Aaron Alexis had emailed Deepak Chopra and received a reply like this? Would singing along to his favourite song, turned up nice and loud, have healed his past emotional traumas and led him to the wisdom he sought, rather than to a killing spree? Unlikely.

Furthermore, there is a real danger that what the person who wrote to Chopra for advice is feeling is not ‘normal and okay’, and that if she keeps meditating without an expert teacher, the practice may disturb rather than heal.


‘If every eight-year-old in the world is taught meditation, the world will be without violence within one generation.’ The Dalai Lama

When the best-selling spiritual author Marianne Williamson tweeted the above quote, it quickly went viral. It probably helped that her friend Oprah Winfrey re-tweeted it to her 24 million followers with the comment, ‘This I believe is true. Have seen it in action.’

The notion that religious or spiritual practice is something of a cure-all isn’t unique to Eastern practices, though. Fundamentally, all religions hold that spirituality can make you a better person.

The evidence for this is ambiguous.

It is true that religions emphasize the caring part of our human nature: from the ‘thou shalt not kill’ of the Hebrew scriptures, through the Hindu praise in the Bhagavad Gita of the person who hurts nobody and is compassionate towards all beings, and the Quran’s rule to be kind to orphans, the needy and travellers, to the Buddha’s precept to ‘avoid killing, or harming any living thing’, and the Christian golden rule of treating others as you would want them to treat you. While there is psychological evidence that practising religious people are more charitable, our ability to differentiate between good and bad deeds is already in place before we acquire religious ideas.

Studies have shown that from as young as six months old, we have a preference for those we see helping another, and we’d rather be with someone neutral (who acts neither positively nor negatively) than with an uncooperative individual. And from eight months old, we are able to appreciate when a helpful individual acts against another that has behaved badly. This ingenious research was conducted with computer images and puppets, and showed that babies can effectively recognize positive and negative moral behaviour in strangers.

The idea that we seem to be biologically predisposed towards morality does not answer the question a 16-year-old once asked me at a public lecture in India: ‘If we are born good and kind, how come there is so much violence and evil in the world?’ Religions have dealt with such ‘problem of evil’ questions for a long time and have come up with various answers: the existence of free will, disobedience to God, the work of the devil, and the concepts of illusion, karma or greed.

Psychologists rarely come up with such enticing explanations about the origins of violence and immorality. We simply know that while we are born with the ability to tell a helpful from an unhelpful gesture, a caring from a callous person, we are also rooted in our needs, our desire to want things and to achieve, and in trying to reach our goals we are able to hurt, and even kill. While some of us have more of a propensity towards doing this than others (for example, those with psychopathic traits), hurting someone else in order to meet our own needs is something we are all potentially capable of, and, to at least some small degree, probably do.

While there is evidence that religion can make people act better towards others, there is also plenty of evidence to the contrary: religion can make you more prejudiced towards the non-religious or towards gay people. But we can detach meditation from groups and religions. You can use meditation to de-stress or explore the self just as easily whether or not you subscribe to a set of religious beliefs or belong to a religious group. The beauty of meditation is just that: its separateness from the necessity of divine rules of morality and punishment. But, if we take this view, we return to the question that we asked in Chapter 5: meditation without religion might improve its attraction, but is its lack of attachment to spiritual moral guidelines also a weakness?

I asked an old friend who runs a sociological research centre specializing in equality and racism issues what he thought of the Dalai Lama’s idea that meditation could eventually eradicate violence. He gave me a puzzled look before answering.

‘There are various factors that explain violence, right? Some psychological, others societal. Put them all together in a statistical regression model: start with level of income, education, access to health, then consider psychological factors such as the presence of childhood abuse; see how much these explain the likelihood of my neighbour being in a fight at the pub or hitting his partner. Then, add meditation to your statistical model: would it add anything in predicting violence compared to the other factors?’

‘Well …’ I started, but he interrupted me.

‘Would it have made a difference if Hitler had meditated?’ he asked, grinning.

I saw what he meant. You can’t remove an individual from the larger context, or from their own psychological makeup. It would not have made much of a difference if Hitler had meditated, as Aaron Alexis did, unless he had removed himself from the society that raised him to power and radically changed his ambitions and ideas.

On the other hand, practices such as meditation and yoga are rooted in inner peacefulness, and the spiritual traditions upon which they’re built believe that radical personal changes are possible, regardless of the environment we live in.

All in all I felt I had a puzzle with quite a lot of missing or ill-fitting pieces. I couldn’t quite see the larger picture. Very soon, though, I was challenged to look in a completely different way at the question of the extent to which contemplative techniques are associated with violence.


‘KINDLY BE CALM’, read a sign in large capital letters above the reception desk. On the other side of the lounge, there was a picture on the wall of a forty-something bearded man with a pristine smile, wearing the traditional orange robes of Indian yogis. I yawned and rubbed my eyes, trying hard to keep awake. It was past midnight and I’d been travelling for eight hours on a dimly lit motorway, thick with fog, clotted with buses and trucks without rear lights.

‘Don’t go, it’s suicide,’ a friend had told me in Delhi. ‘Get the train in the morning.’

I didn’t listen. I was in awe of the driver’s night vision and his ability to notice the invisible buses and trucks just before crashing. ‘No worry, no worry,’ he said halfway through the trip. ‘My name is Bobby and everyone in India knows that no Bobby has ever been in a car accident.’

Travelling with me in the same taxi was Bishal Sitaula, a friendly and talkative professor of environment and developmental studies from Norway who had arrived from Nepal. He took out his video camera to show me footage of his Nepalese trip.

‘Then I met with this really revered Buddhist monk. Here I’m asking him a question; do you want to hear?’ I looked at the screen as he pressed play. The monk had a benevolent smile. Bishal was telling him about a personal moral dilemma. ‘When my wife makes herself pretty, I look more at her. But, when I am walking up the road and see a woman with long beautiful hair and wearing nicer make-up than my wife’s, and I stare at her, then, I walk five more metres and stare back at her again: is this a sin or bad for my karma?’

The monk continued smiling. I imagined that if the whole world collapsed around him he would still smile.

‘No, no, it’s not a sin to look,’ he replied. ‘You may enjoy looking at a beautiful woman. That is fine. But if you crave and run after her, that is no good, no good; no good for your karma.’

Stopping the video, Bishal laughed loudly and put his arm around my shoulders.

When we arrived at the Patanjali Research Foundation, where I was taking part in a conference on the effects of yoga, I could make out only tiny fairy lights scattered around the complex. In the fog they looked like blurred dragonflies. Inside the accommodation block a sleepy lady handed me a key after I showed her my passport. The bedroom’s floor was paved in black-and-white marble. It was the coldest January recorded in Indian history; in Delhi homeless people were dying because of the low temperatures.

I took out all the blankets from the drawer and laid them on the bed. I had travelled to India on a few occasions, but had never come so far north, only a few miles away from the source of the Ganges and the river village of Rishikesh, home to a number of celebrated ashrams and yogis. It seemed the right place to build the Patanjali Research Foundation, which offers Master’s and PhD programmes on the science of yoga and has the largest research centre in the world dedicated to the study of this millennia-old practice.

Lying under three heavy blankets, I gazed at the puffed steam coming out of my mouth and eventually fell asleep. A few hours later the radio switched on. I opened one eye and looked at my watch on the bedside table: it was 4am. ‘Where’s the damn switch?’ I thought to myself. The music poured out of the speakers within the room and out in the corridor, a smooth stream of sitar and lulling voices. I walked to the reception, but saw no one. As I turned around I noticed a man by the door with a scarf wrapped around his head.

‘The radio inside the bedroom, how do you turn it off?’ I asked.

He smiled.

‘The radio,’ I gestured, pointing at the speakers in the corridor. ‘Off, off.’

He smiled again and tilted his head from left to right repeatedly. ‘No sir, no sir. Wake up, wake up. Yoga,’ he said, still smiling and pointing outside. It was pitch black.

The music continued. At 5.30am I ventured outside. There was daylight, but the thick fog from the previous night hadn’t yet lifted. I followed some people who seemed to know where they were heading, and entered an enormous auditorium, where approximately 2,000 people were sitting on yoga mats. The spiritual guru of the Patanjali Research Foundation, Swami Ramdev, was up on the stage, alone and wearing nothing but an orange loincloth.

A man sitting next to me whispered into my ear that he was a medical doctor at the foundation and offered to translate what Swami Ramdev was saying.

‘It’s pranayama. We start with breathing; right breathing can heal anything.’

Yoga with machine guns

For the next hour we breathed together and listened. First, how not to breathe. Then, how to breathe through alternating nostrils, and how to use your belly and diaphragm in a syncopated way.

‘Like this!’ and ‘Don’t do this!’ he said, hyperventilating with a contracted abdomen and eyes wide open, looking like he was having a fit. ‘This breathing cures asthma; this one heals all types of arthritis; if you’re depressed, this will cure it.’

There was clapping from the audience. Steam clouds came out of people’s mouths. The list of diseases that yoga and pranayama can heal was almost endless, dementia and cancer among them.

The breathing exercises went on, but my feet and hands were getting colder. After an hour of breathing, the swami stood up and began a series of asanas. ‘Finally,’ I thought, ‘we’re going to gently warm up.’ But it was far from gentle; more like a kind of yoga-on-speed mixed with aerobics. I looked around and noticed only one man in the audience who could keep up with the guru. For the last posture Swami Ramdev walked around the stage on his hands for about a minute. There was more clapping.

We finally moved towards a peace chant, which was interrupted by a few minutes of yoga laughter (‘very good for depression’), and followed by singing from the swami alone. ‘He has the personality of a rock star,’ I overheard someone with an American accent whisper behind me.

When the solo chant ended, Swami Ramdev uttered a shrill cry, which was imitated by most of the audience as they repeatedly raised their fists upwards. It was a strange sight. The cry and fist-waving were the kind you’d see at a political or military gathering. As the session ended the translator held my hand: ‘Come, I’ll take you to Swami Ji for a blessing.’

I followed him. There was a queue of people waiting. I looked around: there were numerous posters of the Patanjali Research Foundation and university, mostly in Hindi. My translator pushed me forwards; I was now very close to the Swami. The man in front of me was carrying a beautiful, handcrafted bag. On it, next to the foundation’s name, was written ‘Self and National Character Building by Yoga’. Finally, it was my turn. Swami Ramdev smiled; I smiled back. I slowly tilted my head forwards to greet him as my translator introduced me, but halfway to the full nod, I froze.

‘Bloody hell,’ I caught myself saying. A man came from behind the Swami holding a machine gun about the length of an extended arm. He was pointing the gun at me. My translator guided me away while the Swami smiled and waved goodbye.

‘What was that?’ I thought, my eyes fixed on the gun.

‘He’s a very holy man, don’t you think?’ my translator said, still holding my arm and apparently unfazed by the bodyguard with the machine gun.

Outside I saw Bishal, the professor who had travelled with me the night before. I felt like hugging him. He was in his perennial chatty mood. ‘Hello, my friend! Chilly, huh? Did you enjoy the session?’

I asked Bishal about the sort of war cry at the end of the session and the armed bodyguard. He told me about the political influence of the swami: he has pressured the parliament to put an end to corruption, and some politicians don’t like him. ‘Look, look,’ I interrupted, pointing at the person holding the bag I’d noticed inside the yoga hall. ‘Do you see what’s written on that bag: Self and National Character Building by Yoga. What do they mean by national?’

‘Oh, that. Well, it’s all around the place. This is not only about yoga, but about social transformation.’

‘But why national?’

‘Yoga comes from India, right? It’s India’s trademark. Here, that’s part of their message, that yoga is Indian.’

I stared back in silence. As we made our way back to the accommodation hall, I noticed a large banner with a picture of the guru holding up his fist with an angry face. The writing was in Hindi. ‘And what’s that about?’

Bishal reminded me of a horrendous gang rape that had recently happened in Delhi, inside a moving bus. The girl died shortly afterwards and there had been a public outcry. My friend in Delhi had told me that Indian culture, particularly in the north, was not only sexist, but violent towards women. ‘So the banner literally asks,’ Bishal translated, ‘What shall we do with the rapists? And the red letters say: Death Penalty! Death Penalty!’

‘Are you serious?’

‘I find it strange, too,’ Bishal replied. ‘But they believe in harsh punishment.’

It was my fourth time in India, but the first time that the contradictions of this country were pressing on me. Yoga, an instrument of serenity and enlightenment, was serving political purposes. The Patanjali Research Foundation is powerful: it has its own TV satellite channel, factories producing a variety of health products, a university and a leading yoga research centre. They also commission cartoons that portray Swami Ramdev as an enlightened yogi to educate children not just about the technique of yoga, but about its whole philosophy, even, I suspect, its nationalistic and punitive views.

There were other odd things going on. The foundation’s wireless Internet server was excellent, but it didn’t allow you to access a number of webpages. The first banned site I noticed was Facebook. I checked with a German conference participant and he couldn’t access it either. Most web searches related to drugs were forbidden, as I discovered while trying to read about the uses of morphine as a painkiller. When I asked Nandim, a Master’s student at the university, why they had censored Facebook, he laughed.

‘Very few people from the outside notice it.’

‘Why can’t you use it?’

‘Oh, you know, students were spending too much time on it.’

‘That’s rubbish,’ I said. ‘Students can spend too much time just browsing the web. Why Facebook?’

Nandim looked around before answering.

‘You know… many boys were using Facebook to talk to girls.’

‘So?’

‘Well, that’s not allowed.’

‘What do you mean?’ I asked. ‘The university is not sexually segregated; you have men and women.’

‘Yes, yes, but Swamiji doesn’t like us to be together,’ Nandim said, speaking in a hushed voice.

I looked at him puzzled. ‘A couple of incidents happened last year,’ he said. ‘There were boys and girls spending time together, you know, like they were a couple.’

‘Yes, it does happen.’

‘Not here, sir, not here. They were expelled from the university.’

On my way back to Delhi, this time on a sunny though heavily congested road, I saw graffiti on the wall of a tunnel. No pictures, only wide letters written in black across the extension of the wall: ‘I HATE MY LIFE.’ I felt a sudden wave of empathy for whoever had doodled it; the land that had given birth to numerous sages and to yoga, the soil of non-dualistic Advaita Vedanta, was riddled with contradictions. At the Patanjali Research Foundation, ideals and techniques for inner peace-making were fused together with nationalism, violence (guns and the endorsement of capital punishment), censorship and sexual repression. When I returned to my friend’s house in Delhi, he teased me for my naivety.

‘Many of these yogis are millionaires. They live in fancy air-conditioned flats. And the nationalism and violence, give me a break: do you know how many wars we, the very spiritual Indian people, have been involved in during the last fifty years?’

My doubts about meditation and yoga having a role in solving the world’s violence substantially increased after this trip.

When I returned to England, I emailed Torkel Brekke, an Oxford colleague who specializes in the study of Asian religions. I asked what he knew about violence in the Eastern spiritual traditions. My general understanding was that in a religion such as Buddhism, which has compassion and non-violence as central principles, you would find few, if any, displays of violence among its followers.

‘That’s not the case,’ Torkel replied, and added that he had lost count of the times his colleagues, students and journalists had tried telling him that Buddhism, unlike Christianity or Islam, is an essentially peaceful religion. ‘It’s not,’ he asserted, referring me to some books on the topic, including one he’d recently edited.


During the first decade of the new millennium, while psychologists and neuroscientists were examining the positive psychological effects of Buddhist mindfulness meditation, scholars of religion were looking in the opposite direction; they were examining the violent history of Buddhism. The book edited by Torkel Brekke is only the most recent in a number of publications looking at the use of violence by Buddhist monks and bodhisattvas (enlightened persons). The titles of the volumes are revealing: Buddhism and Violence, Buddhism and Warfare, Zen at War.

Apparently, the early Buddhist views on violence were astonishingly similar to those of the Christians who tried to follow Jesus’s saying, ‘if someone slaps you on one cheek, turn to them the other also’. Like the early Christians, followers of the Buddha prided themselves on being different from the fallen world. One early Buddhist text recognizes that violence should be avoided:

All tremble at the rod,
all are fearful of death.
Drawing the parallel to yourself,
neither kill nor get others to kill.

(Dhammapada, 129)

In another early text, the Yodhajivasutra, the Buddha explains that warriors are to be reincarnated in hell or as an animal, rather than in the company of heavenly deities (devas).

There is a particularly striking story of how the Buddha personally walked into the battlefield and avoided bloodshed. Four years after he attained enlightenment, two armies were facing each other because of a dispute about access to water. The Buddha came between the armies and asked their commanders:

‘How much value do you think water has in comparison with the life of men?’

The commanders agreed that the value of water was infinitely less important than human life.

‘Why do you then destroy lives that are valuable for valueless water?’ the Buddha asked, thus preventing the oncoming bloodshed.

Many have reiterated this view. Buddhism’s precept of non-violence has inspired people in Asian countries living under its influence, so that ‘throughout its peaceful march of 2,500 years, no drop of blood has been shed in the name of the Buddha’, writes Narada, a distinguished Sri Lankan monk and scholar.

But a cursory glance at the news broadcasts about Buddhist countries challenges this peaceful image. Let’s start with Sri Lanka. In 2013 groups of monks were holding rallies against the Muslim minority; since 1983 many Buddhist monks have been directly involved in military campaigns against a separatist faction in northern Sri Lanka. In the first half of the twentieth century, monks joined and led the struggle for independence against the British. Two thousand years ago, King Duttagamani fought a war to re-establish Buddhism in the country, ‘where he used a Buddha’s relic as his banner’. One thousand miles from Sri Lanka, in Burma, in May 2013 Buddhist mobs were killing Muslims and burning mosques; one Burmese monk, jailed for inciting religious hatred, likes to call himself the ‘Burmese Bin Laden’. These events, I soon found out, aren’t exceptions to the rule.

Although preaching non-violence to his followers, the Buddha didn’t try to persuade kings to adopt a pacifist stance. He clearly separated the waters by not allowing former soldiers to become monks and by forbidding his followers to preach to soldiers: violence was understood as part of life and there was no attempt to eradicate it entirely from the world. The effort went into trying to contain it among Buddhist monks. But even that failed. Just as Christianity developed its ‘just war’ theory wherein, according to St Augustine, an early Christian theologian, war could be an instrument of divine justice on wickedness, Buddhism came to develop its own theory of compassionate killing.

A text written in the fourth century entitled ‘Discourse on the stages of yogic practice’ argues that under certain circumstances even an enlightened person is allowed to kill out of compassion.

‘If a bodhisattva meets an evil person who is going to kill many people… he will think to himself: if by killing this bandit I fall in hell, what does it matter? I must not let him go to hell! Then the bodhisattva will kill him, full of both the horror of the crime and compassion for that person. In doing so, he will not commit any transgression; rather, he will acquire much blessing.’

The Buddha himself told the story of how, in a previous life, he had killed out of compassion. As narrated in the Mahaupaya-kausalya sutra, there was a time when 500 merchants went to sea in search of treasures, but one of them schemed to kill the others and keep all the treasures for himself. A deity discovered this and informed the Buddha, who faced the following dilemma: if the other merchants learn of the evil merchant’s plot to kill them, the evil merchant will be killed and the 499 merchants will go to hell. However, if nothing is done, the merchants will be killed and their murderer will go to hell. So, the Buddha decided to kill the evil merchant and save the others. He explained to his followers that his action was the result of compassion for the sake of a greater number of living beings.

It’s not difficult to follow the Buddha’s logic; it’s similar to the tram problem first posed by the British philosopher Philippa Foot in 1967, and subsequently often used in psychology experiments on moral behaviour.

In this scenario there is a runaway tram that is heading straight towards five workers. A large man is standing on a footbridge over the tracks. If you throw him off the bridge his body will block the tram and the five men will be saved. What would you do? Rationally, you ought to kill the single man on the footbridge to save the greater number of people. But this involves choice, a conscious decision to kill a man, at your will, rather than the tragic but accidental killing of the people on the track. As a result most people’s gut reaction to this moral problem is not necessarily the most reasoned one: it may rationally be better to kill the man, but intuitively you feel it’s wrong and opt against it, letting the five workers die in a tragic accident instead.

Through dilemmas such as these, psychologists have shown that many of our moral decisions are intuitive rather than rational. However, there is a problem with these findings. New studies have found that for people who display a lack of empathy, such as psychopaths, the intuitive answer is to kill the large man, because to them the act of killing is not particularly aversive. There is a kind of indifference or amorality about killing for people with a psychopathic personality.

Although, on the surface, this seems the very opposite of what Buddhist practice is seeking to attain, something similar to this emotional indifference comes across in some Buddhist texts. One of the crucial teachings of Buddhism is that of emptiness: the self is ultimately unreal, so the bodhisattva who kills with full knowledge of the emptiness of the self, kills no one; both the self of the killer and the self of the killed are nothing more than an illusion.

In the Nirvana sutra, there is the story of a prince who murders his father, the king, so he can accede to the throne. Heavy with remorse, he consults the Buddha for advice. The Buddha makes him see that he is not responsible for the killing for two reasons. First, the king was killed as the consequence of his karma: in a previous life he had murdered a holy man. Second, and most important, the Buddha states the unreality of killing: ‘Great King, it is like the echo of a voice in the mountain valleys. The ignorant think it is a real voice, but the wise know it is not. Killing is like this. The foolish think it is real, but the Buddha knows it is not.’

Another Buddhist text (Jueguan lun) echoes the idea of the emptiness of killing; if you do it as if it were a spontaneous act of nature, then you’re not responsible for it.

‘The fire in the bush burns the mountain; the hurricane breaks trees; the collapsing cliff crushes wild animals to death; the running mountain’s stream drowns the insects.

If a man can make his mind similar [to these forces], then, meeting a man, he may kill him all the same.’

This idea is reinforced in various other texts. If you are in a selfless and detached state of mind, you can do anything, even ‘enjoy the five sensuous pleasures with unrestricted freedom’ (the Upalipariprccha explains), as your actions will have no negative karmic consequences. In other words, bodhisattvas are not morally responsible for their actions because they act without self-interest. The Fifth Dalai Lama used this argument to justify the violence of the Mongol king Gushri Khan, who in the 1630s and 1640s violently unified a large portion of Tibet and converted the people to Buddhism. The Fifth Dalai Lama glorified this because the Mongol king was an emanation ‘of Vajrapani, the bodhisattva representing perfect yogic power’, who had realized emptiness and ‘would radiate 100 rays of light in the ten directions’.

The idea that Buddhism, unlike other religions, did not force people to convert, but ‘pacified’ the new lands to which it spread, is also a myth. Just as Christianity and Islam made churches and mosques out of pagan temples and fought animistic ideas as heretical, something similar happened with Buddhism. Shamanic practices were prohibited in Mongolia from the 1500s; spirit figurines were burned and replaced with Buddhist images of six-armed Mahakala. Those who continued to practise Shamanic rites were subjected to brutal punishments or executed.

These acts were justified because of the spiritual status of rulers, who were recognized as living Buddhas, accomplished in virtue and wisdom, and endowed with unbiased compassion. Mongol laws regulated the privileges of the Buddhist clergy and the punishment of any attacks on monasteries depending on the social class of the offender: if a nobleman, the punishment was exile; if a commoner, the sentence was more likely to be death.

Bernard Faure, a professor at Columbia University, suggests that forced conversion is sometimes brutally visible in religious imagery. In the case of Tibet, there is the myth that its first Buddhist king subdued the demoness who ruled the land by nailing her down to the ground. The holiest of places in Tibetan Buddhism, the Jokhang Temple in Lhasa, is symbolically known as the nail that was driven into the vagina of the demoness. ‘The rape imagery,’ Faure writes, ‘could hardly be more explicit.’

The demonization, dehumanization and social discrimination of rivals seem to be as prevalent in Buddhism as in other faiths. In one sacred text often used by the current Dalai Lama (the Kalachakra-tantra), the final battle of the world will be between Buddhists and heretics; the heretics are identified as Muslims.

In Thailand, Buddhism needed to tackle other classes of enemies. In 1976 a leading monk declared in an interview that ‘killing communists is not a sin’. These were his reasons:

‘First, killing communists is not really killing; second, sacrifice the lesser good for the greater good; third, the intention is not to kill but to protect the country; fourth, the Buddha allowed killing.’

And he concluded: ‘Our intention is not to kill human beings, but to kill monsters. This is the duty of all Thais.’

This extraordinary statement doesn’t come out of the blue. In Thailand, as in other Asian countries, the state protects its Buddhist religion and Buddhist monks protect the Thai state. Thai temples are used as military bases and some soldiers are ordained as monks; known as ‘military monks’, one of their primary duties is to protect Buddhist temples, using violence if need be.

All of this was new to me. As a reader of books on Eastern spirituality and meditation since my teens, I had never come across any remote suggestion that Buddhism was similar to other religions when it came to justifying and using violent means.

If Buddhist monks and enlightened teachers can be violent towards others, why would Western meditators be any different?

I was coming to the conclusion that meditation is only a process: it can sharpen attention, quiet thoughts and angst, increase positive emotions towards ourselves and others and, in the extreme, it can lead to a deep alteration of our identity, a kind of ecstatic annihilation of the ego. But with the wrong kind of motivation and without clear ethical rules, that very spiritual selflessness can serve all kinds of ill purposes.

That happened with Japanese Buddhism not long ago.

Zen soldiers

‘Why didn’t we have the religion of the Japanese, who regard sacrifice for the Fatherland as the highest good?’ Adolf Hitler

In the late 1950s journalist and author Arthur Koestler travelled to the East and met with a number of leading spiritual teachers. The narrative of his travels was published as The Lotus and the Robot. In the last chapter, entitled ‘The Stink of Zen’, Koestler takes issue with Zen’s amorality and goes as far as criticizing Suzuki, the Zen scholar who made Zen known to a wide Western audience. He quotes from Suzuki’s book Zen and Japanese Culture:

‘Zen is extremely flexible in adapting itself to almost any philosophy and moral doctrine as long as its intuitive teaching is not interfered with. It may be found wedded to anarchism or fascism, communism or democracy, atheism or idealism’.

Koestler commented that this passage ‘could have come from a philosophically minded Nazi journalist, or from one of the Zen monks who became suicide pilots’.

His meetings with Zen teachers only reinforced the idea that Zen has no interest in morality or social ethics. When he asked about the persecution of religion in totalitarian countries or Hitler’s gas chambers, the answers generally showed a lack of interest in differentiating between good and ill deeds. He regarded this as a ‘tolerance devoid of charity’ and was sceptical about the contribution Zen Buddhism had to offer, post-World War II, to the moral recovery of Japan or any other country. In this short chapter Koestler pointed his finger at a phenomenon of unimagined proportions.

Forty years later it became public knowledge that the ‘stink of Zen’ dominated Japan during World War II; Koestler was right.

It was Brian Victoria, a Zen priest and historian of religions, who brought the evidence to light. He has shown how, during World War II, the Japanese military used Zen Buddhist ideas and meditation techniques, and how Zen Buddhist leaders showed explicit support for the war. Victoria’s verdict is as sharp as a samurai’s sword. He reveals that nearly all of Japan’s Buddhist leaders were fervent supporters of Japanese militarism. As a result, he argues, Zen Buddhism so deeply violated Buddhism’s fundamental principles that it should no longer be recognized as an expression of the Buddha Dharma. Within a Western religious context, this would be the equivalent of saying that during a certain period (such as the Inquisition), the Catholic Church was not an authentic expression of Christ’s teachings.

Victoria methodically reveals how warfare and killing were regarded as manifestations of Buddhist compassion, selflessness and dedication to the Japanese emperor. The soldier’s code, which all soldiers had to learn by heart in 1941, had a section entitled ‘View of Life and Death’ which read:

‘That which penetrates life and death is the lofty spirit of self-sacrifice for the public good. Transcending life and death, earnestly rush forward to accomplish your duty. Exhausting the power of your body and mind, calmly find joy in living in eternal duty.’

This is eerily familiar to us living in a post-9/11 world. The violent rhetoric of religious extremism is probably universal, but, in the case of Zen Buddhism, its very spiritual pinnacle, the attainment of enlightened selflessness, was used during World War II to train soldiers who would sacrifice themselves as if their lives were of no consequence. Thus, an army major advised his soldiers:

‘The soldier must become one with his superior. He must actually become his superior. Similarly, he must become the order he receives. That is to say, his self must disappear.’

Islam’s or Christianity’s promise of eternal life is here exchanged for the Buddhist idea that, by becoming selfless, life and death become undifferentiated; there is nothing to lose by dying on the battlefield once you realize the emptiness of the self. This spirit is deeply entrenched in Japanese Buddhism, going back at least to the samurai age. Takuan, a famous Zen master from the 1600s, wrote:

‘The uplifted sword has no will of its own, it is all emptiness. It is like a flash of lightning. The man who is about to be struck down is also of emptiness, and so is the one who wields the sword. None of them are possessed of a mind that has any substantiality. As each of them is of emptiness and has no mind, the striking man is not a man, the sword in his hands is not a sword, and the “I” who is about to be struck down is like the splitting of the spring breeze in a flash of lightning’.

D.T. Suzuki expressed the same view in the twentieth century. He eloquently compared the Zen master’s use of the sword to the production of an artistic masterpiece. Although it is not the intention of the Zen master to harm anybody, the enemy appears and makes himself a victim of the enlightened swordsman, Suzuki suggests; it is as if the sword acts without an agent or through a robot, if we want to use a less poetic image.

It is then no great wonder that Hitler and the Nazis were fond of Zen. Heinrich Himmler, leader of the SS (Schutzstaffel), who was obsessed with esoteric ideas and sent expeditions to Tibet and India, believed that all his military men had to act with ‘decency’. By decency he meant that they had to remain untouched by human weakness when staring at the thousands of corpses, lying side by side, as they tumbled into the pit at the concentration camps. When he was caught and questioned after the war, he didn’t have a shred of insight into the villainy of his actions; like a Zen master, he seemed indifferent.

When Brian Victoria’s book, Zen at War, was translated into Japanese, it had an unforeseen impact. Instead of trying to deny Japanese Buddhism’s ties to militarism, a number of Zen masters admitted this had happened and formally apologized. It was a long journey for Victoria, who had been ordained as a Zen priest in 1964 because he believed Zen Buddhism was free from the violence that had marked Western religions. But he hasn’t lost his faith.

He upholds Buddhism’s non-violent principles and denies the possibility of compassionate killing, arguing that under no circumstances can a bodhisattva legitimately employ violence to the point of actually taking the life of another human being.

However, this leaves us with another, no less difficult question to answer: what do we make of a bodhisattva or, in the Zen tradition, someone who has reached satori (the realization of selflessness) and still commits violence; is this person truly enlightened? Paradoxically, yes. After the war Suzuki, although not retracting any of his former works, argued that enlightenment alone is not enough to make you a responsible Zen priest. A Zen priest also needs to use intellectual discrimination, because enlightenment in itself is just a state of being that cannot tell right from wrong.

This is not what we’re used to hearing. Enlightenment in the East is regarded very much like saintliness in the West: whoever reaches such a state of being is expected to be the pinnacle of selflessness and love. Followers revere their spiritual teachers, often treating them like the living embodiment of nirvana or God. The idea that the highest attainment of spiritual development may not be enough to tell right from wrong is disturbing.

Two hypotheses come to mind: either enlightenment does not necessarily make you act in an unselfish or a peaceful way; or perhaps those whom we think of as enlightened aren’t as holy as they seem. Mystics of all times have warned against the dangers of spiritual infatuation. The Spanish Christian mystic Teresa of Avila went as far as suggesting that we should never trust the goodness of holy people who are still living. In the Christian tradition it is a sin of vanity to believe you are holy. In the Buddhist tradition it would probably be proof that the egoless master still has some ego to shed. But in the East it’s widely accepted that some people are real embodiments of compassion or God, and it’s not unusual for the masters themselves to proclaim that. Recently, in India, one man was revered by millions and looked upon as the living God.

The most selfish man on Earth

I first heard of Sai Baba through a friend who was doing a Master’s degree in the sociology of religion. Having been raised by Marxist parents, Joana was curious about religion and went off to southern Italy to do fieldwork with a community of Sai Baba followers. It was very much like any other Hindu devotional community, she told me, with lots of chanting, praying and some meditation, but there were a couple of unusual things. First, there were various gifts, bracelets and watches, that, apparently, the guru had produced from thin air and offered to his followers. Second, the guru often showed up in people’s dreams, an event that had been the catalyst for conversion to Sai Baba’s doctrine for many of the people Joana interviewed during her fieldwork. Joana herself, despite being an atheist, had dreams about Sai Baba while staying with the community. This frightened her, but it didn’t turn her into a believer.

The first time I considered the idea that an enlightened person could be flawed was in relation to Sai Baba. I was talking to Carlos do Carmo Silva, a philosopher of religion based at the Catholic University of Lisbon. A tall, thin and unassuming man in his late fifties, he has produced work on the parallels and tensions between Buddhist and Christian mystical attainment that is the most insightful I’ve ever encountered. I was asking what he thought of Sai Baba’s claim to be an ‘avatar’, the very embodiment of God on earth.

‘Perhaps he is,’ he said, gazing upward. ‘But at other times he can be the most selfish man on Earth.’ I looked at him, puzzled. He didn’t offer an explanation for this contradiction, and I wasn’t expecting him to; he often challenged me to think outside the box.

His words popped into my mind when a few months later a BBC documentary on Sai Baba accused the guru of sexually abusing some male teenage American devotees. Shocking as the revelations were, the way an Indian minister treated the BBC journalist who confronted him with the allegations was no less brutal. The auras of devotion and power surrounding the guru were astounding.

The sexual abuse allegations probably did not harm Indian devotion to Sai Baba, but they did have an effect on Western devotees. Many centres in Europe and the USA closed down. I didn’t think about it any further, though, until one evening in Oxford I was invited to comment on a lecture by the Icelandic psychologist Erlendur Haraldsson. He was speaking about some work he had done on children who claimed to remember past lives, but I knew that Haraldsson had written a book about Sai Baba’s miracles. At the end of the event, I asked him if he had personally met the Indian holy man.

‘Oh, yes, on quite a number of occasions. I spent some time at his ashram, during which we spoke on a daily basis.’

‘And what do you make of him?’

‘I do think he has some unusual powers. I can’t tell if all the stories are real, but I think some of them are,’ he confided.

‘What about the sexual abuse allegations; what do you make of them?’

Haraldsson looked down at me (he is quite a tall man) and shrugged.

‘Well, he’s obviously a gay man …’

I stared at Haraldsson and said nothing. We stayed quiet for a moment and then changed subject.

Violence comes in many shapes. Sexual abuse is one of the most difficult forms of violence to confront; often the abuser is a powerful figure, either within a family or an organization. Spiritual organizations are not immune to this. The recent scandal of sexual abuse among the Roman Catholic clergy has stirred waves in the Western world, but Buddhist monks in the East, including leading priests, have also been found guilty of this.

Recently in the USA, Sasaki, a revered Zen priest known as Leonard Cohen’s Buddhist teacher, has been accused of sexual abuse by a number of female followers. On various occasions Sasaki asked women to show him their breasts, explaining that this was part of a Zen koan or a way of showing non-attachment.

Another woman complained that the master had massaged her breasts during a private session and had asked her to massage his genitals. The accusations against Sai Baba were very similar, except that the targets of the abuse were, at the time, young male adolescents.

I thought again of what Carlos do Carmo Silva, the philosopher, had said: the holiest man on earth can also be the most selfish. I also remembered German psychologist Harold Wallach telling me that he had met advanced meditators who were ‘assholes’. As hard and paradoxical as it sounds, it is very likely that no human being is immune to being cruel or taking advantage of others at times, no matter how spiritually evolved.

By the time I’d uncovered all this material, I was feeling disillusioned and somewhat nauseated. The old aphorism ‘the road to hell is paved with good intentions’ played loudly in my mind. Meditation and spiritual teachers are coloured with a sweetened aura that distorts the reality of individuals, societies and history.

The unrealistically positive ideas associated with meditation only make people more vulnerable, either to its adverse psychological effects or to its enlightened but amoral teachers.

The other danger was that the covering up of the dark aspects of meditation, implicitly or explicitly endorsed by scientists studying its effects, could destroy the good it had to offer. I painfully understood Koestler’s feelings of disillusion at the end of his chapter on Japanese Buddhism:

‘For a week or so I bargained with a Kyoto antique dealer for a small bronze Buddha of the Kamakura period; but when he came down to a price that I was able to afford, I backed out. I realized with shock that the Buddha smile had gone dead on me. It was no longer mysterious but empty.’

But I also realized, with a sense of relief and humility, that meditation need not be a panacea to cure every ill, nor a tool to moral perfection; perhaps we shouldn’t treat it very differently from prayer, which can quiet our minds, give us some comfort, and lead us towards a deeper place where we can explore who we are or be closer to God.

Perhaps meditation was never supposed to be more than a tool to help with self-knowledge, one that can never be divorced from a strong ethical grounding, from who we are and the world we live in. In Patanjali’s sutras, when he describes the various aspects of yoga, meditation is only one of them. The first, the very basis of a healthy and eventually selfless being, is self-restraint (yama), which he defines as ‘non-violation, truthfulness, non-stealing, containment, and non-grasping’. And to make sure that these are definite and non-debatable foundations, he adds:

‘These restraints are not limited by birth, time or circumstance; they constitute the great vow everywhere.’

Only with this strong foundation can the other limbs of yoga (as Patanjali calls them) emerge, including the asanas, pranayama, meditation and the blissful experiences of unity with the ground of being.


Re-reading this chapter I felt unhappy not to finish on a more hopeful note. Despite its dark side and the limitations of the current scientific research, I still think meditation is a technique with real potential for personal change, if properly guided and taught within a larger spiritual-ethical framework.

I was also aware that, read on its own, this chapter could be used by religious extremists and proselytizers to belittle Buddhism and Hinduism. I thought of looking for someone who, coming from the West, had embraced the Eastern meditation tradition without denying its darker side. I found that person in Swami Ambikananda, a South African woman who took Hindu religious vows and who teaches meditation and yoga while also running a charity in the southwest of England. She has translated a number of Indian sacred texts from the Sanskrit; I’d read her clear and poetic translation of the Katha Upanishad, which contains the very first recorded teaching on yoga.

She welcomed me at her house in Reading, about an hour west of London. It felt odd to call her Swami Am-bi-ka-nan-da, seven full syllables of a name; her direct and expansive personality seemed to require no more than two. I wondered what birth name she’d been given, but it felt odd to ask. She was dressed in the orange cloth of the Indian ascetics, but her way of speaking and gesticulating was definitely Western feminine. We walked into her living room and she invited me to sit down on a cushion on the floor.

‘We have no chairs here,’ she explained. ‘I hope you don’t mind.’

‘I don’t. Is it okay if I write down some notes?’

She offered me tea. I was happy to see her again. We’d first met at the day course on the psychology of meditation I gave with Catherine Crane. Her questions and comments stood out, very much like her orange garment. When I told her I was writing this book and looking into the potential dark side of meditation, she asked whether I had heard of Aaron Alexis; I hadn’t yet.

‘There is a new dogma about meditation: when it fails, its limitations are never questioned,’ she told me. ‘We are told that they weren’t doing it right. But it may be neither the practice nor the person that is wrong. The truth about our human condition is that no one thing works for everyone. The spiritual journey is about the unmasking of oneself, being more authentically “self”, and whatever path leads us there is grand for each of us, but that particular path is not necessarily good for all of us.’

She was aware of the dangers of contemplative practice and open about it. I asked how she had become interested in meditation and Indian spirituality.

‘My father was a Marxist atheist and my mum a devout Catholic. This was confusing, but not too much, until I turned 11 or 12. Then I heard about the doctrine of limbo. I don’t think the Catholic Church believes in it anymore, you know: this place where the souls of unbaptized children were supposed to go and stay for eternity. That was it for me; I became an atheist. I didn’t think too much about the soul or religion for a while, until I had twins and then became depressed.

One day a friend thought it would be a good distraction to take me to a lecture by a swami, so I went. The people there were very serious (try lighting a cigarette in a yoga lecture like I did!), but there was something I liked about Swami Venkatesananda, and I kept meeting him. But I only got to the yoga and meditation later.’

‘How did that happen?’

‘I was visiting Swami Venkatesananda in Mauritius and had bought a pile of books on Indian spirituality and philosophy. One day he told me he needed some help in clearing up some junk and pointed at a ravine where people threw all kinds of stuff: old fridges, cars, you name it. I said, yes, I’ll help you. He then picked up the whole pile of books I had just bought and threw them down the ravine. “Why did you do that?” I asked him. He told me the time had come to stop reading and to try out yoga and meditation. That’s how I started.’

I asked Ambikananda whether or not she believes meditation can change a person, and, if she does, how much meditation it would take to change. She told me about meeting Krishnamurti, the Indian-born writer who was heralded as the New World messiah by the Theosophical Society, but eventually walked away from the movement to become a kind of spiritual free thinker.

‘He told me two minutes a day was enough. I laughed; it takes me two hours of meditation to get two worthwhile minutes! But he was right that it’s not only about meditation; your intention counts. My teacher used to tell me: “Hunt down the self ruthlessly; this isn’t for the faint-hearted.” There is an acknowledgment in all religious traditions, whether it’s the spiritual work of Ignatius of Loyola or the process of St Theresa of Avila, or the Way of the Buddha, or the yoga path of Patanjali, that you need more than meditation to change.’

‘What do you mean?’ I asked.

‘To start with, you need to have a healthy ego; what kind of self are you surrendering if you don’t have a stable sense of who you are?’

‘What about the clinical use of mindfulness to treat depression and anxiety? I suppose you don’t have a healthy sense of self there …’

‘I’m uncertain about the exact value of mindfulness,’ Ambikananda told me. ‘Since it has moved out of the monastic environment into the wider secular world, meditation is being sold as that which will not only make us feel better but will make us better people: more successful, stronger, more convincing …’

I interrupted her. ‘But are you aware that some researchers are claiming that mindfulness meditation per se can turn you into a better, more compassionate person?’

‘No, no, no,’ she stressed. ‘Meditation needs to be embedded in its context; there are moral and emotional guidelines to be followed. Patanjali spells them out clearly in his work on yoga.’

‘But the whole purpose of meditating, isn’t it meant to make you an enlightened and deeply moral individual, moral in the sense of unselfish and compassionate? Isn’t that what happens?’

‘Morality can be divorced from spirituality. My ego can dissolve while I meditate, but when I get up it’s reconstructed. You can meditate 22 hours a day, but in those two hours you have left, you are a human being living in matter, and this aspect of reality’ (she touched the ground) ‘doesn’t care too much if you’re enlightened or not.’

I told Swami Ambikananda about the evidence I’d uncovered concerning the adverse aspects of meditation and its violent history in the East; she simply nodded. Even the claims of sexual abuse by some spiritual teachers didn’t surprise her. ‘I had one of the few truly celibate Indian spiritual teachers,’ she admitted.

Ambikananda then told me the story of once travelling through the Himalayas in search of a levitating holy man. She was staying at her teacher’s ashram in Rishikesh, on the banks of the Ganges, when a friend told her about a flying hermit who lived in a cave only a day’s journey away.

‘It took us about three days walking in the Himalayas to find him. We were going in the wrong direction for more than a day. But we managed to find our way and met the flying baba. He asked for some rupees and went into a trance state. After a few minutes I couldn’t believe my eyes: he was really lifting off the ground! I felt rather irritated; this is not supposed to happen. I got some branches from a nearby tree and moved them beneath and above him to make sure it was not a trick. I couldn’t see the trick and asked him to do it again; he did it and still I couldn’t see how. When I asked if he could also do it standing, he said he couldn’t; he had to be sitting down. I wanted to see him do it a third time, but he refused. He said he’d teach me if I stayed for a few days and gave him some more rupees.’

‘And did you?’ I asked.

‘I wouldn’t stay alone with that man for anything in this world!’ she said, laughing. ‘He made it very clear that besides money he wanted sexual favours.’

After our talk Swami Ambikananda gave me a lift to Reading railway station. I thanked her for her time and asked again about Aaron Alexis, the man who was a regular meditator and killed 12 people.

‘Do you think it had anything to do with meditation?’

‘I don’t know. I don’t dispute that he had serious mental health problems, but meditation probably didn’t help him either. Meditation is about looking into the abyss within; it wasn’t created to make you or me happy, but to help us fight the illusions we have and find out who we truly are. My teacher used to tell me: “This is your battle; you fight it with everything you’ve got.” He also used to say that we shouldn’t take ourselves too seriously. I certainly do my best,’ she said, laughing.

Meeting this lively and grounded South African woman turned Hindu priest made me feel less pessimistic about the use of meditation and yoga in the West. If we admit their frailties and limits, and that it takes other things for these techniques to make real positive change (the right intention, a good teacher and a moral framing), they can still prove effective engines of personal change.

I wanted to test the effects on a population who might not often get the chance to try out yoga and meditation practice, but who might need the benefits more than your average person. Back in Oxford I rang the director of the Prison Phoenix Trust. ‘Sam, let’s go ahead with the research project. If you provide the yoga and meditation classes to prisoners, I’ll handle the science part.’



The Buddha Pill: Can Meditation Change You?

by Dr Miguel Farias and Dr Catherine Wikholm


Stephen Hawking. His Life And Work – Kitty Ferguson.

The Story and Science of One of the Most Extraordinary, Celebrated and Courageous Figures of Our Time.



Stephen Hawking is one of the most remarkable figures of our time, a Cambridge genius who has earned international celebrity and become an inspiration to those who have witnessed his triumph over disability. This is Hawking’s life story by Kitty Ferguson, written with help from Hawking himself and his close associates.

Ferguson’s Stephen Hawking’s Quest for a Theory of Everything was a Sunday Times bestseller in 1992. She has now transformed that short book into a hugely expanded, carefully researched, up-to-the-minute biography giving a rich picture of Hawking’s life: his childhood, the heart-rending beginning of his struggle with motor neurone disease, his ever-increasing international fame, and his long personal battle for survival in pursuit of a scientific understanding of the universe. Throughout, Kitty Ferguson also summarizes and explains the cutting-edge science in which Hawking has been engaged.

Stephen Hawking is written with the clarity and simplicity for which all Kitty Ferguson’s books have been praised. The result is a captivating account of an extraordinary life and mind.


The quest for a Theory of Everything

Kitty Ferguson

IN THE CENTRE of Cambridge, England, there are a handful of narrow lanes that seem hardly touched by the twentieth or twenty-first centuries. The houses and buildings represent a mixture of eras, but a step around the corner from the wider thoroughfares into any of these little byways is a step back in time, into a passage leading between old college walls or a village street with a medieval church and churchyard or a malt house. Traffic noises from equally old but busier roads nearby are barely audible. There is near silence: birdsong, voices, footsteps. Scholars and townspeople have walked here for centuries.

When I wrote my first book about Stephen Hawking in 1990, I began the story in one of those little streets, Free School Lane. It runs off Bene’t Street, beside the church of St Bene’t’s with its eleventh century bell tower. Around the corner, in the lane, flowers and branches still droop through the iron palings of the churchyard, as they did twenty years ago. Bicycles tethered there belie the antique feel of the place, but a little way along on the right is a wall of black, rough stones with narrow slit windows belonging to the fourteenth-century Old Court of Corpus Christi College, the oldest court in Cambridge. Turn your back to that wall and you will see, high up beside a gothic-style gateway, a plaque that reads, THE CAVENDISH LABORATORY. This gateway and the passage beyond are a portal to a more recent era, oddly tucked away in the medieval street.

There is no hint of the friary that stood on this site in the twelfth century or the gardens that were later planted on its ruins. Instead, bleak, factory-like buildings, almost oppressive enough to be a prison, tower over grey asphalt pavement. The situation improves further into the complex, and in the two decades since I first wrote about it some newer buildings have gone up, but the glass walls of these well-designed modern structures are still condemned to reflect little besides the grimness of their older neighbours.

For a century, until the University of Cambridge built the ‘New’ Cavendish Labs in 1974, this complex housed one of the most important centres of physics research in the world. Here ‘J. J.’ Thomson discovered the electron, Ernest Rutherford probed the structure of the atom, and the list goes on and on. When I attended lectures here in the 1990s (for not everything moved to the New Cavendish in 1974), enormous chalkboards were still in use, hauled noisily up and down with crank-driven chain-pulley systems to make room for the endless strings of equations in a physics lecture.

The Cockcroft Lecture Room, part of this same site, is a much more up-to-date space. Here, on 29 April 1980, scientists, guests and university dignitaries gathered in steep tiers of seats, facing a two-storey wall of chalkboard and slide screen, still well before the advent of PowerPoint. The occasion was the inaugural lecture of a new Lucasian Professor of Mathematics, 38-year-old mathematician and physicist Stephen William Hawking. He had been named to this illustrious chair the previous autumn.

The title announced for his lecture was a question: ‘Is the End in Sight for Theoretical Physics?’ Hawking startled his listeners by announcing that he thought it was. He invited them to join him in a sensational escape through time and space on a quest to find the Holy Grail of science: the theory that explains the universe and everything that happens in it, what some were calling the Theory of Everything.

Watching Stephen Hawking, silent in a wheelchair while one of his students read his lecture for the audience, no one unacquainted with him would have thought he was a promising choice to lead such an adventure.

Theoretical physics was for him the great escape from a prison more grim than any suggested by the Old Cavendish Labs. Beginning when he was a graduate student in his early twenties, he had lived with encroaching disability and the promise of an early death. Hawking has amyotrophic lateral sclerosis, known in America as Lou Gehrig’s disease after the New York Yankees’ first baseman, who died of it. The progress of the disease in Hawking’s case had been slow, but by the time he became Lucasian Professor he could no longer walk, write, feed himself, or raise his head if it tipped forward. His speech was slurred and almost unintelligible except to those who knew him best. For the Lucasian lecture, he had painstakingly dictated his text earlier, so that it could be read by the student.

Jane and Stephen Hawking in the 60s.

But Hawking certainly was and is no invalid. He is an active mathematician and physicist, whom some were even then calling the most brilliant since Einstein. The Lucasian Professorship is an extremely prestigious position in the University of Cambridge, dating from 1663. The second holder of the chair was Sir Isaac Newton.

It was typical of Hawking’s iconoclasm to begin this distinguished professorship by predicting the end of his own field. He said he thought there was a good chance the so-called Theory of Everything would be found before the close of the twentieth century, leaving little for theoretical physicists like himself to do.

Since that lecture, many people have come to think of Stephen Hawking as the standard bearer of the quest for that theory. However, the candidate he named for Theory of Everything was not one of his own theories but ‘N=8 supergravity’, a theory which many physicists at that time hoped might unify all the particles and the forces of nature. Hawking is quick to point out that his work is only one part of a much larger picture, involving physicists all over the world, and also part of a very old quest.

The longing to understand the universe must surely be as ancient as human consciousness. Ever since human beings first began to look at the night skies as well as at the enormous variety of nature around them, and considered their own existence, they’ve been trying to explain all this with myths, religion, and, later, mathematics and science. We may not be much nearer to understanding the complete picture than our remotest ancestors, but most of us like to think, as does Stephen Hawking, that we are.

Hawking’s life story and his science continue to be full of paradoxes. Things are often not what they seem. Pieces that should fit together refuse to do so. Beginnings may be endings; cruel circumstances can lead to happiness, although fame and success may not; two brilliant and highly successful scientific theories taken together yield nonsense; empty space isn’t empty; black holes aren’t black; the effort to unite everything in a simple explanation reveals, instead, a fragmented picture; and a man whose appearance inspires shock and pity takes us joyfully to where the boundaries of time and space ought to be but are not.

Anywhere we look in our universe, we find that reality is astoundingly complex and elusive, sometimes alien, not always easy to take, and often impossible to predict. Beyond our universe there may be an infinite number of others. The close of the twentieth century has come and gone, and no one has discovered the Theory of Everything. Where does that leave Stephen Hawking’s prediction? Can any scientific theory truly explain it all?


“Our goal is nothing less than a complete description of the universe we live in”

THE IDEA THAT all the amazing intricacy and variety we experience in the world and the cosmos may come down to something remarkably simple is not new or far-fetched. The sage Pythagoras and his followers in southern Italy in the sixth century BC studied the relationships between lengths of strings on a lyre and the musical pitches these produced, and realized that hidden behind the confusion and complexity of nature there is pattern, order, rationality. In the two and a half millennia since, our forebears have continued to find, often, like the Pythagoreans, to their surprise and awe, that nature is less complicated than it first appears.

Imagine, if you can, that you are a super-intelligent alien who has absolutely no experience of our universe: is there a set of rules so complete that by studying them you could discover exactly what our universe is like? Suppose someone handed you that rule book. Could it possibly be a short book?

For decades, many physicists believed that the rule book is not lengthy and contains a set of fairly simple principles, perhaps even just one principle that lies behind everything that has happened, is happening, and ever will happen in our universe. In 1980, Stephen Hawking made the brash claim that we would hold the rule book in our hands by the end of the twentieth century.

My family used to own a museum facsimile of an ancient board game. Archaeologists digging in the ruins of the city of Ur in Mesopotamia had unearthed an exquisite inlaid board with a few small carved pieces. It was obviously an elaborate game, but no one knew its rules. The makers of the facsimile had tried to deduce them from the design of the board and pieces, but those like ourselves who bought the game were encouraged to make our own decisions and discoveries about how to play it.

You can think of the universe as something like that: a magnificent, elegant, mysterious game. Certainly there are rules, but the rule book didn’t come with the game. The universe is no beautiful relic like the game found at Ur. Yes, it is old, but the game continues. We and everything we know about (and much we do not) are in the thick of the play. If there is a Theory of Everything, we and everything in the universe must be obeying its principles, even while we try to discover what they are.

You would expect the complete, unabridged rules for the universe to fill a vast library or supercomputer. There would be rules for how galaxies form and move, for how human bodies work and fail to work, for how humans relate to one another, for how subatomic particles interact, how water freezes, how plants grow, how dogs bark: intricate rules within rules within rules. How could anyone think this could be reduced to a few principles?

Richard Feynman, the American physicist and Nobel laureate, gave an excellent example of the way the reduction process happens. There was a time, he pointed out, when we had something we called motion and something else called heat and something else again called sound. ‘But it was soon discovered,’ wrote Feynman:

“After Sir Isaac Newton explained the laws of motion, that some of these apparently different things were aspects of the same thing. For example, the phenomena of sound could be completely understood as the motion of atoms in the air. So sound was no longer considered something in addition to motion. It was also discovered that heat phenomena are easily understandable from the laws of motion. In this way, great globs of physics theory were synthesized into a simplified theory.”

Life among the Small Pieces

All matter as we normally think of it in the universe (you and I, air, ice, stars, gases, microbes, this book) is made up of minuscule building blocks called atoms. Atoms in turn are made up of smaller objects, called particles, and a lot of empty space.

The most familiar matter particles are the electrons that orbit the nuclei of atoms and the protons and neutrons that are clustered in the nuclei. Protons and neutrons are made up of even tinier particles of matter called ‘quarks’. All matter particles belong to a class of particles called ‘fermions’, named for the great Italian physicist Enrico Fermi. They have a system of messages that pass among them, causing them to act and change in various ways. A group of humans might have a message system consisting of four different services: telephone, fax, e-mail and ‘snail mail’. Not all the humans would send and receive messages and influence one another by means of all four message services. You can think of the message system among the fermions as four such message services, called forces. There is another class of particles that carry these messages among the fermions, and sometimes among themselves as well: ‘messenger’ particles, more properly called ‘bosons’. Apparently every particle in the universe is either a fermion or a boson.

One of the four fundamental forces of nature is gravity. One way of thinking about the gravitational force holding us to the Earth is as ‘messages’ carried by bosons called gravitons between the particles of the atoms in your body and the particles of the atoms in the Earth, influencing these particles to draw closer to one another. Gravity is the weakest of the forces, but, as we’ll see later, it is a very long-range force and acts on everything in the universe. When it adds up, it can dominate all the other forces.

A second force, the electromagnetic force, is messages carried by bosons called photons among the protons in the nucleus of an atom, between the protons and the electrons nearby, and among electrons. The electromagnetic force causes electrons to orbit the nucleus. On the level of everyday experience, photons show up as light, heat, radio waves, microwaves and other waves, all known as electromagnetic radiation. The electromagnetic force is also long-range and much stronger than gravity, but it acts only on particles with an electric charge.
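To put a number on ‘much stronger than gravity’ (a worked comparison supplied here for illustration, not taken from Ferguson’s text): for a single electron and proton, the ratio of the electric attraction between them to the gravitational attraction between them works out, using the standard values of the constants, to roughly

\[
\frac{F_{\text{electric}}}{F_{\text{gravity}}}
  = \frac{e^{2}/(4\pi\varepsilon_{0}r^{2})}{G\,m_{e}m_{p}/r^{2}}
  = \frac{e^{2}}{4\pi\varepsilon_{0}\,G\,m_{e}m_{p}}
  \approx 2\times 10^{39}.
\]

The distance r cancels, because both forces fall off with the square of the distance. Gravity wins only in bulk matter, where positive and negative charges cancel each other out while mass always adds up.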

A third message service, the strong nuclear force, causes the nucleus of the atom to hold together.

A fourth, the weak nuclear force, causes radioactivity and plays a necessary role, in stars and in the early universe, in the formation of the elements.

The gravitational force, the electromagnetic force, the strong nuclear force and the weak nuclear force: the activities of these four forces are responsible for all messages among all fermions in the universe and for all interactions among them. Without the four forces, every fermion (every particle of matter) would exist, if it existed at all, in isolation, with no means of contacting or influencing any other, oblivious to every other. To put it bluntly, whatever doesn’t happen by means of one of the four forces doesn’t happen. If that is true, a complete understanding of the forces would give us an understanding of the principles underlying everything that happens in the universe. Already we have a remarkably condensed rule book.

Much of the work of physicists in the twentieth century was aimed at learning more about how the four forces of nature operate and how they are related. In our human message system, we might discover that telephone, fax and e-mail are not really so separate after all, but can be thought of as the same thing showing up in three different ways. That discovery would ‘unify’ the three message services. In a similar way, physicists have sought, with some success, to unify the forces. They hope ultimately to find a theory which explains all four forces as one showing up in different ways, a theory that may even unite both fermions and bosons in a single family. They speak of such a theory as a unified theory.

A theory explaining the universe, the Theory of Everything, must go several steps further. Of particular interest to Stephen Hawking, it must answer the question, what was the universe like at the instant of beginning, before any time whatsoever had passed? Physicists phrase that question: what are the ‘initial conditions’ or the ‘boundary conditions at the beginning of the universe’? Because this issue of boundary conditions has been and continues to be at the heart of Hawking’s work, it behooves us to spend a little time with it.

The Boundary Challenge

Suppose you put together a layout for a model railway, then position several trains on the tracks and set the switches and throttles controlling the train speeds as you want them, all before turning on the power. You have set up boundary conditions. For this session with your train set, reality is going to begin with things in precisely this state and not in any other. Where each train will be five minutes after you turn on the power, whether any train will crash with another, depends heavily on these boundary conditions.

Imagine that when you have allowed the trains to run for ten minutes, without any interference, a friend enters the room. You switch off the power. Now you have a second set of boundary conditions: the precise position of everything in the layout at the second you switched it off. Suppose you challenge your friend to try to work out exactly where all the trains started out ten minutes earlier. There would be a host of questions besides the simple matter of where the trains are standing and how the throttles and switches are set. How quickly does each of the trains accelerate and slow down? Do certain parts of the tracks offer more resistance than others? How steep are the gradients? Is the power supply constant? Is it certain there has been nothing to interfere with the running of the train set, something no longer evident?

The whole exercise would indeed be daunting. Your friend would be in something like the position of a modern physicist trying to work out how the universe began, what were the boundary conditions at the beginning of time.

Boundary conditions in science do not apply only to the history of the universe. They simply mean the lie of the land at a particular point in time, for instance the start of an experiment in a laboratory. However, unlike the situation with the train set or a lab experiment, when considering the universe, one is often not allowed to set up boundary conditions.

One of Hawking’s favourite questions is how many ways the universe could have begun and still ended up the way we observe it today, assuming that we have correct knowledge and understanding of the laws of physics and that they have not changed. He is using ‘the way we observe the universe today’ as a boundary condition and also, in a more subtle sense, using the laws of physics and the assumption that they have not changed as boundary conditions. The answer he is after is the reply to the question: what were the boundary conditions at the beginning of the universe, or the ‘initial conditions of the universe’, the exact layout at the word go, including the minimal laws that had to be in place at that moment in order to produce, at a certain time in the future, the universe as we know it today? It is in considering this question that he has produced some of his most interesting work and surprising answers.

A unified description of the particles and forces, and knowledge of the boundary conditions for the origin of the universe, would be a stupendous scientific achievement, but it would not be a Theory of Everything. In addition, such a theory must account for values that are ‘arbitrary elements’ in all present theories.

Language Lesson

Arbitrary elements include such ‘constants of nature’ as the mass and charge of the electron and the velocity of light. We observe what these are, but no theory explains or predicts them. Another example: physicists know the strength of the electromagnetic force and the weak nuclear force. The electroweak theory is a theory that unifies the two, but it cannot tell us how to calculate the difference in strength between the two forces. The difference in strength is an ‘arbitrary element’, not predicted by the theory. We know what it is from observation, and so we put it into a theory ‘by hand’. This is considered a weakness in a theory.

When scientists use the word predict, they do not mean telling the future. The question ‘Does this theory predict the speed of light?’ isn’t asking whether the theory tells us what that speed will be next Tuesday. It means, would this theory make it possible for us to work out the speed of light if it were impossible to observe what that speed is? As it happens, no present theory does predict the speed of light. It is an arbitrary element in all theories.
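For concreteness, here are a few of the constants of nature that current theories take as given rather than predict (standard figures, added here for illustration, not from Ferguson’s text):

\[
c = 299{,}792{,}458~\text{m/s}, \qquad
e \approx 1.602\times 10^{-19}~\text{C}, \qquad
m_{e} \approx 9.109\times 10^{-31}~\text{kg}.
\]

Each of these numbers goes into the theories ‘by hand’; nothing in the theories themselves says why the values are these and not others.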

One of Hawking’s concerns when he wrote A Brief History of Time was that there be a clear understanding of what is meant by a theory. A theory is not Truth with a capital T, not a rule, not fact, not the final word. You might think of a theory as a toy boat. To find out whether it floats, you set it on the water. You test it. When it flounders, you pull it out of the water and make some changes, or you start again and build a different boat, benefiting from what you’ve learned from the failure.

Some theories are good boats. They float a long time. We may know there are a few leaks, but for all practical purposes they serve us well. Some serve us so well, and are so solidly supported by experiment and testing, that we begin to regard them as truth. Scientists, keeping in mind how complex and surprising our universe is, are extremely wary about calling them that. Although some theories have a lot of experimental success to back them up, and others are hardly more than a glimmer in a theorist’s eye (brilliantly designed boats that have never been tried on the water), it is risky to assume that any of them is absolute, fundamental scientific ‘truth’.

It is important, however, not to dither around for ever, continuing to call into question well-established theories without having a good reason for doing so. For science to move ahead, it is necessary to decide whether some theories are dependable enough, and match observation sufficiently well, to allow us to use them as building blocks and proceed from there. Of course, some new thought or discovery might come along and threaten to sink the boat. We’ll see an example of that later in this book.

In A Brief History of Time Stephen Hawking wrote that a scientific theory is ‘just a model of the universe, or a restricted part of it, and a set of rules that relate quantities in the model to observations that we make. It exists only in our minds and does not have any other reality (whatever that may mean)’. The easiest way to understand this definition is to look at some examples.

There is a film clip showing Hawking teaching a class of graduate students, probably in the early 1980s, with the help of his graduate assistant. By this time Hawking’s ability to speak had deteriorated so seriously that it was impossible for anyone who did not know him well to understand him. In the clip, his graduate assistant interprets Hawking’s garbled speech to say, ‘Now it just so happens that we have a model of the universe here’, and places a large cardboard cylinder upright on the seminar table. Hawking frowns and mutters something that only the assistant can understand. The assistant apologetically picks up the cylinder and turns it over to stand on its other end. Hawking nods approval, to general laughter.

A ‘model’, of course, does not have to be something like a cardboard cylinder or a drawing that we can see and touch. It can be a mental picture or even a story. Mathematical equations or creation myths can be models.

Getting back to the cardboard cylinder, how does it resemble the universe? To make a full-fledged theory out of it, Hawking would have to explain how the model is related to what we actually see around us, to ‘observations’, or to what we might observe if we had better technology. However, just because someone sets a piece of cardboard on the table and tells how it is related to the actual universe does not mean anyone should accept this as the model of the universe. We are to consider it, not swallow it hook, line and sinker. It is an idea, existing ‘only in our minds’. The cardboard cylinder may turn out to be a useful model. On the other hand, some evidence may turn up to prove that it is not. We shall have found that we are part of a slightly different game from the one the model suggested we were playing. Would that mean the theory was ‘bad’? No, it may have been a very good theory, and everyone may have learned a great deal from considering it, testing it, and having to change it or discard it. The effort to shoot it down may have required innovative thinking and experiments that will lead to something more successful or pay off in other ways.

What is it then that makes a theory a good theory? Quoting Hawking again, it must ‘accurately describe a large class of observations on the basis of a model that contains only a few arbitrary elements, and it must make definite predictions about the results of future observations’.

For example, Isaac Newton’s theory of gravity describes a very large class of observations. It predicts the behaviour of objects dropped or thrown on Earth, as well as planetary orbits.
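As a worked illustration of the kind of prediction meant here (the numbers are the standard ones, supplied for this edition rather than taken from Ferguson’s text): Newton’s law gives the force between any two masses, and from it the acceleration of a dropped object at the Earth’s surface follows directly,

\[
F = \frac{G M m}{r^{2}}, \qquad
g = \frac{G M_{\oplus}}{R_{\oplus}^{2}}
  = \frac{(6.67\times 10^{-11})(5.97\times 10^{24})}{(6.37\times 10^{6})^{2}}
  \approx 9.8~\text{m/s}^{2}.
\]

The same formula, applied to the Sun and the planets, yields the planetary orbits.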

It’s important to remember, however, that a good theory does not have to arise entirely from observation. A good theory can be a wild theory, a great leap of imagination. ‘The ability to make these intuitive leaps is really what characterizes a good theoretical physicist,’ says Hawking. However, a good theory should not be at odds with things already observed, unless it gives convincing reasons for seeming to be at odds.

Superstring theory, one of the most exciting current theories, predicts more than three dimensions of space, a prediction that certainly seems inconsistent with observation. Theorists explain the discrepancy by suggesting the extra dimensions are curled up so small we are unable to recognize them.

We’ve already seen what Hawking means by his second requirement, that a theory contain only a few arbitrary elements.

The final requirement, according to Hawking, is that it must suggest what to expect from future observations. It must challenge us to test it. It must tell us what we will observe if the theory is correct. It should also tell us what observations would prove that it is not correct. For example, Albert Einstein’s theory of general relativity predicts that beams of light from distant stars bend a certain amount as they pass massive bodies like the sun. This prediction is testable. Tests have shown Einstein was correct.
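That ‘certain amount’ can be written down exactly. For a light beam grazing the edge of the Sun, general relativity predicts a deflection of (a standard result, quoted here for illustration)

\[
\delta = \frac{4 G M_{\odot}}{c^{2} b} \approx 1.75~\text{arcseconds},
\]

where b is the closest distance the beam passes from the Sun’s centre. Eddington’s 1919 eclipse expedition measured a deflection consistent with this value, twice what a naive Newtonian calculation gives.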

Some theories, including most of Stephen Hawking’s, are impossible to test with our present technology, perhaps even with any conceivable future technology. They are tested with mathematics. They must be mathematically consistent with what we do know and observe. But we cannot observe the universe in its earliest stages to find out directly whether his ‘no-boundary proposal’ (to be discussed later) is correct. Although some tests were proposed for proving or disproving ‘wormholes’, Hawking does not think they would succeed. But he has told us what he thinks we will find if we ever do have the technology, and he is convinced that his theories are consistent with what we have observed so far. In some cases he has risked making some very specific predictions about the results of experiments and observations that push at the boundaries of our present capabilities.

If nature is perfectly unified, then the boundary conditions at the beginning of the universe, the most fundamental particles and the forces that govern them, and the constants of nature are interrelated in a unique and completely compatible way, which we might be able to recognize as inevitable, absolute and self-explanatory. To reach that level of understanding would indeed be to discover the Theory of Everything, of Absolutely Everything: even the answer, perhaps, to the question of why the universe fits this description, to ‘know the Mind of God’, as Hawking termed it in A Brief History of Time, or ‘the Grand Design’, as he would phrase it less dramatically in a more recent book by that name.

Laying Down the Gauntlet

We are ready to list the challenges that faced any ‘Theory of Everything’ candidate when Hawking delivered his Lucasian Lecture in 1980. You’ll learn in due course how some requirements in this list have changed subtly since then.

– It must give us a model that unifies the forces and particles.

– It must answer the question, what were the ‘boundary conditions’ of the universe, the conditions at the very instant of beginning, before any time whatsoever passed?

– It must be ‘restrictive’, allowing few options. It should, for instance, predict precisely how many types of particles there are. If it leaves options, it must somehow account for the fact that we have the universe we have and not a slightly different one.

– It should contain few arbitrary elements. We would rather not have to peek too often at the actual universe for answers. Paradoxically, the Theory of Everything itself may be an arbitrary element. Few scientists expect it to explain why there should exist either a theory or anything at all for it to describe. It is not likely to answer Stephen Hawking’s question: ‘Why does the universe [or, for that matter, the Theory of Everything] go to all the bother of existing?’

– It must predict a universe like the universe we observe or else explain convincingly why there are discrepancies. If it predicts that the speed of light is ten miles per hour, or disallows penguins or pulsars, we have a problem. A Theory of Everything must find a way to survive comparison with what we observe.

– It should be simple, although it must allow for enormous complexity. The physicist John Archibald Wheeler of Princeton wrote:

“Behind it all is surely an idea so simple, so beautiful, so compelling that when in a decade, a century, or a millennium we grasp it, we will all say to each other, how could it have been otherwise? How could we have been so stupid for so long?”

The most profound theories, such as Newton’s theory of gravity and Einstein’s relativity theories, are simple in the way Wheeler described.

– It must solve the enigma of combining Einstein’s theory of general relativity (a theory that explains gravity) with quantum mechanics (the theory we use successfully when talking about the other three forces).

This is a challenge that Stephen Hawking has taken up. We introduce the problem here. You will understand it better after reading about the uncertainty principle of quantum mechanics in this chapter and about general relativity later.

Theory Meets Theory

Einstein’s theory of general relativity is the theory of the large and the very large: stars, planets, galaxies, for instance. It does an excellent job of explaining how gravity works on that level.

Quantum mechanics is the theory of the very small. It describes the forces of nature as messages among fermions (matter particles). Quantum mechanics also contains something extremely frustrating, the uncertainty principle: we can never know precisely both the position of a particle and its momentum (how it is moving) at the same time. In spite of this problem, quantum mechanics does an excellent job of explaining things on the level of the very small.

One way to combine these two great twentieth century theories into one unified theory would be to explain gravity, more successfully than has been possible so far, as an exchange of messenger particles, as we do with the other three forces. Another avenue is to rethink general relativity in the light of the uncertainty principle.

Explaining gravity as an exchange of messenger particles presents problems. When you think of the force holding you to the Earth as the exchange of gravitons (messenger particles of gravity) between the matter particles in your body and the matter particles that make up the Earth, you are describing the gravitational force in a quantum-mechanical way. But because all these gravitons are also exchanging gravitons among themselves, mathematically this is a messy business. We get infinities, mathematical nonsense.

Physical theories cannot really handle infinities. When they have appeared in other theories, theorists have resorted to something known as ‘renormalization’. Richard Feynman used renormalization when he developed a theory to explain the electromagnetic force, but he was far from pleased about it. ‘No matter how clever the word,’ he wrote, ‘it is what I would call a dippy process!’ It involves putting in other infinities and letting the infinities cancel each other out. It does sound dubious, but in many cases it seems to work in practice. The resulting theories agree with observation remarkably well.

Renormalization works in the case of electromagnetism, but it fails in the case of gravity. The infinities in the gravitational force are of a much nastier breed than those in the electromagnetic force. They refuse to go away. Supergravity, the theory Hawking spoke about in his Lucasian lecture, and superstring theory, in which the basic objects in the universe are not pointlike particles but tiny strings or loops of string, began to make promising inroads in the twentieth century; and later in this book we shall be looking at even more promising recent developments. But the problem is not completely solved.

On the other hand, what if we allow quantum mechanics to invade the study of the very large, the realm where gravity seems to reign supreme? What happens when we rethink what general relativity tells us about gravity in the light of what we know about the uncertainty principle, the principle that you can’t measure accurately the position and the momentum of a particle at the same time? Hawking’s work along these lines has had bizarre results: black holes aren’t black, and the boundary conditions may be that there are no boundaries.

While we are listing paradoxes, here’s another: empty space isn’t empty. Later in this book we’ll discuss how we arrive at that conclusion. For now be content to know that the uncertainty principle means that so-called empty space teems with particles and antiparticles. (The antimatter used in science fiction is a familiar example.)

General relativity tells us that the presence of matter or energy makes spacetime curve, or warp. We’ve already mentioned one result of that curvature: the bending of light beams from distant stars as they pass a massive body like the sun.

Keep those two points in mind: (1) ‘Empty’ space is filled with particles and antiparticles, adding up to an enormous amount of energy. (2) The presence of this energy causes curvature of spacetime.

If both are true, the entire universe ought to be curled up into a small ball. This hasn’t happened. When general relativity and quantum mechanics work together, what they predict seems to be dead wrong. Both general relativity and quantum mechanics are exceptionally good theories, two of the outstanding intellectual achievements of the twentieth century. They serve us magnificently, not only for theoretical purposes but in many practical ways. Nevertheless, put together they yield infinities and nonsense. The Theory of Everything must somehow resolve that nonsense.
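How wrong is ‘dead wrong’? A commonly quoted estimate, added here for scale rather than taken from Ferguson’s text, is that a naive quantum calculation of the energy in ‘empty’ space exceeds the value allowed by the observed curvature of the universe by something like

\[
\frac{\rho_{\text{vacuum}}^{\text{quantum estimate}}}{\rho_{\text{vacuum}}^{\text{observed}}} \sim 10^{120},
\]

a discrepancy sometimes called the worst prediction in the history of physics.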

Predicting the Details

Once again imagine that you are an alien who has never seen our universe. With the Theory of Everything you ought nevertheless to be able to predict everything about it, right? It’s possible you can predict suns and planets and galaxies and black holes and quasars, but can you predict next year’s Derby winner? How specific can you be? Not very. The calculations necessary to study all the data in the universe are ludicrously far beyond the capacity of any imaginable computer. Hawking points out that although we can solve the equations for the movement of two bodies in Newton’s theory of gravity, we can’t solve them exactly for three bodies, not because Newton’s theory doesn’t work for three bodies but because the maths is too complicated. The real universe, needless to say, has more than three bodies in it.
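A minimal sketch of what ‘too complicated’ means in practice, with made-up masses and starting positions (nothing here comes from Ferguson’s text): because no exact formula exists for three gravitating bodies, physicists step the equations forward numerically, one small interval of time after another.

import numpy as np

G = 1.0                                      # gravitational constant, in arbitrary units
masses = np.array([1.0, 1.0, 1.0])           # three equal masses (an illustrative assumption)
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])    # made-up starting positions
vel = np.array([[0.0, 0.3], [0.0, -0.3], [0.2, 0.0]])   # made-up starting velocities

def accelerations(p):
    # Newtonian attraction of each body towards the other two
    acc = np.zeros_like(p)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = p[j] - p[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

dt = 0.001
for _ in range(10000):                       # ten thousand small steps of the clock
    vel = vel + 0.5 * dt * accelerations(pos)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accelerations(pos)

print(pos)                                   # approximate positions after that much time

For two bodies the orbit can be written down once and for all; for three or more, this kind of step-by-step approximation is essentially all we have, and the real universe has vastly more than three bodies.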

Nor can we predict our health, although we understand the principles that underlie medicine, the principles of chemistry and biology, extremely well. The problem again is that there are too many billions upon billions of details in a real-life system, even when that system is just one human body.

With the Theory of Everything in our hands we’d still be a staggeringly long way from predicting everything. Even if the underlying principles are simple and well understood, the way they work out is enormously complicated. ‘A minute to learn, the lifetime of the universe to master’, to paraphrase an advertising slogan. ‘Lifetime of the universe to master’ is a gross understatement.

Where does that leave us? What horse will win the Grand National next year is predictable with the Theory of Everything, but no computer can hold all the data or do the maths to make the prediction. Is that correct?

There’s a further problem. We must look again at the uncertainty principle of quantum mechanics.

The Fuzziness of the Very Small

At the level of the very small, the quantum level of the universe, the uncertainty principle also limits our ability to predict.

Think of all those odd, busy inhabitants of the quantum world, both fermions and bosons. They’re an impressive zoo of particles. Among the fermions there are electrons, protons and neutrons. Each proton or neutron is, in turn, made up of three quarks, which are also fermions. Then we have the bosons: photons (messengers of the electromagnetic force), gravitons (the gravitational force), gluons (the strong force), and Ws and Zs (the weak force). It would be helpful to know where all these and many others are, where they are going, and how quickly they are getting there. Is it possible to find out?

The diagram of an atom (fig 2.1) is the model proposed by New Zealander Ernest Rutherford at the Cavendish Labs in Cambridge early in the twentieth century. It shows electrons orbiting the nucleus of the atom as planets orbit the sun. We now know that things never really look like this on the quantum level. The orbits of electrons cannot be plotted as though electrons were planets. We do better to picture them swarming in a cloud around the nucleus. Why the blur?

The uncertainty principle makes life at the quantum level a fuzzy, imprecise affair, not only for electrons but for all the particles.

Regardless of how we go about trying to observe what happens, it is impossible to find out precisely both the momentum and the position of a particle at the same time. The more accurately we measure how the particle is moving, the less accurately we know its position, and vice versa.

It works like a seesaw: when the accuracy of one measurement goes up, the accuracy of the other must go down. We pin down one measurement only by allowing the other to become more uncertain.
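The seesaw has an exact mathematical form, Heisenberg’s uncertainty relation (written out here for reference; the text describes it only in words):

\[
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2},
\qquad \hbar \approx 1.055\times 10^{-34}~\text{J\,s},
\]

so the product of the two uncertainties can never fall below a fixed, very small number; squeezing one side down forces the other up.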

The best way to describe the activity of a particle is to study all the possible ways it might be moving and then calculate how likely one way is as opposed to another. It becomes a matter of probabilities. A particle has this probability to be moving that way or it has that probability to be here. Those probabilities are nevertheless very useful information.

It’s a little like predicting the outcome of elections. Election poll experts work with probabilities. When they deal with large enough numbers of voters, they come up with statistics that allow them to predict who will win the election and by what margin, without having to know how each individual will vote. When quantum physicists study a large number of possible paths that particles might follow, the probabilities of their moving thus and so or of being in one place rather than another become concrete information.

Pollsters admit that interviewing an individual can influence a vote by causing the voter to become more aware of issues. Physicists have a similar dilemma. Probing the quantum level influences the answers they find.

Thus far the comparison between predicting elections and studying the quantum level seems a good one. Now it breaks down: on election day, each voter does cast a definite vote one way or another, secret perhaps but not uncertain. If pollsters placed hidden cameras in voting booths (and were not arrested), they could find out how each individual voted. It is not like that in quantum physics. Physicists have devised ingenious ways of sneaking up on particles, all to no avail. The world of elementary particles does not just seem uncertain because we haven’t been clever enough to find a successful way to observe it. It really is uncertain. No wonder Hawking, in his Lucasian lecture, called quantum mechanics ‘a theory of what we do not know and cannot predict’.

Taking this limitation into account, physicists have redefined the goal of science: the Theory of Everything will be a set of laws that make it possible to predict events up to the limit set by the uncertainty principle, and that means in many cases satisfying ourselves with statistical probabilities, not specifics.

Hawking sums up our problem. In answer to the question of whether everything is predetermined either by the Theory of Everything or by God, he says yes, he thinks it is. ‘But it might as well not be, because we can never know what is determined. If the theory has determined that we shall die by hanging, then we shall not drown. But you would have to be awfully sure that you were destined for the gallows to put to sea in a small boat during a storm.’ He regards the idea of free will as ‘a very good approximate theory of human behaviour’.

Is There Really a Theory of Everything?

Not all physicists believe there is a Theory of Everything, or, if there is, that it is possible for anyone to find it. Science may go on refining what we know by making discovery after discovery, opening boxes within boxes but never arriving at the ultimate box. Others argue that events are not entirely predictable but happen in a random fashion. Some believe God and human beings have far more freedom of give-and-take within this creation than a deterministic Theory of Everything would allow. They believe that, as in the performance of a great piece of orchestral music, though the notes are written down, there may yet be enormous creativity in the playing of the notes that is not at all predetermined.

Whether a complete theory to explain the universe is within our reach or ever will be, there are those among us who want to make a try. Humans are intrepid beings with insatiable curiosity. Some, like Stephen Hawking, are particularly hard to discourage. One spokesman for those who are engaged in this science, Murray Gell-Mann, described the quest:

“It is the most persistent and greatest adventure in human history, this search to understand the universe, how it works and where it came from. It is difficult to imagine that a handful of residents of a small planet circling an insignificant star in a small galaxy have as their aim a complete understanding of the entire universe, a small speck of creation truly believing it is capable of comprehending the whole.”

The advertising slogan for the game Othello is ‘A minute to learn, a lifetime to master’.

‘Equal to anything!’

WHEN STEPHEN HAWKING was twelve years old, two of his schoolmates made a bet about his future. John McClenahan bet that Stephen ‘would never come to anything’; Basil King, that he would ‘turn out to be unusually capable’. The stake was a bag of sweets.

Young S. W. Hawking was no prodigy. Some reports claim he was brilliant in a haphazard way, but Hawking remembers that he was just another ordinary English schoolboy, slow to learn to read, his handwriting the despair of his teachers. He ranked no more than halfway up in his school class, though he now says, in his defence, ‘It was a very bright class.’ Someone might have predicted a career in science or engineering from the fact that Stephen was intensely interested in learning the secrets of how things such as clocks and radios work. He took them apart to find out, but he could seldom put them back together. Stephen was never well coordinated physically, and he was not keen on sports or other physical activities. He was almost always the last to be chosen for any sports team.

John McClenahan had good reason to think he would win the wager.

Basil King probably was just being a loyal friend or liked betting on long shots. Maybe he did see things about Stephen that teachers, parents and Stephen himself couldn’t see. He hasn’t claimed his bag of sweets, but it’s time he did. Because Stephen Hawking, after such an unexceptional beginning, is now one of the intellectual giants of our modern world and among its most heroic figures. How such transformations happen is a mystery that biographical details alone cannot explain. Hawking would have it that he is still ‘just a child who has never grown up. I still keep asking these how and why questions. Occasionally I find an answer.’

1942 – 1959

Stephen William Hawking was born during the Second World War, on 8 January 1942, in Oxford. It was a winter of discouragement and fear, not a happy time to be born. Hawking likes to recall that his birth was exactly three hundred years after the death of Galileo, who is called the father of modern science. But few people in January 1942 were thinking about Galileo.

Stephen’s parents, Frank and Isobel Hawking, were not wealthy. Frank’s very prosperous Yorkshire grandfather had over-extended himself buying farm land and then gone bankrupt in the great agricultural depression of the early twentieth century. His resilient wife, Frank’s grandmother and Stephen’s great-grandmother, saved the family from complete ruin by opening a school in their home. Her ability and willingness to take this unusual step are evidence that reading and education must already have been a high priority in the family.

Isobel, Stephen’s mother, was the second oldest of seven children. Her father was a family doctor in Glasgow. When Isobel was twelve, they moved to Devon.

It wasn’t easy for either family to scrape together money to send a child to Oxford, but in both cases they did. Taking on a financial burden of this magnitude was especially unusual in the case of Isobel’s parents, for few women went to university in the 1930s. Though Oxford had been admitting female students since 1878, it was only in 1920 that the university had begun granting degrees to women. Isobel’s studies ranged over an unusually wide curriculum in a university where students tended to be much more specialized than in an American liberal arts college or university. She studied philosophy, politics and economics.

Stephen’s father Frank was a meticulous, determined young man who kept a journal every day from the age of fourteen and would continue it until the end of his life. He was at Oxford earlier than Isobel, studying medical science with a speciality in tropical medicine. When the Second World War broke out he was in East Africa doing field research, and he intrepidly found his way overland to take ship for England and volunteer for military service. He was assigned instead to medical research.

Isobel held several jobs after graduation from Oxford, all of them beneath her ability and credentials as a university graduate. One was as an inspector of taxes. She so loathed it that she gave it up in disgust to become a secretary at a medical institute in Hampstead. There she met Frank Hawking. They were married in the early years of the war.

In January 1942 the Hawkings were living in Highgate, north London. In the London area hardly a night passed without air raids, and Frank and Isobel Hawking decided Isobel should go to Oxford to give birth to their baby in safety. Germany was not bombing Oxford or Cambridge, the two great English university towns, reputedly in return for a British promise not to bomb Heidelberg and Göttingen. In Oxford, the city familiar from her recent university days, Isobel spent the final week of her pregnancy first in a hotel and then, as the birth grew imminent and the hotel grew nervous, in hospital, but she was still able to go out for walks to fill her time. On one of those leisurely winter days, she happened into a bookshop and, with a book token, bought an astronomical atlas. She would later regard this as a rather prophetic purchase.

Not long after Stephen’s birth on 8 January his parents took him back to Highgate. Their home survived the war, although a V-2 rocket hit a few doors away when the Hawkings were absent, blowing out the back windows of their house and leaving glass shards sticking out of the opposite wall like little daggers. It had been a good moment to be somewhere else.

After the war the family lived in Highgate until 1950. Stephen’s sister Mary was born there in 1943 (when Stephen was less than two years old), and a second daughter, Philippa, arrived in 1946. The family would adopt another son, Edward, in 1955, when Stephen was a teenager. In Highgate Stephen attended the Byron House School, whose ‘progressive methods’ he would later blame for his not learning to read until after he left there.

When Dr Frank Hawking, beginning to be recognized as a brilliant leader in his field, became head of the Division of Parasitology at the National Institute for Medical Research, the family moved to St Albans.

Eccentric in St Albans

The Hawkings were a close family. Their home was full of good books and good music, often reverberating with the operas of Richard Wagner played at high volume on the record player. Frank and Isobel Hawking believed strongly in the value of education, a good bit of it occurring at home. Frank gave his children a grounding in, among other things, astronomy and surveying, and Isobel took them often to the museums in South Kensington, where each child had a favourite museum and none had the slightest interest in the others’ favourites. She would leave Stephen in the Science Museum and Mary in the Natural History Museum, and then stay with Philippa, too young to be left alone, at the Victoria and Albert. After a while she would collect them all again.

In St Albans the Hawkings were regarded as a highly intelligent, eccentric family. Their love of books extended to such compulsive reading habits that Stephen’s friends found it odd and a little rude of his family to sit at the dining table, uncommunicative, their noses buried in their books. Reports that the family car was a used hearse are false. For many years the Hawkings drove around in a succession of used London taxis of the black, boxlike sort. This set them apart not only because of the nature of the vehicle, but also because after the war cars of any kind were not easily available. Only families who were fairly wealthy had them at all. Frank Hawking installed a table in the back of the taxi, between the usual bench seat and the fold-down seats, so that Stephen and his siblings could play cards and games. The car and the game table were put to especially good use getting to their usual holiday location, a painted gypsy caravan and an enormous army tent set up in a field at Osmington Mills, in Dorset. The Hawking campsite was only a hundred yards from the beach. It was a rocky beach, not sand, but it was an interesting part of the coast, smuggler territory in a past age.

In the post-war years it was not unusual for families to live frugally with few luxuries, unable to afford home repairs, and, out of generosity or financial constraint, house more than two generations under one roof. But the Hawkings, though their house in St Albans was larger than many British homes, carried frugality and disrepair to an extreme. In this three-storey, strangely put-together redbrick dwelling, Frank kept bees in the cellar, and Stephen’s Scottish grandmother lived in the attic, emerging regularly to play the piano spectacularly well for local folk dances. The house was in dire need of work when the Hawkings moved in, and it stayed that way. According to Stephen’s adopted younger brother Edward, ‘It was a very large, dark house, really rather spooky, rather like a nightmare.’ The leaded stained glass at the front door must originally have been beautiful but was missing pieces. The front hall was lit only by a single bulb and its fine authentic William Morris wall covering had darkened. A greenhouse behind the rotting porch lost panes whenever there was a wind. There was no central heating, the carpeting was sparse, broken windows were not replaced. The books, packed two deep on shelves all over the house, added a modicum of insulation.

Frank Hawking would brook no complaints. One had only to put on more clothes in winter, he insisted. Frank himself was often away on research trips to Africa during the coldest months. Stephen’s sister Mary recalls thinking that fathers were ‘like migratory birds. They were there for Christmas and then they vanished until the weather got warm.’ She thought that fathers of her friends who didn’t disappear were ‘a bit odd’.

The house lent itself to imaginative escapades. Stephen and Mary competed in finding ways to get in, some of them so secret that Mary was never able to discover more than ten of the eleven that Stephen managed to use. As if one such house were not enough, Stephen had another imaginary one in an imaginary place he called Drane. It seemed he did not know where this was, only that it existed. His mother became a little frantic, so determined was he to take a bus to find it, but later, when they visited Kenwood House in Hampstead Heath, she heard him declare that this was it, the house he had seen in a dream.

‘Hawkingese’ was the name Stephen’s friends gave the Hawking ‘family dialect’. Frank Hawking himself had a stutter, and Stephen and his siblings spoke so rapidly at home that they also stumbled over their words and invented their own oral shorthand. That did not prevent Stephen from being, according to his mother, ‘always extremely conversational’. He was also ‘very imaginative, loved music and acting in plays’, also ‘rather lazy’ but ‘a self-educator from the start, like a bit of blotting paper, soaking it all up’. Part of the reason for his lack of distinction in school was that he could not be bothered with things he already knew or decided he had no need to know.

Stephen had a rather commanding nature in spite of being smaller than most of his classmates. He was well organized and capable of getting other people organized. He was also known as something of a comedian. Getting knocked around by larger boys didn’t bother him much, but he had his limits, and he could, when driven to it, turn rather fierce and daunting. His friend Simon Humphrey had a heftier build than Stephen, but Simon’s mother recalled that it was Stephen, not Simon, who on one memorable occasion swung around with his fists clenched to confront the much larger bullies who were teasing them. ‘That’s the sort of thing he did; he was equal to anything.’

The eight-year-old Stephen’s first school in St Albans was the High School for Girls, curiously named since its students included young children well below ‘high school’ age, and its Michael House admitted boys. A seven-year-old named Jane Wilde, in a class somewhat younger than Stephen’s, noticed the boy with ‘floppy golden brown hair’ as he sat ‘by the wall in the next door classroom’, but she didn’t meet him. She would later become his wife.

Stephen attended that school for only a few months, until Frank needed to stay in Africa longer than usual and Isobel accepted an invitation to take the children for four months to Majorca, off the east coast of Spain. Balmy, beautiful Majorca, the home of Isobel’s friend from her Oxford days, Beryl, and Beryl’s husband, the poet Robert Graves, was an enchanting place to spend the winter. Education was not entirely neglected, for there was a tutor for Stephen and the Graveses’ son William.

Back in St Albans after this idyllic hiatus, Stephen went for one year to Radlett, a private school, and then did well enough in his tests to qualify for a place at the more selective St Albans School, also a private school, in the shadow of the Cathedral. Though in his first year at St Albans he managed to rank no better than an astonishing third from the bottom of his class, his teachers were beginning to perceive that he was more intelligent than he was demonstrating in the classroom. His friends dubbed him ‘Einstein’, either because he seemed more intelligent than they or because they thought he was eccentric. Probably both. His friend Michael Church remembers that he had a sort of ‘overarching arrogance, some overarching sense of what the world was about’.

‘Einstein’ soon rose in ranking to about the middle of the class. He even won the Divinity prize one year. From Stephen’s earliest childhood, his father had read him stories from the Bible. ‘He was quite well versed in religious things,’ Isobel later told an interviewer. The family often enjoyed having theological debates, arguing quite happily for and against the existence of God.

Undeterred by a low class placing, ever since the age of eight or nine Stephen had been thinking more and more seriously about becoming a scientist. He was addicted to questioning how things worked and trying to find out. It seemed to him that in science he could find out the truth, not only about clocks and radios but also about everything else around him. His parents planned that at thirteen he would go to Westminster School. Frank Hawking thought his own advancement had suffered because of his parents’ poverty and the fact that he had not attended a prestigious school. Others with less ability but higher social standing had got ahead of him, or so he felt. Stephen was to have something better.

The Hawkings could not afford Westminster unless Stephen won a scholarship. Unfortunately, he was prone at this age to recurring bouts of a low fever, diagnosed as glandular fever, that sometimes was serious enough to keep him home from school in bed. As bad luck would have it, he was ill at the time of the scholarship examination. Frank’s hopes were dashed and Stephen continued at St Albans School, but he believes his education there was at least as good as the one he would have received at Westminster.

Stephen, age 14.

After the Hawkings adopted Edward in 1955, Stephen was no longer the only male sibling. Stephen accepted his new younger brother in good grace. He was, according to Stephen, ‘probably good for us. He was a rather difficult child, but one couldn’t help liking him.’

Continuing at St Albans School rather than heading off to Westminster had one distinct advantage. It meant being able to continue growing up in a little band of close friends who shared with Stephen such interests as the hazardous manufacture of fireworks in the dilapidated greenhouse and inventing board games of astounding complexity, and who relished long discussions on a wide range of subjects. Their game ‘Risk’ involved railways, factories, manufacturing, and its own stock exchange, and took days of concentrated play to finish. A feudal game had dynasties and elaborate family trees. According to Michael Church, there was something that particularly intrigued Stephen about conjuring up these worlds and setting down the laws that governed them. John McClenahan’s father had a workshop where he allowed John and Stephen to construct model aeroplanes and boats, and Stephen later remarked that he liked ‘to build working models that I could control. Since I began my Ph.D., this need has been met by my research into cosmology. If you understand how the universe operates, you control it in a way.’ In a sense, Hawking’s grown-up models of the universe stand in relation to the ‘real’ universe in the same way his childhood model aeroplanes and boats stood in relation to real aeroplanes and boats. They give an agreeable, comforting feeling of control while, in actuality, representing no control at all.

Stephen was fifteen when he learned that the universe was expanding. This shook him. ‘I was sure there must be some mistake,’ he says. ‘A static universe seemed so much more natural. It could have existed and could continue to exist for ever. But an expanding universe would change with time. If it continued to expand, it would become virtually empty. That was disturbing.’

Like many other teenagers of their generation, Stephen and his friends became fascinated with extrasensory perception (ESP). They tried to dictate the throw of dice with their minds. However, Stephen’s interest turned to disgust when he attended a lecture by someone who had investigated famous ESP studies at Duke University in the United States. The lecturer told his audience that whenever the experiments got results, the experimental techniques were faulty, and whenever the experimental techniques were not faulty, they got no results. Stephen concluded that ESP was a fraud. His scepticism about claims for psychic phenomena has not changed. To his way of thinking, people who believe such claims are stalled at the level where he was at the age of fifteen.

Ancestor of ‘Cosmos’

Probably the best of all the little group’s adventures and achievements, and one that captured the attention and admiration of the entire town of St Albans, was building a computer that they called LUCE (Logical Uniselector Computing Engine). Cobbled together out of recycled pieces of clocks and other mechanical and electrical items, including an old telephone switchboard, LUCE could perform simple mathematical functions. Unfortunately that teenage masterpiece no longer exists. Whatever remained of it was thrown away eventually when a new head of computing at St Albans went on a cleaning spree.

The most advanced version of LUCE was the product of Stephen’s and his friends’ final years of school before university. They were having to make hard choices about the future. Frank Hawking encouraged his son to follow him into medicine. Stephen’s sister Mary would do that, but Stephen found biology too imprecise to suit him. Biologists, he thought, observed and described things but didn’t explain them on a fundamental level. Biology also involved detailed drawings, and he wasn’t good at drawing. He wanted a subject in which he could look for exact answers and get to the root of things. If he’d known about molecular biology, his career might have been very different. At fourteen, particularly inspired by a teacher named Mr Tahta, he had decided that what he wanted to do was ‘mathematics, more mathematics, and physics’.

Stephen’s father insisted this was impractical. What jobs were there for mathematicians other than teaching? Moreover he wanted Stephen to attend his own college, University College, Oxford, and at ‘Univ’ one could not read mathematics. Stephen followed his father’s advice and began boning up on chemistry, physics and only a little maths, in preparation for entrance to Oxford. He would apply to Univ to study mainly physics and chemistry.

In 1959, during Stephen’s last year before leaving home for university, his mother Isobel and the three younger children accompanied Frank when he journeyed to India for an unusually lengthy research project. Stephen stayed in St Albans and lived for the year with the family of his friend Simon Humphrey. He continued to spend a great deal of time improving LUCE, though Dr Humphrey interrupted regularly to insist he write letters to his family, something Stephen on his own would have happily neglected. But the main task of that year had to be studying for scholarship examinations coming up in March. It was essential that Stephen perform extremely well in these examinations if there was to be even an outside chance of Oxford’s accepting him.

Students who rank no higher than halfway up in their school class seldom get into Oxford unless someone pulls strings behind the scenes. Stephen’s lacklustre performance in school gave Frank Hawking plenty of cause to think he had better begin pulling strings. Stephen’s headmaster at St Albans also had his doubts about Stephen’s chances of acceptance and a scholarship, and he suggested Stephen might wait another year. He was young to be applying to university. The two other boys planning to take the exams with him were a year older. However, both headmaster and father had underestimated Stephen’s intelligence and knowledge, and his capacity to rise to a challenge. He achieved nearly perfect marks in the physics section of the entrance examinations. His interview at Oxford with the Master of University College and the physics tutor, Dr Robert Berman, went so well there was no question but that he would be accepted to read physics and be given a scholarship. A triumphant Stephen joined his family in India for the end of their stay.

Not a Grey Man

In October 1959, aged seventeen, Hawking went up to Oxford to enter University College, his father’s college. ‘Univ’ is in the heart of Oxford, on the High Street. Founded in 1249, it is the oldest of the many colleges that together make up the University. Stephen would study natural science, with an emphasis on physics. By this time he had come to consider mathematics not as a subject to be studied for itself but as a tool for doing physics and learning how the universe behaves. He would later regret that he had not exerted more effort mastering that tool.

Oxford’s architecture, like Cambridge’s, is a magnificent hodge-podge of every style since the Middle Ages. Its intellectual and social traditions predate even its buildings and, like those of any great university, are a mix of authentic intellectual brilliance, pretentious fakery, innocent tomfoolery and true decadence. For a young man interested in any of these, Stephen’s new environment had much to offer. Nevertheless, for about a year and a half, he was lonely and bored. Many students in his year were considerably older than he, not only because he had sat his examinations early but because others had taken time off for national service. He was not inspired to relieve his boredom by exerting himself academically. He had discovered he could get by better than most by doing virtually no studying at all.

Contrary to their reputation, Oxford tutorials are often not one-to-one but two or three students with one tutor. A young man named Gordon Berry became Hawking’s tutorial partner. They were two of only four physics students who entered Univ that Michaelmas (autumn) term of 1959. This small group of newcomers, Berry, Hawking, Richard Bryan and Derek Powney, spent most of their time together, somewhat isolated from the rest of the College.

It wasn’t until he was halfway through his second year that Stephen began enjoying Oxford. When Robert Berman describes him, it’s difficult to believe he’s speaking of the same Stephen Hawking who seemed so ordinary a few years earlier and so bored the previous year. ‘He did, I think, positively make an effort to sort of come down to the other students’ level and, you know, be one of the boys. If you didn’t know about his physics and to some extent his mathematical ability, he wouldn’t have told you. He was very popular.’ Others who remember Stephen in his second and third years at Oxford describe him as lively, buoyant and adaptable. He wore his hair long, was famous for his wit, and liked classical music and science fiction.

The attitude among most Oxford students in those days, Hawking remembers, was ‘very antiwork’: ‘You were supposed either to be brilliant without effort, or to accept your limitations and get a fourth-class degree. To work hard to get a better class of degree was regarded as the mark of a grey man, the worst epithet in the Oxford vocabulary.’ Stephen’s freewheeling, independent spirit and casual attitude towards his studies fitted right in. In a typical incident one day in a tutorial, after reading a solution he had worked out, he crumpled up the paper disdainfully and propelled it across the room into the wastepaper basket.

The physics curriculum, at least for someone with Hawking’s abilities, could be navigated successfully without rising above this blasé approach. Hawking described it as ‘ridiculously easy. You could get through without going to any lectures, just by going to one or two tutorials a week. You didn’t need to remember many facts, just a few equations.’ You could also, it seems, get through without spending very much time doing experiments in the laboratory. Gordon and he found ways to use shortcuts in taking data and fake parts of the experiments. ‘We just didn’t apply ourselves,’ remembers Berry. ‘And Steve was right down there in not applying himself.’

Derek Powney tells the story of the four of them receiving an assignment having to do with electricity and magnetism. There were thirteen questions, and their tutor, Dr Berman, told them to finish as many as they could in the week before the next tutorial. At the end of the week Richard Bryan and Derek had managed to solve one and a half of the problems; Gordon only one. Stephen had not yet begun. On the day of the tutorial Stephen missed three morning lectures in order to work on the questions, and his friends thought he was about to get his comeuppance. His bleak announcement when he joined them at noon was that he had been able to solve only ten. At first they thought he was joking, until they realized he had done ten. Derek’s comment was that this was the moment Stephen’s friends recognized ‘that it was not just that we weren’t in the same street, we weren’t on the same planet’. ‘Even in Oxford, we must all have been remarkably stupid by his standards.’

His friends were not the only ones who sometimes found his intelligence impressive. Dr Berman and other dons were also beginning to recognize that Hawking had a brilliant mind, ‘completely different from his contemporaries’. ‘Undergraduate physics was simply not a challenge for him. He did very little work, really, because anything that was do-able he could do. It was only necessary for him to know something could be done, and he could do it without looking to see how other people did it. Whether he had any books I don’t know, but he didn’t have very many, and he didn’t take notes. I’m not conceited enough to think that I ever taught him anything.’ Another tutor called him the kind of student who liked finding mistakes in the textbooks better than working out the problems.

The Oxford physics course was scheduled in a way that made it easy not to see much urgent need for work. It was a three-year course with no exams until the end of the third year. Hawking calculates he spent on the average about one hour per day studying: about one thousand hours in three years. ‘I’m not proud of this lack of work,’ he says. ‘I’m just describing my attitude at the time, which I shared with most of my fellow students: an attitude of complete boredom and feeling that nothing was worth making an effort for. One result of my illness has been to change all that: when you are faced with the possibility of an early death, it makes you realize that life is worth living, and that there are lots of things you want to do.’

One major explanation why Stephen’s spirits improved dramatically in the middle of his second year was that he and Gordon Berry joined the college Boat Club. Neither of them was a hefty hunk of the sort who make the best rowers. But both were light, wiry, intelligent and quick, with strong, commanding voices, and these are the attributes that college boat clubs look for when recruiting a coxswain (cox), the person who sits looking forward, facing the line of four or eight rowers, and steers the boat with handles attached to the rudder. The position of cox is definitely a position of control, something that Hawking has said appealed to him with model boats, aeroplanes and universes: a man of slight build commanding eight muscle-men.

Stephen exerted himself far more on the river, rowing and coxing for Univ, than he did at his studies. One sure way to be part of the ‘in’ crowd at Oxford was to be a member of your college rowing team. If intense boredom and a feeling that nothing was worth making an effort for were the prevailing attitudes elsewhere, all that changed on the river. Rowers, coxes and coaches regularly assembled at the boathouse at dawn, even when there was a crust of ice on the river, to perform arduous calisthenics and lift the racing shell into the water. The merciless practice went on in all weather, up and down the river, coaches bicycling along the towpath exhorting their crews. On race days emotions ran high and crowds of rowdy well-wishers sprinted along the banks of the river to keep up with their college boats. There were foggy race days when boats appeared and vanished like ghosts, and drenching race days when water filled the bottom of the boat. Boat club dinners in formal dress in the college hall lasted late and ended in battles of wine-soaked linen napkins.

All of it added up to a stupendous feeling of physical well-being, camaraderie, all-stops-out effort, and of living college life to the hilt. Stephen became a popular member of the boating crowd. At the level of intercollege competition he did well. He’d never before been good at a sport, and this was an exhilarating change. The College Boatsman of that era, Norman Dix, remembered him as an ‘adventurous type; you never knew quite what he was going to do’. Broken oars and damaged boats were not uncommon as Stephen steered tight corners and attempted to take advantage of narrow manoeuvring opportunities that other coxes avoided.

At the end of the third year, however, examinations suddenly loomed larger than any boat race. Hawking almost floundered. He’d settled on theoretical physics as his speciality. That meant a choice between two areas for graduate work: cosmology, the study of the very large; or elementary particles, the study of the very small. Hawking chose cosmology. ‘It just seemed that cosmology was more exciting, because it really did seem to involve the big question: Where did the universe come from?’

Fred Hoyle, the most distinguished British astronomer of his time, was at Cambridge. Stephen had become particularly enthusiastic about the idea of working with Hoyle when he took a summer course with one of Hoyle’s most outstanding graduate students, Jayant Narlikar. Stephen applied to do Ph.D. research at Cambridge and was accepted with the condition that he get a First from Oxford.

One thousand hours of study was meagre preparation for getting a First. However, an Oxford examination offers a choice from many questions and problems. Stephen was confident he could get through successfully by doing problems in theoretical physics and avoiding any questions that required knowledge of facts. As the examination day approached, his confidence faltered. He decided, as a fail-safe, to take the Civil Service exams and apply for a job with the Ministry of Works.

The night before his Oxford examinations Stephen was too nervous to sleep. The examination went poorly. He was to take the Civil Service exams the next morning, but he overslept and missed them. Now everything hung on his Oxford results.

As Stephen and his friends waited on tenterhooks for their results to be posted, only Gordon was confident he had done well in his examinations, well enough for a First, he believed. Gordon was wrong. He and Derek received Seconds, Richard a disappointing Third. Stephen ended up disastrously on the borderline between a First and a Second.

Faced with a borderline result, the examiners summoned Hawking for a personal interview, a ‘viva’. They questioned him about his plans. In spite of the tenseness of the situation, with his future hanging in the balance, Stephen managed to come up with the kind of remark for which he was famous among his friends: ‘If I get a First, I shall go to Cambridge. If I receive a Second, I will remain at Oxford. So I expect that you will give me a First.’ He got his First. Dr Berman said of the examiners: ‘They were intelligent enough to realize they were talking to someone far cleverer than most of themselves.’

That triumph notwithstanding, all was not well. Hawking’s adventures as a cox, his popularity, and his angst about his exams had pushed into the background a problem that he had first begun to notice that year and that refused to go away. ‘I seemed to be getting more clumsy, and I fell over once or twice for no apparent reason,’ he remembers. The problem had even invaded his halcyon existence on the river when he began to have difficulty sculling (rowing a one-man boat). During his final Oxford term, he tumbled down the stairs and landed on his head. His friends spent several hours helping him overcome a temporary loss of short- and long-term memory, insisted he go to a doctor to make sure no serious damage had been done, and encouraged him to take a Mensa intelligence test to prove to them and to himself that his mind was not affected. All seemed well, but they found it difficult to believe that his fall had been a simple accident.

There was indeed something amiss, though not as a result of his tumble and not with his mind. That summer, on a trip he and a friend took to Persia (now Iran), he became seriously ill, probably from a tourist stomach problem or a reaction to the vaccines required for the trip. It was a harrowing journey in other ways, more harrowing for his family back home than for Stephen. They lost touch with him for three weeks, during which time there was a serious earthquake in the area where he was travelling. Stephen, as it turned out, had been so ill and riding on such a bumpy bus that he didn’t notice the earthquake at all.

He finally got back home, depleted and unwell. Later there would be speculation about whether a non-sterile smallpox vaccination prior to the trip had caused his illness in Persia and also his ALS, but the latter had, in fact, begun earlier. Nevertheless, because of his illness in Persia and the increasingly troubling symptoms he was experiencing, Stephen arrived at Cambridge a more unsettled and weaker twenty-year-old than he had been at Oxford the previous spring. He moved into Trinity Hall for the Michaelmas term in the autumn of 1962.

During the summer before Stephen left for Cambridge, Jane Wilde saw him while she was out walking with her friends in St Albans. He was a ‘young man with an awkward gait, his head down, his face shielded from the world under an unruly mass of straight brown hair, immersed in his own thoughts, looking neither right nor left, lolloping along in the opposite direction’. Jane’s friend Diana King, sister of Stephen’s friend Basil King, astonished her friends by telling them that she had gone out with him. ‘He’s strange but very clever. He took me to the theatre once. He goes on Ban the Bomb marches.’



Stephen Hawking: His Life and Work – Kitty Ferguson


Ten Years after Bear Stearns, U.S. Financial Stability is again in Danger – Katharina Pistor * Waiting for the Chinese Bear Stearns – Daniela Gabor.

Banks are pushing for deregulation and rollbacks of Dodd-Frank’s regular check-ups on their financial health. We should be worried.

Katharina Pistor

The great financial crisis with its peak in the fall of 2008 was not inevitable; it could and should have been prevented. Had it been, the economy would not have lost trillions of dollars, millions of Americans would have been spared eviction and unemployment, and the U.S.’s vibrant financial sector would have remained intact.

But few saw a crisis of these proportions coming: Almost no one raised red flags when the first mortgage originators filed for bankruptcy, or when a roster of medium-sized banks experienced liquidity shortages. Even when margin calls began putting pressure on larger financial firms, and credit lines that had been put in place to provide only short-term, stopgap measures were running hot, almost no one sounded alarms.

Every incident was eyed in isolation even as, slowly but surely, stress built in the system as a whole, as it always does in finance: from the weaker, less resilient periphery of mortgage originators to the very core of storied investment banks, whose ultimate fall threatened to bring down the entire system.

The 16th of March of this year marked the 10-year anniversary of the marriage between Bear Stearns and JP Morgan Chase, which was arranged and co-financed by the Federal Reserve. It was by no means the beginning of the crisis; rather, it marked the beginning of the end. It was the final stage in which financial firms at the core were still able to keep their heads above water; however, fearing for their own survival, they would extend another lifeline to fellow banks only with government help.

By September, it took a massive government bailout to prevent a financial meltdown.

Of course, the government could have stepped aside and allowed the banking and finance system to self-destruct. Members of Congress who voted against the Troubled Asset Relief Program (TARP) Act at the time certainly thought it should.

Allowing it to implode would have created a much cleaner slate for rebuilding finance on sound footing, just as the collapse of the financial system in the 1930s had. On the downside, no one could say for sure whether the system would recover and what it would cost to get there. Fearing the abyss, the Fed, the Treasury and Congress stepped in and did what it took (to paraphrase then-Fed Chairman Ben Bernanke) to stabilize the financial system, which has since returned to making huge profits.

No good deed goes unpunished.

Instead of a new foundation for sound finance, all we got for this massive public intervention was the same old system, merely patched up with rules and regulations to make it more resilient.

Among the most far-reaching of the reform measures was a new set of essentially preventive care measures for the financial sector: annual check-ups for banks beyond a certain size, strategies for designing tests that would make it harder for financial intermediaries to game them, and additional discretionary powers for regulators to impose additional prudential measures on banks in the name of system stability. Given the massive regulatory failure in spotting early signs of trouble and preventing the 2008 crisis, that looked like the least Congress could do to keep Americans safe from financial instability.

Yet, Republicans, along with a breakaway bloc of Democratic senators, have begun to strip away many of the preventive measures in the post-crisis regulatory legislation, known as Dodd-Frank. Following lobbying by the financial industry, the U.S. Senate just voted to proceed with considering a new bill, which provides that only banks with consolidated assets worth $100 billion or more (up from $50 billion) will be subject to regular stress tests conducted by the Fed; but these check-ups will be “periodic” rather than annual, and they will include only two rather than three stress scenarios. Similarly, in the new design, company-run internal stress tests are now required only for firms with more than $50 billion in assets (up from $10 billion). They, too, can dispense with annual checks: periodic ones will do.
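To make the threshold changes concrete, here is a minimal sketch in Python using only the dollar figures quoted above; the function names and the pass/fail framing are illustrative, not drawn from the bill’s text or from any regulator’s systems.

# Illustrative only: asset figures are consolidated assets in billions of dollars,
# using the thresholds quoted in this article (old rules vs the new bill).

def subject_to_fed_stress_test(assets_bn, new_rules=True):
    # "consolidated assets worth $100 billion or more (up from $50 billion)"
    return assets_bn >= (100 if new_rules else 50)

def subject_to_internal_stress_test(assets_bn, new_rules=True):
    # "more than $50 billion in assets (up from $10 billion)"
    return assets_bn > (50 if new_rules else 10)

for assets in (25, 75, 150):
    print(assets, "bn:",
          "Fed test" if subject_to_fed_stress_test(assets) else "no Fed test",
          "/",
          "internal test" if subject_to_internal_stress_test(assets) else "no internal test")

Under the old rules, for instance, a $75 billion bank would have faced annual Fed-run stress tests; under the new bill it escapes them entirely, and even its own internal tests become “periodic” rather than annual.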

This massive act of deregulation will not be accompanied by greater discretion for the Fed to impose supervisory measures on select companies. On the contrary, the Fed’s discretionary powers have been circumscribed in the bill.

Thresholds are always arbitrary; it is simply impossible to find an optimal point that imposes just enough costs on banks and financial institutions to ensure financial stability. But with these new changes, financial stability has been placed on the back burner. Instead we are presented with back-of-the-envelope calculations about the positive effects scaling back regulation will have on credit expansion, and therefore on growth. It’s a dangerous game, not least because this calculation does not include the costs of future crises that result from reckless credit expansion.

If financial stability were the goal, as it should be, we might want to do away with asset thresholds altogether and instead empower regulators to diagnose and treat threats to financial instability wherever they find them. No doubt, the financial industry would reject such a move, because it craves regulatory “certainty”, the very reason its lobbyists pushed for the thresholds in the first place. But we should not fool ourselves: Limiting financial preventive care to large banks at the core of the system deprives regulators of the tools they need to diagnose a crisis in the making, and leaves us exactly where we were in the run-up to the last one.

Waiting for the Chinese Bear Stearns

Daniela Gabor

Unregulated, speculative lending markets nearly brought down the global financial system 10 years ago. Now, Western banks are exporting this failed model to the developing world.


‘What a difference a decade makes’, mused Mark Carney, the head of the Financial Stability Board (FSB), in a recent speech. Carney was measuring, and applauding, regulatory progress since shadow banking brought Bear Stearns down in March 2008 and Lehman Brothers six months later, and since 2013, when he warned that shadow banking in developing and emerging countries (DECs) was the threat to global financial stability. A lot has changed since.

Shadow banking is no longer used pejoratively. The IMF recently noted that DEC shadow banking ‘might yield greater efficiencies and risk sharing capacity’. In scholarly and policy literature, DEC shadow banking is portrayed as an activity confined by national borders; connected closely to banks that move activities in the shadows, circumventing regulation or financial repression; and complementary to traditional banks that underserve (SME) entrepreneurs, be it because of market imperfections or the priorities of the developmental state (China).

Another C is relevant for China: constructed by the Chinese state as a quasi-fiscal lever. After Lehman, China’s fiscal stimulus involved encouraging local governments to tap shadow credit, often from large state-owned banks through Local Government Financing Vehicles. Yet systemic risks pale in comparison to those that gave us the Bear Stearns and Lehman moments, since (a) complex securitization and wholesale funding markets are (still) absent and (b) DECs have preserved autonomy to design regulatory regimes proportional to the risks posed by shadow banks important to economic development. At worst, DECs may have to backstop shadow credit creation, just like high-income countries did after Lehman’s collapse.

The ‘viable alternative’ story has one shortcoming. It stops short of theorizing shadow banking as a phenomenon intricately linked to financial globalization. In so doing, it misses out a recent development.

The global agenda of reforming shadow banking has morphed into a project of constructing resilient market-based finance that seeks to organize DEC financial systems around securities markets. The project re-invigorates a pre-crisis plan designed by G8 countries, led by Germany’s central bank, the Bundesbank, together with the World Bank and the IMF, to promote local currency bond markets, a plan that G20 countries endorsed in 2011. As one Bundesbank official put it then: “more developed domestic bond markets enhance national and global financial stability. Therefore, it is not surprising that this is a topic which generates an exceptionally high international consensus and interest even beyond the G20.”

Deeper local securities markets, it is argued, would (a) reduce DEC dependency on short-term foreign currency debt by (b) tapping into growing demand from foreign institutional investors and their asset managers while (c) expanding the investor base to domestic institutional investors that could act as a buffer, increasing DECs’ capacity to absorb large capital inflows without capital controls; and (d) reduce global imbalances, since large DECs (for example, China and other Asian countries) would no longer need to recycle savings in U.S. financial markets. Everyone wins if DECs develop missing (securities) markets.

Despite paying lip service to the potential fragility of capital flows into DEC securities markets, this is a project of policy-engineered financial globalization.

The key to understanding this is in the plumbing. Plumbing, for buildings and securities markets alike, holds little to excite the imagination. Until it goes wrong.

The plumbing of securities markets refers to the money markets where securities can be financed. According to the International Monetary Fund (IMF) and the World Bank: “the money market is the starting point to developing… fixed income (i.e. securities) markets.” The institutions refer to a special segment, known as the repo market. Repo is the “plumbing” that circulates securities between asset managers, institutional investors, market-making banks and leveraged investors, “greasing” securities’ liquidity (ease of trading). It allows financial institutions to borrow against securities collateral and to lend securities to those betting on a change in price.

This is why international institutions, from the FSB to the IMF and World Bank, have insisted that DECs seeking to build resilient market-based finance need to (re)model their repo plumbing according to a ‘Western’ blueprint.

The official policy advice coincides with the view of securities markets’ lobbies, as expressed, for instance, by the Asia Securities Industry and Financial Markets Association, in the 2013 India Bond Market Roadmap and the 2017 China’s Capital Markets: Navigating the Road Ahead.

The advice ignores economist Hyman Minsky’s insights on fragile plumbing (and the lesson of the Bear Stearns and Lehman moments). Minsky was deeply interested in the plumbing of financial markets, where he looked for signs of evolutionary changes that would make monetary policy less effective while sowing the seeds of fragile finance.

Fragility, he warned, arises where ‘the viability of loans mainly made because of collateral, however, depends upon the expected market value of the assets that are pledged. An emphasis by bankers on the collateral value and the expected values of assets is conducive to the emergence of a fragile financial structure’.

Western or classic repo plumbing does precisely that. It orients (shadow) bankers towards the daily market value of collateral. For both borrower and lender, the daily market value of the security collateral is critical: the borrower does not want to leave more collateral with the lender than the cash it has borrowed, and vice versa. This is why repo plumbing enables aggressive leverage during good times: when securities prices go up, the borrower gets cash/securities back and can borrow more against them to buy more securities, driving their price up further, and so on. Conversely, when securities prices fall, borrowers have to find, on a daily basis, more cash or more collateral.
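A minimal sketch of that daily mark-to-market mechanic, with made-up numbers and a simplified haircut; this illustrates the logic described above, not how any actual repo desk or legal documentation works.

# Hypothetical illustration of daily margining in a 'classic' (Western-style) repo.
# The borrower funds a bond position with repo cash; every day the collateral is
# revalued, and any shortfall against the agreed haircut must be topped up in cash.

def daily_margin_call(cash_borrowed, collateral_units, price_today, haircut=0.02):
    """Extra margin the borrower must post today (zero or negative means none is due)."""
    required_value = cash_borrowed * (1 + haircut)
    collateral_value = collateral_units * price_today
    return required_value - collateral_value

# 100 units of a bond bought at 1.00, financed with 98 of repo cash.
prices = [1.00, 0.99, 0.96, 0.92]   # the bond sells off over four days
for day, price in enumerate(prices):
    call = daily_margin_call(cash_borrowed=98, collateral_units=100, price_today=price)
    print(f"day {day}: price {price:.2f}, margin call {max(call, 0.0):.2f}")

As the price slides, the calls grow day after day; a borrower who cannot raise the cash must sell the collateral, which is the fire-sale spiral described next.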

Shadow bankers live with daily anxieties. One day, they may find that the repo supporting their securities portfolios is no longer there, as Bear Stearns did. Then they have to firesale collateral, driving securities’ prices down, creating more funding problems for other shadow bankers until they fold, as Lehman Brothers did.

It was such destabilizing processes that prompted the FSB, in 2011, to identify repos as systemic shadow markets in need of tight regulation. Since then, regulatory ambitions to make the plumbing more resilient have been watered down significantly, as the global policy community turned to the project of constructing market-based finance.

Paradoxically, when the Bundesbank advises DECs to make (shadow) bankers more sensitive to the daily dynamic of securities markets, it ignores its own history. Two decades ago, finance lobbies pressured the Bundesbank to relax its strong grip on German repo markets. The Bundesbank resisted because it believed only tight control would safeguard financial stability and monetary policy effectiveness. Eventually, the Bundesbank abandoned this Minsky-like stance because it worried that other Euro-area securities would otherwise be more attractive to global investors.

Since the 1980s, the policy engineering of liquid securities markets has been a project of promoting shadow plumbing, first in Europe and the US, now in DECs. Take China. Since 2009, Chinese securities markets have grown rapidly to become the third largest in the world, behind the US and Japan. Such rapid growth reflects policies to re-organize Chinese shadow banking into market-based finance, driven by a broader renminbi (RMB, China’s currency) internationalization strategy that views deep local securities markets as a critical pillar. The repo plumbing of Chinese securities markets expanded equally fast, to around US$ 8 trillion by June 2017. Chinese plumbing is now roughly similar in volume to European and US repo markets, whereas in 2010 it was only a fifth of their size. Since then, Chinese (shadow) banks have increased repo funding from 10% to 30% of total funding.

Yet China’s repo is fundamentally different. Legal and market practice there does not force the Chinese (shadow) banker to care about, or to make profit from, daily changes in securities prices. Without daily collateral valuation practices, the “archaic” regime makes for patient (shadow) bankers and more resilient plumbing. This is the case in most DEC countries.

The pressure is on China to open repo markets to foreign investors and to abandon “archaic” rules if it wants RMB internationalization. While China may be able to resist such pressures, it is difficult to see how other DECs will. The global push for market-based finance prepares the terrain for organizing international development interventions via securities markets, as suggested by the growing popularity of green bonds, bond markets for infrastructure, impact investment and digital financial inclusion approaches to poverty reduction. After all, the new mantra is “development’s future is finance, not foreign aid.”

In sum, the shadow-banking-into-resilient-market based-finance agenda seeks to define the terms on which DEC countries join the global supply of securities. It silently threatens the monetary power of DEC countries to manage capital flows and the effects of global financial cycles, a hard-fought victory to weaken the political clout of what Jagdish Bhagwati termed the “Wall Street-Treasury complex” that successfully pressured DECs to open their capital accounts.

This policy-engineered financial globalization seeks a clean break from “the engineered industrialization” that involved capital controls, bank credit guided by the priorities of industrial strategies and competitive exchange rate management.

Instead, it seeks to accelerate the global diffusion of the architecture of U.S. securities markets and their plumbing, despite well-documented fragilities and contested social efficiency.

Questions of sustainability, credit creation and growth should not be left to securities markets. Carefully designed developmental states, historical experience suggests, work better.

America’s Captured Corporate Media ignores the rise of oligarchy. The rest of us shouldn’t – Bernie Sanders.

We need to hear from struggling Americans whose stories are rarely told in newspapers or television. Until they are, we must tell these stories elsewhere.


The rapid rise of oligarchy and wealth and income inequality is the great moral, economic, and political issue of our time. Yet, it gets almost no coverage from the corporate media.

How often do network newscasts report on the 40 million Americans living in poverty, or that we have the highest rate of childhood poverty of almost any major nation on earth? How often does the media discuss the reality that our society today is more unequal than at any time since the 1920s with the top 0.1% now owning almost as much wealth as the bottom 90%? How often have you heard the media report the stories of millions of people who today are working longer hours for lower wages than was the case some 40 years ago?

How often has ABC, CBS or NBC discussed the role that the Koch brothers and other billionaires play in creating a political system which allows the rich and the powerful to significantly control elections and the legislative process in Congress?

Sadly, the answer to these questions is: almost never. The corporate media has failed to let the American people fully understand the economic forces shaping their lives and causing many of them to work two or three jobs, while CEOs make hundreds of times more than they do. Instead, day after day, 24/7, we’re inundated with the relentless dramas of the Trump White House, Stormy Daniels, and the latest piece of political gossip.

We urgently need to discuss the reality of today’s economy and political system, and fight to create an economy that works for everyone and not just the one percent.

We need to ask the hard questions that the corporate media fails to ask: who owns America, and who has the political power? Why, in the richest country in the history of the world are so many Americans living in poverty? What are the forces that have caused the American middle class, once the envy of the world, to decline precipitously? What can we learn from countries that have succeeded in reducing income and wealth inequality, creating a strong and vibrant middle class, and providing basic human services to everyone?

We need to hear from struggling Americans whose stories are rarely told in newspapers or on television. Unless we understand the reality of life in America for working families, we’re never going to change that reality.

Until we understand that the rightwing Koch brothers are more politically powerful than the Republican National Committee, and that big banks, pharmaceutical companies, and multinational corporations are spending unlimited sums of money to rig the political process, we won’t be able to move to the public funding of elections and end corporate greed.

Until we understand that the US federal minimum wage of $7.25 an hour is a starvation wage and that people cannot make it on $9 or $10 an hour, we’re not going to be able to pass a living wage of at least $15 an hour.

Until we understand that multinational corporations have been writing our trade and tax policies for the past 40 years to allow them to throw American workers out on the street and move to low-wage countries, we’re not going to be able to enact fair laws ending the race to the bottom and making the wealthy and the powerful pay their fair share.

Until we understand that we live in a highly competitive global economy and that it is counterproductive that millions of our people cannot afford a higher education or leave school deeply in debt, we will not be able to make public colleges and universities tuition free.

Until we understand that we are the only major country on earth not to guarantee healthcare to all and that we spend far more per capita on healthcare than does any other country, we’re not going to be able to pass a Medicare for all, single-payer program.

Until we understand that the US pays, by far, the highest prices in the world for prescription drugs because pharmaceutical companies can charge whatever price they want for life-saving medicine, we’re not going to be able to lower the outrageous price of these drugs.

Until we understand that climate change is real, caused by humans, and causing devastating problems around the world, especially for poor people, we’re not going to be able to transform our energy system away from fossil fuel and into sustainable forms of energy.

We need to raise political consciousness in America and move forward with a progressive agenda that meets the needs of our working families.

It’s up to all of us to join the conversation, and this is just the beginning.

Life is fraught with danger. School playgrounds should be too – Tom Bennett.

The primary school encouraging children to play with saws and bricks deserves applause. Shielding kids from risk helps no one.

Risk is a part of life, and schools are very much in the business of preparing children for life, not just as scholars but as human beings, citizens and custodians of the world. So news that a primary school in Essex has introduced bricks and saws into its playground to help children understand risk should be celebrated, not condemned.

On the surface there is perhaps much to feel anxious about. But a moment’s reflection is enough to realise that children are frequently exposed to worse things on the way to school, and if a slide and a swing are the greatest hazard our children ever face as they grow up, then I can only assume they are raised in an isolation tank.

No amount of bubble wrap can cushion the fact that the world is perilous. The question is, how do we best equip children to deal with it? The easy answer, the wrong answer, is to attempt the impossible and to hide them, as the Buddha’s father is said to have tried, from death and disease. Risk is everywhere.

Some people blame health and safety rules for creating playgrounds devoid of opportunities to learn from adversity: a curious attitude towards one of our greatest but most maligned social innovations, risk management. Although to many it’s a dreary administrative chore, it is also responsible for countless lives saved, limbs left unharmed and disasters averted. We mock it at our peril. Some regulations may seem petty, but set that against the benefits of prohibitions and standards that keep us upright and breathing.

In reality, there are few reasons for schools not to help children experience managed risk, and the fear of falling foul of some imagined regulation is often greater than the actual restriction. Of course there is a paradox: we want children to be as safe as possible, but avoiding risk simply makes us more likely to walk into calamity when we encounter it.

Better to teach children to swim than to hope they never fall into a river.

Children therefore need to be exposed to risks. We immunise them against calamity by acclimatising them to a hazardous world rather than locking them in towers. We could even see risk as a portal to opportunity and possibility instead of a hazard.

Playgrounds already contain sharp corners and hard surfaces; rain creates bogs in every park. I’d much rather teach children not to put their hands in a blender than tell them to fear it. The alternative is to create children who dare to do nothing and are terrified of everything. I applaud any school that teaches children to manage risk, to view life as a springboard rather than a deathtrap.

It is hard to see how a school could do more to adequately train children to deal with the slings and mud pits of outrageous fortune. Parents, children’s fiercest safeguarders, have praised the scheme. A local councillor has expressed approval for it. My hope is that the school isn’t pilloried by witless controversialists addicted to outrage. That’s a risk far harder to manage.


Tom Bennett is a teacher, and the Department for Education’s independent adviser on behaviour in schools. He is also the founder of researchED, a teacher-led organisation working to improve the use of evidence in education.

Professor Stephen Hawking 1942-2018

Friends and colleagues from the University of Cambridge have paid tribute to Professor Stephen Hawking, who died on 14 March 2018 at the age of 76.

Widely regarded as one of the world’s most brilliant minds, he was known throughout the world for his contributions to science, his books, his television appearances, his lectures and through biographical films. He leaves three children and three grandchildren.

Professor Hawking broke new ground on the basic laws which govern the universe, including the revelation that black holes have a temperature and produce radiation, now known as Hawking radiation. At the same time, he also sought to explain many of these complex scientific ideas to a wider audience through popular books, most notably his bestseller A Brief History of Time.

He was awarded the CBE in 1982, was made a Companion of Honour in 1989, and was awarded the US Presidential Medal of Freedom in 2009. He was the recipient of numerous awards, medals and prizes, including the Copley Medal of the Royal Society, the Albert Einstein Award, the Gold Medal of the Royal Astronomical Society, the Fundamental Physics Prize, and the BBVA Foundation Frontiers of Knowledge Award for Basic Sciences. He was a Fellow of The Royal Society, a Member of the Pontifical Academy of Sciences, and a Member of the US National Academy of Sciences.

He achieved all this despite a decades-long battle against motor neurone disease, with which he was diagnosed while a student and which eventually left him confined to a wheelchair and communicating via his instantly recognisable computerised voice. His determination in battling his condition made him a champion for those with a disability around the world.

Professor Hawking came to Cambridge in 1962 as a PhD student and rose to become the Lucasian Professor of Mathematics, a position once held by Isaac Newton, in 1979. In 2009, he retired from this position and was the Dennis Stanton Avery and Sally Tsui Wong-Avery Director of Research in the Department of Applied Mathematics and Theoretical Physics until his death. He was active scientifically and in the media until the end of his life.

Professor Stephen Toope, Vice-Chancellor of the University of Cambridge, paid tribute, saying, “Professor Hawking was a unique individual who will be remembered with warmth and affection not only in Cambridge but all over the world. His exceptional contributions to scientific knowledge and the popularisation of science and mathematics have left an indelible legacy. His character was an inspiration to millions. He will be much missed.”

Stephen William Hawking was born on January 8, 1942 in Oxford although his family was living in north London at the time. In 1959, the family moved to St Albans where he attended St Albans School. Despite the fact that he was always ranked at the lower end of his class by teachers, his school friends nicknamed him ‘Einstein’ and seemed to have encouraged his interest in science. In his own words, “physics and astronomy offered the hope of understanding where we came from and why we are here. I wanted to fathom the depths of the Universe.”

His ambition brought him a scholarship to University College Oxford to read Natural Science. There he studied physics and graduated with a first class honours degree.

He then moved to Trinity Hall Cambridge and was supervised by Dennis Sciama at the Department of Applied Mathematics and Theoretical Physics for his PhD; his thesis was titled ‘Properties of Expanding Universes.’ In 2017, he made his PhD thesis freely available online via the University of Cambridge’s Open Access repository. There have been over a million attempts to download the thesis, demonstrating the enduring popularity of Professor Hawking and his academic legacy.

On completion of his PhD, he became a research fellow at Gonville and Caius College where he remained a fellow for the rest of his life. During his early years at Cambridge, he was influenced by Roger Penrose and developed the singularity theorems which show that the Universe began with the Big Bang.

An interest in singularities naturally led to an interest in black holes and his subsequent work in this area laid the foundations for the modern understanding of black holes. He proved that when black holes merge, the surface area of the final black hole must exceed the sum of the areas of the initial black holes, and he showed that this places limits on the amount of energy that can be carried away by gravitational waves in such a merger. He found that there were parallels to be drawn between the laws of thermodynamics and the behaviour of black holes. This eventually led, in 1974, to the revelation that black holes have a temperature and produce radiation, now known as Hawking radiation, a discovery which revolutionised theoretical physics.

He also realised that black holes must have an entropy, often described as a measure of how much disorder is present in a given system, equal to one quarter of the area of their event horizon: the ‘point of no return’, where the gravitational pull of a black hole becomes so strong that escape is impossible. Some forty-odd years later, the precise nature of this entropy is still a puzzle. However, these discoveries led to Hawking formulating the ‘information paradox’, which illustrates a fundamental conflict between quantum mechanics and our understanding of gravitational physics. This is probably the greatest mystery facing theoretical physicists today.
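For readers who want the “one quarter of the area” statement as a formula, the standard Bekenstein-Hawking expression (a textbook result, quoted here as context rather than taken from this tribute) for a black hole with horizon area A is

$$ S_{\mathrm{BH}} = \frac{k_B \, c^3 A}{4 G \hbar}, $$

which reduces to S = A/4 in natural units where k_B = c = G = \hbar = 1.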

To understand black holes and cosmology requires one to develop a theory of quantum gravity. Quantum gravity is an unfinished project which is attempting to unify general relativity, the theory of gravitation and of space and time, with the ideas of quantum mechanics. Hawking’s work on black holes started a new chapter in this quest and most of his subsequent achievements centred on these ideas. Hawking recognised that quantum mechanical effects in the very early universe might provide the primordial gravitational seeds around which galaxies and other large-scale structures could later form. This theory of inflationary fluctuations, developed along with others in the early 1980s, is now supported by strong experimental evidence from the COBE, WMAP and Planck satellite observations of the cosmic microwave sky.

Another influential idea was Hawking’s ‘no boundary’ proposal which resulted from the application of quantum mechanics to the entire universe. This idea allows one to explain the creation of the universe in a way that is compatible with laws of physics as we currently understand them.

Professor Hawking’s influential books included The Large Scale Structure of Space-Time, with G F R Ellis; General Relativity: an Einstein centenary survey, with W Israel; Superspace and Supergravity, with M Rocek (1981); The Very Early Universe, with G Gibbons and S Siklos; and 300 Years of Gravitation, with W Israel.

However, it was his popular science books which took Professor Hawking beyond the academic world and made him a household name. The first of these, A Brief History of Time, was published in 1988 and became a surprise bestseller, remaining on the Sunday Times bestseller list for a record breaking 237 weeks. Later popular books included Black Holes and Baby Universes, The Universe in a Nutshell, A Briefer History of Time, and My Brief History. He also collaborated with his daughter Lucy on a series of books for children about a character named George who has adventures in space.

In 2014, a film of his life, The Theory of Everything, was released. Based on the book by his first wife Jane, the film follows the story of their life together, from first meeting in Cambridge in 1964, with his subsequent academic successes and his increasing disability. The film was met with worldwide acclaim and Eddie Redmayne, who played Stephen Hawking, won the Academy Award for Best Actor at the 2015 ceremony.

Travel was one of Professor Hawking’s pastimes. One of his first adventures was to be caught up in the 7.1 magnitude Bou-in-Zahra earthquake in Iran in 1962. In 1997 he visited the Antarctic. He plumbed the depths in a submarine, and in 2007 he experienced weightlessness during a zero-gravity flight, routine training for astronauts. On his return, he quipped “Space, here I come.”

Writing years later on his website, Professor Hawking said:

“I have had motor neurone disease for practically all my adult life. Yet it has not prevented me from having a very attractive family and being successful in my work. I have been lucky that my condition has progressed more slowly than is often the case. But it shows that one need not lose hope.”

At a conference in Cambridge held in celebration of his 75th birthday in 2017, Professor Hawking said:

“It has been a glorious time to be alive and doing research into theoretical physics. Our picture of the Universe has changed a great deal in the last 50 years, and I’m happy if I’ve made a small contribution.”

And he said he wanted others to feel the passion he had for understanding the universal laws that govern us all.

“I want to share my excitement and enthusiasm about this quest. So remember to look up at the stars and not down at your feet. Try to make sense of what you see and wonder about what makes the universe exist. Be curious, and however difficult life may seem, there is always something you can do, and succeed at. It matters that you don’t just give up.”

Raising the Floor. How a Universal Basic Income can Renew Our Economy – Andy Stern.

Sixteen years into the twenty-first century we are trying to find solutions to its unique problems, especially those that are challenging the way we work, earn a living, and support our families, with ideas and methods that worked in the twentieth.


Most Americans, working or not, have lived through a very tough period, especially since the financial crash of 2008. What I have found from speaking with thousands of people from every economic strata is that they often blame themselves for not finding a permanent or good paying job; for getting laid off or working inconsistent hours; for taking multiple low wage jobs or contingent work just to make ends meet; for, especially in the case of recent college graduates, needing to move back into their parents’ house; for not building a nest egg or enough savings to retire; for working tirelessly so that their kids could go to college, and now their children can’t get a job.

What I want to say to each and every one is: This should not be about personal blame because the changes that are causing this jobless, wage-less recovery are structural. You worked hard. You played by the rules. You did exactly what you were supposed to do to fulfill your part of America’s social contract.

There is hope for our economy and future, but only if we come to terms with how the current explosion in technology is likely to create a shortage of jobs, a surplus of labor, and a bigger and bigger gap between the rich and poor over the next twenty years.

When I left my job as president of the Service Employees International Union (SEIU) in 2010, I undertook a five year journey to better understand the way technology is changing the economy and workplace, and to find a way to revive the American Dream. I have structured this book around many of the people I met on this journey, their assessment of the problem, my observations about whether I think they are on the money or just plain wrong, and then the solution of a universal basic income. That solution is a work in progress. I invite you to join me in debating it, refining it, and building a constituency for it, so that we can help America fulfill its historical promise to future generations of our children.

Andy Stern, Washington, DC, June 2016

Can we invent a better future?

“There’s something happening here. What it is ain’t exactly clear.” Buffalo Springfield


I am walking around one of the most out of this world places on earth, the MIT Media Lab in Cambridge, Massachusetts. There is a huge amount of brainpower here: more than twenty groups of MIT faculty, students, and researchers working on 350 projects that range from smart prostheses and sociable robots to advanced sensor networks and electronic ink. The Media Lab is famous for its emphasis on creative collaboration between the best and the brightest in disparate fields: scientists and engineers here work alongside artists, designers, philosophers, neurobiologists, and communications experts. Their mission is “to go beyond known boundaries and disciplines” and “beyond the obvious to the questions not yet asked, questions whose answers could radically improve the way people live, learn, express themselves, work, and play.”

Their motto, “inventing a better future”, conveys a forward looking confidence that’s been lacking in our nation since the 2008 financial crisis plunged us into a recession followed by a slow, anxiety inducing recovery.

On this cloudy November day, it seems that all the sunlight in Cambridge is streaming through the glass and metallic screens that cloak the Media Lab, rendering it a luminous bubble, or a glowing alternative universe. Architect Fumihiko Maki designed the building around a central atrium that rises six floors, “a kind of vertical street,” he called it, with spacious labs branching off on each floor. Walking up the atrium, you look through the glass walls and see groups of (mainly) young geniuses at work.

Or are they playing? I am struck by how casual and unhurried they seem. Whether they are lounging on couches, gathered around a computer screen, or drawing equations on a wall, these inventors of the future seem to be having a whole lot of fun. That’s not how the thirty leaders from the labor movement and the foundation world who accompanied me here would characterize their own workplaces. They have been grappling with growing income inequality, stagnant wages, and increasing poverty in the communities they serve, and also with political gridlock on Capitol Hill. It’s been harder for them to get funding and resources for the important work they do.

They have come here, as I have, to get a glimpse of how MIT’s wizards and their technologies will impact the millions of middle and lower income Americans whose lives are already being disrupted and diminished in the new digital economy. Will these emerging technologies create jobs or destroy them? Will they give lower and middle income families more or less access to the American Dream? Will they make my generation’s definition of a “job” obsolete for my kids and grandkids?

For the past five years, I’ve been on a personal journey to understand an issue that should be at the heart of our nation’s economic and social policies: the future of work. I have been interviewing CEOs, labor leaders, futurists, politicians, entrepreneurs, and historians to find answers to the following questions: After decades of globalization and technology driven growth, what will America’s workplaces look like in twenty years? Which job categories will be gone forever in the age of robotics and artificial intelligence? Which new ones, if any, will take their place?

The MIT Media Lab is one stop on that journey; since the early 1990s, it has been at the forefront of wireless communication, 3D printing, and digital computing. Looking around at my colleagues, I think: People like us, labor organizers, community activists, people at the helm of small foundations that work for social and economic justice, don’t usually visit places like this. We spend our time in factories and on farms, in fast food restaurants and in hospitals, advocating for higher wages and better working conditions. While we refer to our organizations by acronyms, SEIU (Service Employees International Union), OSF (Open Society Foundations), and NDWA (National Domestic Workers Alliance), there is one acronym most of us would never use to describe ourselves personally: STEM. Most of today’s discussion will involve science, technology, engineering, and mathematics, the STEM subjects, and it will go way over our heads. Instead, we’ll be filtering what we see through our “progressive” justice, engagement, and empowerment lenses, and through how we experience technology in our own lives.

Personally I am of two minds about technology. On the one hand, I want it to work well and make my life easier and more enjoyable. On the other, I’m afraid of the consequences if all of the futuristic promises of technology come to fruition.

As I wait for the first session to begin, I take out my iPhone and begin reading about the Media Lab’s CE 2.0 project. CE stands for consumer electronics, and CE 2.0 is “a collaboration with member companies to formulate the principles for a new generation of consumer electronics that are highly connected, seamlessly interoperable, situation-aware, and radically simpler to use,” according to the Media Lab’s website.

CE 2.0 sounds really, really great to me. Then I realize that I’m reading about it on the same iPhone that keeps dropping conference calls in my New York apartment to my partners and clients in other parts of the city. So how can I ever expect CE 2.0 to live up to the Media Lab’s hype? And then I find myself thinking: What if it does? What if CE 2.0 exceeds all the hype and disrupts a whole bunch of industries? Which jobs will become obsolete as a result of this new generation of consumer electronics? Electricians? The people who make batteries, plugs, and electrical wiring? I keep going back and forth between the promise and the hype and everything in between. Even though CE 2.0 is basically an abstraction to me, it conjures up all sorts of expectations and fears. And I think that many of my friends and colleagues have similar longings, doubts, and fears when it comes to technology.

All of the projects at the MIT Media Lab are supported by corporations. Twitter, for instance, has committed $10 million to the Laboratory for Social Machines, which is developing technologies that “make sense of semantic and social patterns across the broad span of public mass media, social media, data streams, and digital content.” Google Education is funding the Center for Mobile Learning, which seeks to innovate education through mobile computing. The corporations have no say in the direction of the research, or ownership of what the MIT researchers patent or produce; they simply have a front row seat as the researchers take the emerging technologies wherever their curiosity and the technology takes them.

Clearly, there is a counter cultural ethos to the Media Lab. Its nine governing principles are: “Resilience over strength. Pull over push. Risk over safety. Systems over objects. Compasses over maps. Practice over theory. Disobedience over compliance. Emergence over authority. Learning over education.” For me, that’s a welcome invitation to imagine, explore new frontiers, and dream.

Before we tour the various labs, Peter Cohen, the Media Lab’s Director of Development, tells us that there is an artistic or design component to most of the Media Lab’s projects. “Much of our work is displayed in museums,” he says. “And some are performed in concert halls.” One I particularly like is the brainchild of Tod Machover, who heads the Hyperinstruments/Opera of the Future group. Machover, who co-created the popular Guitar Hero and Rock Band music video games, is composing a series of urban symphonies that attempt to capture the spirit of cities around the world. Using technology he’s developed that can collect and translate sounds into music, he enlists people who live, work, and make use of each city to help create a collective musical portrait of their town. To date, he’s captured the spirits of Toronto, Edinburgh, Perth, and Lucerne through his new technology. Now he is turning his attention to Detroit. I love this idea of getting factory workers, teachers, taxi cab drivers, police officers, and other people who live and work in Detroit involved in the creation of an urban symphony that can be performed so that the entire city can enjoy and take pride in it.

We head to the Biomechatronics Lab on the second floor. Luke Mooney, our guide to this lab, is pursuing a PhD in mechanical engineering at MIT. Only twenty-four, he has already designed and developed an energy-efficient powered knee prosthesis. He shows us the prototype, a gleaming exoskeleton enveloping the knee of a sleek mannequin. Mooney created the prosthesis with an expert team of biophysicists, neuroscientists, and mechanical, biomedical, and tissue engineers. It will reduce the “metabolic cost of walking,” he tells us, making it easier for a sixty-four-year-old with worn-out knees and a regularly sore back like me to maybe run again and lift far more weight than I could ever have dreamed of lifting.

Looking around, I’m struck by the mess: coffee cups and Red Bull cans, plaster molds of ankles, knees, and feet, discarded tools and motors, lying all over the place like the morning after a month of all-nighters.

The founder of the Biomechatronics Lab, Hugh Herr, is out of town this day, but his life mission clearly animates the Lab. When he was seventeen, a rock-climbing accident resulted in the amputation of both his legs below the knees. Frustrated with the prosthetic devices on the market, he got a master’s degree in mechanical engineering and a PhD in biophysics, and used that knowledge to design a prosthesis that enabled him to compete as an elite rock climber again. In 2013, after the Boston Marathon bombings, he designed a special prosthesis for one of the victims: ballroom dancer Adrianne Haslet-Davis, who had lost her lower left leg in the blast. Seven months later, at a TED talk Herr was giving, Haslet-Davis walked out on the stage with her partner and danced a rumba. “In 3.5 seconds, the criminals and cowards took Adrianne off the dance floor,” Herr said. “In 200 days, we put her back.”

At our next stop, the Personal Robotics Lab, Indian born researcher Palash Nandy tells us the key to his human friendly robots: their eyes. By manipulating a robot’s eyes and eyebrows, Nandy and his colleagues can make the robot appear sad, mad, confused, excited, attentive, or bored. Hospitals are beginning to deploy human friendly robots as helpmates to terminally ill kids. “Unlike the staff and other patients, who are constantly changing,” Nandy says, “the robot is always there for the child, asking him how he’s doing, which reduces stress.”

With the help of sophisticated sensors, the Personal Robotics Lab is building robots that are increasingly responsive to the emotional states of humans. Says Nandy: “Leo the Robot might not understand what you need or mean by the words you say, but he can pick up the emotional tone of your voice.” In a video he shows us, a researcher warns Leo, “Cookie Monster is bad. He wants to eat your cookies.” In response, Leo narrows his eyes, as if to say: “I get your message. I’ll keep my distance from that greedy Cookie Monster.”

Nandy also sings the praises of a robot who helps children learn French, and one that’s been programmed to help keep adults motivated as they lose weight.

My colleagues are full of questions and also objections:

“Can’t people do most of these tasks as well or better than the robots?”

“If every child grows up with their own personal robot friend, how will they ever learn to negotiate a real human relationship?”

“If you can create a robot friend, can’t you also create a robot torturer? Ever thought of that?”

“Yeah,” Nandy says, seeming to make light of the question. “But it’s hard to imagine evil robots when I’m around robots that say ‘I love you’ all day long.”

As awestruck and exhilarated as we are by what we see, my colleagues and I are getting frustrated by the long pauses and glib answers that greet so many of our concerns about the long-term impact of the technologies being developed here on the job market, human relationships, and our political rights and freedoms. As they invent the future, are these brilliant and passionate innovators alert to the societal risks and ramifications of what they’re doing?

On our way into the Mediated Matter Lab, we encounter a chaise lounge and a grouping of truly stunning bowls and sculptures that have been created using 3D printers. Markus Kayser, our guide through the Mediated Matter Lab, is a thirty-one-year-old grad student from northern Germany. A few years ago he received considerable acclaim and media attention for a device he created called the Solar Sinter.

Kayser shows us a video of a bearded hipster, himself, carrying a metallic suitcase over a sand dune in Egypt’s Sahara Desert. It’s like a scene in a Buster Keaton movie. He stops and pulls several photovoltaic batteries and four large lenses from the suitcase. Then he focuses the lenses, at a heat of 1,600 degrees centigrade, onto a bed of sand. Within seconds, the concentrated heat of the sun has melted the sand and transformed it into glass. What happens next on the video gives us a glimpse into the future of manufacturing. Kayser takes his laptop out of the suitcase and spends a few minutes designing a bowl on the computer. Then, with a makeshift 3D printer powered by solar energy, he prints out the bowl in layers of plywood. He places the plywood prototype of the bowl on a small patch of desert. Then he focuses the lenses of the battery-charged Solar Sinter on the sand. And then, layer after layer, he melts the sand into glass until he’s manufactured a glass bowl out of the desert’s abundant supplies of sun and sand.

“My whole goal is to explore hybrid solutions that link technology and natural energy to test new scenarios for production,” Kayser tells us. But the glass bowl only hints at the possibilities. Engineers from NASA and the US Army have already talked to him about the potential of using his technology to build emergency shelters after hurricanes and housing in hazardous environments, for example, in the desert regions of Iran and Iraq.

“How about detention centers for alleged terrorists?” one of my colleagues asks slyly. “I bet the Army is licking its chops to build a glass Guantanamo in the desert only miles from the Syrian border.”

Another asks: “Has anyone talked to you about using this technology to create urban housing for the poor?”

Kayser pauses before shaking his head no. The questions that consume our group most, how we can use this powerful technology for good rather than evil and to remedy the world’s inequities and suffering, do not seem of consequence to Kayser. This is by design: the Media Lab encourages the researchers to follow the technology wherever it leads them, without the pressure of developing a big-bucks commercial product, or an application that will save the world. If they focus on the end results and specific commercial and social outcomes, they will be less attuned to the technology, to the materials, and to nature itself, which would impede their creative process. I understand that perspective, but it also worries me.

As I watch Kayser and his Solar Sinter turn sand into glass, another image comes to mind: Nearly 4,700 years ago, in the same Egyptian desert where Kayser made his video, more than 30,000 slaves, some of them probably my ancestors, and citizen-volunteers spent seven years quarrying, cutting, and transporting thousands of tons of stone to create Pharaoh Khufu’s Great Pyramid, one of the Seven Wonders of the Ancient World. That’s a lot of labor compared with what it will take to build a modern-day community of glass houses in the vicinity of the Pyramid, or in Palm Springs, using the next iteration of the Solar Sinter. I’m concerned about the Solar Sinter’s impact on construction jobs.

Employment issues are at best a distant concern for the wizards who are inventing the future. Press them and they’ll say that technological disruption always produces new jobs and industries: Isn’t that what happened after Gutenberg invented the printing press and Ford automated the assembly line?

It was. But, as I reflect on this day, I remember a conversation I had with Steven Berkenfeld, an investment banker at Barclays Capital. Berkenfeld has a unique and important perspective on the relationship between technology and jobs. Day in, day out, he is pitched proposals by entrepreneurs looking to take their companies public. Most of the companies are developing technologies that will help businesses become more productive and efficient. That means fewer and fewer workers, according to Berkenfeld.

“Every company is trying to do more with less,” he explained. “Industry by industry, and sector by sector, every company is looking to drive out labor.” And very few policy makers are aware of the repercussions. “They convince themselves that technology will create new jobs when, in fact, it will displace them by the millions, spreading pain and suffering throughout the country.

“When you look at the future from that perspective, the single most important decisions we need to make are: How do we help people continue to make a living, and how do we keep them engaged?”

At the end of our visit to the Media Lab, my job, as convener of the group, is to summarize some of the day’s lessons. I begin with an observation: “It’s amazing how the only thing that doesn’t work here is when these genius researchers try to project their PowerPoints onto the screen.” The twenty or so people who remain in our group laugh knowingly. Just like us, the wizards at MIT can’t seem to present a PowerPoint without encountering an embarrassing technological glitch.

I continue by quoting a line from a song by Buffalo Springfield, the 1960s American-Canadian rock band: “There’s something happening here. What it is ain’t exactly clear.”

That’s how I feel about our day at MIT; it has given us a preview of the future of work, which will be amazing if we can grapple with the critical ethical and social justice questions it elicits. “I’ve spent my whole life in the labor movement chasing the future,” I tell my colleagues. “Now I’d like to catch up to it, or maybe even jump ahead of it, so I can see the future coming toward me.”

Toward that end, I ask everyone in the group to answer “yes,” “no,” or “abstain” to two hypotheses.

Hypothesis number one: “The role of technology in the future of work will be so significant that current conceptions of a job may no longer reflect the relationship to work for most people. Even the idea of jobs as the best and most stable source of income will come into question.”

Hypothesis number two: “The very real prospect in the United States is that twenty years from now most people will not receive a singular income from a single employer in a traditional employee-employer relationship. For some, such as those with substantial education, this might mean freedom. For others, those with a substandard education and a criminal record, the resulting structural inequality will likely increase vulnerability.”

There are a number of groans (“Jesus, Andy, can you get any more long-winded or rhetorical?”) but each member of the group writes their answers on a piece of paper, which I collect and tally. The first hypothesis gets eighteen yeses and two abstentions. The second gets sixteen yeses, three noes, and one abstention.

I am genuinely surprised by these results. Six months ago, at our last meeting of the OSF Future of Work inquiry, my colleagues had a much more varied response to these hypotheses. At least half of them did not agree with my premise that technology would have a disruptive impact on jobs, the workplace, and employer-employee relationships, and some of them disputed the premise quite angrily. (“What do you think we are, Andy? Psychics?”) Today’s tally reflects their acknowledgment that something is happening at MIT and across the United States that will fundamentally change the way Americans live and work. What it is ain’t exactly clear, but it merits our serious and immediate attention.

As they go about inventing the future, the scientists and researchers at the Media Lab aren’t thinking about the consequences of their work on the millions of Americans who are laboring in factories, building our homes, guarding our streets, investing our money, computing our taxes, teaching our children and teenagers, staffing our hospitals, driving our buses, and taking care of our elders and disabled veterans.

They aren’t thinking about the millions of parents who scrimped and saved to send their kids to college, because our country told them that college was the gateway to success, only to see those same kids underemployed or jobless when they graduate and move back home.

They aren’t thinking about the dwindling fortunes of the millions of middle class Americans who spent the money they earned on products and services that made our nation’s economy and lifestyle the envy of the world.

They aren’t thinking about the forty-seven million Americans who live in poverty, including a fifth of the nation’s children.

Nor should they. That is my job, our job together, and the purpose of this book.

But I’m getting ahead of myself. The reason I am at the MIT Media Lab stems from a combination of personal and professional factors behind what seemed to many to be my abrupt decision to step down as head of America’s most successful union, the Service Employees International Union, or SEIU. To better understand where I am coming from and where I am going with this book, you need to understand my personal journey.



Raising the Floor. How a Universal Basic Income can Renew Our Economy and Rebuild the American Dream

by Andy Stern


The web is under threat. Join us and fight for it – Sir Tim Berners-Lee.

Today, March 12, 2018, is the World Wide Web’s 29th birthday. Here’s a message from our founder and web inventor Sir Tim Berners-Lee on what we need to ensure that everyone has access to a web worth having.

Today, the World Wide Web turns 29. This year marks a milestone in the web’s history: for the first time, we will cross the tipping point when more than half of the world’s population will be online.

When I share this exciting news with people, I tend to get one of two concerned reactions:

How do we get the other half of the world connected?

Are we sure the rest of the world wants to connect to the web we have today?

The threats to the web today are real and many, including those that I described in my last letter, from misinformation and questionable political advertising to a loss of control over our personal data. But I remain committed to making sure the web is a free, open, creative space for everyone.

That vision is only possible if we get everyone online, and make sure the web works for people. I founded the Web Foundation to fight for the web’s future.

Here’s where we must focus our efforts:

Close the digital divide

The divide between people who have internet access and those who do not is deepening existing inequalities, inequalities that pose a serious global threat. Unsurprisingly, you’re more likely to be offline if you are female, poor, live in a rural area or a low-income country, or some combination of the above. To be offline today is to be excluded from opportunities to learn and earn, to access valuable services, and to participate in democratic debate. If we do not invest seriously in closing this gap, the last billion will not be connected until 2042. That’s an entire generation left behind.

In 2016, the UN declared internet access a human right, on par with clean water, electricity, shelter and food. But until we make internet access affordable for all, billions will continue to be denied this basic right. The target has been set: the UN recently adopted the Alliance for Affordable Internet’s threshold for affordability, 1 GB of mobile data for less than 2% of average monthly income. The reality, however, is that we’re still a long way off from reaching this target; in some countries, the cost of 1 GB of mobile broadband remains over 20% of average monthly income.
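As a rough illustration of that “1 for 2” affordability target, here is a minimal sketch of the check it implies; the prices and incomes used are invented placeholders, not figures from this letter.

```python
# Minimal sketch of the "1 GB for less than 2% of average monthly income"
# target described above. All numbers are illustrative assumptions.

AFFORDABILITY_THRESHOLD = 0.02  # 1 GB should cost less than 2% of monthly income

def data_cost_share(price_per_gb: float, avg_monthly_income: float) -> float:
    """Share of average monthly income needed to buy 1 GB of mobile data."""
    return price_per_gb / avg_monthly_income

def meets_target(price_per_gb: float, avg_monthly_income: float) -> bool:
    """True if 1 GB of mobile data meets the 2% affordability threshold."""
    return data_cost_share(price_per_gb, avg_monthly_income) < AFFORDABILITY_THRESHOLD

# A hypothetical country where 1 GB costs $8 against a $40 average monthly
# income spends 20% of income on data, ten times the target mentioned above.
print(meets_target(8.0, 40.0))    # False (share = 0.20)
print(meets_target(2.0, 400.0))   # True  (share = 0.005)
```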

What will it take to actually achieve this goal? We must support policies and business models that expand access to the world’s poorest through public access solutions, such as community networks and public WiFi initiatives. We must invest in securing reliable access for women and girls, and empowering them through digital skills training.

Make the web work for people

The web that many connected to years ago is not what new users will find today.

What was once a rich selection of blogs and websites has been compressed under the powerful weight of a few dominant platforms.

This concentration of power creates a new set of gatekeepers, allowing a handful of platforms to control which ideas and opinions are seen and shared.

These dominant platforms are able to lock in their position by creating barriers for competitors. They acquire startup challengers, buy up new innovations and hire the industry’s top talent. Add to this the competitive advantage that their user data gives them and we can expect the next 20 years to be far less innovative than the last.

What’s more, the fact that power is concentrated among so few companies has made it possible to weaponise the web at scale. In recent years, we’ve seen conspiracy theories trend on social media platforms, fake Twitter and Facebook accounts stoke social tensions, external actors interfere in elections, and criminals steal troves of personal data.

We’ve looked to the platforms themselves for answers. Companies are aware of the problems and are making efforts to fix them, with each change they make affecting millions of people. The responsibility, and sometimes burden, of making these decisions falls on companies that have been built to maximise profit more than to maximise social good. A legal or regulatory framework that accounts for social objectives may help ease those tensions.

Bring more voices to the debate on the web’s future

The future of the web isn’t just about those of us who are online today, but also those yet to connect. Today’s powerful digital economy calls for strong standards that balance the interests of both companies and online citizens. This means thinking about how we align the incentives of the tech sector with those of users and society at large, and consulting a diverse cross-section of society in the process.

Two myths currently limit our collective imagination: the myth that advertising is the only possible business model for online companies, and the myth that it’s too late to change the way platforms operate. On both points, we need to be a little more creative.

While the problems facing the web are complex and large, I think we should see them as bugs: problems with existing code and software systems that have been created by people and can be fixed by people.

Create a new set of incentives and changes in the code will follow. We can design a web that creates a constructive and supportive environment.

Today, I want to challenge us all to have greater ambitions for the web. I want the web to reflect our hopes and fulfil our dreams, rather than magnify our fears and deepen our divisions.

As the late internet activist, John Perry Barlow, once said: “a good way to invent the future is to predict it”. It may sound utopian, it may sound impossible to achieve after the setbacks of the last two years, but I want us to imagine that future and build it.

Let’s assemble the brightest minds from business, technology, government, civil society, the arts and academia to tackle the threats to the web’s future. At the Web Foundation, we are ready to play our part in this mission and build the web we all want. Let’s work together to make it possible.

Sir Tim Berners-Lee

Debunking the Conspiracists. The Federal Reserve System, Illuminati and The New World Order. Real Facts.

The Federal Reserve, often referred to as “the Fed,” is the central bank of the United States, established in 1913 by the Federal Reserve Act.

While it is the source of countless outlandish conspiracies, its very existence goes completely against the ethos of the American founding fathers. Despite this, the establishments of both the Democratic Party and, of course, the Republican Party absolutely love central banking, although they have their own reasons. Pretty much all libertarians, and many civic-nationalists and antiestablishment liberals, want to see it brought down. John F. Kennedy notably tried to do so in 1963, following in the footsteps of the great American hero Andrew Jackson, but the King of Camelot was shot six months later.

In 2016, the one true successor of Old Hickory and outspoken economic nationalist vowed to follow in their footsteps and finally kill the bank, once and for all. So, basically, because of the Fed, [the mint producing its own currency] is a crime, and people need to share it fairly without taking a piece of the bank’s pie. Money, so they say, to quote the old adage, is the root of all evil today. But if you ask for a rise, it’s no surprise that they are giving none away. Figures.


From 1836, when the Second Bank of the United States lost its congressional charter, to 1913, when the Federal Reserve Act passed, the US was without a central bank. Major financial panics (and their accompanying recessions) occurred in 1873, 1884, 1893, 1901, and 1903, and the Panic of 1907 led to a demand that Congress take action. The Aldrich Commission was dispatched to make a study, and shortly after its final report was made, Congress changed hands from the big-government Republicans (those were the days) to the more grassroots-oriented and anti-federalist Democrats. Instead of one central bank located in New York as the Commission recommended, twelve regional banks were created throughout the country, with a Board of Governors, which is the bank’s present form.

What does it do?

The Fed essentially controls the amount of cash money in the United States and sets monetary policy. It has 12 branch banks (Boston, New York, Philadelphia, Cleveland, Richmond, Atlanta, Chicago, St. Louis, Minneapolis, Kansas City, Dallas, and San Francisco) which lend money to banks within their respective regions. This money is lent at an interest rate to banks, which then lend it on to people and businesses. The Fed buys bonds to increase the money supply, lowering interest rates, and sells bonds to decrease the money supply, raising interest rates. This is the key part of the Fed’s open-market operations, allowing it to control inflation and growth to a certain extent. Previously the money supply was effectively in the hands of various Wall Street movers and shakers (e.g. JP Morgan).
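To make the buy-bonds/sell-bonds mechanism concrete, here is a deliberately simplified sketch; the numbers and the linear rate rule are invented for illustration and are not a model of how the Fed actually sets rates.

```python
# Toy illustration of open-market operations as described above:
# buying bonds adds reserves to the banking system (more money, lower rates);
# selling bonds drains reserves (less money, higher rates).
# The rate rule below is a made-up stand-in, not real central-bank behaviour.

class ToyCentralBank:
    def __init__(self, bank_reserves: float):
        self.bank_reserves = bank_reserves  # reserves held by commercial banks

    def buy_bonds(self, amount: float) -> None:
        # Paying for the bonds credits the sellers' banks with new reserves.
        self.bank_reserves += amount

    def sell_bonds(self, amount: float) -> None:
        # Buyers pay out of their bank accounts, draining reserves.
        self.bank_reserves -= amount

    def short_term_rate(self) -> float:
        # Stylised inverse relationship: more reserves, cheaper overnight money.
        return max(0.0, 5.0 - self.bank_reserves / 100.0)

fed = ToyCentralBank(bank_reserves=200.0)
print(fed.short_term_rate())  # 3.0 (starting rate)
fed.buy_bonds(100.0)          # expansionary purchase
print(fed.short_term_rate())  # 2.0 (more money, lower interest)
fed.sell_bonds(150.0)         # contractionary sale
print(fed.short_term_rate())  # 3.5 (less money, higher interest)
```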

Oh, the irony! The chairman of the Federal Reserve Board for nearly two decades (1987-2006) was Alan Greenspan, a former disciple of Ayn Rand.

Maiden Lane, AIG, and aid to central banks

During the banking crisis of 2008, the answer to “What does it do?” became “buy loads of toxic assets from failing banks.” The Fed created a number of dummy corporations (sorry, “special purpose vehicles”) called Maiden Lane I, II, and III to buy up crap from Wall Street and the government-sponsored entities Fannie Mae and Freddie Mac. Some questioned the legality of the Fed’s actions; however, Section 13(3) of the Federal Reserve Act of 1913 gives it authority “under unusual and exigent circumstances” to extend credit to individuals, partnerships, and corporations. A subsequent full audit of the Fed revealed numerous alleged conflicts of interest in the deals.

After drastically increasing the size of its balance sheet, in what became nicknamed the “backdoor bailout,” the Fed was able to provide $3.3 trillion in liquidity and a peak of over $9 trillion in short-term loans and assistance to Wall Street firms and foreign central banks over several years. Total commitments were over $29 trillion, an astonishing figure considering that the GDP of the entire world is estimated at around $70 trillion.

Common arguments against the Fed

There are many criticisms of the Fed, varying in levels of coherence. Some criticism arises from a conspiratorial worldview that falsely attributes malicious motives to the Fed instead of incompetence or bad luck. The American monetary system is difficult to understand, even for someone in the financial industry or someone with an advanced degree. In fact, many principles that people were taught in schools before the 2008 crisis were turned on their heads (e.g. that big bank debt was risk free). The 2008 crisis and the growing income inequality that followed have fostered greater distrust of the Federal Reserve.

Conspiracy theories

The Fed has been a frequent subject of conspiracy theories alleging that it deliberately creates inflation and recessions, and even caused the Great Depression, through manipulation of the money supply.

Father Coughlin, the John Birch Society, Liberty Lobby, Eustace Mullins, Pat Robertson, Alex Jones, Texe Marrs and several others have frequently expressed such conspiracy theories. Many of the popular claims made today are recycled from G. Edward Griffin’s The Creature from Jekyll Island.

In some (but not all) cases these conspiracy theories have an anti-Semitic component, alleging that “Jews” secretly or openly control the Fed. These theories are furthermore sometimes tied in to other conspiracy theories about the Trilateral Commission or the New World Order, or manipulation of the US economy by the Rockefeller and Rothschild banking families.

It seems that one of America’s nutjob dominionist outfits, ‘True Scholars of the Faith’, came up with a new, completely insane theory about the Federal Reserve in 2011, borrowing some points from Lyndon LaRouche.

Apparently the Federal Reserve is now a foreign banking institution controlled by the British, and Britain is now firmly in control of (who else?) the Rothschild family. Through their control of the Federal Reserve, they are making the national debt increase (apparently the Federal Reserve controls fiscal policy) to the point where they can take their former colony back (apparently being indebted means England owns you). Only the Constitution Parteh can save you now!


There is also the misconception that the Fed is independent or private, sometimes summed up as “no more federal than FedEx.” This is not entirely true, as it is a quasi-public entity. The Fed, like most central banks in the world, is considered “independent,” which is basically a term of art meaning that its day-to-day operations are not overseen by the federal government; it’s similar to how state broadcasters (say, the BBC) are protected from becoming propaganda outlets. However, its chairman and Board of Governors are appointed by the president subject to Senate confirmation, with regular reporting and Congressional oversight, and its mission of maintaining stable prices and full employment is determined by Congress.

The extent to which banks “control” the Federal Reserve is that they technically “own” it, but not in the way that shareholders own Microsoft. Since the Federal Reserve was created by Congressional charter, it is not organized like a normal corporation. Shareholder banks have no voting power over policy, and all decisions are made by the aforementioned government-appointed policy wonks. Shareholder banks do elect six of the nine directors of each regional Federal Reserve Bank, but these regional directors have no power over monetary policy; that power lies solely in the hands of the central Board of Governors.

However, the Board of Governors is appointed from lists given to the President by the staffs of the banking committees of Congress and by private sources. The most powerful of these groups are the financial institutions (which include prominent members of the Fed itself) and the media corporations over which they have control. Thus, the appointment of these members is highly susceptible to political interests. The President does not select these people from his own personal address book, nor does he ask the public to submit nominations.

Trouble with accounting identities

Some monetary conspiracy theorists claim that the Fed creates money out of nothing and lends it to the government at interest, thereby stealing “the people’s” money and selling us into debt slavery, or some similar nefarious scheme to take over the US government.

The way this works, however, is not quite the same as your regular commercial bank. The interest on debt held by the Fed actually goes to two places: One, the Fed pays itself out of this interest to cover its own operating costs, and two, the rest of the interest is rebated to the Treasury.

Typically, the Congress authorizes the U.S. Treasury to issue debt obligations, usually 90 day T-bills or longer term bonds, to cover its operating deficit. The Federal Reserve then purchases these obligations out of its reserve account with Federal Reserve Notes, aka U.S. currency.
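A back-of-the-envelope sketch of the interest flow described above, using invented round dollar figures rather than actual Fed data:

```python
# Where the interest on Fed-held government debt ends up, per the description
# above. The amounts are hypothetical placeholders, not real Fed figures.

interest_received = 100.0e9      # interest earned on Treasury debt held by the Fed
fed_operating_costs = 6.0e9      # the Fed pays its own expenses out of that interest
remitted_to_treasury = interest_received - fed_operating_costs

print(f"Remitted back to the U.S. Treasury: ${remitted_to_treasury / 1e9:.0f} billion")
# So most of the interest the government "pays" on debt held by the Fed
# flows straight back to the government, rather than enriching private owners.
```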


Constitutionality

This usually ties into the above point. The basic idea is that the Constitution gives Congress the power to coin money, so the Fed is unconstitutional because it is not the Congress. This is a pseudolegal argument, because Congress may delegate its powers. It is similar to pseudolegal arguments made by gold bugs. It also raises the question of whether, if they were right, all 535 members of Congress (or 541, counting the non-voting delegates) would have to personally make the coins and bills.

Congressional involvement

Sometimes a big deal is made of the fact that the law bringing the Fed into existence was passed on December 23, 1913, implying that most of Congress was away for Christmas. The reality is quite different: the House passed the law 298-60, with 76 not voting but 34 announced pairs, while the Senate passed it 43-25, with 27 not voting but with announced pairs.

For those not aware, an announced pair is where a member of the House or Senate who will be absent arranges with another member who will be present and is on the opposite side of the issue to form a “pair” with the absent member, thus allowing the absent member to have recorded how he would have voted had he been present. This means that at best only 42 more House members and 15 more Senate members could have said no to the creation of the Fed.

Although the vote would have passed even had everyone been present, some denizens claim that legislating on one of the last days of the session circumvented the possibility of challenges and debate. Why Congress would meet on days it supposedly considered off limits, when the two houses could, each with the consent of the other, simply have chosen to adjourn, is beyond them.

Austrian school and free banking proponents

Much of the opposition to the Fed in nonconspiratorial circles (though there is some overlap) comes from the Austrian school, whose adherents are free banking proponents and generally draw on Ludwig von Mises’s arguments against central banking. Ron Paul is particularly known for his multi-decade anti-Fed crusade in Congress.

In short, they claim that the Fed creates the business cycle through the expansion of the money supply which leads to “market distortion” and “malinvestment” due to easy money.

Ignoring lessons from the US Free Banking Era (1837 to 1864)

The biggest flaw with the free banking proponents is that they are either ignorant of, or simply ignore, the many problems seen in the Free Banking Era of the US.

The first problem was that during this era banks issued bank notes based on the gold and silver in their vaults, effectively printing their own money. Since these bank notes could only be redeemed at face value at the bank that issued them, the actual value of a note decreased the further from the issuing bank it traveled. And if the bank failed, its notes became worthless. This made any form of long-distance commerce difficult, if not impossible.

The second problem was that, since the laws were set up by the individual states, there was no consistency with regard to reserve requirements, interest rates for loans and deposits, capital ratios, or anything else. Worse, enforcement of what laws there were was highly variable even within a state. This resulted in some states in what was later called “wildcat banking,” where the bank notes were not backed by precious metal at all, but by mortgages or bonds. In other words, the exact same problems as are claimed regarding the Fed, but with even less oversight.

Ignoring the original Great Depression (1873-79 or 96)

Before the Crash of 1929, the term “Great Depression” referred to the period of 1873-96 which was marked by deflation (largely because the US shifted from a bimetallic standard to a de facto gold standard in 1873) and the rapid industrialization of the country.

The term Gilded Age is also applied to this period, sometimes in a pejorative manner: a shiny golden cover hiding a rotting or rotted core.

The deflation that marked this period is why some wanted to return to a bimetallic standard, as hammered home in William Jennings Bryan’s Cross of Gold speech in 1896. Even the shorter range of 1873-79 stated by the NBER is longer than the 1930s Great Depression by 22 months. This era is now called “the Long Depression”; the lesson it gives us is that switching from a bimetallic standard to a gold standard (which the Coinage Act of 1873 effectively did) triggers deflation for extended periods of time.

After the establishment of the Fed

The US has seen only three major banking crises since the establishment of the Fed (the Great Depression, the S&L crisis, and the 2008 financial crisis) and only two since the creation of federal deposit insurance, compared to roughly one per decade before that. The business cycle has also seen shorter and smaller contractions.

Essentially, what this demonstrates is that the minority of libertarian and Austrian schoolers who believe in free banking, like Ron Paul, seem to love the idea of going back to the 19th century and having us all stuff gold bricks under our mattresses every time it looks like there’s going to be a run on the bank.

Congressional criticism

Louis Thomas McFadden, a former United States Congressman and former Chairman of the House Committee on Banking and Currency, who believed that Jewish bankers were plotting with others against the United States, testified before Congress in 1934 outlining his criticism of the Fed. He also submitted a petition for articles of impeachment against the Board of Governors of the Federal Reserve System for numerous criminal acts including conspiracy, fraud, unlawful conversion, and treason. These charges went exactly nowhere.

Ron and Rand Paul have been trying to shove legislation requiring an audit of the Fed and a review of its monetary policy (apparently independence from the political bickering on the Hill is a bad thing after all) through Congress since 2011. Although three versions of this legislation have passed the House, they all failed miserably in the Senate. The 2015 version failed as well, even with Bernie Sanders voting for it.

Fractional Reserve Banking


“As a model of how money is actually created, it is ‘neat, plausible, and wrong.’ The fallacies in the model were first identified by practical experience, and then empirical research.” – Steve Keen

Fractional reserve banking is a relatively simple but wrong way of describing the banking system. As always with bad economics, it is popular partly because older academics have a vested interest in defending the idea, but also because it serves a useful political purpose for plutocrats. Namely, it basically denies that banks control the money supply of the modern economy. This allows the rich to place the onus of controlling the money supply upon governments, specifically by cutting expenditure on the grounds that it creates inflation (it can, but it presently doesn’t).

“Banks do not, as too many textbooks still suggest, take deposits of existing money from savers and lend it out to borrowers: they create credit and money ex nihilo, extending a loan to the borrower and simultaneously crediting the borrower’s money account.” – Lord Adair Turner, formerly the UK’s chief financial regulator

When they lend money, banks create money out of nothing. They can do so with no deposits at all, since that money is in electronic form. Obviously, this would be impossible if it were done physically, as in pre-electronic banking. The difference between lending $100,000 and $10,000,000 in physical gold is the difference between handing someone two handfuls (about 2.4 kilos) and shipping them a truck laden with 240 kilos of it. The difference between lending those sums electronically is a single keystroke. Accidentally creating millions of dollars’ worth of physical objects is rare; doing so electronically happens all the time, as when one bank manager accidentally gave a customer a $10,000,000 overdraft instead of the $100,000 he had asked for.

Neat, Plausible

Monetarists assume that the Federal Reserve can influence banks’ lending by setting the Fractional Reserve Rate. They assume that this is possible because they assume that banks can’t lend without deposits. Ergo they believe that increasing the amount of each deposit that a bank must ‘keep’ and not lend reduces the amount of lending, that decreasing the Fractional Reserve Limit increases lending, and that using a central bank to directly increase or decrease a bank’s reserves will cause it to increase or decrease the creation of new loans respectively. More broadly this belief sits well with their assumption that banks and lending don’t affect the economy, or that if they do then it is in a way which never changes and can therefore be safely ignored.

This sounds plausible because Neoclassical economists have a holistic ideological vision in which everyone’s economic activities including those of governments and banks are just like those of the individual person or household. It seems like common sense that if you cannot physically lend $50 to your friend who has forgotten their wallet, then banks must not be able to lend money unless they already have some. Likewise, it seems like common sense that governments can’t spend money unless they take the same amount or more in the form of taxes. While some people are aware that it is possible for governments to spend more than they take, Neoclassicists ensure that these people believe that it will create hyperinflation, which is Satan.

At an academic level, the Fractional Reserve theory has only been able to survive through the ongoing process of purging and no-platforming its critics. Even before the concept was created in the 1970s, it was manifestly apparent that lending did not require deposits. Yet the inconvenient reality was simply assumed away.


If Fractional Reserve Banking really explained how the actual banking sector worked, there would be a credit crunch every few minutes as the banks waited for people to make deposits. Everyone would pay for almost every ordinary expense with cash, and no-one would ever use credit cards unless they were willing to wait minutes, hours, or even days for the payment to go through.

Instead, banks just create the money they need to get by, without paying much or any attention to the level which they are nominally supposed to ‘keep’. This is why the top-down Quantitative Easing implemented by Japan for two decades and by Bernanke, Obama, et al. in the Great Recession did not increase lending to businesses and consumers.

“The quantity of reserve balances itself is not likely to trigger a rapid increase in lending. The narrow, textbook money multiplier does not appear to be a useful means of assessing the implications of monetary policy for future money growth or bank lending.” – Seth Carpenter, Federal Reserve associate director, “Money, Reserves, and the Transmission of Monetary Policy: Does the Money Multiplier Exist?”

Just as they did in Japan, EU and US banks used the vast majority of the QE funds for the productive (for them) purpose of lending the money to each other at above-inflation rates of interest, where it has sat ever since. In both cases they used a little to buy back their own stocks, buy each other’s stocks, buy stocks in other companies, and inflate asset bubbles. Nowhere did they increase lending to businesses or consumers, because the present and predicted future returns on doing so were lower.

Of course, Neoclassicists are divided between saying that neither of these examples count because we didn’t allow them to create enough QE money or that all the things which happened instead of what they predicted must actually be good things because anything which can be done to make money must be good for society, or else it couldn’t be done. Classic!

Stopped Clock

Austrians hate the Fractional Reserve concept not because they understand that it does not apply in reality, but because they have their own equally nutty model of how things ‘should be done’ instead.

The very thought that a bank may do something other than sit in front of your money and watch it grow mold makes some people foam at the mouth. Many get very quiet if you ask where the interest on their liquid savings accounts would come from then.

The same people often howl that government intervention in the banking system is filthy socialism because it is not their favored economic policy. Safe to say this ignores history: before the Federal Reserve regulated fractional reserve banking, things were much more exciting for depositors, what with all the constant banking crises and all.

Macroeconomic effects

In the USA, UK, Australia and much of the developed world all economic growth requires the exponential increase of private debt. This is the ultimate product of the banking system created by the Neoliberal-Neoclassical revolution, which has allowed debts to compound and wealth extraction from the population to increase.

Banking is one of the three principal sectors which maximise the unearned income of plutocrats by extracting value from the wage earning population, the others being Insurance and Real Estate.

Without a biblical style cancellation or restructuring of debts, this process will continue until de facto plutonomy and debt slavery of virtually the entire world population result.

Historical existence

Fractional Reserve Banking did actually exist in the days when banks kept physical objects, such as gold and paper currency, which could be withdrawn. In those days it was important for regulators to ensure that banks kept more than the bare minimum of valuable objects on hand, since the banks naturally wanted to keep as little of them in-house as possible in order to maximise their profits, yet those objects could be withdrawn at any time, possibly causing a bank run. Many banks imposed limits on how much could be withdrawn at once in order to limit the amount they had to keep on hand, but lowering the withdrawal limit in a crisis could actually cause more panic, and therefore more withdrawals, as a larger number of depositors showed up demanding their money back.

Many small banks in the US went bust as a result of depositors losing confidence in them, which is why the Federal Reserve was created: to establish fractional reserve limits and to supply gold or cash to illiquid banks in an emergency. It also limited the risk that depositors’ insurance (the FDIC) would kick in every time one too many people came into the bank asking for cash.


While banks can’t literally print their own money in a system with a central bank, they can increase the money supply. In a system of fiat currency, banks’ monetary base (i.e., what is actually in the “vaults”) is made up of money which can be supplied by the central bank in time of need. However, when banks make loans above their reserve (which is pretty much always), it adds to the money supply, specifically what economists call “M2” and “M3” (depending on the type of loan), which are considered less “liquid” than the monetary base. Thus, lending can (but not necessarily will) cause demand-pull inflation.

In the world of electronic banking, banks can now create “money” out of thin air by creating accounts. When someone spends the money in these accounts, it is transferred to another bank; reserves are then lent on an interbank market (Fed funds, LIBOR) to the banks that need them.

It is still always possible to get a run on the bank if too many people demand money in excess of the reserve. A simple analogy is airline seating: airlines know a few people will cancel, so they overbook flights by selling more tickets than seats. A run on the bank is like everyone showing up for the flight with no cancellations. (Or the plot of Mel Brooks’ The Producers, when the play they’d oversold shares of unexpectedly became a hit.) Bank runs are prevented in modern banking systems by the creation of a lender of last resort to cover short-term liquidity shortfalls.
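For illustration, here is a toy Python sketch of the overbooking analogy, with made-up numbers: a 20% reserve comfortably covers a normal day's withdrawals, but not a day when nearly everyone shows up at once.

total_deposits = 100 * 1_000          # 100 depositors with $1,000 each (hypothetical)
reserve_ratio = 0.20                  # fraction the bank keeps on hand
reserves = reserve_ratio * total_deposits

def can_cover(share_withdrawing):
    # Can the bank honour withdrawals if this share of deposits is demanded today?
    demanded = share_withdrawing * total_deposits
    return demanded <= reserves

print(can_cover(0.10))   # True: a typical day
print(can_cover(0.90))   # False: a run, which is where a lender of last resort comes in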

The fractional reserve system itself takes no account of the risks of the loans banks make. If the reserve requirement were set to 100%, interest paid on deposits and the generation of loans would be nearly nonexistent; on the other hand, no bank would run out of money, as long as it had absolutely no costs. This has traditionally been a policy favored mostly by Scrooge McDuck and Austrians, but it has gained currency in certain circles following the 2008 crash: it has been advocated by economists Laurence Kotlikoff, John Kay, and John Cochrane as well as the Financial Times’ chief economics commentator Martin Wolf, and Iceland has looked into implementing full reserves.

The Conspiracy Theories

Fractional reserve banking is the subject of numerous conspiracy theories. They usually revolve around, or have their roots in, antisemitism in the form of Jewish banker conspiracies, like the Rothschild family controlling the world. This usually ties in to conspiracies about the Federal Reserve as well as gold buggery or sound money.

Sometimes the cry of “fractional reserve banking is fraud!” is a cover for some kind of economic woo or scam usually of the “don’t trust banks, put your money in my Ponzi scheme instead” variety.

Sometimes these theories are just the result of people failing to understand abstract concepts.

Multiplier effect

The multiplier effect, or money multiplier, refers to the effect of a bank lending money over its reserve requirements, as explained above. By law, banks are required to keep a certain percentage (depending on the locale and type of bank) of their deposits in reserve. The resulting amount of money in the system is 1/x multiplied by an original deposited amount, where x is the required reserve ratio in decimal form.

For example, a bank is required to have a 20% reserve. Alice deposits her $1000 paycheck into the bank. The bank is able to lend out $800 to Bob, who buys a used car from Charlie, who deposits the $800 into another bank. The bank turns around and lends $640 to Denise, and so on down the line until there is $5000 in the system.
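For illustration, here is a short Python sketch of the textbook arithmetic in the example above; it is the same model the article goes on to criticize as unrealistic, and the 20% reserve ratio and $1,000 deposit are just the numbers from the example. Each deposit is re-lent minus the required reserve, and the total converges to the deposit divided by the reserve ratio.

def textbook_multiplier(initial_deposit, reserve_ratio, rounds=1000):
    # Sum the chain of deposits: each round, the lent-out portion is
    # assumed to be redeposited in full somewhere in the banking system.
    total_deposits = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total_deposits += deposit
        deposit *= (1 - reserve_ratio)
    return total_deposits

print(textbook_multiplier(1000, 0.20))   # approaches 5000
print(1000 / 0.20)                       # closed form: deposit * (1 / reserve_ratio) = 5000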

While it may seem a bit like smoke and mirrors to someone unfamiliar with economics, imagine that instead of cash it was something with ‘obviously’ more use, such as tools. We all need tools to work, but for the vast majority of the time we own a tool we aren’t using it. So we put the tools in a tool bank, so that others can use them while we are not. If we each only need the tools about 20% of the time, the result is that the bank causes there to be effectively five times as many tools in the system. That it’s currency instead of tools doesn’t change the effect.

The multiplier effect is generally regarded as a simplification in academic and policy making circles. The Bank of England has stated that “while the money multiplier theory can be a useful way of introducing money and banking in economic textbooks, it is not an accurate description of how money is created in reality”, and the multiplier model “has not featured at all in the recent academic literature”.

Charles Goodhart, the UK’s preeminent monetary economist and a former member of the Bank of England’s Monetary Policy Committee, has stated that “as long as the Central Bank sets interest rates, as is the generality, the money stock is a dependent, endogenous variable. This is exactly what the heterodox, Post-Keynesians have been correctly claiming for decades, and I have been in their party on this.”

In the Post-Keynesian view, the multiplier is an ex post facto accounting identity (or, in other words, a legal fiction). The reason for this is that a bank can make any loan it deems worthy and then borrow money from either the interbank loan market (a market in which banks lend excess reserves to each other) or the Fed discount window to meet reserve requirements.
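For illustration, a minimal Python sketch of the "loans first, reserves later" sequence just described. The bank, the 10% reserve ratio, and the loan size are all hypothetical; the point is only the ordering, with the loan and the matching deposit created together and the reserves borrowed afterwards.

reserve_ratio = 0.10   # hypothetical reserve requirement

bank = {"reserves": 0.0, "loans": 0.0, "deposits": 0.0, "interbank_borrowing": 0.0}

def make_loan(bank, amount):
    # The loan and the deposit are created together, ex nihilo.
    bank["loans"] += amount
    bank["deposits"] += amount
    # Only afterwards does the bank top up reserves, e.g. on the interbank
    # market or at the discount window, to meet the requirement.
    required = reserve_ratio * bank["deposits"]
    shortfall = max(0.0, required - bank["reserves"])
    bank["reserves"] += shortfall
    bank["interbank_borrowing"] += shortfall
    return bank

print(make_loan(bank, 100_000))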

Bank capitalization, charters, and the Glass-Steagall Act

Banking regulation is much stricter than regulation in other industries, and stricter than in the rest of the financial sector. To apply for a bank charter, the owners (usually bank holding companies) providing the bank’s capitalization are required to be debt-free; banks are supposed to be unencumbered, rock-solid investments. Once the charter is granted, the bank can then receive deposits, i.e., take on debt owed to depositors that is backed by the bank’s capitalization. The combined value of the bank’s capitalization, along with its ability to lend other people’s money (depositors’ money), makes up the bank’s balance sheet.

If a part owner of a bank holding company were to take on private debt and sell his stake in the bank to satisfy that debt, it could reduce the bank’s capitalization, drive down the value of other shareholders’ stakes, curtail the bank’s ability to lend, and affect economic growth and activity in the surrounding community. Thus holders of bank charters are strictly regulated and are supposed to be responsible, with a proven track record in managing their own financial affairs.

The Glass-Steagall Act strictly regulated banks’ and bank charter owners’ ability to use bank assets (i.e., a bank’s capitalization, depositors’ money, and the earnings from both). Under Glass-Steagall, banks were limited to collecting interest from lending depositors’ money (a portion of which was paid back to depositors) or to brokering deals, bringing buyer and seller together and making a fee off the transaction without using the bank’s own cash.

Repealing Glass-Steagall opened the door to proprietary trading, removing the heretofore strict requirement that banks invest or engage only in the most conservative activities, and allowing them to purchase, with bank stock and earnings, riskier assets with potentially more lucrative returns, such as sub-prime mortgages and insurance companies loaded with potential risks and liabilities.

The Volcker Rule, named after former Federal Reserve Chairman Paul Volcker and part of the Dodd-Frank financial-regulation bill aimed at Wall Street reform, is an effort to give the Federal Reserve stricter oversight of bank holding companies’ ownership and activities, which is difficult due to confidentiality agreements and privacy rights.

The United States is the only country in the world to have ever imposed the segregation of consumer banking and investment banking which existed under Glass-Steagall.

The New World Order. Conspiracy Theorists, Crackpots and people who need to get a Life


The reverse side of the Great Seal of the United States (1776).

The Latin phrase “novus ordo seclorum”, appearing on the reverse side of the Great Seal since 1782 and on the back of the U.S. one-dollar bill since 1935, translates to “New Order of the Ages” and alludes to the beginning of an era in which the United States of America is an independent nation-state; conspiracy theorists claim this is an allusion to the “New World Order”.

The New World Order (NWO) is, according to various conspiracy theories, an emerging clandestine totalitarian world government.

The common theme in conspiracy theories about a New World Order is that a secretive power elite with a globalist agenda is conspiring to eventually rule the world through an authoritarian world government, which will replace sovereign nation states, and an all-encompassing propaganda whose ideology hails the establishment of the New World Order as the culmination of history’s progress.

Many influential historical and contemporary figures have therefore been purported to be part of a cabal that operates through many front organizations to orchestrate significant political and financial events, ranging from causing systemic crises to pushing through controversial policies, at both national and international levels, as steps in an ongoing plot to achieve world domination.

World Domination 😂

Before the early 1990s, New World Order conspiracism was limited to two American countercultures, primarily the militantly antigovernment right and secondarily that part of fundamentalist Christianity concerned with the end-time emergence of the Antichrist.

Skeptics such as Michael Barkun and Chip Berlet observed that right-wing populist conspiracy theories about a New World Order had not only been embraced by many seekers of stigmatized knowledge but had seeped into popular culture, thereby inaugurating a period during the late 20th and early 21st centuries in the United States where people are actively preparing for apocalyptic millenarian scenarios.

These scholars are concerned that mass hysteria over New World Order conspiracy theories could eventually have devastating effects on American political life, ranging from escalating lone-wolf terrorism to the rise to power of authoritarian ultranationalist demagogues. Think Trump.

There are numerous systemic conspiracy theories through which the concept of a New World Order is viewed. The following is a list of the major ones in roughly chronological order:

End Time

Since the 19th century, many apocalyptic millennial Christian eschatologists, starting with John Nelson Darby, have predicted a globalist conspiracy to impose a tyrannical New World Order governing structure as the fulfillment of prophecies about the “end time” in the Bible, specifically in the Book of Ezekiel, the Book of Daniel, the Olivet discourse found in the Synoptic Gospels and the Book of Revelation.

They claim that people who have made a deal with the Devil to gain wealth and power have become pawns in a supernatural chess game to move humanity into accepting a utopian world government that rests on the spiritual foundations of a syncretic-messianic world religion, which will later reveal itself to be a dystopian world empire that imposes the imperial cult of an “Unholy Trinity” of Satan, the Antichrist and the False Prophet.

In many contemporary Christian conspiracy theories, the False Prophet will be either the last pope of the Catholic Church (groomed and installed by an Alta Vendita or Jesuit conspiracy), a guru from the New Age movement, or even the leader of an elite fundamentalist Christian organization like the Fellowship, while the Antichrist will be either the President of the European Union, the Secretary General of the United Nations, or even the Caliph of a pan-Islamic state.

Some of the most vocal critics of end-time conspiracy theories come from within Christianity. In 1993, historian Bruce Barron wrote a stern rebuke of apocalyptic Christian conspiracism in the Christian Research Journal when reviewing Pat Robertson’s 1991 book The New World Order.

Another critique can be found in historian Gregory S. Camp’s 1997 book Selling Fear: Conspiracy Theories and End-Times Paranoia.

Religious studies scholar Richard T. Hughes argues that “New World Order” rhetoric libels the Christian faith, since the “New World Order” as defined by Christian conspiracy theorists has no basis in the Bible whatsoever. Furthermore, he argues that not only is this idea unbiblical, it is positively anti-biblical and fundamentally anti-Christian, because by misinterpreting key passages in the Book of Revelation, it turns a comforting message about the coming kingdom of God into one of fear, panic and despair in the face of an allegedly approaching one-world government.

Progressive Christians, such as preacher-theologian Peter J. Gomes, caution Christian fundamentalists that a “spirit of fear” can distort scripture and history through dangerously combining biblical literalism, apocalyptic timetables, demonization and oppressive prejudices, while Camp warns of the “very real danger that Christians could pick up some extra spiritual baggage” by credulously embracing conspiracy theories.

They therefore call on Christians who indulge in conspiracism to repent.


Freemasonry

Freemasonry is one of the world’s oldest secular fraternal organizations and arose in late 16th to early 17th century Britain. Over the years a number of allegations and conspiracy theories have been directed towards Freemasonry, including the allegation that Freemasons have a hidden political agenda and are conspiring to bring about a New World Order, a world government organized according to Masonic principles and/or governed only by Freemasons.

The esoteric nature of Masonic symbolism and rites led to Freemasons first being accused of secretly practising Satanism in the late 18th century.

The original allegation of a conspiracy within Freemasonry to subvert religions and governments in order to take over the world traces back to Scottish author John Robison, whose reactionary conspiracy theories crossed the Atlantic and influenced outbreaks of Protestant anti-Masonry in the United States during the 19th century.

In the 1890s, French writer Léo Taxil wrote a series of pamphlets and books denouncing Freemasonry and charging their lodges with worshiping Lucifer as the Supreme Being and Great Architect of the Universe. Despite the fact that Taxil admitted that his claims were all a hoax, they were and still are believed and repeated by numerous conspiracy theorists and had a huge influence on subsequent anti-Masonic claims about Freemasonry.

Some conspiracy theorists eventually speculated that some Founding Fathers of the United States, such as George Washington and Benjamin Franklin, were having Masonic sacred geometric designs interwoven into American society, particularly in the Great Seal of the United States, the United States one-dollar bill, the architecture of National Mall landmarks and the streets and highways of Washington, DC, as part of a master plan to create the first “Masonic government” as a model for the coming New World Order.

Freemasons rebut these claims of a Masonic conspiracy. Freemasonry, which promotes rationalism, places no power in occult symbols themselves, and it is not a part of its principles to view the drawing of symbols, no matter how large, as an act of consolidating or controlling power. Furthermore, there is no published information establishing the Masonic membership of the men responsible for the design of the Great Seal.

While conspiracy theorists assert that there are elements of Masonic influence on the Great Seal of the United States, and that these elements were intentionally or unintentionally used because the creators were familiar with the symbols, in fact the all-seeing Eye of Providence and the unfinished pyramid were symbols used as much outside Masonic lodges as within them in the late 18th century, so the designers were drawing from common esoteric symbols. The Latin phrase “novus ordo seclorum”, appearing on the reverse side of the Great Seal since 1782 and on the back of the one-dollar bill since 1935, translates to “New Order of the Ages” and alludes to the beginning of an era in which the United States of America is an independent nation-state; it is often translated by conspiracy theorists as “New World Order”.

Although the European continental branch of Freemasonry has organizations that allow political discussion within their Masonic Lodges, Masonic researcher Trevor W. McKeown argues that the accusations ignore several facts. Firstly, the many Grand Lodges are independent and sovereign, meaning they act on their own and do not have a common agenda. The points of belief of the various lodges often differ. Secondly, famous individual Freemasons have always held views that span the political spectrum and show no particular pattern or preference. As such, the term “Masonic government” is erroneous; there is no consensus among Freemasons about what an ideal government would look like.


Illuminati

The Order of the Illuminati was an Enlightenment-age secret society founded by university professor Adam Weishaupt on 1 May 1776, in Upper Bavaria, Germany. The movement consisted of advocates of free thought, secularism, liberalism, republicanism, and gender equality, recruited from the German Masonic Lodges, who sought to teach rationalism through mystery schools.

In 1785, the order was infiltrated, broken up and suppressed by the government agents of Charles Theodore, Elector of Bavaria, in his preemptive campaign to neutralize the threat of secret societies ever becoming hotbeds of conspiracies to overthrow the Bavarian monarchy and its state religion, Roman Catholicism. There is no evidence that the Bavarian Illuminati survived its suppression in 1785.

In the late 18th century, reactionary conspiracy theorists, such as Scottish physicist John Robison and French Jesuit priest Augustin Barruel, began speculating that the Illuminati had survived their suppression and become the masterminds behind the French Revolution and the Reign of Terror. The Illuminati were accused of being subversives who were attempting to secretly orchestrate a revolutionary wave in Europe and the rest of the world in order to spread the most radical ideas and movements of the Enlightenment (anti-clericalism, anti-monarchism, and anti-patriarchalism) and to create a world noocracy and cult of reason.

During the 19th century, fear of an Illuminati conspiracy was a real concern of the European ruling classes, and their oppressive reactions to this unfounded fear provoked in 1848 the very revolutions they sought to prevent.

During the interwar period of the 20th century, fascist propagandists, such as British revisionist historian Nesta Helen Webster and American socialite Edith Starr Miller, not only popularized the myth of an Illuminati conspiracy but claimed that it was a subversive secret society which served the Jewish elites that supposedly propped up both finance capitalism and Soviet communism in order to divide and rule the world.

American evangelist Gerald Burton Winrod and other conspiracy theorists within the fundamentalist Christian movement in the United States, which emerged in the 1910s as a backlash against the principles of the Enlightenment, secular humanism, modernism, and liberalism, became the main channel of dissemination of Illuminati conspiracy theories in the US. Right-wing populists, such as members of the John Birch Society, subsequently began speculating that some collegiate fraternities (Skull and Bones), gentlemen’s clubs (Bohemian Club), and think tanks (Council on Foreign Relations, Trilateral Commission) of the American upper class are front organizations of the Illuminati, which they accuse of plotting to create a New World Order through a one-world government.

The Protocols of the Elders of Zion

The Protocols of the Elders of Zion is an antisemitic canard, originally published in Russian in 1903, alleging a Judeo-Masonic conspiracy to achieve world domination. The text purports to be the minutes of the secret meetings of a cabal of Jewish masterminds, which has co-opted Freemasonry and is plotting to rule the world on behalf of all Jews because they believe themselves to be the chosen people of God.

The Protocols incorporate many of the core conspiracist themes outlined in the Robison and Barruel attacks on the Freemasons, and overlay them with antisemitic allegations about anti-Tsarist movements in Russia.

The Protocols reflect themes similar to more general critiques of Enlightenment liberalism by conservative aristocrats who support monarchies and state religions. The interpretation intended by the publication of The Protocols is that if one peels away the layers of the Masonic conspiracy, past the Illuminati, one finds the rotten Jewish core.

Numerous polemicists, such as Irish journalist Philip Graves in a 1921 article in The Times, and British academic Norman Cohn in his 1967 book Warrant for Genocide, have proven The Protocols to be both a hoax and a clear case of plagiarism. There is general agreement that Russian-French writer and political activist Matvei Golovinski fabricated the text for Okhrana, the secret police of the Russian Empire, as a work of counter revolutionary propaganda prior to the 1905 Russian Revolution, by plagiarizing, almost word for word in some passages, from The Dialogue in Hell Between Machiavelli and Montesquieu, a 19th-century satire against Napoleon III of France written by French political satirist and Legitimist militant Maurice Joly.

Responsible for feeding many antisemitic and anti-Masonic mass hysterias of the 20th century, The Protocols has been influential in the development of some conspiracy theories, including some New World Order theories, and appears repeatedly in certain contemporary conspiracy literature.

For example, the authors of the controversial 1982 book The Holy Blood and the Holy Grail concluded that The Protocols was the most persuasive piece of evidence for the existence and activities of the Priory of Sion. They speculated that this secret society was working behind the scenes to establish a theocratic “United States of Europe”. Politically and religiously unified through the imperial cult of a Merovingian Great Monarch, supposedly descended from a Jesus bloodline, who occupies both the throne of Europe and the Holy See, this “Holy European Empire” would become the hyperpower of the 21st century.

Although the Priory of Sion itself has been exhaustively debunked by journalists and scholars as a hoax, some apocalyptic millenarian Christian eschatologists who believe The Protocols is authentic became convinced that the Priory of Sion was a fulfillment of prophecies found in the Book of Revelation and further proof of an anti-Christian conspiracy of epic proportions signaling the imminence of a New World Order.

Skeptics argue that the current gambit of contemporary conspiracy theorists who use The Protocols is to claim that they “really” come from some group other than the Jews, such as fallen angels or alien invaders. Although it is hard to determine whether the conspiracy-minded actually believe this or are simply trying to sanitize a discredited text, skeptics argue that it does not make much difference, since they leave the actual, antisemitic text unchanged. The result is to give The Protocols credibility and circulation.

Round Table

During the second half of Britain’s “imperial century” between 1815 and 1914, English-born South African businessman, mining magnate and politician Cecil Rhodes advocated the British Empire reannexing the United States of America and reforming itself into an “Imperial Federation” to bring about a hyperpower and lasting world peace. In his first will, written in 1877 at the age of 23, he expressed his wish to fund a secret society (known as the Society of the Elect) that would advance this goal:

To and for the establishment, promotion and development of a Secret Society, the true aim and object whereof shall be for the extension of British rule throughout the world, the perfecting of a system of emigration from the United Kingdom, and of colonisation by British subjects of all lands where the means of livelihood are attainable by energy, labour and enterprise, and especially the occupation by British settlers of the entire Continent of Africa, the Holy Land, the Valley of the Euphrates, the Islands of Cyprus and Candia, the whole of South America, the Islands of the Pacific not heretofore possessed by Great Britain, the whole of the Malay Archipelago, the seaboard of China and Japan, the ultimate recovery of the United States of America, as an integral part of the British Empire, the inauguration of a system of Colonial representation in the Imperial Parliament, which may tend to weld together the disjointed members of the Empire and, finally, the foundation of so great a Power as to render wars impossible, and promote the best interests of humanity.

In 1890, thirteen years after “his now famous will,” Rhodes elaborated on the same idea: establishment of “England everywhere,” which would “ultimately lead to the cessation of all wars, and one language throughout the world.” “The only thing feasible to carry out this idea is a secret society gradually absorbing the wealth of the world [“and human minds of the higher order”] to be devoted to such an object.”

Rhodes also concentrated on the Rhodes Scholarship, which had British statesman Alfred Milner as one of its trustees. Established in 1902, the original goal of the trust fund was to foster peace among the great powers by creating a sense of fraternity and a shared world view among future British, American, and German leaders by enabling them to study for free at the University of Oxford.

Milner and British official Lionel George Curtis were the architects of the Round Table movement, a network of organizations promoting closer union between Britain and its self-governing colonies. To this end, Curtis founded the Royal Institute of International Affairs in June 1919 and, with his 1938 book The Commonwealth of God, began advocating for the creation of an imperial federation that would eventually reannex the US, which would be presented to Protestant churches as being the work of the Christian God in order to elicit their support. The Commonwealth of Nations was created in 1949, but it would only be a free association of independent states rather than the powerful imperial federation imagined by Rhodes, Milner and Curtis.

The Council on Foreign Relations began in 1917 with a group of New York academics who were asked by President Woodrow Wilson to offer options for the foreign policy of the United States in the interwar period. Originally envisioned as a group of American and British scholars and diplomats, some of whom belonged to the Round Table movement, it was a subsequent group of 108 New York financiers, manufacturers and international lawyers, organized in June 1918 by Nobel Peace Prize recipient and U.S. secretary of state Elihu Root, that became the Council on Foreign Relations on 29 July 1921.

The first of the council’s projects was a quarterly journal launched in September 1922, called Foreign Affairs. The Trilateral Commission was founded in July 1973, at the initiative of American banker David Rockefeller, who was chairman of the Council on Foreign Relations at that time. It is a private organization established to foster closer cooperation among the United States, Europe and Japan. The Trilateral Commission is widely seen as a counterpart to the Council on Foreign Relations.

In the 1960s, right-wing populist individuals and groups with a paleoconservative worldview, such as members of the John Birch Society, were the first to combine and spread a business nationalist critique of corporate internationalists networked through think tanks such as the Council on Foreign Relations with a grand conspiracy theory casting them as front organizations for the Round Table of the “Anglo-American Establishment”, which are financed by an “international banking cabal” that has supposedly been plotting from the late 19th century on to impose an oligarchic new world order through a global financial system. Antiglobalist conspiracy theorists therefore fear that international bankers are planning to eventually subvert the independence of the U.S. by subordinating national sovereignty to a strengthened Bank for International Settlements.

The research findings of historian Carroll Quigley, author of the 1966 book Tragedy and Hope, are taken by both conspiracy theorists of the American Old Right (W. Cleon Skousen) and the New Left (Carl Oglesby) to substantiate this view, even though Quigley argued that the Establishment is not involved in a plot to implement a one-world government, but rather in British and American benevolent imperialism driven by the mutual interests of economic elites in the United Kingdom and the United States. Quigley also argued that, although the Round Table still exists today, its position in influencing the policies of world leaders has been much reduced from its heyday during World War I and slowly waned after the end of World War II and the Suez Crisis. Today the Round Table is largely a ginger group, designed to consider and gradually influence the policies of the Commonwealth of Nations, but it faces strong opposition.

Furthermore, in American society after 1965, the problem, according to Quigley, was that no elite was in charge and acting responsibly.

Larry McDonald, the second president of the John Birch Society and a conservative Democratic member of the United States House of Representatives who represented the 7th congressional district of Georgia, wrote a foreword for Gary Allen’s 1976 book The Rockefeller File, wherein he claimed that the Rockefellers and their allies were driven by a desire to create a one-world government that combined “supercapitalism” with communism and would be fully under their control. He saw a conspiracy plot that was “international in scope, generations old in planning, and incredibly evil in intent.”

In his 2002 autobiography Memoirs, David Rockefeller wrote:

For more than a century ideological extremists at either end of the political spectrum have seized upon well-publicized incidents to attack the Rockefeller family for the inordinate influence they claim we wield over American political and economic institutions. Some even believe we are part of a secret cabal working against the best interests of the United States, characterizing my family and me as ‘internationalists’ and of conspiring with others around the world to build a more integrated global political and economic structure – one world, if you will. If that’s the charge, I stand guilty, and I am proud of it.

Barkun argues that this statement is partly facetious (the claim of “conspiracy” and “treason”) and partly serious, the desire to encourage trilateral cooperation among the US, Europe, and Japan, for example, an ideal that used to be a hallmark of the internationalist wing of the Republican Party (known as “Rockefeller Republicans” in honor of Nelson Rockefeller) when there was an internationalist wing. The statement, however, is taken at face value and widely cited by conspiracy theorists as proof that the Council on Foreign Relations uses its role as the brain trust of American presidents, senators and representatives to manipulate them into supporting a New World Order in the form of a one-world government.

In a 13 November 2007 interview with Canadian journalist Benjamin Fulford, Rockefeller countered that he felt no need for a world government and wished for the governments of the world to work together and collaborate. He also stated that it seemed neither likely nor desirable to have only one elected government rule the whole world. He criticized accusations of him being “ruler of the world” as nonsensical.

Some American social critics, such as Laurence H. Shoup, argue that the Council on Foreign Relations is an “imperial brain trust” which has, for decades, played a central behind-the-scenes role in shaping U.S. foreign policy choices for the post-World War II international order and the Cold War by determining what options show up on the agenda and what options do not even make it to the table; others, such as G. William Domhoff, argue that it is in fact a mere policy discussion forum which provides the business input to U.S. foreign policy planning.

Domhoff argues that “it has nearly 3,000 members, far too many for secret plans to be kept within the group. All the council does is sponsor discussion groups, debates and speakers. As far as being secretive, it issues annual reports and allows access to its historical archives.”

However, all these critics agree that “historical studies of the CFR show that it has a very different role in the overall power structure than what is claimed by conspiracy theorists.”

The Open Conspiracy

In his 1928 book The Open Conspiracy, British writer and futurist H. G. Wells promoted cosmopolitanism and offered blueprints for a world revolution and world brain to establish a technocratic world state and planned economy. Wells warned, however, in his 1940 book The New World Order that:

When the struggle seems to be drifting definitely towards a world social democracy, there may still be very great delays and disappointments before it becomes an efficient and beneficent world system. Countless people will hate the new world order, be rendered unhappy by the frustration of their passions and ambitions through its advent and will die protesting against it. When we attempt to evaluate its promise, we have to bear in mind the distress of a generation or so of malcontents, many of them quite gallant and graceful looking people.

Wells’s books were influential in giving a second meaning to the term “new world order”, which would only be used by state socialist supporters and anti-communist opponents for generations to come. However, despite the popularity and notoriety of his ideas, Wells failed to exert a deeper and more lasting influence because he was unable to concentrate his energies on a direct appeal to intelligentsias who would, ultimately, have to coordinate the Wellsian new world order.

New Age

British neo-Theosophical occultist Alice Bailey, one of the founders of the so-called New Age movement, prophesied in 1940 the eventual victory of the Allies of World War II over the Axis powers (which occurred in 1945) and the establishment by the Allies of a political and religious New World Order.

She saw a federal world government as the culmination of Wells’ Open Conspiracy but favorably argued that it would be synarchist because it was guided by the Masters of the Ancient Wisdom, intent on preparing humanity for the mystical second coming of Christ, and the dawning of the Age of Aquarius.

According to Bailey, a group of ascended masters called the Great White Brotherhood works on the “inner planes” to oversee the transition to the New World Order but, for now, the members of this Spiritual Hierarchy are only known to a few occult scientists, with whom they communicate telepathically, but as the need for their personal involvement in the plan increases, there will be an “Externalization of the Hierarchy” and everyone will know of their presence on Earth.

Bailey’s writings, along with American writer Marilyn Ferguson’s 1980 book The Aquarian Conspiracy, contributed to conspiracy theorists of the Christian right viewing the New Age movement as the “false religion” that would supersede Christianity in a New World Order.

Skeptics argue that the term “New Age movement” is a misnomer, generally used by conspiracy theorists as a catch-all rubric for any new religious movement that is not fundamentalist Christian. By this logic, anything that is not Christian is by definition actively and willfully anti-Christian.

Paradoxically, since the first decade of the 21st century, New World Order conspiracism is increasingly being embraced and propagandized by New Age occultists, who are people bored by rationalism and drawn to stigmatized knowledge, such as alternative medicine, astrology, quantum mysticism, spiritualism, and theosophy.

Thus, New Age conspiracy theorists, such as the makers of documentary films like Esoteric Agenda, claim that globalists who plot on behalf of the New World Order are simply misusing occultism for Machiavellian ends, such as adopting 21 December 2012 as the exact date for the establishment of the New World Order in order to take advantage of the growing 2012 phenomenon, which has its origins in the fringe Mayanist theories of New Age writers José Argüelles, Terence McKenna, and Daniel Pinchbeck.

Skeptics argue that the connection between conspiracy theorists and occultists follows from their common fallacious premises. First, any widely accepted belief must necessarily be false. Second, stigmatized knowledge (what the Establishment spurns) must be true. The result is a large, self-referential network in which, for example, some UFO religionists promote anti-Jewish phobias while some antisemites practice Peruvian shamanism.

Fourth Reich

Conspiracy theorists often use the term “Fourth Reich” simply as a pejorative synonym for the “New World Order” to imply that its state ideology and government will be similar to Germany’s Third Reich.

Conspiracy theorists, such as American writer Jim Marrs, claim that some ex-Nazis, who survived the fall of the Greater German Reich, along with sympathizers in the United States and elsewhere, given haven by organizations like ODESSA and Die Spinne, have been working behind the scenes since the end of World War II to enact at least some principles of Nazism (e.g., militarism, imperialism, widespread spying on citizens, corporatism, the use of propaganda to manufacture a national consensus) into culture, government, and business worldwide, but primarily in the US.

They cite the influence of ex-Nazi scientists brought in under Operation Paperclip to help advance aerospace manufacturing in the U.S. with technological principles from Nazi UFOs, and the acquisition and creation of conglomerates by ex-Nazis and their sympathizers after the war, in both Europe and the U.S.

This neo-Nazi conspiracy is said to be animated by an “Iron Dream” in which the American Empire, having thwarted the Judeo-Masonic conspiracy and overthrown its Zionist Occupation Government, gradually establishes a Fourth Reich formally known as the “Western Imperium”, a pan-Aryan world empire modeled after Adolf Hitler’s New Order, which reverses the “decline of the West” and ushers in a golden age of white supremacy.

Skeptics argue that conspiracy theorists grossly overestimate the influence of ex-Nazis and neo-Nazis on American society, and point out that political repression at home and imperialism abroad have a long history in the United States that predates the 20th century. Some political scientists, such as Sheldon Wolin, have expressed concern that the twin forces of democratic deficit and superpower status have paved the way in the U.S. for the emergence of an inverted totalitarianism which contradicts many principles of Nazism.

Alien invasion

Since the late 1970s, extraterrestrials from other habitable planets or parallel dimensions (such as “Greys”) and intraterrestrials from Hollow Earth (such as “Reptilians”) have been included in the New World Order conspiracy, in more or less dominant roles, as in the theories put forward by American writers Stan Deyo and Milton William Cooper, and British writer David Icke.

The common theme in these conspiracy theories is that aliens have been among us for decades, centuries or millennia, but a government cover-up enforced by “Men in Black” has shielded the public from knowledge of a secret alien invasion. Motivated by speciesism and imperialism, these aliens have been and are secretly manipulating developments and changes in human society in order to more efficiently control and exploit human beings.

In some theories, alien infiltrators have shapeshifted into human form and move freely throughout human society, even to the point of taking control of command positions in governmental, corporate, and religious institutions, and are now in the final stages of their plan to take over the world.

A mythical covert government agency of the United States code-named Majestic 12 is often imagined being the shadow government which collaborates with the alien occupation and permits alien abductions, in exchange for assistance in the development and testing of military “flying saucers” at Area 51, in order for United States armed forces to achieve full-spectrum dominance.

Skeptics, who adhere to the psychosocial hypothesis for unidentified flying objects, argue that the convergence of New World Order conspiracy theory and UFO conspiracy theory is a product of not only the era’s widespread mistrust of governments and the popularity of the extraterrestrial hypothesis for UFOs but of the far right and ufologists actually joining forces. Barkun notes that the only positive side to this development is that, if conspirators plotting to rule the world are believed to be aliens, traditional human scapegoats (Freemasons, Illuminati, Jews, etc.) are downgraded or exonerated.

Brave New World

Antiscience and neo-Luddite conspiracy theorists emphasize technology forecasting in their New World Order conspiracy theories. They speculate that the global power elite are reactionary modernists pursuing a transhumanist agenda to develop and use human enhancement technologies in order to become a “posthuman ruling caste”, while change accelerates toward a technological singularity, a theorized future point of discontinuity when events will accelerate at such a pace that normal unenhanced humans will be unable to predict or even understand the rapid changes occurring in the world around them.

Conspiracy theorists fear the outcome will either be the emergence of a Brave New World-like dystopia, a “Brave New World Order”, or the extinction of the human species.

Democratic transhumanists, such as American sociologist James Hughes, counter that many influential members of the United States Establishment are bioconservatives strongly opposed to human enhancement, as demonstrated by President Bush’s Council on Bioethics’s proposed international treaty prohibiting human cloning and germline engineering. Furthermore, he argues that conspiracy theorists underestimate how fringe the transhumanist movement really is.

Modern Monetary Theory. The Government Has Unlimited Money – Tom Streithorst.

Everyone knows governments need to tax before they can spend. What Modern Monetary Theory presupposes is, maybe they don’t.

Tall, bearded, with gentle brown eyes, Occupy Wall Street veteran Jesse Myerson spends his days knocking on doors in the rundown neighborhoods of southern Indiana reminding voters of the enormous wealth of their country. His message, as an organizer for the progressive grassroots group Hoosier Action, is that the United States is a spectacularly rich nation and some of that wealth could, and should, be spread to the poor people of southern Indiana.

“People have been in a terrible amount of economic pain and that has led to the spiritual death of communities,” Myerson told me of the places he goes to. Opiate addiction is rife, as is suicide. “There’s no well-organized vehicle for people to make sense of their pain except for right-wing xenophobic initiatives.”

He says his group’s biggest competition for the hearts and minds of poor Indianans is a white supremacist group called the Traditionalist Workers’ Party. “They’re organizing along the same lines as us, these oligarchs are being tyrannical and exploiting us and we need peace and prosperity, except the difference is they are organizing along a framework of scarcity,” he said. “They are saying, ‘There is not enough to go around so we white people got to stick together and make sure we are taken care of.’”

By contrast, Myerson said, “We organize along the value of abundance, that there is enough to go around, that we can all afford freedom and dignity.”

You don’t hear “there is enough to go around” much in mainstream American politics. Republicans in particular criticize safety net and stimulus programs on the grounds they would increase the federal deficit; some have pushed radical legislation that would slash spending. But Democrats sometimes make anti-deficit arguments, too, as they did when Republicans backed a $1.5 trillion tax cut bill. Ambitious proposals that would cost a lot of money, like Bernie Sanders’s “Medicare for all,” are routinely derided for what they would cost.

The federal government has run a deficit every fiscal year but four since 1970, leading to a national debt (the accumulation of all those deficits) of $20.6 trillion and change. When asked about this by pollsters, most voters say that the country is on the wrong track, debt-wise, and want Congress to address the issue.

Myerson isn’t bothered by the deficit: “We’re the richest country in the history of countries, in the history of riches. Of course we can afford it.”

He notes no one seems to worry about the price tag when Congress increases the Pentagon budget or decides to invade a faraway country.

His optimism about government spending is due to his exposure to Modern Monetary Theory, a school of economics that says our panic over government budget deficits is delusional, a misguided and atavistic remnant of the gold standard. MMT has become increasingly influential on the left, giving progressives like Myerson a reason to believe that a high price tag shouldn’t stop the US from instituting wide ranging social reforms like Medicare for all.

Modern Monetary Theory’s basic principle seems blindingly obvious: Under a fiat currency system, a government can print as much money as it likes. As long as a country can mobilize the necessary real resources of labor, machinery, and raw materials, it can provide public services.

Our fear of deficits, according to MMT, comes from a profound misunderstanding of the nature of money.

Every five-year-old understands money. It’s what you give the nice lady before she hands you the ice cream cone, an object with intrinsic value that can be redeemed for goods or services. Through the lens of Modern Monetary Theory, however, a dollar is nothing but a liability issued by the US government, which promises to accept it back in payment of taxes. The dollar in your pocket represents a debt owed you by the federal government.

Money isn’t a lump of gold but rather an IOU.

This mildly metaphysical distinction ends up having huge practical consequences. It means the federal government, unlike you and me, can’t run out of cash. It can run out of things money can buy, which will drive up their price and be manifest in inflation, but it can’t run out of money. As Sam Levey, a graduate student in economics who tweets under the name Deficit Owls, told me, “Macy’s can’t run out of Macy’s gift certificates.”

Especially for those who want the government to provide more services to citizens, this is a convincing argument, and one that can be understood by noneconomists. “I’ve never heard a more persuasive account of how money works,” Myerson told me.

“I’ve seen these guys debate all sorts of people. I’ve never seen anybody beat them.”

“These guys” were gathered in September at the first ever Modern Monetary Theory Conference, a Kansas City event that brought together 225 academics, activists, and investors in person and thousands of livestream viewers.

In the UMKC Student Center Auditorium, Stephanie Kelton, a tall, telegenic former economic adviser to Bernie Sanders, told the enthusiastic crowd the basic problem of the American economy today is “lack of aggregate demand” leading to “chronic unemployment.” In other words, America’s problem isn’t an inability to make stuff (supply) but rather the inability to afford to buy all the stuff we are able to make (demand). Our productive potential outstrips our capacity to consume.

“The government can afford to pay for any program it wants. It doesn’t have to raise taxes,” Kelton added. Because politicians on both left and right don’t get that, “Kids go hungry; bridges don’t get built.”

Although rarely heard in the mass media, among economists this viewpoint is not particularly controversial. Oxford economist Simon Wren-Lewis told me in an email: “Most mainstream, nonideological economists would agree the US needs more infrastructure investment, and the best way to finance that is through public borrowing.” He continued: “Most people think austerity is mainstream macroeconomics, although it is not. Those who are anti-austerity look for some alternative theory, which MMT provides.”

Unemployment and underemployment caused by insufficient spending is a problem economists know how to fix. Every economics 101 textbook recognizes government can increase demand at will either by cutting taxes (letting the private sector keep more money, which it will then spend) or by increasing spending directly (creating dollars and pouring them into the economy via government expenditures).

The problem is either of these policies will increase the budget deficit, regarded by most politicians as a bad thing. Conservatives fear increased government spending will “crowd out” private sector investment. An MMT advocate might reply that crowding out can only occur when the economy is already operating at full capacity. Today, stagnant wages and low interest rates indicate the economy still has plenty of slack before inflation kicks in.

MMT disciples do think that deficit spending could lead to inflation, which they see as the only downside to more spending. That is what happened back in the 1960s, when Lyndon Johnson refused to raise taxes to pay for the Vietnam War and his Great Society, even as the private sector economy was booming. The result was rising inflation in the 1970s, one of several economic factors that led to the election of Ronald Reagan.

But for the past 35 years, inflation has been negligible. The Federal Reserve has been consistently undershooting its own 2 percent inflation target since it was adopted in 2012. Right now, deflation is a bigger threat to the global economy. To those who subscribe to MMT, it seems obvious that we need to be spending more, and frustrating that not enough people understand.

The original prophet of MMT is Warren Mosler, who 30 years ago was a Wall Street investor trying to gain competitive advantage over other traders by peering deeply into exactly how the federal government taxed, borrowed, and spent.

Fit, tanned, and currently residing in St. Croix in order to lower his tax bill, the 68-year-old multimillionaire makes an odd spokesman for a progressive economics movement. As his friend and hedge fund partner Sanjiv Sharma told me, “Warren is more politics agnostic.”

As a boy, he was fascinated by machinery, how it worked, how to fix it, how to put it together. Mosler told me he planned to major in engineering, but he switched to economics after taking a course and finding it much easier. After graduating from the University of Connecticut in 1971, he was hired by a local bank and found himself being promoted rapidly. Soon, he left New England for Wall Street.

“I look at things at an elemental level,” Mosler told me. He got down in the weeds to examine precisely how the Federal Reserve and the Treasury interacted with the general economy. He wanted to understand what happened to balance sheets when the Treasury collected taxes, traded bonds, spent and created money. He came to believe that the conventional wisdom has the relationship between the government and the private sector backwards.

Most of us assume government has to tax before it spends, that like you and me it has to earn money before it purchases goods. If it wants to spend more than it taxes, and it almost always does, it must borrow from the bond market. But by examining the granular way government accounts for its spending, Mosler saw that in every case, expenditures come first.

When your Social Security check is due, the Treasury doesn’t look to see if it has enough money to pay it. It simply keystrokes that money directly into your bank account and debits itself simultaneously, thereby creating the money it pays you out of thin air.

When you pay your taxes, the same process happens in reverse. The federal government subtracts dollars from your account and eliminates the same amount from the liabilities side of its ledger, effectively destroying the money you just paid to it.

Unlike households or firms or even state and local governments, the federal government is authorized to create dollars. It adds money into the economy when it spends, and it takes it out when it taxes.
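To make the accounting described above concrete, here is a minimal Python sketch of the spend-first, tax-later mechanics, assuming a deliberately simplified two-entry ledger. The ToyLedger class, the account names, and the dollar figures are illustrative assumptions, not a description of actual Treasury or Federal Reserve systems.

```python
# A toy double-entry sketch of the MMT description above: government spending
# credits the private sector and books a matching government liability;
# taxation reverses both entries. All names and figures are illustrative.

class ToyLedger:
    def __init__(self):
        self.private_deposits = 0   # dollars held by the private sector
        self.govt_liabilities = 0   # dollars the government has issued and not yet taxed back

    def govt_spend(self, amount):
        # Spending keystrokes new dollars into a private account...
        self.private_deposits += amount
        # ...and simultaneously debits the government's own ledger.
        self.govt_liabilities += amount

    def tax(self, amount):
        # Taxation removes dollars from the private sector and extinguishes
        # the matching government liability.
        self.private_deposits -= amount
        self.govt_liabilities -= amount

ledger = ToyLedger()
ledger.govt_spend(1_500)   # e.g. a Social Security payment
ledger.tax(1_200)          # taxes later claw most of it back
print(ledger.private_deposits, ledger.govt_liabilities)  # -> 300 300
```

The point of the toy model is simply that, in this account of money, the government’s outstanding liabilities and the private sector’s dollar holdings are two sides of the same ledger entry.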

“There’s nothing to prevent the federal government from creating as much money as it wants and paying it to somebody,” is how Alan Greenspan, then the Fed chairman, put it to Congressman Paul Ryan during a 2005 hearing.

Wren-Lewis, the Oxford economist, told me MMT sounds more radical than it really is. “In my view a lot of what they say is mainstream. When interest rates are at their lower bound their anti-austerity policy is totally mainstream,” he said. “In terms of their theoretical framework, I would describe it as being quite close to 1970s Keynesian, with the addition of a very modern understanding of how bank money is created.” Kelton told me MMT isn’t trying to change the way government spends and taxes, it is merely describing the way it already does.

Mosler’s understanding of money provided him with an insight: Any government that prints its own currency can’t go bankrupt. That insight made him millions.

In the early 1990s, Italy was struggling with high debt and low tax receipts; economists and traders feared it was heading for collapse. Italian government bond yields inevitably shot up. Mosler recognized that Italy could not be forced into default: It could print as many lira as it needed. (This was in the pre-euro days.) He borrowed lira from Italian banks at an interest rate lower than Italian government bonds were paying and used that money to buy Italian government debt other investors were dumping. Over the next few years, this trade made him and his clients more than $100 million.

After that, Mosler wanted to start a dialogue with academic economists. He wrote to Harvard, Princeton, and Yale, laying out his analysis of Federal Reserve payments and their startling implications, but was ignored. Then, using his contacts with Donald Rumsfeld, he wrangled a lunch with Arthur Laffer (of supply-side Laffer curve fame). Laffer told Mosler not to expect anything from Ivy League economics departments, but said there was this wacky heterodox group called the post-Keynesians, and they might be interested.

These economists, including Randy Wray, Bill Mitchell, and Stephanie Kelton, taught Mosler about the chartalists, an early 20th-century group of economists who, like Mosler, saw money as debt created by the state. (MMT is sometimes called “neo-chartalism.”)

Abba Lerner’s functional finance is another precursor to MMT. Lerner, a mid-century British economist, insisted public officials ignore the deficit and instead focus on maintaining sufficient demand to keep the economy at full employment. If unemployment was too high government should either spend more or tax less. When inflation threatened, it should cut spending or increase taxes. For Lerner, as for the MMT crowd, there’s no reason to care about the size of a government deficit.

Mosler explained to the post-Keynesians that taxation and borrowing did not finance government spending. At first Kelton didn’t believe him. “Warren is putting out this stuff and it is way out there. It is the inverse of everything that we’ve been taught,” she told me. She decided to write a paper disproving Mosler’s theories, but in the end, after looking deep into the way the Federal Reserve, the Treasury, and the private banking system interact, she concluded, to her surprise, that he was right. “I went through all of this research,” she said, “and I got to exactly the same place Warren got, just with a lot of complicating details.” Taxes and bond sales do come after spending; their purpose is not to fund the government but rather to take money out of the system to keep it from overheating.

Though Mosler came from outside academia, his theories dovetailed with some work done by economists. “What Warren did in some sense was remind people of things we should have known,” she told me. “He made original contributions to be sure, but he also reminded us of what was in the literature and was well-established 60, 80 years ago, and then we just unlearned all those lessons.”

Kelton and Wray introduced Mosler to Wynne Godley’s sectoral balance analysis, which suggests government deficits are not just harmless, they are actually beneficial.

To simplify Godley’s theories, every economy has two sectors: the private sector and the public or government sector. When the government spends more than it taxes, it runs a deficit. And that deficit in the public sector inevitably means a surplus for the private sector.

Kelton explained it to me this way: Imagine I’m the entire government, and you are the entire private sector. I spend $100 either going to war or fixing bridges or improving education. The private sector does the work required to achieve those goals, and the government pays it $100. It then taxes back $90, leaving $10 in the private sector’s hands. That is the government running a deficit. It is spending more than it receives back in taxes. But you, the private sector, have $10 you didn’t have before.

In order to accumulate money, the private sector needs a government deficit.
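The identity behind that claim takes only a few lines of arithmetic to check. The Python sketch below simply restates Kelton’s $100-spend, $90-tax example; it illustrates the accounting, not a model of the real economy.

```python
# Sectoral-balance arithmetic for Kelton's two-sector example: the government's
# deficit is, dollar for dollar, the private sector's surplus. Numbers are illustrative.

government_spending = 100   # government pays the private sector for work
taxes_collected = 90        # government taxes part of it back

government_balance = taxes_collected - government_spending   # -10, a deficit
private_balance = government_spending - taxes_collected      # +10, a surplus

assert government_balance + private_balance == 0   # the two balances always net to zero
print(f"Government deficit: {-government_balance}, private sector surplus: {private_balance}")
```

With only two sectors, the balances must sum to zero by construction, which is all Godley’s identity says at this level of simplification.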

Mosler’s hedge fund profited from this theory. In the late 1990s, just about everybody thought the Clinton budget surplus strengthened the US economy. But Mosler realized the Clinton budget surplus meant the government was taking more money out of the private sector in taxes than it was putting in in spending. Mosler reasoned this private sector deficit (the flip side of the government surplus) would inevitably lead to recession, so he bet that interest rates would fall (which they did in 2001), and his hedge fund again made out like a bandit.

These days, MMT advocates are interested in larger issues than lining their pockets. For Kelton, the biggest problem with the American economy is unemployment and underemployment. She told me 20 million Americans want full-time work but can’t get it. This strikes her as a shocking waste of resources and talent. To create jobs, she says, we need to boost aggregate demand and the only way to do that is to increase spending.

“You can’t make spending the enemy in an economy that depends on sales,” she told me. “Capitalism runs on sales. What you have to do to boost the economy, to boost GDP, is you have to increase spending.”

One way to stimulate spending is a tax cut, especially one whose benefits go to average Americans rather than the top 1 percent. “A tax cut for working people has the same pocketbook effect as a pay raise,” Kelton told me. “When was the last time your employer gave you a raise?”

While MMTers would favor just about any fiscal stimulus, including infrastructure spending or tax cuts, their signature policy is a federally funded but locally administered job guarantee. Anyone who wanted work, either full- or part-time, would be paid $15 an hour on projects deemed valuable by their local community. This might mean building roads, but it might also include caring for the elderly or working at daycares. Needed services would be provided, and unemployed and underemployed people could find work.

“It is,” Randy Wray said at a panel during the conference, “an extremely effective anti-poverty program.” Full-time, those jobs would pay over $31,000 a year, enough to take a family of five out of poverty. Wray and Kelton told a panel at the conference this program would create 14 to 19 million jobs, add $500 to $600 billion to the GDP, and add less than 1 percent to inflation. Mosler calls this a “temporary jobs program” because he is confident the extra demand created by this federal spending would spark an upsurge in private sector hiring.

Ten years after the financial crisis, the American economy remains in sorry shape. Kelton calls it “a junk economy.” Although official unemployment is relatively low, that masks a long-term stagnation in wages and a huge number of discouraged workers, not counted in unemployment statistics. Real median wages are lower today than they were when Jimmy Carter was president.

For the first time in history, most Americans are likely to be worse off than their parents.

Donald Trump won last year at least in part because he recognized that, for many of us, the American dream is dead and our economy is crap.

MMT says it can fix it, and that all it would take to create jobs and build a better America is to end the worrying about government deficits. “You hear people all the time saying government is living beyond its means,” Kelton said. “Absolutely not. We are living far, far below our means.”

Mosler says politicians are only obsessed by the deficit because voters are: “We’ve created an electorate who believe the deficit is too large and has to come down.” MMT-supporting academics and left-wing activists are hoping that by changing people’s minds they can transform America. Mosler is confident that once people understand the insights of MMT, they won’t forget them. “Nobody goes back,” he told me.

Myerson isn’t so sanguine. He isn’t convinced winning the intellectual debate will be enough. “The billionaires have the power so the economics that supports their agenda is going to be the predominant one.” If MMT became mainstream and increased public spending became the norm, power and wealth would shift away from the ruling class. Myerson suspects that won’t happen without a struggle. He remembers something that Ann Larson and Laura Hanna of the group Debt Collective said at the conference:

“There will be no trickledown MMT. It’s going to have to come from organizing people.”

Deadly Innocent Frauds of government monetary policy

Warren Mosler

Deadly Innocent Fraud #1: Government Must Tax To Spend

Deadly Innocent Fraud #2: With government deficits, we are leaving our debt burden to our children

Deadly Innocent Fraud #3: Federal Government budget deficits take away savings

Deadly Innocent Fraud #4: Social Security is broken

Deadly Innocent Fraud #5: The trade deficit is an unsustainable imbalance that takes away jobs and output

Deadly Innocent Fraud #6: We need savings to provide the funds for investment

Deadly Innocent Fraud #7: It’s a bad thing that higher deficits today mean higher taxes tomorrow

Can America Rescue Itself? Vote, Because That’s Just What They Don’t Want You to Do – NY Times.

This is a fragile moment for America. The integrity of democratic institutions is under assault from without and within, and basic standards of honesty and decency in public life are corroding.

If you are horrified at what is happening in Washington and in many states, you can march in the streets, you can go to town halls and demand more from your representatives, you can share the latest outrageous news on your social media feed, all worthwhile activities.

But none of it matters if you don’t go out and vote.

It’s a perennial conundrum for the world’s oldest democracy: Why do so many Americans fail to go to the polls? Some abstainers think that they’re registering a protest against the awful choices. They’re fooling themselves.

Nonvoters aren’t protesting anything; they’re just putting their lives and futures in the hands of the people who probably don’t want them to vote.

We’ve seen recently what can happen when people choose instead to take their protest to the ballot box. We saw it in Virginia in November. We saw it, to our astonishment, in Alabama in December. We may see it this week in western Pennsylvania. Voting matters.

Casting a ballot is the best opportunity most of us will ever get to have a say in who will represent us, what issues they will address and how they will spend our money. The right to vote is so basic, President Lyndon Johnson said in 1965, that without it “all others are meaningless.”

And yet every election, tens of millions of Americans stay home. Studies of turnout among developed nations consistently rank the United States near the bottom.

In the most recent midterms, in 2014, less than 37 percent of eligible voters went to the polls, the lowest turnout in more than 70 years. In 2016, 102 million people didn’t vote, far more than voted for any single candidate.

The problem isn’t just apathy, of course. Keeping people from voting has been an American tradition from the nation’s earliest days, when the franchise was restricted to white male landowners. It took a civil war, constitutional amendments, violently suppressed activism against discrimination and a federal act enforcing the guarantees of those amendments to extend this basic right to every adult. With each expansion of voting rights, the nation inched closer to being a truly representative democracy. Today, only one group of Americans may be legally barred from voting, those with felony records, a cruel and pointless restriction that disproportionately silences people of color.

In the months leading up to the midterm elections on Nov. 6, when the House, Senate and statehouses around the country are up for grabs, the editorial board will explore the complicated question of why Americans don’t vote, and what can be done to overcome the problem.

The explanations fall into three broad categories:


A 96-year-old woman in Tennessee was denied a voter-ID card despite presenting four forms of identification, including her birth certificate. A World War II veteran was turned away in Ohio because his Department of Veterans Affairs photo ID didn’t include his address. Andrea Anthony, a 31-year-old black woman from Wisconsin who had voted in every major election since she was 18, couldn’t vote in 2016 because she had lost her driver’s license a few days before.

Stories like these are distressingly familiar, as more and more states pass laws that make voting harder for certain groups of voters, usually minorities, but also poor people, students and the elderly. They require forms of photo identification that minorities are much less likely to have or be able to get, purportedly to reduce fraud, of which there is virtually no evidence. They eliminate same-day registration, close polling stations in minority areas and cut back early voting hours and Sunday voting.

These new laws may not be as explicitly discriminatory as the poll taxes or literacy tests of the 20th century, but they are part of the same long-term project to keep minorities from the ballot box. And because African Americans vote overwhelmingly for Democrats, the laws are nearly always passed by Republican-dominated legislatures.

In a lawsuit challenging Wisconsin’s strict new voter-ID law, a former staff member for a Republican lawmaker testified that Republicans were “politically frothing at the mouth” at the prospect that the law would drive down Democratic turnout. It worked: After the 2016 election, one survey found that the law prevented possibly more than 17,000 registered voters, disproportionately poor and minority, from voting. Donald Trump carried the state by fewer than 23,000 votes.


The legitimacy of an election is only as good as the reliability of the machines that count the votes. And yet 43 states use voting machines that are no longer being made, and are at or near the end of their useful life. Many states still manage their voter registration rolls using software programs from the 1990s. It’s no surprise that this sort of infrastructure failure hits poorer and minority areas harder, often creating hours-long lines at the polls and discouraging many voters from coming out at all. Upgrading these machines nationwide would cost at least $1 billion, maybe much more, and Congress has consistently failed to provide anything close to sufficient funding to speed along the process.

Elections are hard to run with aging voting technology, but at least those problems aren’t intentional. Hacking and other types of interference are. In 2016, Russian hackers were able to breach voter registration systems in Illinois and several other states, and targeted dozens more. They are interfering again in advance of the 2018 midterms, according to intelligence officials, who are demanding better cybersecurity measures. These include conducting regular threat assessments, using voting machines that create paper trails and conducting postelection audits. Yet President Trump, who sees any invocation of Russian interference as a challenge to the legitimacy of his election, consistently downplays or dismisses these threats. Meanwhile, Mr. Trump’s State Department has not spent a dime of the $120 million Congress allocated to it to fight disinformation campaigns by Russia and other countries.


Some people wouldn’t vote if you put a ballot box in their living room. Whether they believe there is no meaningful difference between the major parties or that the government doesn’t care what they think regardless of who is in power, they have detached themselves from the political process.

That attitude is encouraged by many in government, up to and including the current president, who cynically foster feelings of disillusionment by hawking fake tales of rigged systems and illegal voters, even as they raise millions of dollars from wealthy donors and draw legislative maps to entrench their power.

The disillusionment is understandable, and to some degree it’s justified. But it creates a self-fulfilling prophecy. When large numbers of people don’t vote, elections are indeed decided by narrow, unrepresentative groups and in the interests of wealth and power. The public can then say, See? We were right. They don’t care about us. But when more people vote, the winning candidates are more broadly representative and that improves government responsiveness to the public and enhances democratic legitimacy.

These obstacles to voting and political participation are very real, and we don’t discount their impact on turnout. The good news is there are fixes for all of them.

The most important and straightforward fix is to make it easier for people to register and vote. Automatic voter registration, which first passed in Oregon just three years ago, is now the law or practice in nine states, both red and blue, and the District of Columbia. Washington State is on the cusp of becoming the tenth, and New Jersey and Nevada may be close behind. More people also turn out when states increase voting opportunities, such as by providing mail-in ballots or by expanding voting hours and days.

The courts should be a bulwark protecting voting rights, and many lower federal courts have been just that in recent years, blocking the most egregious attacks on voting in states from North Carolina to Wisconsin. But the Supreme Court under Chief Justice John Roberts Jr. has made this task much harder, mainly by gutting a key provision of the Voting Rights Act in a 2013 case. Decisions like that one, which split 5 to 4, depend heavily on who is sitting in those nine seats, yet another reason people should care who gets elected.

In the end, the biggest obstacle to more Americans voting is their own sense of powerlessness. It’s true: voting is a profound act of faith, a belief that even if your voice can’t change policy on its own, it makes a difference. Consider the attitude of Andrea Anthony, the Wisconsin woman who was deterred by the state’s harsh new voter-ID law after voting her whole adult life. “Voting is important to me because I know I have a little, teeny, tiny voice, but that is a way for it to be heard,” Ms. Anthony said. “Even though it’s one vote, I feel it needs to count.”

She’s right. The future of America is in your hands. More people voting would not only mean “different political parties with different platforms and different candidates,” the writer Rebecca Solnit said. “It would change the story. It would change who gets to tell the story.”

There are a lot of stories desperately needing to be told right now, but they won’t be as long as millions of Americans continue to sit out elections. Lament the state of the nation as much as you want. Then get out and vote.

NY Times

Real Facts about Tariffs. Trump’s plan brings a gun to a knife fight, a gun aimed at his foot – Greg Jericho.

It wasn’t a total surprise that Australia was spared the steel and aluminium tariffs as Trump’s bluster of levelling tariffs for everyone is weakening.

This week saw leaders around the world trying to remember whether they were meant to take Donald Trump seriously, but not literally, or literally but not seriously, and also wondering if they have a Greg Norman somewhere they could use.

When US president Trump announced early in the week he was going to levy a 25% tariff on steel and a 10% tariff on aluminium imports, he suggested it was in order to protect national security. As with most Trump utterances, it left everyone trying to decipher just what he meant, because the US imports nearly half its steel from four nations (Canada, Brazil, South Korea and Mexico) which are hardly enemies of the US.

Was he doing this to attack China? He did write a series of tweets that suggested trade with China is in his sights, but while China is the biggest producer of steel, it only exports a small percentage of it to the US.

Some of his other tweets suggested that Trump was instead targeting Europe, but again, hitting steel imports was an odd way to go about it, given Europe made it quickly obvious that it would retaliate with tariffs of its own and the European Union is one of the few economies with enough grunt not to get pushed around by the US.

Then came the suggestion that Trump was using this to negotiate with Canada and Mexico over Nafta, but that was also odd, because it undercut his national security reasoning and opened the US up to retaliation under World Trade Organisation rules (which would be likely anyway, given his national security reasoning was clearly bogus).

And using the threat of increasing tariffs in free trade negotiations is a weird way to go about things.

The tariffs do hurt the countries that export steel and aluminium to the US, because they force them to charge more for their product, thereby giving American steel companies an advantage, but they also hurt the US.

Tariffs are effectively consumption taxes designed to give local industries an advantage (or at least an equal footing with international competitors), and they work by raising the price of imports. Now that is great for the owners and possibly workers of those industries, but not so good for anyone else who wants to buy those goods, because now they have to pay more.

A tariff on steel and aluminium imports might help create a few extra jobs in the steel industry, but it also increases the price of all things made with steel and aluminium. That leads to job losses in those industries and also reduces the living standards for everyone because suddenly they have to pay more for things like canned goods, beer, and cars.
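To see roughly how that pass-through works, here is a small, purely illustrative Python calculation. The import price and the cost share are invented for the example; they are not figures from the article or from any study.

```python
# Illustrative arithmetic only: how a 25% steel tariff can flow through to the
# costs of steel-using industries. All prices and cost shares are assumptions.

import_price_per_tonne = 600.0   # hypothetical pre-tariff price of imported steel
tariff_rate = 0.25               # the 25% steel tariff

tariffed_price = import_price_per_tonne * (1 + tariff_rate)   # 750.0 per tonne

# Suppose steel is 40% of a downstream producer's costs, and assume (for the
# sake of the example) that all the steel it buys rises in price by the full tariff.
steel_cost_share = 0.40
input_cost_increase = steel_cost_share * tariff_rate          # 10% higher input costs

print(f"Steel price rises from {import_price_per_tonne:.0f} to {tariffed_price:.0f} per tonne")
print(f"Input costs for a steel-intensive producer rise by about {input_cost_increase:.0%}")
```

How much of that cost increase reaches consumers depends on competition and substitution, but the direction of the effect is what the paragraph above describes.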

One study suggested that for every job gained in the steel and aluminium industries, five would be lost elsewhere.

That does not mean all free trade is a win for everyone: international trade does not occur in a textbook but in the real world, where governments subsidise and assist industries. The general rule, though, is that the costs to the economy increase with the size of the tariff and the number of industries affected (and similarly, the benefits of lowering tariffs shrink as they get closer to zero). A 25% tariff on steel is thus a rather hefty whack.

Trump is in effect going to the negotiating table with a massive weapon, a bit like taking a gun to a knife fight. The only problem is he has the gun aimed at his own foot.

And so it wasn’t a total surprise to see Trump back down and exempt Canada and Mexico, and then later give an exemption to Australia. As the trade minister, Steve Ciobo, noted this week, our steel exports to the US amount to about 0.8% of the US market and our aluminium exports account for about 1.5%, so exempting Australia makes little difference.

To that end, reports that we have engaged Greg Norman to do some lobbying on our behalf seem eminently sensible. Not because Norman is some master trade negotiator, but because when dealing with Trump, nations always need to realise he is an insecure, ego driven fool who needs praise for doing the most ordinary of activities, and who sees every discussion and issue through the prism of how it makes him look.

Norman is probably the only Australian Trump has heard of, and the fact that Norman is famous and successful and would be seeking a favour from Trump would appeal to Trump’s vanity.

We could bemoan the fact that America’s electoral college system has selected this vainglorious ignoramus, or we can suck it up and use it to our advantage.

For now it appears his bluster of levelling tariffs for everyone is weakening. Trump clearly believes this is the best way to negotiate trade deals; like any good swindler, he’ll ignore the costs and talk only of the benefits.

The danger for Australia has never been a direct US tariff so much as the prospect of retaliation from Europe and China. The last thing a small open economy needs is for the large economies of the world to start playing like it is 1930.

For now everyone is trying to work out just what Trump is after; mostly he is after things that he can call a win (even if they are really not). So nations will be thinking of things they can give Trump that don’t matter, in order for him to claim victory in the negotiation.

Or they can see if Greg Norman is available for hire.

The Guardian

And then they’re Back! Just when the Baby Boomer is loving the empty nest, here’s the Boomerang Child – Yvonne Roberts.

For parents who have been enjoying the freedom of living child free, now comes research to spoil it all.

The bedrooms have been redecorated in grown-up colours, the 25-year-old soft toys chucked out, the washing machine is blissfully underused and, thanks to the apparent current raging addictions of baby boomers, a holiday or two (cruising in the Med, the Antarctic, anywhere that avoids dry land) has been booked. And then they’re back.

According to a recent study by the London School of Economics (LSE), adult children who return to the family home after a period away, often at university, cause a significant decline in their parents’ quality of life and wellbeing.

The first study of its kind to measure the impact of the “boomerang generation” looked at 17 countries, including France, Germany and Italy. Dr Marco Tosi and Prof Emily Grundy applied “quality of life” measures that included “feelings of control, autonomy, pleasure and self-realisation in everyday life”.

When a child returns home, the researchers found, the score went down by an average of 0.8 points, an effect on quality of life similar to developing an age-related disability such as mobility difficulties. Protestant countries showed a greater decline than Catholic ones, presumably because Catholic nations are more accustomed to living in multigenerational, extended families.

“When children leave the parental home, marital relationships improve and parents find a new equilibrium,” says Tosi. “They enjoy this stage in life, finding new hobbies and activities. When adult children move back, it is a violation of that equilibrium.”

When a grown-up child does return, often reverting to tricky adolescence, there is something comfortingly familiar about doors slamming, noise accelerating and wellbeing sliding down the scale; it’s called parenting. But this time round, it can be particularly gruelling.

It’s not easy for a twenty something whose aspirations are battered by ridiculous housing costs, student debt and low wages to have to witness the daily spectacle of baby boomers bent on rediscovering their 60s mojo with late nights and long lie ins, all the while being hard of hearing, digitally illiterate and short on memory.

Repetition and constant interrogation about the strangeness of modern life are the price the returner must pay. “Did you say you’d be back for supper?” “Six times.” “What’s that thing that works the TV?” “The remote control.”

And the rules of engagement are far from clear given that nowadays it’s more likely to be the baby boomer who is rolling a spliff and starting on a second bottle before the end of The Archers.

Last week, a series of notes from parents admonishing children and teenagers was published. “Every time you don’t eat your sandwich, a unicorn dies. Love Dad,” read one lunchbox note. In a boomerang household, it’s more likely the child will leave an admonishing Post-it stuck to an empty case of wine, such as “Drink kills!”

Around one in four young adults now live with their parents in the UK, the highest number since records on the trend began in 1996. In the 60s, it was the newly marrieds who returned to live with the in-laws. The UK wasn’t part of the LSE study, but Tosi says refilling the empty nest is likely to have the same impact. And we have history.

In the 18th century, young men would leave home in their teens to serve as apprentices and young women would fly the nest into domestic service, according to the sociologist Wally Seccombe’s history of working-class life, Weathering the Storm. But by the 1850s, the Industrial Revolution had led to mass “in-migration” to cities. “Home ownership was out of the question for the vast majority,” writes Seccombe. Families huddled together, sublet and took in lodgers.

In 1851, in Preston, housing costs and low wages contributed to eight out of 10 males aged 15 to 19 living at home. It could take a woman, also a wage earner, up to three days to do the weekly wash by hand. Today, a returning adult child may find that the newly liberated woman of the house has resigned from all domestic duties in the name of self-realisation. The nest is no longer what it was.

That said, one vital element is missing from the LSE study: how long does the return of the boomerang child last? A decade and he or she risks turning into a carer, while a year or two has its pluses: someone to feed the cat while Mum and Dad are paddling up the Amazon or, if finances are depleted by more mouths to feed again, down the Ouse.

There are also surprising trade-offs. Research on the brain by two American psychologists, Mara Mather and Susan Turk Charles, involved tests on people up to the age of 80. Results indicated that as we get older our fight-or-flight-dictating amygdala reacts less to negative information. We tend to see the good rather than the bad, not least because time is precious. “In younger people, the negative response is more at the ready,” says Charles.

So in what appears to be an age of perpetual anxiety for adult offspring who are perhaps temporarily suspending the quest for independence, to go back home is not just about cheap living (and potential continued warfare if more than one sibling also rejoins the nest). Mum and Dad may find their equilibrium, newfound hobbies and partnership wrecked, but there are compensations in making room for a broke son or daughter. Like all good-enough parents, in tough times they can make things seem not quite as bad as they might otherwise have been. Even while queueing for the shower.

The Guardian

The Indestructible Idea of the Basic Income. A History – Jesse Walker.

Is this the only policy proposal Tom Paine, Huey Long, Milton Friedman, Timothy Leary and Sam Altman can agree on?


Andy Stern is a former president of the Service Employees International Union. Charles Murray may be America’s most prominent right-wing critic of the welfare state. So when they appeared onstage together in Washington, D.C., last fall to discuss the basic income, the idea of keeping people out of poverty by giving them regular unconditional cash payments, the most striking thing about the event was that they kept agreeing with each other.

It isn’t necessarily surprising that Stern and Murray both back some version of the concept. It has supporters across the political spectrum, from Silicon Valley capitalists to academic communists. But this diverse support leads naturally to diverse versions of the proposal, not all of which are compatible with one another. Some people want to means-test the checks so that only Americans below a certain income threshold receive them; others want a fully universal program, given without exceptions. Some want to replace the existing welfare state; others want to tack a basic income onto it. There have been tons of suggestions for how to fund the payments and for how big they should be. When it comes to the basic income, superficial agreement is common but actual convergence can be fleeting.

In Stern’s case, the central issue driving his interest in the idea is the turmoil he expects automation to bring to the economy. In the future, he and Lee Kravitz predict in their 2016 book Raising the Floor, tens of millions of jobs will disappear, leaving much of the country stuck with work that is “contingent, part-time, and driven largely by people’s own motivation, creativity, and the ability to make a job out of ‘nothing.’” A basic income, he hopes, would bring some economic security to their lives.

Read Murray’s first detailed pitch for a guaranteed income, the 2006 book In Our Hands, and you won’t see anything like that. Its chief concern is shifting power from government bureaucracies to civil society. It doesn’t just propose a new transfer program; it calls for repealing every other transfer program. And automation isn’t a part of its argument at all.

But onstage at the Cato Institute in DC, Murray was as worried as Stern about technological job loss, warning that “we are going to be carving out millions of white-collar jobs, because artificial intelligence, after years of being overhyped, has finally come of age.” Meanwhile, Stern signaled that he was open not just to replacing welfare programs for the disadvantaged but possibly even to rethinking Social Security, provided that people still have to contribute money to some sort of retirement system and that Americans who have already paid in don’t get shortchanged. He drew the line at eliminating the government’s health insurance programs, but the other guy on the stage agreed that health care was different. Under Murray’s plan, citizens would be required to use part of their grant to buy health insurance, and insurance companies would be required to treat the population as a single pool.

The Murray/Stern convergence comes as the basic income is enjoying a wave of interest and enthusiasm. The concept comes up in debates over everything from unemployment to climate change. Pilot programs testing various versions of the idea are in the works everywhere from Oakland to Kenya, and last year Swiss voters considered a plan to introduce a guaranteed income nationwide. (They wound up rejecting the referendum overwhelmingly, with only 23 percent voting in favor. I didn’t say everyone was enthusiastic.)

This isn’t the first time the basic income or an idea like it has edged its way onto the agenda. It isn’t even the first time we’ve seemed to see an ideological convergence. This patchwork of sometimes overlapping movements with sometimes overlapping proposals has a history that stretches back centuries.

It Usually Begins with Tom Paine

Just where you pinpoint the start of that history depends on how broadly you’re willing to define basic income. The idea’s advocates have identified plenty of precursors to their proposals, but sometimes the connection can be a little tenuous. It’s true, as they’ll tell you, that in 1516 St. Thomas More suggested that society could reduce crime by “providing everyone with some means of livelihood.” It’s a bit of a leap from there to the plans being debated today.

But we have to start somewhere, and for two reasons 1795 is a good place to begin. That’s the year Thomas Paine started to write his pamphlet Agrarian Justice. It’s also the year some squires introduced a new system of relief to the English district of Speenhamland.

Agrarian Justice, which was ultimately published in 1797, posited that “the earth, in its natural, uncultivated state was…the common property of the human race.” Therefore, Paine argued, each landowner “owes to the community a ground-rent” to compensate the dispossessed for their loss. From those fees, “the sum of fifteen pounds sterling” should be paid to everyone when they turn 21, with another “ten pounds per annum” paid after they’ve turned 50.

Eighteenth century England already had a welfare apparatus. Under the Poor Laws, different local governments devised different systems for distinguishing the “deserving” from the “undeserving” poor, and for extracting labor from able-bodied paupers. Paine was proposing something else: money for everyone just for being alive and of age, delivered as a matter of “justice, and not charity.”

The squires of Speenhamland were less idealistic and more afraid. The cost of grain had skyrocketed in Britain, food riots were breaking out, and the landed classes were casting their eyes uneasily at the revolutionary violence in France. And so in May of 1795, just a few months before Paine began his pamphlet, the Speenhamland magistrates decided to adopt a new method of public assistance. Junking the old deserving/undeserving distinction, they would now ensure that all the poor in their village received an income, with the level of aid varying with the market price of bread.

The so-called Speenhamland system was soon adopted in other parishes. It persisted until 1834, when a royal commission declared it a failure. Parliament then adopted the New Poor Law, which required able-bodied paupers who wanted relief to be confined in tightly disciplined workhouses.

The commission’s report on Speenhamland painted a picture of indolence and immorality, accusing the system of discouraging men from working and encouraging women to have children out of wedlock. The royal report also suggested that the system subsidized low wages, making it more a ceiling than a floor for poor people’s incomes. Since then, several scholars have challenged the commission’s conclusions, noting that its data were not collected very scientifically, that many effects attributed to the system actually preceded it, and, perhaps most notably, that the Speenhamland approach wasn’t actually all that widespread. (Different districts had continued to give relief in different ways.) But those critiques are a relatively recent development. In between, the commission’s conclusions influenced figures ranging from Marx to Mises.

Neither the system in Paine’s pamphlet nor the system in Berkshire County was a full-fledged basic income. But between the radical calling for a new allocation of power and the insiders afraid their power would be overturned, the 1790s anticipated a lot of arguments to come.

From Huey Long to Timothy Leary

Several similar proposals circulated in the ensuing centuries. The economist Henry George, famous for arguing that government should be funded by a single tax on land, thought that any “surplus revenue might be divided per capita.” The philosopher Bertrand Russell suggested that “a certain small income, sufficient for necessaries, should be secured to all.” The engineer C.H. Douglas proposed a “national dividend” as part of his economic philosophy of “social credit.” Douglas’ doctrine was briefly in vogue among modernist intellectuals, with advocates ranging from the anarchist art critic Herbert Read to the fascist poet Ezra Pound.

Pound’s favorite governor, the Louisiana populist Huey Long, launched a Share Our Wealth campaign during the Depression; its planks included a guaranteed annual income of $2,500 per family (and confiscatory taxes on incomes over $1 million). In the early ’40s, the literary socialists H.G. Wells and George Bernard Shaw proposed a basic income of $3,200 to $4,800 a year. (When The Tuscaloosa News reported this, it presented the idea to its American audience as “a sort of intellectual Huey Long Share-the-Wealth plan for Great Britain.”) And there were others, from the English Quakers who called their proposal a “state bonus” to the market socialists who endorsed a “social dividend.”

Some of these thinkers hailed from the left, but many did not; or at least they weren’t a part of the conventional left. There is a current of thought that’s deeply skeptical of both the statist forms of socialism and the monopolistic forms of capitalism, and which often fixates on quirky policy ideas (George’s land tax, Douglas’ monetary scheme) that aim to tame concentrated economic power without concentrating power in the government instead. The basic income fits snugly in that tradition, especially when the payments are presented not as a form of relief but as dividends to the owners of society’s resources.

Those third-way ideas tend to take hold in two places that aren’t usually associated together: avant-garde intellectual circles and populist movements. Social credit, for instance, made a strange transit from the modernists’ salons to the prairies of western Canada, where the Depression left voters eager to try something new.

In 1935, the Albertan electorate swept a party based on the philosophy into power; the new premier, radio evangelist William Aberhart, pledged to distribute dividends of $25 a month. This turned out to be more easily promised than delivered, and in 1937, frustrated Social Credit Party backbenchers revolted over the government’s failure to issue the checks. They won the political battle but lost the policy war. (Two decades later, flush with energy wealth, a Social Credit government finally distributed “oil dividends” to Albertans in 1957 and 1958. A Tory government reprised the idea in 2006, this time calling the oil-fueled checks a “prosperity bonus.”)

Yet another version of the idea took hold among free market economists. In the 1940s, Milton Friedman and George Stigler started exploring the concept of a negative income tax. The idea here, in Stigler’s words, was to “extend the personal income tax to the lowest income brackets with negative rates in these brackets.” By making sure “the negative rates are appropriately graduated,” he added, “we may still retain some measure of incentive for a family to increase its income.” Stigler thought this would be a good way to give a hand to low-wage workers without the market-distorting effects of a minimum wage. By 1962, Friedman was proposing it as a substitute for virtually the entire welfare state, or as he put it, “the present rag bag of measures directed at the same end.”
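Stigler’s description translates directly into a simple schedule. The Python sketch below implements a generic negative income tax with hypothetical parameters (a $10,000 threshold and a 50 percent negative rate); it shows the shape of the idea rather than Friedman’s or Stigler’s actual proposal.

```python
# A generic negative income tax schedule, per the description above: below a
# threshold, the "tax" becomes a payment, graduated so that earning more always
# leaves a household better off. The threshold and rate are hypothetical.

def nit_payment(earned_income, threshold=10_000.0, negative_rate=0.5):
    """Return the payment a household receives under this toy NIT.

    Households below the threshold receive negative_rate times the shortfall;
    households at or above it receive nothing (ordinary taxes are ignored here).
    """
    shortfall = max(threshold - earned_income, 0.0)
    return negative_rate * shortfall

for income in (0, 4_000, 8_000, 10_000, 15_000):
    payment = nit_payment(income)
    print(f"earned {income:>6}: payment {payment:>6.0f}, total income {income + payment:>6.0f}")
```

Because the payment falls by only 50 cents for each extra dollar earned, total income always rises with earned income, which is the “measure of incentive” Stigler had in mind.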

By then a push was coming from still another direction. Convinced that automation was on the verge of driving unemployment to unsustainable levels, several social scientists argued that a guaranteed income could cure the crisis. If you’re familiar with modern worries about what artificial intelligence and self-driving trucks will do to the job market, the rhetoric of the early ’60s will sound familiar. “The coming replacement of man’s skills by the machine’s skills will destroy many jobs and render useless the work experience of vast numbers now employed,” the futurist Robert Theobald argued in his 1963 book Free Men and Free Markets. Since only the most skilled workers will enjoy the “possibility of obtaining employment in one of the restricted number of new fields,” he feared we were headed for “the complete breakdown of our present socioeconomic system.”

Theobald’s ideas inspired the Ad Hoc Committee on the Triple Revolution, whose 1964 manifesto declared that “the combination of the electronic computer and the automated self-regulating machine” was breaking “the traditional link between jobs and incomes.” The solution, it concluded, was an “unqualified right to an income” that would “take the place of the patchwork of welfare measures.” That part may not sound so different from Friedman’s proposal, but the document also called for government planning and for a transition program that featured public works, public housing, public coal plants, and other interventions in areas that Friedman would leave to the market. The statement was signed by a collection of left-leaning intellectuals, from Linus Pauling to Tom Hayden, and it left enough of a mark on the culture to inspire everything from a Martin Luther King sermon to a Philip José Farmer science fiction story.

In another quarter of the left, activists formed the National Welfare Rights Organization in 1966. One of their chief complaints was the intrusiveness and humiliation built into America’s welfare bureaucracies; they too soon called for replacing the existing transfer programs with a guaranteed income.

Yet another version of the concept picked up fans in the counterculture. When the LSD evangelist Timothy Leary ran for governor of California in 1969, he declared that the state “should be run like a successful business enterprise. Instead of extorting taxes from the citizens a well run state should return a profit. Anyone smart enough to live in California should be paid a dividend.”

Needless to say, not everyone liked such notions. When Eugene McCarthy called for a guaranteed income in the 1968 presidential primaries, his rival Bobby Kennedy derided the proposal as “a massive new extension of welfare,” declaring that the proper “answer to the welfare crisis is work, jobs, self-sufficiency, and family integrity.” Shortly after the election, a Gallup poll showed 62 percent of the public rejecting the concept of a guaranteed income.

Nonetheless, by the end of the decade the idea had fans everywhere from the Black Panthers to the Nixon administration. That’s quite a convergence. But convergence doesn’t always last.

A Moment in the Sun

Those converging forces might not have agreed on everything, but they were listening to one another. When Ralph Helstein of the United Packinghouse Workers read Friedman’s proposal for a negative income tax, he thought to himself: “That’s it. This conservative has provided us with a way to get guaranteed income.” When the Ripon Society, a liberal Republican group, endorsed the negative income tax in 1967, it cited Robert Theobald along with Friedman. And when John Price, one of the Ripon Society’s founders, brought the idea to Nixon’s attention one evening in 1968, he had convergence on his mind. According to Daniel Patrick Moynihan’s The Politics of a Guaranteed Income, Price “proposed, at a dinner meeting with Nixon, that the negative income tax and the volunteer army were the two issues that might unite the liberal and conservative wings of the Republican party.”

Moynihan, who despite being a Democrat had gotten a job on the White House staff, folded a version of the negative income tax into a proposal he called the Family Security System. When he presented his plan to the president in 1969, Nixon asked if it would “get rid of social workers.” The aide replied that it would wipe them out, and Nixon took the news with pleasure.

Moynihan’s blueprint wended its way through the West Wing, evolving as different aides and interest groups encountered it. (One Nixon advisor opposed to the plan, the Randian economist Martin Anderson, put together a memo for his boss titled “A Short History of a ‘Family Security System.’” It was a critique of Speenhamland.) Wary of phrases like guaranteed minimum income, Nixon took to calling the bill “workfare,” though the legislation did, in fact, leave some space to draw a check from the government without working. The final bill, now dubbed the Family Assistance Program (FAP), was a jerry-rigged mix of liberal and conservative ideas held together by a president who was no more idealistic than the Speenhamland squires had been.

The convergence didn't hold. The National Welfare Rights Organization wouldn't back the plan, in part because it included work requirements. Milton Friedman opposed it too, on the grounds that it maintained too many of the programs and poverty-trap incentives that the guaranteed income was supposed to replace. ("The bill in its present form," he wrote in Newsweek, "is a striking example of how to spoil a good idea.") In theory, a negative income tax could appeal to the left because it stopped trying to tell poor people what to do, to the right because it displaced dysfunctional welfare bureaucracies, and to libertarians for both reasons. But the FAP delivered on neither count.
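For readers who want the mechanics rather than the politics, the sketch below shows the textbook form in which a negative income tax is usually described. The guarantee level and phase-out rate here are hypothetical numbers chosen only for illustration, not Friedman's or the FAP's actual parameters.

```latex
% A minimal sketch of the textbook negative income tax. G is the guaranteed
% minimum, t the phase-out rate, y earned income; the numbers are hypothetical.
\[
  \text{transfer}(y) = \max\bigl(0,\; G - t\,y\bigr)
\]
% Example: with G = \$6{,}000 and t = 0.5, a family with no earnings receives
% \$6{,}000, a family earning \$8{,}000 receives \$2{,}000, and the transfer
% phases out entirely at y = G/t = \$12{,}000.
```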

The bill did pass the House in 1970 by a vote of 243-155, and in 1971 it passed again with an even more lopsided margin. But each time it ran aground in the Senate. In the 1972 election, Nixon’s opponent, George McGovern, called for guaranteeing every American a “demogrant” of $1,000; Nixon found it expedient to attack the proposal, and McGovern retreated rather rapidly. Other versions of the Basic Income idea were floated in the Ford and Carter years, with little success.

In the meantime, several field experiments attempted to test the concept’s effects in the real world. The Office of Economic Opportunity launched the first in New Jersey and Pennsylvania in 1968. That urban effort was soon joined by the Rural Income Maintenance Experiment, or RIME, conducted in Iowa and North Carolina. Those were followed by the Seattle Income Maintenance Experiment, also known as SIME; the Denver Income Maintenance Experiment, also known as DIME; and the Gary Income Maintenance Experiment, whose creators tried to avoid calling it GIME. (It looked too much like “gimme.”) Separately, in Canada, researchers ran the “Mincome” experiment in Manitoba; it included a unique effort in which every family in a town, the 10,000-head burgh of Dauphin, was invited to receive the money.

In Canada, the experiment was cut off early and the records were stuck into storage; no one even attempted to draw conclusions from the data at the time. Later analysis suggested that recipients came away from the experiment in better health and that they didn’t reduce their workload very much.

The US experiments, meanwhile, were widely considered failures. This was partly because they showed some work disincentives, but that wasn't the biggest problem: pretty much everyone expected there to be some impact on how many hours people worked, and the effect was much smaller than the idea's opponents had predicted. In Seattle and Denver, the efforts that got the most attention, employed men spent up to 10 percent less time working. The numbers for women were higher, but Congress didn't necessarily object if more women were opting to stay at home.

What did make Congress uncomfortable was the idea that those women might opt to leave their husbands. And in Seattle and Denver (though not the others), the experiments appeared to raise divorce rates by 40 to 60 percent. In the late '80s this figure would be sharply challenged, as a pair of economists at the University of Wisconsin re-analyzed the data and concluded that any permanent difference in divorce rates that could be traced to the program was less than 5 percent. But in the late '70s, the news ensured that Jimmy Carter's variation on the negative income tax would not pass. Daniel Patrick Moynihan, now a senator, noisily recanted his support for the idea. "Were we wrong about a guaranteed income!" he wrote to William F. Buckley. "Seemingly, it is calamitous."

The only piece of policy passed in all this time that had more than a faint connection to a basic income was the earned income tax credit, a 1975 measure inspired by the negative income tax. (It was proposed by Sen. Russell Long, son of Share-Our-Wealth Huey.) It didn’t replace any other programs, and the unemployed didn’t qualify to get it, so it fell far short of Friedman’s proposal, let alone the plans pushed on the left.

You can also arguably find traces of the Basic Income idea in Supplemental Security Income, a system for the disabled that passed easily in 1972. In practical terms, this was yet another addition to the maze of federal welfare programs. Still, it had legislative roots in the FAP, and as the historian Brian Steensland notes in his 2008 book The Failed Welfare Revolution, it was "the first federally recognized minimum income guarantee for any category of persons."

As the Reagan era began, the government seemed to lose all interest in the concept. No longer plausible as legislation, the basic income was back to being a utopian thought experiment. Or at least that’s the conventional historical wisdom. Looking back from 2013, the liberal site Remapping Debate declared that the idea died because “market devotees drowned out those who continued to believe that government has a vital role to play….By Ronald Reagan’s election in 1980, the country in which [a guaranteed income] had seemed mainstream a decade earlier looked considerably different.”

Except that such a program was adopted in the Reagan years. What’s more, it passed in a state with a Republican governor, it got an important assist from the market devotees of the Libertarian Party, and it was a lot more radical than Nixon’s scheme. It had more in common with a different notion from 1969: Timothy Leary’s seemingly wild suggestion that a state should stop charging taxes and start paying dividends.

A Moment in the Midnight Sun

When Alaska started raking in money from the Prudhoe Bay oil boom, officials there decided to invest the money rather than blow it all at once. So in 1976 the state decided to channel a share of its resource revenue into an investment pool called the Alaska Permanent Fund. A debate quickly began: What should the government do with the fund’s earnings?

Arlon Tussing, an energy economist at the University of Alaska, had proposed an idea shortly after the oil was found. "The only way to guarantee that the money does any good to most of us," he told Time in 1970, "is to hand it out to the people. The state should form an investment company, something like a mutual fund, and distribute the stock to Alaskans." Tussing was an ideological maverick (one of his friends called him a "mix of Leon Trotsky and Milton Friedman"), and the coalition that formed around his proposal was also a mixture. Terry Gardner, a Democrat, was the first legislator to formally propose it. Jay Hammond, a Republican, was the governor who championed and signed it. And it got significant legislative support from Dick Randolph, who in those days was one of the Libertarian Party's two representatives in the Alaska House.

The Libertarian Party supported the dividends partly on the grounds that sending the money to individual citizens was preferable to letting officials spend it, and partly as a second-best option as long as privatization was off the table. “The reason we have a Permanent Fund in the first place is that with all of the subsurface wealth, the royalty income goes to the government,” points out Randolph, who left the legislature in the ’80s and now works in insurance. “There is very little private property up here, and even what private property there is, unless it was homesteaded before statehood, you don’t own the subsurface rights.” As long as that restriction was written into the law, he figured that this “money that should be in the people’s hands anyway” might as well be distributed as dividends.

Almost simultaneously, thanks largely to the Libertarians, the state repealed its income tax. Since it doesn't levy a sales tax either, Alaska is the closest approximation we have to Leary's dream of a dividend-paying state that doesn't charge you anything to settle there, unless, of course, you're an oil company.

Since 1982, Alaskans have received a check for hundreds and occasionally thousands of dollars each year. The fund remains on decent footing, though the state government is not: Alaska has been running a big deficit for the last few years, and political pressure has been building to either pass a tax or dip into the fund’s earnings to make up the difference. Not surprisingly, Randolph would rather cut spending instead. (The discovery in March of approximately 1.2 billion barrels’ worth of new oil in the North Slope may allow the state to delay whatever reckoning is to come.)

While the Alaska Permanent Fund is the biggest example of a political unit treating its citizens as stockholders, it isn't the only one. The state's Native Americans are organized into regional and village corporations, and those companies pay out dividends as well, though the checks are usually substantially smaller than the payments from the statewide fund. And of the nearly 240 Indian tribes in the US that run gambling operations, about half regularly distribute profits to their members.

Global Experiments

Versions of the basic income have been proposed and occasionally enacted in other countries as well. Mongolia flirted with an Alaska-style scheme from 2010 to 2012, sending all its citizens a share of the money made from a mining boom, but then it decided to replace the system with one that directed its benefits to children. The Chinese city of Macau, a former Portuguese colony that retained some autonomy when it was handed back to Beijing in 1999, adopted a "Wealth Partaking Scheme" in 2008 that in some ways resembles a citizens' dividend: The government distributes annual payouts meant to "share the economic fruits" with the residents. It is unlike the Permanent Fund dividends or the Indian casino checks in that the authorities set the amount they'll pay rather arbitrarily; there's no guarantee that they'll even continue the program from year to year.

A more substantial reform was adopted in Iran. In 2010, the government in Tehran decided to phase out its controls on the prices of several goods, including electricity, water, bread, and especially fuel. To compensate for the ensuing increases in the cost of living, it decided to replace those price subsidies with direct cash payments to the people. Initially these checks were to be means tested, but that produced public dissatisfaction over who did or didn’t qualify for the money, so the country’s rulers decided to go ahead and send the payments to anyone who wanted them. They thus essentially stumbled into a universal basic income after setting out to do something else.

That wasn't the only way they stumbled. The authorities eased several market-distorting subsidies, but they didn't eliminate them. (The price of gasoline is still below the market rate.) Meanwhile, they wound up committing to much higher payouts than they had initially expected. Soon they were asking wealthier Iranians to opt out of the program voluntarily; last fall they bit the bullet and adopted a means test that removed about a third of the country's population from the rolls. Replacing regulations with cash is an interesting idea, but Iran's example is not an inspiring one.

The biggest international shift toward something like a basic income hasn’t been confined to a specific country or city. It’s a movement within the aid community.

Global aid has traditionally focused on in-kind assistance: sending meals or medicine, helping build houses or schools, and so on. This can lead to all kinds of unfortunate side effects, as when free food from abroad undercuts local farmers. There is also a recurring mismatch between what the planners in aid agencies think a community needs and what the people on the ground actually want. And so, since the turn of the century, a cheaper, more flexible, and less paternalistic approach has become more popular: Just send people cash instead.

Initially, this took the form of so-called “conditional cash transfers,” in which the recipients get money as long as they agree to certain stipulations, such as vaccinating their kids or sending them to school. Several Latin American countries had moved their social welfare systems in that direction since the ’90s, and the new programs seemed to work better than the previous approaches. Elements of the aid community adopted their own versions of the idea, arguing that the needy know their needs better than outsiders do. Sure enough, the money was sometimes spent in useful ways that had not occurred to aid agencies, on mattresses, for example, or bicycle parts.

To be clear: Most aid agencies and nongovernmental organizations have not moved in this direction. A 2015 report from the Center for Global Development estimated that only about 6 percent of international humanitarian assistance takes the form of cash or vouchers. That’s up from less than 1 percent in 2004, but it’s still a small portion.

But while most NGOs still approach conditional cash transfers warily, a dissenting segment of the aid industry has moved on to an even simpler idea: conditionless cash transfers. The leading player here is GiveDirectly, a US-based group buoyed both by the research showing cash transfers' effectiveness and by the rise of mobile payments, which have made it much easier to send people money without passing through political or bureaucratic middlemen. Follow-up research on GiveDirectly's efforts in western Kenya showed that the recipients used the money to build assets, invest in small businesses, and purchase more food; contrary to some cynical expectations, and in line with other studies of cash-based aid, there was no boost in spending on alcohol, tobacco, or gambling.

Having moved from conditional to conditionless cash payments, GiveDirectly's directors started thinking about taking another step and experimenting with a full-fledged basic income: not just payments to a village's neediest families, but a long-term income for everyone in town, one set high enough for people to live on it. Other aid groups had already conducted experiments along these lines in India and Namibia; the results appeared to be favorable, but these studies were too short-term to draw firm conclusions from them, and the Namibian experiment had the additional problem of not being randomized.

And so GiveDirectly devised a randomized controlled trial, sort of a privately funded Kenyan sequel to the SIME/DIME experiments. In one set of villages, every adult will receive monthly payments equivalent to 75 cents a day for two years. In another set of villages, every adult will receive such payments for 12 years. In yet another set of villages, the adults will receive a single lump-sum payment equivalent to what the two-year group will be receiving. The fourth set of villages is the control group, so they don't get any money at all.

The aim here, GiveDirectly’s Ian Bassin explains, is “to isolate the effects of what most people consider a ‘basic income’, that is, a permanent payment over time, from something resembling more traditional temporary supports. For example, when someone knows they have a long-term, guaranteed floor below which they cannot fall, do they take more risks like starting a business or going back to school? And does that security produce greater overall returns?”

The current plan is for about 40 villages to go on the 12-year plan, 80 to go on the two-year plan, 80 to get the lump sums, and 100 to be in the control group. To answer the first question that probably popped into your minds: No, a villager can't change which deal he's getting by moving from one town to another. Once enrollment has started in a village, no new arrivals can take advantage of the payments there. Conversely, if you're already enrolled in the program, you still get the money if you leave town. After all, one potential outcome the researchers are looking for is whether people will use their payments to move someplace with greater opportunity.
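To make the arms easier to compare, here is a small tally of what each arm implies per adult, using only the figures reported above (75 cents a day, the arm durations, and the planned village counts); it ignores enrollment details, payment schedules, and exchange rates.

```python
# A rough sketch, from the figures reported above, of what each GiveDirectly
# trial arm implies per adult. Enrollment rules and payment logistics are ignored.
DAILY_RATE = 0.75  # dollars per adult per day, as reported

arms = [
    ("long-term basic income (12 years)", 40, 12),
    ("short-term basic income (2 years)", 80, 2),
    ("lump sum (equal to the 2-year total)", 80, 2),
    ("control (no payments)", 100, 0),
]

for name, villages, years in arms:
    per_adult_total = DAILY_RATE * 365 * years
    print(f"{name}: {villages} villages, ~${per_adult_total:,.0f} per adult")
```

On those figures, an adult in the two-year or lump-sum arm receives roughly $550 in total, while an adult in the 12-year arm receives roughly $3,300 spread over the study.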

The group expects the experiment to cost about $30 million, and they have thus far raised more than $24 million toward that. (The lump-sum payments are being funded separately, with the money coming from GiveDirectly’s ongoing efforts in Kenya. They expect the costs there to be a little higher than $6 million, which is within the program’s usual annual budget.) One village in the 12-year group is already receiving funds, sort of a test case to work out any logistical kinks in advance. If all goes according to plan, the rest will start receiving their money this year.

Convergence (Again)

GiveDirectly's project isn't the only basic-income pilot in the works. The governments of Finland, Ontario, San Francisco, and several Dutch cities have experiments either underway or on the horizon. Y Combinator, the firm run by the venture capitalist Sam Altman, is planning a privately funded effort in Oakland. The German entrepreneur Michael Bohmeyer has been crowdfunding yearlong no-strings-attached incomes for dozens of randomly chosen winners.

Then there’s Scott Santens, a writer who’s been using the crowdfunding platform Patreon to generate his own “basic income,” shooting for $1,000 a month. In practical terms, it’s not entirely clear what the difference is between Santens and any other fellow with a Patreon account, other than that Santens calls his proceeds a basic income and writes about the basic income a lot. But the fact that his pitch actually attracts donations speaks to just how fashionable the concept has become. Over the past half decade or so, there has been a renaissance of interest in the idea.

As in the 1960s, the interest is coming from many different directions. Center-left wonks perceive the basic income as a more market-friendly approach to welfare policy. Radicals hail it as an alternative to the "neoliberalism" they associate with those same wonks, imagining a day when work is detached from income and we live in a world of postscarcity abundance. Silicon Valley figures hope it will help us survive the upheaval to be unleashed when artificial intelligence wreaks havoc on labor markets. Libertarians see it as a way to simplify the welfare maze into a cheaper and less intrusive single program.

It has even entered the climate debate. In the 1990s, the environmentalist Peter Barnes proposed a “Sky Trust,” based on the notion that the atmosphere is common property; companies would have to buy carbon emission rights, and the money would then be distributed to the people in a system he modeled on Alaska’s dividends. This version of the cap-and-trade idea picked up some steam on the Bush-era left, with articles touting it in such progressive outlets as Yes! and In These Times. Meanwhile, proposals for a carbon tax routinely include a provision to rebate the proceeds to citizens. In February, the conservative Climate Leadership Council called for a gradually increasing carbon tax, with all of the money collected then sent to Americans as quarterly dividend checks, direct deposits, or IRA contributions. (When British Columbia adopted a carbon tax in 2008, the provincial government actually issued what it called “climate action dividends.” But this was a one-time gimmick, not an ongoing feature of British Columbian life.)

Different as these currents may be, they’re often aware of each other. Silicon Valley has been a center of support for GiveDirectly. Peter Barnes has been flirting with social credit ideas about “debt-free money.” Andy Stern likes the idea of carbon dividends. And the Climate Leadership Council’s carbon-tax proposal was co-authored by George Shultz, who was one of the most vocal proponents of the negative income tax inside the Nixon administration.

Perhaps these tribes will manage to band together and pass something. "It's going to take coalitions to get anything done," Stern told me after he spoke at Cato last year. "Being in Charles Koch's institution in the Friedrich Hayek hall is not a place where I feel, 'Boy, I have a lot of things in common here.' But I'm willing, because I think it's necessary to find allies wherever you can."

But convergence, you’ll recall, can be fleeting. It’s hard to imagine Congress and the White House passing a sweeping basic income plan in the current political environment. This is especially true of the plans most likely to attract libertarian support, the ones that supplant rather than supplement the existing system. Replacing the entire welfare state in one fell swoop is a tall order, especially if you want to include popular middle-class entitlements as part of the deal. And if you’re thinking of following Stigler’s suggestion and eliminating minimum wages as well, you may find yourself pretty lonely.

For all the recent buzz about a basic income, the recent trend in welfare spending has been away from conditionless cash transfers. Temporary Assistance for Needy Families, the country’s chief program for assisting the poor, used to devote a majority of its money to direct cash assistance. Now the number is just over 25 percent. According to an analysis by the Center on Budget and Policy Priorities, the remainder is funding everything from abstinence education to drug treatment.

And for all the agreement among the political tribes, there are wide splits within them too. Some figures on the left see the basic income as a form of “pity-charity liberalism” and want to focus instead on strengthening unions and creating jobs. Many libertarians dislike even Friedman’s version of the idea, arguing that it won’t displace the rest of the welfare state for long: Other programs could still creep back, leaving us with an even costlier system than before.

And environmentalists have never been united on the best way to reduce the species’ carbon footprint. Even if all the strange bedfellows agreed on a plan, they’d still have to contend with dissension from their usual bedmates.

The Incrementalist Approach

Yet the forecast isn’t hopeless for basic income fans. For one thing, there’s the possibility that some other nation might take the plunge, adopting either a full-fledged basic income or a policy that resembles one. Last year, for example, Prime Minister Theresa May raised the possibility that Britain might start issuing shale gas dividends. A move like that could change the debate in other countries.

Beyond that, there are at least two ways the idea could progress incrementally here at home even if Washington is unwilling to enact a vast reconstruction of the relief system.

First: We might see a sequel to the Permanent Fund. From time to time a state will find itself awash in riches from natural resources. Some voices will suggest that the government not spend the new money at once but put some away for a rainy day. Some fraction of those voices will suggest it create a sovereign wealth fund to invest the windfall. And some fraction of that fraction will want the fund to pay dividends.

Now, there are all sorts of potential problems with government-run investment portfolios, as anyone who has followed California's pension troubles can tell you. If you're wary about mismanagement, you'll be wary about states playing the market; they won't all invest as conservatively as Alaska has.

Still, several states have such funds already (the most recent additions to the list are North Dakota and West Virginia), and the number may well grow. None has followed Juneau's example and started paying dividends, but it is hardly unimaginable that someone else will eventually adopt an Alaska-style system.

Second: Congress might not be about to pass a basic income, but it can pass reforms that make the welfare state more like a basic income without adding a new entitlement to the mix. More specifically, it can cashify and combine programs.

By cashify, I mean taking a subsidy with strings attached (food stamps, Section 8 housing vouchers, anything like that) and instead simply sending money to the people who qualify for it, letting them choose how to spend it. In other words, Congress can turn vouchers into conditionless cash grants. If it wants to really slash some bureaucracy, it can replace actual government services with cash grants, too.

The more programs you cashify, the more programs you can combine. Right now the system is set up to ask whether someone is poor enough to qualify for housing assistance, for health assistance, for food assistance, and so on. What if it just asked if someone is poor enough to qualify for assistance, period?

Those programs might not converge all the way to a basic income, but they could at least become simpler, less intrusive, and less expensive. They might even stay that way. Occasionally a convergence can last.

Jesse Walker is the author of The United States of Paranoia (HarperCollins) and Rebels on the Air (NYU Press).


A Short History of Personality – Dr Miguel Farias and Dr Catherine Wikholm * Explaining Behaviorism: Operant and Classical Conditioning – Eric Charles Ph.D.

The study of personality is heavily focused on what psychologists call 'traits'. These are patterns of behaviour, thoughts and feelings that we use to characterize people: think of one of your friends who is outgoing and adventurous (an extravert) and another who is often moody and over-sensitive (neurotic).

Most of the time, psychologists argue, these key personality traits will be astonishingly consistent throughout our lives.

The forefather of modern US psychology, William James, wrote in his Principles of Psychology (1890) that we are essentially creatures of habit:

“Already at the age of 25 you see the professional mannerism settling on the young commercial traveler, on the young doctor, on the young minister, on the young counselor-at-law. You see the little lines of cleavage running through the character, the tricks of thought, the prejudices, the ways of the “shop”…

It is well for the world that in most of us, by the age of thirty, the character has set like plaster, and will never soften again.”

William James wrote this before the turn of the twentieth century, but his concluding sentence still reverberates through modern textbooks on the science of personality. In his own time he was not alone in thinking that many of our behaviours are particularly resistant to change.

Bells, petals and electric shocks

On the other side of the Atlantic, in 1874, Francis Galton, a half-cousin of Charles Darwin, argued that both nature and nurture shape our personality, although nature played a major role. He was borrowing from Shakespeare, who in The Tempest first juxtaposed those two words. Referring to the beast Caliban, the magician Prospero calls him ‘A devil, a born devil, on whose nature nurture can never stick.’

Galton's own perspective was very similar to Prospero's: he believed that nature played a predominant role in the shaping of our personalities and intelligence. He based this view on his pioneering study of twins. By observing identical twins, who share all of their genetic makeup, and contrasting them with fraternal twins, who share only half, he noticed that identical twins were especially similar in temperament to each other. The implication was that nature, and not culture, education or the environment, played the predominant role in the making of our characters.

Not all agreed with this view. Aristotle famously spoke of the human mind as an 'unscribed tablet': the infant's mind is like a blank slate on which experience etches a personality.

Among psychologists the American John Watson was one of the strongest advocates of this perspective. He thought that none of our behaviours are instinctive, and that everything we do is learned through our interaction with the environment. 'Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in,' Watson wrote in 1924, 'and I'll guarantee to take any one at random and train him to become any type of specialist I might select: doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.'

This perspective swung the pendulum of determinism in the other direction: we become not what nature or biology dictates, but what our environment provides and how our minds associate events with pleasant or unpleasant responses (such as relaxation or fear).

For Watson’s Behaviourism, not only is personal change possible, but it happens all the time. In effect we can learn one response, but also unlearn it and replace it with another.

Take a biologically pleasant stimulus, such as a flower, which you might expect to look upon with joy, breathing in its scent, marvelling at its colours. With the correct kind of conditioning, you can start dreading its sight and smell. English novelist Aldous Huxley vividly portrays this concept in his novel Brave New World (1932). In the story eight-month-old babies are conditioned to be afraid of books and rose petals. The babies are taken to the conditioning rooms, where petals and books with brightly coloured pages are spread all over the floor. As soon as the babies are happily playing, the director of the conditioning centre gives an order to the head nurse. A lever is switched and suddenly:

‘There was a violent explosion. Shriller and ever shriller, a siren shrieked. Alarm bells maddeningly sounded. The children started, screamed; their faces were distorted with terror. “And now,” the Director shouted (for the noise was deafening), “now we proceed to rub in the lesson with a mild electric shock.”

He waved his hand again, and the Head Nurse pressed a second lever. The screaming of the babies suddenly changed its tone. There was something desperate, almost insane, about the sharp spasmodic yelps to which they now gave utterance. Their little bodies twitched and stiffened; their limbs moved jerkily as if to the tug of unseen wires.'

The electric shocks stop. But when the director instructs the nurse again to show flowers and books, the babies shrink away in horror. ‘What man has joined,’ Huxley writes, ‘nature is powerless to put asunder.’

Of course, just as we are able to reassociate pleasant and beautiful things, such as flowers, with pain, so can we associate naturally unpleasant and dangerous stimuli, such as spiders and snakes, with positive feelings. Psychologists have used this technique countless times in the treatment of phobias.

Imagine you are terrified of spiders, so terrified that you start hyperventilating and experience blurred vision just by seeing a picture of one. A behavioural therapist would attempt to change your panic response through counter-conditioning. Slowly and systematically, a therapist shows you a picture of the dreaded spider while giving you techniques that encourage you to relax. Eventually the fear subsides and you have learned a new conditioning: seeing a spider triggers a relaxation response that replaces the old one. It's not necessarily a pleasant way to instigate change, but it is a potentially effective one.

Despite the possibilities that Behaviourism suggests for personal change, its premise that humans lack free will and rely upon conditioning to exhibit a certain behaviour has been severely criticized. Some psychologists believe that Behaviourism evades the complexity of mental life, denies the existence of unconscious motivations, and fails to explain how we develop a sense of identity, of who we are.

At the method's heart there is a bleak determinism: the biological account of nature (how the encryption of proteins at the micro-level ultimately shapes our behaviour) is replaced by the mechanics of stimulus and response. It enables the possibility of personal change; but at what cost?

Extraverts, introverts and neurotics

At the same time as Watson was publishing his thoughts on behavioural change, other psychologists were developing new ideas about personality and ways to measure it. Researchers started using surveys that included lists of questions about an individual's behaviour: 'Are you outgoing? Do you enjoy parties? Does your mood often go up and down? Are you easily irritated?' Questions such as these are supposed to measure personality traits, that is, habitual ways to behave, think and feel.

The notion that we each have certain personality traits is not an entirely modern discovery. Around 2,500 years ago Hippocrates, the Ancient Greek physician known as the father of Western medicine, conceived of four major temperaments: choleric, phlegmatic, melancholic and sanguine. In the twentieth century, British personality psychologist Hans Eysenck noted the similarities between Hippocrates' temperaments and the modern-day theory of personality traits.

A sanguine individual, characterized as sociable, lively and carefree, combined the trait of extraversion with low neuroticism, a highly desirable pattern in Western societies. On the opposite side, a melancholic temperament, which describes someone as quiet, anxious and moody, and is characteristic of many artists, combines introversion with high neuroticism. A choleric individual is excitable, aggressive and optimistic, an explosive combination of high extraversion and neuroticism; and a phlegmatic person is controlled, reliable and calm, indicating an introvert with low neuroticism.
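Restated in a compact form, the mapping described above looks like this; the sketch is purely illustrative and simply encodes the pairings in the previous paragraph.

```python
# A compact restatement of the mapping described above: each Hippocratic
# temperament as a combination of Eysenck's two dimensions. Purely illustrative.
temperaments = {
    "sanguine":    {"extraversion": "high", "neuroticism": "low"},
    "choleric":    {"extraversion": "high", "neuroticism": "high"},
    "phlegmatic":  {"extraversion": "low",  "neuroticism": "low"},
    "melancholic": {"extraversion": "low",  "neuroticism": "high"},
}

for name, profile in temperaments.items():
    print(f"{name}: extraversion={profile['extraversion']}, "
          f"neuroticism={profile['neuroticism']}")
```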

Looking through these personality characteristics and trying to attribute them to our own personality is not necessarily straightforward. I could say I have quite a lot of the sanguine, but also some of the choleric, and a small part of the melancholic. Both Hippocrates' temperaments and Eysenck's traits are dimensions of our characters; our personalities are not black and white, and we all have elements of introversion-extraversion and neuroticism. What modern personality tests do is quantify each and every character by positioning a person along a continuum: how much of an introvert or neurotic are you, from 0 to 100?

Such questions may sound like the content of quizzes in women's magazines, rather than the measures of scientific instruments. However, the truth is they may have a place in both. Personality traits help us to make sense of other people's behaviours, but they also help us to choose our friends and, crucially, our partners. If at 18 many of us think that we can change the person we love to better fit with our needs, by our late twenties we're more inclined to try finding lovers and friends that match our personalities. People often say they want to 'be loved for who they are'. It seems then that while we recognize unchangeability in our romantic partners, we also may have a deep-rooted understanding that who we are fundamentally is set in stone, that nature ultimately wins out over nurture.

The idea that personality is rooted in our biology gained support with Eysenck. Like Francis Galton decades before him, Eysenck studied identical and fraternal twins. His conclusion was that our predisposition to be an extravert or a neurotic is mostly determined by the genes we inherit. Eysenck stirred the public mind when he extended the conclusions of his personality work to intelligence. He argued that it was not education or the environment that caused differences in intelligence scores between ethnic groups, but genetics. He gave the example of how Afro-Americans had lower IQ scores than white Americans, which implied that those genetically originating from northern Europe were intellectually more gifted than others. When giving public talks he was often confronted by angry protesters, and this hatred also extended to his old students and collaborators. Decades later Gordon Claridge, now a retired professor from Oxford University whom Eysenck had once supervised, told me that when in the 1970s he had applied to take up an academic position in Sweden, the student body were resistant to his appointment as a result of his (albeit past) association with ‘a racist psychologist’.

During the 1980s and 1990s, personality research evolved in different directions. Throughout the world studies looked at different age groups and did further research with twins, some of whom had been raised separately. The conclusions reinforced the nature thesis, demonstrating that personality did change from childhood, through adolescence to early adulthood, but after early adulthood it seemed that personality changed little.

Explaining Behaviorism: Operant & Classical Conditioning

Eric Charles Ph.D

There are many explanations that can be used to help people understand the Behaviorist Point of View. Some are very factual, others argue towards practical concerns, and still others are highly philosophical.

How to Explain Behaviorism: Operant and Classical Conditioning

Operant and classical conditioning are two different ways in which organisms come to reflect the order of the environment around them. They are not perfect processes and they certainly cannot explain every facet of human and non-human behavior. That said, they are surprisingly reliable processes, and they can explain much, much more about human and non-human behavior than anyone would have thought before extensive study of those processes began.

It is probably best to think about operant and classical conditioning as offering two different types of developmental stories. They are not stories about what a behavior is, but rather stories about how that behavior got to be that way.

Classical conditioning stories are about things happening around the animal, no matter what the animal does.

Operant conditioning stories involve consequences of the animal’s action, i.e., what happens when the animal operates upon the world as an active agent.

There is some debate about whether we need two types of stories. There are good reasons to go either way, including some recent genetic evidence that they can be disentangled. None of that really matters here; all that matters is that you understand the two types of stories and their consequences for future behavior.

Note below that "stimulus" can refer to any object, event, or situation that an organism could potentially respond to. Note also that "response" can be anything the organism does. For now a "response" could be an overt action (such as jumping up and down), a covert action (such as tensing your leg without moving it), or even a thought or a feeling, so long as we conceive of those as active rather than passive.

Operant Conditioning

Operant conditioning stories involve an animal doing something that changes the world in a way that produces, crudely speaking, a good or a bad outcome. When an organism does something that is followed by a good outcome, that behavior will become more likely in the future. When an organism does something that is followed by a bad outcome, that behavior will become less likely in the future. The action and outcome could coincide because of natural laws or social conventions, because someone purposely set it up that way, or it could be that the events followed due to random chance in this animal's life history.

For example, in pretty much any animal's world it is good to stop touching overly hot objects (natural law), in some worlds telling a parent you love them results in good outcomes (social convention), and in some worlds tapping a baseball bat five times on the left corner of the mound is followed by a home run (random chance).

Operant conditioning stories require that the outcome be reinforcing or punishing to the particular animal in question. (There are ways to specify that so it does not involve circular reasoning, but we don’t need to go that deep.) For example, candy might reinforce one person, but not another; some might find a graphic kill sequence in a violent video game punishing, while others find it reinforcing; etc.

Over time, the story goes, if a certain type of outcome consistently follows a particular behavior, this will affect the rate of future behaviors.
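A toy sketch of that story in code may help; it is not any formal learning model, and the actions, starting weights, and step size are invented purely for illustration.

```python
import random

# A toy sketch (not a formal learning model) of the operant story above:
# an action followed by a good outcome becomes more likely, and one followed
# by a bad outcome becomes less likely.
weights = {"press_lever": 0.1, "scratch_cage": 0.45, "meow": 0.45}
STEP = 0.05  # hypothetical learning step

def outcome(action):
    # In this toy world, only lever pressing opens the cage door (a good outcome).
    return "good" if action == "press_lever" else "bad"

def choose(w):
    actions = list(w.keys())
    return random.choices(actions, weights=[w[a] for a in actions])[0]

for _ in range(200):
    action = choose(weights)
    if outcome(action) == "good":
        weights[action] += STEP                              # reinforced: more likely
    else:
        weights[action] = max(0.01, weights[action] - STEP)  # punished: less likely

print(weights)  # lever pressing ends up with by far the largest weight
```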

Example Traditional Story: A cat is put in a “puzzle box”. It performs a wide range of behaviors, because cats don’t like to be in cages. Eventually one of its flailing limbs pulls a lever that opens the cage door. This happens many times, and each time the lever gets pulled a little bit quicker (there is no “a ha!” moment).

Tradition vs. Necessity:

Traditionally operant conditioning stories start with a relatively "random" behavior, but they could start with any behavior. Traditionally the story then introduces an arbitrary consequence, but in real-life situations we usually care about socially mediated consequences. Traditionally it takes many cycles for the consequence to make big changes in the frequency of future behavior, but sometimes the changes can be quite quick and other times it can take a very long time.

In the traditional story the consequence always follows the behavior, but there are many cool effects that we know about when it does not, that is, when the consequence is intermittent (the "schedule of reinforcement"). Traditionally the consequence has to immediately follow the behavior; though there are some exceptions, you probably want to stick with the traditional version here.

Enhanced Traditional Story:

Often operant conditioning stories are enhanced by adding a "discriminative stimulus," which indicates that a particular contingency (a particular connection between action and outcome) is in effect. For example, an experimenter working with rats might have a light that, when on, means that lever pressing will result in food. Similarly, a special education instructor might have a picture of a hat that, when held up, means that saying "hat" will result in an M&M.

Other Operant Conditioning Stuff:

You can do amazing things with discriminative stimuli. You can train people to respond to very specific stimuli, or to very general "categories" of stimuli. For example, we can get pigeons to discriminate Monet's paintings from Picasso's. Also, by drawing out the "schedule" of reinforcement, you can train animals to respond many, many times without getting reinforced. For example, we can get people to pull slot machine levers scores of times without a win.

After Conditioning:

After the events of an Operant Conditioning story, a behavior either has an increased or decreased rate of occurrence. Often there is a big increase or decrease specifically when a particular stimulus is present. So, if you know the world that a person has lived in before, you know something about why they respond now in certain ways in the presence of certain objects, events, or situations.

Classical Conditioning

Classical conditioning stories involve (at least) two things that coincide “out there” in an animal’s world. Those things could coincide because they are causally related due to natural laws or social conventions, or it could be that the events occur at random in relation to each other and this animal just happens to be the animal that experiences them together.

For example, in pretty much any animal's world lightning is followed by thunder (natural law), in some worlds hearing "say cheese" might be followed by a camera flash (social convention), and in some worlds eating lamb dinners could coincide with hearing bad news from loved ones (random chance).

Classical conditioning stories also require that the organism already have a developed response to one of the two events. For example, thunder could make you flinch, a bright flash could make you wince, and bad news from loved ones could make you cry.

Over time, the story goes, if two things are repeatedly paired together out there in the world, the organism will come to respond to one as they already respond to the other.
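A toy sketch of that pairing story in code may help; it is loosely in the spirit of a simple associative-learning update, and the learning rate, maximum, and threshold are invented purely for illustration.

```python
# A toy sketch of the classical story above, loosely in the spirit of a simple
# associative-learning update rule. Parameter values are illustrative only.
LEARNING_RATE = 0.3
MAX_ASSOCIATION = 1.0

association = 0.0  # learned strength of the "say cheese" -> wince link

for pairing in range(1, 11):
    # Each pairing of "say cheese" with the flash moves the association a
    # fraction of the way toward its maximum.
    association += LEARNING_RATE * (MAX_ASSOCIATION - association)
    print(f"pairing {pairing}: association = {association:.2f}")

# After enough pairings, the phrase alone is assumed to be enough to trigger a wince.
print("wince on 'say cheese' alone?", association > 0.5)
```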

Example Traditional Story:

When Mary was a child her father liked to take many pictures of her. He always said "Say cheese" before he took the picture, and he always used a flash. Every time the flash hit Mary, she winced slightly. Now, whenever she hears "Say cheese" she winces.

Tradition vs. Necessity:

Traditionally classical conditioning stories start with a response that seems unlearned (an Unconditioned Response to an Unconditioned Stimulus), but they could start with any response the animal already has. Traditionally the story then introduces something the animal has no existing response to (a Neutral Stimulus), but it usually still works for stimuli that already elicit some response. Traditionally the neutral stimulus comes to evoke the response associated with the unconditioned stimulus after several pairings (thus becoming a Conditioned Stimulus), but sometimes only a single pairing is required, sometimes neutral stimuli fail to convert to conditioned stimuli even after many pairings very close together in time, and sometimes you can create conditioned stimuli when the pairings are far apart.

In many of the cases where the traditional story does not hold, there has been a lot of research into the exceptions, and we have a very good understanding of why such exceptions should exist. For example, after a single event many animals will learn to avoid novel tastes that were associated with becoming sick quite a bit later. This makes a lot of evolutionary sense; poisonous food presents a big risk, and one does not normally experience the full effects until quite a bit after ingestion.

On the other hand, when dealing with fairly arbitrary pairings of stimuli, as we get all the time in our modern world, the structure of the traditional story holds. For example, why should anyone ever have become excited by hearing a computerized voice say “You’ve got mail!”? Because of several pairings, that’s why.

Other Classical Conditioning Stuff:

You can do amazing things here with generalization and discrimination training, and there are many other interesting phenomena that scientists have discovered.

After Conditioning:

After the events of a Classical Conditioning story, the presence of a conditioned stimulus elicits a conditioned response. So, if you know the world that a person has lived in before, you know something about why they respond to certain things in certain ways now.

A Bit of Light Theory

Philosophical behaviorism can be very deep. In this context, all I will say is that most behaviorists believe we can explain a great deal about human behavior using the types of stories above. That is, the preferred answer to a run-of-the-mill "Why did he do that?!?" question will begin with "Well, in the past history of that person, doing that behavior resulted in…."

Because these explanations are all about the way the world around the person works, and the person’s past history in that world, you don’t need to include traditional “mental” explanations.

That doesn’t mean that traditional “mental” stuff doesn’t exist, but it does suggest that we can explain an awful lot about human behavior before we would need to start talking about them.


Eric Charles, Ph.D., runs the research lab at CTRL, the Center for Teaching, Research, and Learning, at American University.

Inequality and Economic Growth – Joseph Stiglitz.

“Even if we act to erase material poverty, there is another greater task, it is to confront the poverty of satisfaction – purpose and dignity – that afflicts us all.

Too much and for too long, we seemed to have surrendered personal excellence and community values in the mere accumulation of material things. Our Gross National Product, now, is over $800 billion dollars a year, but that Gross National Product – if we judge the United States of America by that – that Gross National Product counts air pollution and cigarette advertising, and ambulances to clear our highways of carnage.
It counts special locks for our doors and the jails for the people who break them. It counts the destruction of the redwood and the loss of our natural wonder in chaotic sprawl.

It counts napalm and counts nuclear warheads and armored cars for the police to fight the riots in our cities. It counts Whitman’s rifle and Speck’s knife, and the television programs which glorify violence in order to sell toys to our children. Yet the gross national product does not allow for the health of our children, the quality of their education or the joy of their play. It does not include the beauty of our poetry or the strength of our marriages, the intelligence of our public debate or the integrity of our public officials.
It measures neither our wit nor our courage, neither our wisdom nor our learning, neither our compassion nor our devotion to our country; it measures everything, in short, except that which makes life worthwhile.
And it can tell us everything about America except why we are proud that we are Americans.
If this is true here at home, so it is true elsewhere in the world.”

Bobby Kennedy, 1968


In the middle of the twentieth century, it came to be believed that ’a rising tide lifts all boats’: economic growth would bring increasing wealth and higher living standards to all sections of society. At the time, there was some evidence behind that claim. In industrialised countries in the 1950s and 1960s every group was advancing, and those with lower incomes were rising most rapidly.

In the ensuing economic and political debate, this ’rising-tide hypothesis’ evolved into a much more specific idea, according to which regressive economic policies, policies that favour the richer classes, would end up benefiting everyone. Resources given to the rich would inevitably ‘trickle down’ to the rest.

It is important to clarify that this version of old-fashioned ‘trickle-down economics’ did not follow from the postwar evidence. The ’rising-tide hypothesis’ was equally consistent with a ’trickle-up’ theory (give more money to those at the bottom and everyone will benefit), or with a ’build-out from the middle’ theory (help those at the centre, and both those above and below will benefit).

Today the trend to greater equality of incomes which characterised the postwar period has been reversed. Inequality is now rising rapidly. Contrary to the rising-tide hypothesis, the rising tide has only lifted the large yachts, and many of the smaller boats have been left dashed on the rocks. This is partly because the extraordinary growth in top incomes has coincided with an economic slowdown.

The trickle-down notion, along with its theoretical justification, marginal productivity theory, needs urgent rethinking. That theory attempts both to explain inequality, why it occurs, and to justify it, why it would be beneficial for the economy as a whole. This essay looks critically at both claims.

It argues in favour of alternative explanations of inequality, with particular reference to the theory of rent-seeking and to the influence of institutional and political factors, which have shaped labour markets and patterns of remuneration. And it shows that:

Far from being either necessary or good for economic growth, excessive inequality tends to lead to weaker economic performance.

In light of this, it argues for a range of policies that would increase both equity and economic well-being.

The great rise of inequality

Let us start by examining the ongoing trends in income and wealth. In the past three decades, those at the top have done very well, especially in the US. Between 1980 and 2014, the richest 1 per cent have seen their average real income increase by 169 per cent (from $469,403, adjusted for inflation, to $1,260,508) and their share of national income more than double, from 10 per cent to 21 per cent. The top 0.1 per cent have fared even better. Over the same period, their average real income increased by 281 per cent (from $1,597,080, adjusted for inflation, to $6,087,113) and their share of national income almost tripled, from 3.4 to 10.3 per cent.

Over the same thirty-four years, median household income grew by only 11 per cent. And this growth actually occurred only in the very first years of the period: by 2014 it was only 0.7 per cent higher than in 1989, after peaking in 1999. But even this underestimates the extent to which those at the bottom have suffered: their incomes have only done as well as they have because hours worked have increased. Median hourly compensation (adjusted for inflation) increased by only 9 per cent from 1973 to 2014, even though at the same time productivity grew by 72.2 per cent.

To understand how significant this divergence of productivity and wages is, consider that from 1948 to 1973 both increased at the same pace, about doubling over the period.

And these statistics underestimate the true deterioration in workers’ wages, for education levels have increased (the percentage of Americans who are college graduates has nearly doubled since 1980, to more than 30 per cent), so that one should have expected a significant increase in wage rates. In fact, average real hourly wages for all Americans with only a high school diploma have decreased in the past three decades.

In the first three years of the so-called recovery from the Great Recession of 2008-2009, in other words, since the US economy returned to growth, fully 91 per cent of the gains in income went to the top 1 per cent.

By 2014, the rest of the income distribution had experienced a bit more of a boost, but even accounting for that, 58 per cent of the gains in total income have gone to the top 1 per cent since 2009. During that period, the income of the bottom 99 per cent has grown by just 4 per cent.

Presidents Bush and Obama both tried a trickle-down strategy, giving large amounts of money to the banks and the bankers. The idea was simple: by saving the banks and bankers, all would benefit. The banks would restart lending; the wealthy would create more jobs. This strategy, it was argued, would be far more efficacious than helping homeowners, businesses or workers directly.

The US Treasury typically demands that when money is given to developing countries, conditions be imposed on them to ensure not only that the money is used well, but also that the country adopts economic policies that, according to the Treasury’s economic theories, will lead to growth. But no conditions were imposed on the banks, not even, for example, requirements that they lend more or stop abusive practices. The rescue worked in enriching those at the top; but the benefits did not trickle down to the rest of the economy.

The Federal Reserve, too, tried trickle-down economics. One of the main channels by which quantitative easing was supposed to rekindle growth was by leading to higher stock market prices, which would generate higher wealth for the very rich, who would then spend some of that, which in turn would benefit the rest.

As Yeva Nersisyan and Randall Wray argue in their chapter in this volume, both the Fed and the Administration could have tried policies that more directly benefited the rest of the economy: helping homeowners, lending to small and medium-sized enterprises and fixing the broken credit channel. These trickle-down policies were relatively ineffective, one reason why seven years after the US slipped into recession, the economy was still not back to health.

Wealth is even more concentrated than income, by one estimate more than ten times so. The wealthiest 1 per cent of Americans hold 41.8 per cent of the country’s wealth; the top 0.1 per cent alone control more than 22 per cent of total wealth. Just one example of the extremes of wealth in America is the Walton family: the six heirs to the Walmart empire command a wealth of $145 billion, which is equivalent to the net worth of 1,782,020 average American families.
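That comparison is simple division, as the quick check below shows; the implied per-family figure is approximate and follows only from the two numbers reported above.

```latex
% A quick check of the comparison above, treating it as simple division; the
% implied per-family net worth follows only from the two reported figures.
\[
  \frac{\$145\ \text{billion}}{1{,}782{,}020\ \text{families}} \approx \$81{,}400\ \text{per family}
\]
```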

Wealth inequality too is on the upswing. For the four decades before the Great Recession, the rich were getting wealthier at a more rapid pace than everyone else. Between 1978 and 2013 the share of wealth owned by the top 1 per cent rose dramatically, from less than 25 per cent to its current level above 40 per cent; the share of the top 10 per cent rose from about two-thirds to well over three-quarters. By 2010, the crisis had depleted some of the richest Americans’ wealth because of the decline in stock prices, but many Americans also had had their wealth almost entirely wiped out as their homes lost value. After the crisis, the average household in the wealthiest 1 per cent still had 165 times the wealth of the average American in the bottom 90 per cent, more than double the ratio of thirty years ago.

In the years of ‘recovery’, as stock market values rebounded (in part as a result of the Fed’s lopsided efforts to resuscitate the economy through increasing the balance sheet of the rich), the rich have regained much of the wealth that they had lost; the same did not happen to the rest of the country.

Inequality plays out along ethnic lines in ways that should be disturbing for a country that had begun to see itself as having won out against racism. Between 2005 and 2009, a huge number of Americans saw their wealth drastically decrease. The net worth of the typical white American household was down substantially, to $113,149 in 2009, a 16 per cent loss of wealth from 2005. But the recession was much worse for other groups.

The typical African American household lost 53 per cent of its wealth, putting its assets at a mere 5 per cent of the median white American’s. The typical Hispanic household lost 66 per cent of its wealth.

Probably the most invidious aspect of America’s inequality is that of opportunities: in the US a young person’s life prospects depend heavily on the income and education of his or her parents, even more than in other advanced countries. The ’American dream’ is largely a myth.

A number of studies have noted the link between inequality of outcomes and inequality of opportunities. When there are large inequalities of income, those at the top can buy for their offspring privileges not available to others, and they often come to believe that it is their right and obligation to do so. And, of course, without equality of opportunity those born in the bottom of the distribution are likely to end up there: inequalities of outcomes perpetuate themselves. This is deeply troubling: given our low level of equality of opportunity and our high level of inequality of income and wealth, it is possible that the future will be even worse, with still further increases in inequality of outcome and still further decreases in equality of opportunity.

A generalised international trend

While the US has been winning the race to be the most unequal country (at least within developed economies), much of what has just been described for it has also been going on elsewhere. In the past twenty-five to thirty years the Gini index, the widely used measure of income inequality, has increased by roughly 29 per cent in the United States, 17 per cent in Germany, 9 per cent in Canada, 14 per cent in the UK, 12 per cent in Italy and 11 per cent in Japan.
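
As a purely illustrative aside (mine, not the chapter’s), the Gini index summarises an entire income distribution in a single number between 0 (everyone earns the same) and 1 (one person receives everything). A minimal Python sketch of the standard computation, using hypothetical incomes:

def gini(incomes):
    # Gini coefficient via the standard closed form over sorted incomes.
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))  # sum of rank * income
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n

print(round(gini([50_000] * 10), 3))               # 0.0   -- perfectly equal incomes
print(round(gini([20_000] * 9 + [1_000_000]), 3))  # ~0.747 -- one very high earner

A 29 per cent rise in this index, as reported above for the United States, thus means the whole distribution has shifted substantially towards the unequal end of the scale.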

The more countries follow the American economic model, the more the results seem to be consistent with what has occurred in the United States. The UK has now achieved the second highest level of inequality among the countries of Western Europe and North America, a marked change from its position before the Thatcher era. Germany, which had been among the most equal countries within the OECD, now ranks in the middle.

The enlargement of the share of income appropriated by the richest 1 per cent has also been a general trend, and in Anglo-Saxon countries it started earlier and it has been more marked than anywhere else. In rich countries, such as the US, the concentration of wealth is even more pronounced than that of income, and has been rising too. For instance, in the UK the income share of the top 1 per cent went up from 5.7 per cent in 1978 to 14.7 per cent in 2010, while the share of wealth owned by the top 1 per cent surged from 22.6 per cent in 1970 to 28 per cent in 2010 and the top 10 per cent’s wealth share increased from 64 per cent to 70.5 per cent over the same period.

Also disturbing are the patterns that have emerged in transition economies, which at the beginning of their movement to a market economy had low levels of inequality in income and wealth (at least according to available measurements). Today, China’s inequality of income, as measured by its Gini coefficient, is roughly comparable to that of the United States and Russia. Across the OECD, since 1985 the Gini coefficient has increased in seventeen of the twenty-two countries for which data is available, often dramatically.

Moreover, recent research by Piketty and his co-authors has found that the importance of inherited wealth has increased in recent decades, at least in the rich countries for which we have data. After displaying a decreasing trend in the first postwar period, the share of inheritance flows in disposable income has been increasing in the past decades.

Explaining inequality

Marginal Productivity Theory

How can we explain these worrying trends? Traditionally, there has been little consensus among economists and social thinkers on what causes inequality. In the nineteenth century, they strove to explain and either justify or criticise the evident high levels of disparity. Marx talked about exploitation. Nassau Senior, the first holder of the first chair in economics, the Drummond Professorship at All Souls College, Oxford, talked about the returns to capital as a payment for capitalists’ abstinence, for their not consuming. It was not exploitation of labour, but the just rewards for their forgoing consumption. Neoclassical economists developed the marginal productivity theory, which argued that compensation more broadly reflected different individuals’ contributions to society.

While exploitation suggests that those at the top get what they get by taking away from those at the bottom, marginal productivity theory suggests that those at the top only get what they add. The advocates of this view have gone further: they have suggested that in a competitive market, exploitation (e.g. as a result of monopoly power or discrimination) simply couldn’t persist, and that additions to capital would cause wages to increase, so workers would be better off thanks to the savings and innovation of those at the top.

More specifically, marginal productivity theory maintains that, due to competition, everyone participating in the production process earns remuneration equal to her or his marginal productivity.
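
To make that abstract claim concrete, here is a stylised numerical sketch (my illustration, not the author’s) using a textbook Cobb-Douglas production function, in the same Python style as the earlier example. It shows the two predictions just described: the competitive wage equals the extra output from one more unit of labour, and adding capital raises that wage.

def output(capital, labour, alpha=0.3):
    # Cobb-Douglas production: F(K, L) = K**alpha * L**(1 - alpha)
    return capital ** alpha * labour ** (1 - alpha)

def marginal_product_of_labour(capital, labour, alpha=0.3, step=1e-6):
    # Numerical derivative dF/dL: the wage implied by marginal productivity theory.
    return (output(capital, labour + step, alpha) - output(capital, labour, alpha)) / step

labour = 100.0
for capital in (100.0, 200.0):
    print(capital, round(marginal_product_of_labour(capital, labour), 3))
# Prints roughly 0.7 for K=100 and 0.862 for K=200: more capital, higher implied wage.

It is exactly this prediction that the evidence discussed later, of stagnating wages alongside rising wealth, calls into question.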

This theory associates higher incomes with a greater contribution to society. This can justify, for instance, preferential tax treatment for the rich: by taxing high incomes we would deprive them of the ’just deserts’ for their contribution to society, and, even more importantly, we would discourage them from expressing their talent. Moreover, the more they contribute, the harder they work and the more they save, the better it is for workers, whose wages will rise as a result.

The reason why these ideas justifying inequality have endured is that they have a grain of truth in them. Some of those who have made large amounts of money have contributed greatly to our society, and in some cases what they have appropriated for themselves is but a fraction of what they have contributed to society.

But this is only a part of the story: there are other possible causes of inequality. Disparity can result from exploitation, discrimination and exercise of monopoly power. Moreover, in general, inequality is heavily influenced by many institutional and political factors: industrial relations, labour market institutions, welfare and tax systems, for example, which can both work independently of productivity and affect productivity.

That the distribution of income cannot be explained just by standard economic theory is suggested by the fact that the before-tax and transfer distribution of income differs markedly across countries. France and Norway are examples of OECD countries that have managed by and large to resist the trend of increasing inequality. The Scandinavian countries have a much higher level of equality of opportunity, regardless of how that is assessed.

Marginal Productivity Theory is meant to have universal application. Neoclassical theory taught that one could explain economic outcomes without reference, for instance, to institutions. It held that a society’s institutions are simply a facade; economic behaviour is driven by the underlying laws of demand and supply, and the economist’s job is to understand these underlying forces. Thus, the standard theory cannot explain how countries with similar technology, productivity and per capita income can differ so much in their before-tax distribution.

The evidence, though, is that institutions do matter. Not only can the effect of institutions be analysed, but institutions can themselves often be explained, sometimes by history, sometimes by power relations and sometimes by economic forces (like information asymmetries) left out of the standard analysis.

Thus, a major thrust of modern economics is to understand the role of institutions in creating and shaping markets. The question then is: what is the relative role and importance of these alternative hypotheses? There is no easy way of providing a neat quantitative answer, but recent events and studies have lent persuasive weight to theories putting greater focus on rent-seeking and exploitation. We shall discuss this evidence in the next section, before turning to the institutional and political factors which are at the root of the recent structural changes in income distribution.

Rent-seeking and top incomes

The term ’rent’ was originally used to describe the returns to land, since the owner of the land receives these payments by virtue of his or her ownership and not because of anything he or she does. The term was then extended to include monopoly profits (or monopoly rents), the income that one receives simply from control of a monopoly and, in general, returns due to similar ownership claims.

Thus, rent-seeking means getting an income not as a reward for creating wealth but by grabbing a larger share of the wealth that would have been produced anyway. Indeed, rent-seekers typically destroy wealth, as a by-product of their taking away from others. A monopolist who overcharges for her or his product takes money from those whom she or he is overcharging and at the same time destroys value. To get her or his monopoly price, she or he has to restrict production.

Growth in top incomes in the past three decades has been driven mainly by two occupational categories: those in the financial sector (both executives and professionals) and non-financial executives. Evidence suggests that rents have contributed on a large scale to the strong increase in the incomes of both.

Let us first consider executives in general. That the rise in their compensation has not reflected productivity is indicated by the lack of correlation between managerial pay and firm performance. As early as 1990 Jensen and Murphy, by studying a sample of 2,505 CEOs in 1,400 companies, found that annual changes in executive compensation did not reflect changes in corporate performance. Since then, the work of Bebchuk, Fried and Grinstein has shown that the huge increase in US executive compensation since 1993 cannot be explained by firm performance or industrial structure and that, instead, it has mainly resulted from flaws in corporate governance, which enabled managers in practice to set their own pay. Mishel and Sabadish examined 350 firms, showing that growth in the compensation of their CEOs largely outpaced the increase in their stock market value. Most strikingly, executive compensation displayed substantial positive growth even during periods when stock market values decreased.

There are other reasons to doubt standard marginal productivity theory. In the United States the ratio of CEO pay to that of the average worker increased from around 20 to 1 in 1965 to 354 to 1 in 2012. There was no change in technology that could explain a change in relative productivity of that magnitude, and no explanation for why that change in technology would occur in the US and not in other similar countries. Moreover, the design of corporate compensation schemes has made it evident that they are not intended to reward effort: typically, they are related to the performance of the stock, which rises and falls depending on many factors outside the control of the CEO, such as market interest rates and the price of oil. It would have been easy to design an incentive structure with less risk, simply by basing compensation on relative performance, relative to a group of comparable companies. The struggles of the Clinton administration to introduce tax systems encouraging so-called performance pay (without imposing conditions to ensure that pay was actually related to performance) and disclosure requirements (which would have enabled market participants to better assess the extent of stock dilution associated with CEO stock option plans) clarified the battle lines: those pushing for favourable tax treatment and against disclosure understood well that these arrangements would have facilitated greater inequalities in income.

For the rise in top incomes in the financial sector specifically, the evidence is even more unfavourable to explanations based on marginal productivity theory. An empirical study by Philippon and Reshef shows that in the past two decades workers in the financial industry have enjoyed a huge ’pay-premium’ with respect to similar sectors, which cannot be explained by the usual proxies for productivity (such as the level of education or unobserved ability). According to their estimates, financial sector compensation has been about 40 per cent higher than the level that would have been expected under perfect competition.

It is also well documented that banks deemed ’too big to fail’ enjoy a rent due to an implicit state guarantee. Investors know that these large financial institutions can count, in effect, on a government guarantee, and thus they are willing to provide them funds at lower interest rates. The big banks can thus prosper not because they are more efficient or provide better service but because they are in effect subsidised by taxpayers.

There are other reasons for the super-normal returns to the large banks and their bankers. In certain of the activities of the financial sector, there is far from perfect competition. Anti-competitive practices in debit and credit cards have amplified pre-existing market power to generate huge rents. Lack of transparency (e.g. in over-the-counter credit default swaps (CDSs) and derivatives) has also generated large rents, with the market dominated by four players. It is not surprising that the rents enjoyed in this way by big banks translated into higher incomes for their managers and shareholders.

In the financial sector even more than in other industries, executive compensation in the aftermath of the crisis provided convincing evidence against marginal productivity theory as an explanation of wages at the top: the bankers who had brought their firms and the global economy to the brink of ruin continued to receive high pay, compensation which could in no way be related either to their social contribution or even to their contribution to the firms for which they worked (both of which were negative).

For instance, a study that focused on Bear Stearns and Lehman Brothers between 2000 and 2008 found that the top executive managers of these two giants had brought home huge amounts of ‘performance-based’ compensation (estimated at around $1 billion for Lehman and $1.4 billion for Bear Stearns), which was not clawed back when the two firms collapsed.

Still another piece of evidence supporting the importance of rent-seeking in explaining the increase in inequality is provided by those studies that have shown that increases in taxes at the very top do not result in decreases in growth rates. If these incomes were a result of their efforts, we might have expected those at the top to respond by working less hard, with adverse effects on GDP.

The increase in rents

Three striking aspects of the evolution of most rich countries in the past thirty-five years are (a) the increase in the wealth-to-income ratio; (b) the stagnation of median wages; and (c) the failure of the return to capital to decline.

Standard neoclassical theories, in which ’wealth’ is equated with ’capital’, would suggest that the increase in capital should be associated with a decline in the return to capital and an increase in wages. The failure of unskilled workers’ wages to increase has been attributed by some (especially in the 1990s) to skill-biased technological change, which increased the premium put by the market on skills. Hence, those with skills would see their wages rise, and those without skills would see them fall. But recent years have seen a decline in the wages paid even to skilled workers. Moreover, as my recent research shows, average wages should have increased, even if some wages fell. Something else must be going on.

There is an alternative, and more plausible, explanation. It is based on the observation that rents are increasing (due to the increase in land rents, intellectual property rents and monopoly power). As a result, the value of those assets that are able to provide rents to their owners, such as land, houses and some financial claims, is rising proportionately. So overall wealth increases, but this does not lead to an increase in the productive capacity of the economy or in the mean marginal productivity or average wage of workers. On the contrary, wages may stagnate or even decrease, because the rise in the share of rents has happened at the expense of wages.

The assets which are driving the increase in overall wealth, in fact, are not produced capital goods. In many cases, they are not even ’productive’ in the usual sense; they are not directly related to the production of goods and services. With more wealth put into these assets, there may be less invested in real productive capital. In the case of many countries where we have data (such as France) there is evidence that this is indeed the case:

A disproportionate part of savings in recent years has gone into the purchase of housing, which has not increased the productivity of the ‘real’ economy.

Monetary policies that lead to low interest rates can increase the value of these ’unproductive’ fixed assets, an increase in the value of wealth that is unaccompanied by any increase in the flow of goods and services. By the same token, a bubble can lead to an increase in wealth, for an extended period of time, again with possible adverse effects on the stock of ’real’ productive capital. Indeed, it is easy for capitalist economies to generate such bubbles (a fact that should be obvious from the historical record, but which has also been confirmed in theoretical models). While in recent years there has been a ’correction’ in the housing bubble (and in the underlying price of land), we cannot be confident that there has been a full correction. The increase in the wealth-income ratio may still have more to do with an increase in the value of rents than with an increase in the amount of productive capital. Those who have access to financial markets and can get credit from banks (typically those already well off) can purchase these assets, using them as collateral. As the bubble takes off, so does their wealth and society’s inequality. Again, policies amplify the resulting inequality: favourable tax treatment of capital gains enables especially high after-tax returns on these assets and increases the wealth especially of the wealthy, who disproportionately own such assets (and understandably so, since they are better able to withstand the associated risks).

The role of institutions and politics

The large influence of rent-seeking in the rise of top incomes undermines the marginal productivity theory of income distribution. The income and wealth of those at the top comes at least partly at the expense of others, just the opposite conclusion from that which emerges from trickle-down economics. When, for instance, a monopoly succeeds in raising the price of the goods which it sells, it lowers the real income of everyone else. This suggests that institutional and political factors play an important role in influencing the relative shares of capital and labour.

As we noted earlier, in the past three decades wages have grown much less than productivity, a fact which is hard to reconcile with marginal productivity theory but is consistent with increased exploitation. This suggests that the weakening of workers’ bargaining power has been a major factor. Weak unions and asymmetric globalisation, where capital is free to move while labour is much less so, are thus likely to have contributed significantly to the great surge of inequality.

The way in which globalisation has been managed has led to lower wages in part because workers’ bargaining power has been eviscerated. With capital highly mobile and with tariffs low, firms can simply tell workers that if they don’t accept lower wages and worse working conditions, the company will move elsewhere. To see how asymmetric globalisation can affect bargaining power, imagine, for a moment, what the world would be like if there was free mobility of labour, but no mobility of capital. Countries would compete to attract workers. They would promise good schools and a good environment, as well as low taxes on workers. This could be financed by high taxes on capital. But that’s not the world we live in.

In most industrialised countries there has been a decline in union membership and influence; this decline has been especially strong in the Anglo-Saxon world. This has created an imbalance of economic power and a political vacuum.

Without the protection afforded by a union, workers have fared even more poorly than they would have otherwise. Unions’ inability to protect workers against the threat of job loss by the moving of jobs abroad has contributed to weakening the power of unions. But politics has also played a major role, exemplified in President Reagan’s breaking of the air traffic controllers’ strike in the US in 1981 or Margaret Thatcher’s battle against the National Union of Mineworkers in the UK.

Central bank policies focusing on inflation have almost certainly been a further factor contributing to the growing inequality and the weakening of workers’ bargaining power. As soon as wages start to increase, especially if they increase faster than the rate of inflation, central banks focusing on inflation raise interest rates. The result is a higher average level of unemployment and a downward ratcheting effect on wages: as the economy goes into recession, real wages often fall; and then monetary policy is designed to ensure that they don’t recover.

Inequalities are affected not just by the legal and formal institutional arrangements (such as the strength of unions) but also by social custom, including whether it is viewed as acceptable to engage in discrimination.

At the same time, governments have been lax in enforcing anti-discrimination laws. Contrary to the suggestion of free-market economists, but consistent with even casual observation of how markets actually behave, discrimination has been a persistent aspect of market economies, and helps explain much of what has gone on at the bottom. The discrimination takes many forms: in housing markets, in financial markets (at least one of America’s large banks had to pay a very large fine for its discriminatory practices in the run-up to the crisis) and in labour markets. There is a large literature explaining how such discrimination persists.

Of course, market forces, the demand and supply for skilled workers, affected by changes in technology and education, play an important role as well, even if those forces are partially shaped by politics. But instead of these market forces and politics balancing each other out, with the political process dampening the increase in inequalities of income and wealth in periods when market forces have led to growing disparities, in the rich countries today the two have been working together to increase inequality.

The price of inequality

The evidence is thus unsupportive of explanations of inequality solely focused on marginal productivity. But what of the argument that we need inequality to grow?

A first justification for the claim that inequality is necessary for growth focuses on the role of savings and investment in promoting growth, and is based on the observation that those at the top save, while those at the bottom typically spend all of their earnings. Countries with a high share of wages will thus not be able to accumulate capital as rapidly as those with a low share of wages. The only way to generate the savings required for long-term growth is thus to ensure sufficient income for the rich.

This argument is particularly inapposite today, when the problem is, to use Bernanke’s term, a global savings glut. But even in those circumstances where growth would be increased by an increase in national savings, there are better ways of inducing savings than increasing inequality. The government can tax the income of the rich, and use the funds to finance either private or public investment; such policies reduce inequalities in consumption and disposable income, and lead to increased national savings (appropriately measured).

A second argument centres on the popular misconception that those at the top are the job creators, and giving more money to them will thus create more jobs. Industrialised countries are full of creative entrepreneurial people throughout the income distribution. What creates jobs is demand: when there is demand, firms will create the jobs to satisfy that demand (especially if we can get the financial system to work in the way it should, providing credit to small and medium-sized enterprises).

In fact, as empirical research by the IMF has shown, inequality is associated with economic instability. In particular, IMF researchers have shown that growth spells tend to be shorter when income inequality is high. This result holds also when other determinants of growth duration (like external shocks, property rights and macroeconomic conditions) are taken into account:

On average, a 10 percentile decrease in inequality increases the expected length of a growth spell by one half.

The picture does not change if one focuses on medium-term average growth rates instead of growth duration. Recent empirical research released by the OECD shows that income inequality has a negative and statistically significant effect on medium-term growth. It estimates that in countries like the US, the UK and Italy, overall economic growth would have been six to nine percentage points higher in the past two decades had income inequality not risen.

There are different channels through which inequality harms the economy:

First, inequality leads to weak aggregate demand. The reason is easy to understand: those at the bottom spend a larger fraction of their income than those at the top. The problem may be compounded by monetary authorities’ flawed responses to this weak demand. By lowering interest rates and relaxing regulations, monetary policy too easily gives rise to an asset bubble, the bursting of which leads in turn to recession.

Many interpretations of the current crisis have indeed emphasised the importance of distributional concerns. Growing inequality would have led to lower consumption but for the effects of loose monetary policy and lax regulations, which led to a housing bubble and a consumption boom. It was, in short, only growing debt that allowed consumption to be sustained. But it was inevitable that the bubble would eventually break. And it was inevitable that, when it broke, the economy would go into a downturn.

Second, inequality of outcomes is associated with inequality of opportunity. When those at the bottom of the income distribution are at great risk of not living up to their potential, the economy pays a price not only with weaker demand today, but also with lower growth in the future. With nearly one in four American children growing up in poverty, many of them facing not just a lack of educational opportunity but also a lack of access to adequate nutrition and health, the country’s long-term prospects are being put into jeopardy.

Third, societies with greater inequality are less likely to make public investments which enhance productivity, such as in public transportation, infrastructure, technology and education. If the rich believe that they don’t need these public facilities, and worry that a strong government, which could increase the efficiency of the economy, might at the same time use its powers to redistribute income and wealth, it is not surprising that public investment is lower in countries with higher inequality. Moreover, in such countries tax and other economic policies are likely to encourage those activities that benefit the financial sector over more productive activities.

In the United States today returns on long-term financial speculation (capital gains) are taxed at approximately half the rate of labour income, and speculative derivatives are given priority in bankruptcy over workers. Tax laws encourage job creation abroad rather than at home. The result is a weaker and more unstable economy. Reforming these policies, and using other policies to reduce rent-seeking, would not only reduce inequality; it would improve economic performance.

It should be noted that the existence of these adverse effects of inequality on growth is itself evidence against an explanation of today’s high level of inequality based on marginal productivity theory. For the basic premise of marginal productivity is that those at the top are simply receiving just deserts for their efforts, and that the rest of society benefits from their activities. If that were so, we should expect to see higher growth associated with higher incomes at the top. In fact, we see just the opposite.

Reversing inequality

A wide range of policies can help reduce inequality. Policies should be aimed at reducing inequalities both in market income and in post-tax-and-transfer income. The rules of the game play a large role in determining market distribution, in preventing discrimination, in creating bargaining rights for workers, and in curbing monopolies and the power of CEOs to exploit firms’ other stakeholders and of the financial sector to exploit the rest of society. These rules were largely rewritten during the past thirty years in ways which led to more inequality and poorer overall economic performance. Now they must be rewritten once again, to reduce inequality and strengthen the economy, for instance, by discouraging the short-termism that has become rampant in the financial and corporate sector.

Reforms include more support for education, including pre-school; increasing the minimum wage; strengthening earned-income tax credits; strengthening the voice of workers in the workplace, including through unions; and more effective enforcement of anti-discrimination laws. But there are four areas in particular that could make inroads in the high level of inequality which now exists.

First, executive compensation (especially in the US) has become excessive, and it is hard to justify the design of executive compensation schemes based on stock options. Executives should not be rewarded for improvements in a firm’s stock market performance in which they play no part. If the Federal Reserve lowers interest rates, and that leads to an increase in stock market prices, CEOs should not get a bonus as a result. If oil prices fall, and so profits of airlines and the value of airline stocks increase, airline CEOs should not get a bonus.

There is an easy way of taking account of these gains (or losses) which are not attributable to the efforts of executives: basing performance pay on the relative performance of firms in comparable circumstances. The design of good compensation schemes that do this has been well understood for more than a third of a century, and yet executives in major corporations have almost studiously resisted these insights. They have focused more on taking advantage of deficiencies in corporate governance and the lack of understanding of these issues by many shareholders to try to enhance their earnings, getting high pay when share prices increase, and also when share prices fall. In the long run, as we have seen, economic performance itself is hurt.
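
As a hypothetical sketch of what such a relative-performance scheme could look like (the function, peer group and numbers are invented for illustration, not drawn from the chapter), a bonus can be tied to the firm’s return in excess of the median return of comparable firms, so that market-wide movements in interest rates or oil prices trigger no payout:

from statistics import median

def relative_performance_bonus(firm_return_pct, peer_returns_pct, bonus_per_point=100_000):
    # Pay only for percentage points of return above the peer-group median.
    excess_points = firm_return_pct - median(peer_returns_pct)
    return max(0.0, excess_points) * bonus_per_point

peers = [12, 10, 15, 8]                        # hypothetical peer stock returns, in per cent
print(relative_performance_bonus(11, peers))   # 0.0 -- merely matched the sector median
print(relative_performance_bonus(16, peers))   # 500000.0 -- five points of genuine outperformance

Under a design of this kind, a CEO whose share price rises only because the whole sector rallies earns nothing extra, which is precisely the distinction the text argues existing schemes fail to make.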

Second, macroeconomic policies are needed that maintain economic stability and full employment. High unemployment most severely penalises those at the bottom and the middle of the income distribution. Today, workers are suffering thrice over: from high unemployment, weak wages and cutbacks in public services, as government revenues are less than they would be if economies were functioning well.

As we have argued, high inequality has weakened aggregate demand. Fuelling asset price bubbles through hyper-expansive monetary policy and deregulation is not the only possible response. Higher public investment, in infrastructure, technology and education, would both revive demand and alleviate inequality, and this would boost growth in the long run and in the short run. According to a recent empirical study by the IMF, well-designed public infrastructure investment raises output both in the short and long term, especially when the economy is operating below potential. And it does not need to increase public debt relative to GDP: well-implemented infrastructure projects would pay for themselves, as the increase in income (and thus in tax revenues) would more than offset the increase in spending.

Third, public investment in education is fundamental to address inequality. A key determinant of workers’ income is the level and quality of education. If governments ensure equal access to education, then the distribution of wages will reflect the distribution of abilities (including the ability to benefit from education) and the extent to which the education system attempts to compensate for differences in abilities and backgrounds. If, as in the United States, those with rich parents usually have access to better education, then one generation’s inequality will be passed on to the next, and in each generation, wage inequality will reflect the income and related inequalities of the last.

Fourth, these much needed public investments could be financed through fair and full taxation of capital income. This would further contribute to counteracting the surge in inequality: it can help bring down the net return to capital, so that those capitalists who save much of their income won’t see their wealth accumulate at a faster pace than the growth of the overall economy, resulting in growing inequality of wealth. Special provisions providing for favourable taxation of capital gains and dividends not only distort the economy, but, with the vast majority of the benefits going to the very top, increase inequality.

At the same time they impose enormous budgetary costs: $2 trillion from 2013 to 2023 in the US, according to the Congressional Budget Office. The elimination of the special provisions for capital gains and dividends, coupled with the taxation of capital gains on the basis of accrual, not just realisations, is the most obvious reform in the tax code that would improve inequality and raise substantial amounts of revenue. There are many others, such as a good system of inheritance and effectively enforced estate taxation.

Conclusion: redefining economic performance

We used to think of there being a trade-off: we could achieve more equality, but only at the expense of overall economic performance. It is now clear that, given the extremes of inequality being reached in many rich countries and the manner in which they have been generated, greater equality and improved economic performance are complements.

This is especially true if we focus on appropriate measures of growth. If we use the wrong metrics, we will strive for the wrong things. As the International Commission on the Measurement of Economic Performance and Social Progress argued, there is a growing global consensus that GDP does not provide a good measure of overall economic performance. What matters is whether growth is sustainable, and whether most citizens see their living standards rising year after year.

Since the beginning of the new millennium, the US economy, and that of most other advanced countries, has clearly not been performing. In fact, for three decades, real median incomes have essentially stagnated. Indeed, in the case of the US, the problems are even worse and were manifest well before the recession: in the past four decades average wages have stagnated, even though productivity has drastically increased.

As this chapter has emphasised, a key factor underlying the current economic difficulties of rich countries is growing inequality. We need to focus not on what is happening on average, as GDP leads us to do, but on how the economy is performing for the typical citizen, reflected for instance in median disposable income. People care about health, fairness and security, and yet GDP statistics do not reflect their decline. Once these and other aspects of societal well-being are taken into account, recent performance in rich countries looks much worse.

The economic policies required to change this are not difficult to identify. We need more investment in public goods; better corporate governance, antitrust and anti-discrimination laws; a better regulated financial system; stronger workers’ rights; and more progressive tax and transfer policies. By ’rewriting the rules’ governing the market economy in these ways, it is possible to achieve greater equality in both the pre- and post-tax-and-transfer distribution of income, and thereby stronger economic performance.

Joseph Stiglitz

Meditation can work for everybody – Eric Klein * The Buddha Pill: Can Meditation Change You? – Dr Miguel Farias and Dr Catherine Wikholm.

“When the body can be still, the mind can be still. Spirituality is what you do with those fires that burn within you.” Sister Elaine

Seven Reasons Why Meditation Doesn’t Work, and How to Fix Them

by Eric Klein

We didn’t have air conditioning when we were living in Chicago in the 1970s. So, on hot, humid summer nights, Devi and I would ride our bikes to the shores of Lake Michigan. After securing our bikes, we’d head for the water.

The water was nice and cool. But, to enjoy it we had to move through the twigs, paper cups, and assorted debris that had accumulated at the water’s edge.

It’s the same with meditation. The deep waters of your inner mind are pure, clear, and refreshing. But, to get there you need to move through some inner. . . um. . . debris.

This debris isn’t life threatening. Just a bit messy. It’s made up of ideas, memories, sensations, misconceptions, and reasons. Reasons why meditation doesn’t work, at least for you.

Here are some of the common reasons that people give. You may find some of them familiar, if you’ve gone for a swim in the waters of meditation. Even if you’ve just dipped your toe in.

1) “Meditation is self-centered.”

As meditation has become more mainstream, pictures of people (slim, beautiful people) sitting in lotus postures show up in all kinds of advertising for spas, exotic vacations, skin cream, perfume, and jewelry.

It’s easy to get the impression that meditation is just the latest fashion accessory. Like a big spiritual mirror that you gaze into while putting on organic makeup to cover any imperfections.

But, meditation is the opposite. Meditation is about taking your self much less seriously and much more lightly. And in the process opening more fully and creatively to life.

The practice of meditation reveals that most of what’s scurrying around in the mind isn’t that significant, much less real. And that all the ideas about the self are more limiting than liberating. Meditation frees you from being overly preoccupied with protecting and preserving the self.

Through practice, you discover that there really is no hard and fast line between “me” and “life”.

You discover that you are part of life, not apart from life in any way. Thus, the practice of meditation shifts you from self-centered to life-centered living. Whether your attention is turned within or without, it’s all life.

2) “I don’t have time to meditate.”

The scattered mind never has time for what matters most. It’s busy, busy, busy. Driven by emotion-fueled thoughts. The day is filled to overflowing with activities, demands, meetings, and requirements. There’s barely time to sit down for a meal, much less to spend a few moments in silence and stillness.

In the mad rush to get more done, the mind becomes more fragmented and speedy.

When things do slow down, like in a traffic jam or on a grocery line, it’s intolerable. The mind rails against the waste of time and against slowing down. “There’s too much to do!!” it cries.

But, everyone has exactly the same amount of time each day: 1440 minutes.

It’s the experience of time that differs. The more scattered and sped-up the mind, the more time seems to slip through your fingers like sand. Through meditation, the mind learns to slow down. As it does so, the feeling of pressure lifts. And with it another veil lifts as well.

The veil that concealed the richness of the moment lifts. Through meditation you touch and are touched by the richness of the present moment. You experience a fullness of time which reveals that this moment (yes, this very moment) is always enough.

3) “My back hurts when I meditate.”

This is likely a technical, postural issue that can be handled with some simple information about how to sit. Here are some practical guidelines.

You can sit on the floor or on a chair.

The key is to keep your spine straight but not stiff. Allow the chin to be parallel to the ground. When seated on the floor, elevate your body on a firm cushion or folded blanket. This reduces strain on the back. Experiment with different heights of cushion.

If you sit on a chair, make sure it is firm and not too cushiony. You don’t want to sink into it. You want to sit upright.

Once you have assumed a seated posture, find your physical center of gravity.

You do this by gently rocking from side to side. As you rock from left to right, feel into the core of your body. You will notice a physical sensation, which I call passing through the center of gravity, as your body shifts from side to side.

Slow down the shifting and feel more deeply into that center of gravity as you pass through. Then reduce the side to side movement and gradually settle your body so that it is aligned along the center of gravity. Do this all by feeling inwardly and sensing that place of balance.

As you settle the body in the center of gravity, feel your spine gently lengthening. The back of your skull lifts slightly and the chin is parallel to the ground. The base of the body is grounded.

Your posture is aligned along the center of gravity and the spine is effortlessly extended. Let your eyes gaze gently at the root of the nose, between the eyebrows.

Sitting is a skill that becomes easier with practice.

4) “I’m not religious.”

It’s easy to assume that meditation is religious. When you think about monks, yogis, nuns, and other professionally religious people, concepts like meditation come to mind. And it’s true that meditation or similar practices have been central to those on a religious quest.

But, does that mean that meditation is religious? Not really. Religions are based on articles of faith, on beliefs.

Meditation requires no beliefs. It’s based on practice and results. In this way, meditation is more like a science experiment than a religious exercise. You don’t need to believe anything in order to conduct an experiment. You just need to follow the protocol. Do the practice. It’s a self-validating process. Follow the steps and see the results.

The practitioners who developed the meditation methods used their minds and bodies as laboratories. They conducted experiments in consciousness. They recorded their results. And passed them on to their students for validation testing.

Some of these experiments have stood the test of time. People have conducted these meditation experiments for thousands of years, with reliable results. It’s these tested and validated practices that have been passed from teacher to student for thousands of years.

So, whether you’re religious or not doesn’t matter in terms of meditation. If you are religious, meditation will enrich your understanding of your faith. If you’re not, you’ll discover that which is deeper than believing or not believing.

5) “My mind won’t get quiet.”

If you stop the average person on the street and ask them, “Is your mind basically quiet or filled with thoughts?” most will tell you, “Basically quiet.” But, sit them down on a meditation cushion for a few minutes without anything to distract them and bam, most people are shocked to discover how noisy it is in there.

It’s not that meditation made their minds noisy. Rather, the practice revealed the noise that was already there. This revelation of the running, ranting mind is a movement forward on the path. Many people drop the practice at this point thinking, “I can’t meditate.” But, they are meditating! The practice is working by revealing the actual state of the conditioned mind. Don’t stop now. The key is to keep practicing. To stay with the process which will lead to the quieting of the mind chatter.

The mind isn’t quieted by willing or by effort. You can’t quiet the mind through will power. That would be like pushing down on a spring. The harder you push the more the spring pushes back. You quiet the mind in the same way that you allow a glass of muddy water to become clear. You just let the particles settle. When you don’t stir up the water the mud settles on its own.

It’s the same in meditation.

Meditation lets the mud, the noisy thoughts, settle. The glass of muddy water becomes clear as gravity draws the mud together. The mind becomes clear as you shift from thinking about thoughts to being aware of what is arising. Just by being aware, present, and mindful of the activity of the mind, it settles down.

6) “Meditation is . . . boring.”

I remember when my parents would take me, as a child, to watch the sunset. I didn’t get it. I couldn’t see the beauty. To me, the sunset was boring.

Being bored is a symptom of not paying attention. If you pay attention deeply to anything, it becomes very, very interesting. Meditation, which is the practice of cultivating deep attention, dissolves boredom. As the mud of the mind settles, as you discover the richness of the present moment, even something as simple as a breath becomes the doorway to gratitude, wonder, and joy.

But, on the other hand, meditation is actually quite boring. I mean, you’re sitting there breathing in and breathing out. What could be more boring? In, out, in, out. Or you’re repeating the same mantra over and over. It is kind of boring by design. As the surface mind gets bored, it settles down.

And in that settling, an awareness of all-encompassing, ever-present silence emerges. A sense of undisturbed stillness. This stillness and silence infuse everything with aliveness and presence. Not boring at all.

7) “I don’t want to be weird.”

There are two reasons that practicing meditation can feel weird. One is neurological, the other more psychological.

Let’s start neurologically: doing anything unfamiliar can feel weird. Your neurological patterns get used to doing things a certain way. Putting your left leg in your pants before your right one. Brushing one side of your teeth before the other. Sitting in a certain chair (and in a certain posture) to watch television. The list goes on.

So, when you change a pattern of behavior, even in a positive direction, it feels weird. Inside your brain, new neurons are firing.

New connections are being made. And old connections, old patterns, are being restrained. Subjectively it feels weird. The new neurological circuits aren’t totally grooved in yet, so you’re clumsy at the new pattern. And this clumsiness is where the weird feeling can turn more psychological.

Being clumsy can be embarrassing (even if you’re all by yourself). Even if you’re sitting there by yourself with your eyes closed, you can still be “watching” what you’re doing and wondering, “Am I doing this right? Is this weird?”

Have you ever danced in front of the mirror? If you judge your dancing, it’s no fun. To enjoy the experience, you need to cut loose from any fixed ideas of what dancing should look like and, even more so, what you should look like.

It’s the same with meditation. Whether you want to or not, you have an idea about the kind of person who meditates. If you don’t think of yourself as that kind of person, then when you meditate, you’ll feel weird. You’ll get in your own way.

But, if you relax, take a breath, and realize that your ideas about meditation are just that: ideas. You don’t have to live up to these self-imposed ideas of meditation. You can just cut loose and enjoy the process. When you do, you find a whole new and wonderful kind of weirdness.

But, one of the blessings of meditation is finding out that you indeed are weird. You’re weird in the best possible sense of the word. Because, the most ancient meaning of the word weird has to do with following your unique fate, your path through life. You’re weird if you follow your path and listen to the direction of your inner soul.

So, meditation, in this most basic, ancient sense, helps you be weird. Meditation helps you find your path. Through practice, you discover how to live your true life more fully and more joyfully.

Those are the seven reasons.

Along with ideas on how to move through them.

Because, there’s no reason to let a bit of debris stop you from enjoying a refreshing swim in the deep, clear waters of your inner mind.

Ready for the next step?

Our recommendation is for you to subscribe to the Wisdom Heart newsletter. You’ll receive information and inspiration on how to bring meditation alive in your life. Practical ideas that you can use for peace of mind and the clarity to live with greater fulfillment and purpose.

Go to

The Buddha Pill: Can Meditation Change You?

Dr Miguel Farias and Dr Catherine Wikholm


My interest in meditation began at the age of six when my parents did a course on Transcendental Meditation. I didn’t realize it then, but I was effectively being introduced to the idea that meditation can produce all manner of changes in who we are and in what we can achieve. Mind-over-matter stories are both inspiring and bewildering, hard to believe yet compelling. They have stirred me deeply enough to dedicate almost two decades of my life to researching what attracts some people to techniques like meditation and yoga and whether, like many claim, they can transform us in a fundamental way.

This book tells the story of the human ambition for personal change, with a primary focus on the techniques of meditation and yoga. Hundreds of millions of people around the world meditate daily. Mindfulness courses, directly inspired by Buddhist meditation, are offered in schools and universities, and mindfulness-based therapies are now available as psychological treatments in the UK’s National Health Service.

Many scientists and teachers claim that this spiritual practice is one of the most efficient and economic tools of personal change. Yoga is no less popular. According to a recent survey by the Yoga Health Foundation, more than 250 million people worldwide practise it regularly. Through yoga we learn to notice thoughts, feelings and sensations while working with physical postures. Often, yoga practice includes a period of lying or sitting meditation.

Psychologists have developed an arsenal of theories and techniques to understand and motivate personal change. But it wasn’t psychology that produced the greatest surge of interest of the twentieth century in this topic, it was meditation. By the 1970s millions of people worldwide were signing up to learn a technique that promised quick and dramatic personal change. Transcendental Meditation was introduced to the West by Maharishi Mahesh Yogi, and quickly spread after the Beatles declared themselves to be followers of this Indian guru. To gain respectability Maharishi sponsored dozens of scientific studies about the effects of Transcendental Meditation, in academic fields ranging from psychophysiology to sociology, showing that its regular practice changed personality traits, improved mood and wellbeing and, not least, reduced criminality rates.

The publicity images for Transcendental Meditation included young people levitating in a cross-legged position and displaying blissful smiles. I recall, as a child, staring at the photographs of the levitating meditators used in the advertising brochures and thinking ‘Can they really do that?’ My parents’ enthusiasm for meditation, though, was short-lived. When I recently asked my mum about it, she just said, ‘It was a seventies thing; most of our friends were trying it out.’

Like my parents’ interest, research on meditation waned rapidly. Photos of levitating people didn’t help to persuade the scientific community that this was something worth studying. We had to wait almost thirty years before a new generation of researchers reignited interest in the field, conducting the first neuroimaging studies of Tibetan monks meditating, and the first explorations of the use of mindfulness in the treatment of depression. For yoga, too, there is increasing evidence that its practice can reduce depression.

Meditation and yoga are no longer taboo words in psychology, psychiatry and neuroscience departments. There now are dedicated conferences and journals on the topic and thousands of researchers worldwide using the most advanced scientific tools to study these techniques. Many of the studies are funded by national science agencies; just looking at US federally funded projects, from 1998 to 2009 the number increased from seven to more than 120. The idea of personal change is increasingly central to these studies. Recent articles show improvements in cognitive and affective skills after six to eight weeks of mindfulness, including an increase in empathy.

These are exciting findings. Meditation practices seem to have an impact on our thoughts, emotions and behaviours. Yet, these studies report only modest changes. But many who use and teach these techniques make astonishing claims about their powers. At the Patanjali Research Foundation in northern India, the world’s largest yoga research centre, I hear miraculous claims about yoga from the mouth of its director-guru, Swami Ramdev: ‘Yoga can heal anything, whether it’s physical or mental illness.’

Teasing fact from fiction is a major aim of this book.

The first part explores ideas about the effects of meditation and yoga, contrasting them with the current scientific evidence of personal change. The second part puts the theories to the test: we carry out new research and scrutinize both the upsides and downsides of these practices. We have dedicated a full chapter to the darker aspects of meditation, which teachers and researchers seldom or never mention.

Although this isn’t a self-help book, it attempts to answer crucial questions for anyone interested in contemplative techniques: can these practices help me to change? If yes, how much and how do they work? And, if they do change me, is it always for the better?

These questions have shaped a significant part of my own life. In my teenage years I believed that to seek personal growth and transformation was the central goal of human existence; this led me to study psychology. I wanted to learn how to promote change through psychological therapy, although it was only later, while undergoing therapy training, that I considered the subtlety and difficulties of this process. My undergraduate psychology degree turned out to not shed much light on our potential for transformation; it rarely touched on ideas about how to make us more whole, healed, enlightened, or just a better person.

But rather than giving up, I read more about the areas of psychology I wasn’t being taught, like consciousness studies, and started doing research on the effects of spiritual practices. When I decided it was probably a good idea to do a doctorate, I browsed through hundreds of psychology websites in search of potential supervisors; I found one at Oxford whom I thought was open-minded enough to mentor my interests, and I moved to the city in 2000.

This is the pre-history of my motivation to write this book. Its history begins in the early summer of 2009, when Shirley du Boulay, a writer and former journalist with the BBC, invited a number of people to take part in the re-creation of a ceremony that blended Christian and Indian spirituality. Images, readings and songs from both traditions were woven together, following the instructions of Henri Le Saux, a French Benedictine monk who went to live in India and founded a number of Christian ashrams that adopted the simplicity of Indian spirituality (think of vegetarian food and a thin orange habit).

I met Catherine Wikholm, the co-author of this book, at this event. She had studied philosophy and theology at Oxford University before embarking on her psychology training, and was at the time doing research relating to young offenders. Catherine and I were both drawn to an elegant woman in her fifties called Sandy Chubb, who spoke in a gentle but authoritative manner. Sandy showed us a book she had recently published with cartoonish illustrations of yoga postures. I thought it was intended for children and asked her if kids enjoyed yoga. Sandy smiled and told us the book was meant for illiterate prisoners. That was the mission of the Prison Phoenix Trust, a small charity she directed: to teach yoga and meditation in prisons. Trying to escape my feeling of embarrassment, I praised the idea of bringing contemplative techniques to prisoners. ‘It must help them to cope with the lack of freedom,’ I suggested. Sandy frowned slightly.

‘That’s not the main purpose,’ she said. Although going to prison is a punishment, Sandy told us, with the help of meditation and yoga, being locked in a small cell can help prisoners realize their true life mission.

‘Which is?’ Catherine and I both asked at the same time. ‘To be saintly, enlightened beings,’ Sandy answered.

Catherine and I kept silent. We were mildly sceptical, but also intrigued. Sandy seemed to claim that meditation and yoga techniques could radically transform criminals. I went back to my office that same evening to search for studies of meditation and yoga in prisons and found only a handful. The results weren’t dramatic but pointed in the right direction: prisoners reported less aggression and higher self-esteem. Reading closely, I noticed there were serious methodological flaws: most studies had small sample sizes and none included a control group, a standard research practice that ensures results are not owing to chance or to some variable the researcher failed to take into account.

I wanted to know more. If Sandy’s claims were true, if meditation and yoga could transform prisoners, this could have tremendous implications for how psychologists understand and promote personal change in all individuals, not just those who are incarcerated. Having no experience of prisons, I contacted Catherine to ask if she’d be interested in working with me on this topic.

‘I’d love to!’ she said, more enthusiastic than I imagine most would be at the prospect of interviewing numerous convicted criminals and in the process spending weeks behind bars. Having started working for the prison service in her early twenties, Catherine had a strong forensic interest, particularly in the treatment of young offenders. She was passionate about the rehabilitation of prisoners in general and was curious as to whether yoga and meditation might represent an alternative means of facilitating positive, meaningful change for those who were unable or unwilling to engage with traditional rehabilitative efforts, such as offending behaviour programs.

So Catherine and I arranged to meet with Sandy at the Prison Phoenix Trust. Walking through Oxford’s trendy Summertown, where the Trust is based, we wondered what the meeting would bring. On arriving at the offices, we received a warm welcome. Sandy gave us the guided tour of their floor of the building, which comprised four rooms: the office, where she and her colleagues had their desks; a dining room for communal meals; a meditation room with cushions on the floor; and, along a corridor, a room lined wall-to-wall with metal filing cabinets. These, Sandy explained, were full of the letters the Prison Phoenix Trust had received from prisoners, estimated to number more than ten thousand.

If we were intrigued before, we were now completely hooked. Our minds filled with questions, we sat down with Sandy as she began to reveal the unusual story of how a small charity had persuaded prison governors to let them teach meditation and yoga to a broad range of prisoners, including thieves, murderers and rapists.

This story made quite an impression on us. So much so, in fact, that it inspired us to dedicate much of the following two years to designing and implementing a study of the measurable effects of yoga and meditation on prisoners. The findings of our research (which we’ll reveal later on in the book) not only sparked a flurry of media interest, but inspired us to spend the two years after that writing this book.

Our initial focus on the potential of meditative techniques to transform the ‘worst of the worst’ broadened out as we became increasingly interested in exploring their full potential. Might Eastern contemplative techniques have the power to change all of us? As we engaged with more and more of the research literature, the inspiring stories of change we uncovered reinforced our broadening view of the potential of yoga and meditation. Our own personal experiences, from my ongoing research to Catherine’s clinical psychology doctoral training and her subsequent acquaintance with mindfulness-based therapies and their application within the NHS, in turn increased our curiosity.

What began as a perhaps unlikely marriage of my interest in spirituality and Catherine’s in forensic and clinical psychology has evolved into a wider exploration of the science and delusions of personal change. Just as we worked on our research together, so we have written this book together. To reflect the dynamic process of our writing, with the combining of our ideas, and to avoid any messy jumping back and forth between us as narrators, we have chosen to write this book in the first person, as a singular, joint ‘I’. Although it may sometimes be apparent which one of us is narrating at a particular point, if only by virtue of our gender difference, we have sought to write as a shared voice. The personal stories, interviews and accounts depicted in this book are all drawn from our real experiences. However, when discussing any examples relating to therapeutic work, we have anonymized all names and identifying details.

Over the course of the book, we will examine the scientific evidence that actually exists for the claims of change that meditation, mindfulness and yoga practitioners, teachers and enthusiasts propagate.

We also bring together our own experiences as psychologists, one more research-oriented and one more practice-oriented, as well as the stories of some of the thought-provoking characters we’ve encountered along our journey. All that is to come. But for now let us begin by letting you in on the unique story that started it all.

The Prison Phoenix Trust



‘If we forget that in every criminal there is a potential saint, we are dishonouring all of the great spiritual traditions. Saul of Tarsus persecuted and killed Christians before becoming Saint Paul, author of much of the New Testament. Valmiki, the revealer of the Ramayana, was a highwayman, a robber, and a murderer. Milarepa, one of the greatest Tibetan Buddhist gurus, killed 37 people before he became a Saint. We must remember that even the worst of us can change.’ Bo Lozoff (American prison reform activist and founder of the Prison Ashram Project and the Human Kindness Foundation)

Knocking on the door of a house in a quiet street in Oxfordshire, notepad and pen in hand, I stood and waited on the front step. A minute later the door opened. A smartly dressed, elderly lady smiled at me from inside.

‘Tigger?’ I asked. ‘Yes, do come in,’ she replied.

Still full of life at ninety years old, Tigger Ramsey-Brown was a pleasure to interview. I was there to find out from her more about the story of her late younger sister, who had founded the Prison Phoenix Trust. Over cups of tea in her sunny conservatory, Tigger began vividly to recount the story of her sister and how she had started the Trust around thirty years previously.

In the beginning

Tigger pointed out that if we were going to go right to the start, this story actually begins somewhat earlier, with the marine biologist and committed Darwinist Sir Alister Hardy. At one time a Professor of Zoology at Oxford University, Hardy had happened to teach Richard Dawkins, an evolutionary biologist and outspoken atheist. Knighted for his work in biology, Hardy had a strong interest in the evolution of humankind, developing novel theories such as the aquatic ape hypothesis (which proposes that humans went through an aquatic or semi-aquatic stage in our evolution).

But he was also particularly interested in the evolution of religion and religious experience. Hardy viewed humans as spiritual animals, theorizing that spirituality was a natural part of our human consciousness. He mooted that our awareness of something ‘other’ or ‘beyond’ had arisen through exploration of our environment and he wanted to explore this further.

However, aware that fellow scientists and academics were likely to consider his interest in researching spirituality unorthodox, he waited until he retired from Oxford University before he delved deeper and founded the then-called Religious Experience Research Unit (RERU) at Manchester College, Oxford. (It is now the Alister Hardy Religious Experience Research Centre and is based in Wales.)

The goal of Hardy’s research was to discover if people today still had the same kind of mystical experiences they seemed to have had in the past. He began his study by placing adverts in newspapers, asking people to write in with their mystical experiences, in response to what became known as ‘The Hardy Question’: ‘Have you ever been aware of or influenced by a presence or power, whether you call it God or not, which is different from your everyday self?’

‘Thousands of people replied to the adverts, writing about their dreams and spiritual experiences. These responses were compiled into a database to enable researchers to analyze the different natures and functions of people’s religious and spiritual experiences. This is where Ann came in,’ Tigger told me. And so it was that in the mid-1980s in Oxfordshire, a woman named Ann Wetherall spent her days collecting and categorizing people’s dreams, visions and other spiritual experiences.

Looking for a link

Over time, as she examined the letters, Ann began to wonder if there was a common denominator in the accounts.

She noticed that it didn’t seem to matter whether someone was religious or atheist, but, more often than not, it was people who were feeling hopeless or helpless who reported a direct experience of spirituality.

Ann hypothesized that imprisonment might be a context that particularly inspired such despondent feelings and that it therefore might also trigger spiritual experiences. She got in touch with convicted murderer turned sculptor Jimmy Boyle, one of Scotland’s most famous reformed criminals. Boyle helped her to get an advert published in prison newspapers, asking for prisoners to write in about their religious or spiritual episodes. She got quite a response: prisoners in their dozens wrote in to her describing their unusual experiences. Many of them had never mentioned these to anyone before and had wondered if they were going mad.

‘Ann wanted to write back and reassure them that they weren’t, and that these were valid spiritual experiences which could be built on, but the Alister Hardy Foundation did not reply to letters,’ Tigger explained. ‘That’s why Ann broke away from the research, so that she could start corresponding with the prisoners who were writing in, and offer support.

Because of their confinement in cells and separation from the outside world, Ann thought that prisoners’ experience was perhaps rather similar to that of monks. While for prisoners this withdrawal from society was not voluntary, she believed that they too could use their cell as a space for spiritual growth.’

‘What was her interpretation of spiritual growth?’ I asked.

‘Not only becoming more in touch with a greater power, but also becoming more aware of inner feelings and thoughts, as well as more connected and sensitive to other people’s needs,’ Tigger explained.

‘And the means of bringing about this kind of change?’ I asked, already pre-empting the answer…

‘Through meditation, of course.’

From spiritual experience to spiritual development

Tigger explained that she and Ann had spent their childhoods in India, growing up among Buddhist monasteries. Because of this upbringing, Ann had had a lifelong involvement with meditation, and believed that prisoners could benefit from learning it. In her letters back and forth to prisoners, she began sharing with them what she knew about meditation, in order to encourage and support their spiritual development.

Over the next couple of years, Ann’s correspondence with convicts came to strengthen her belief that prisoners had real potential for spiritual development. ‘She thought they had a terrific spirituality, a hunger that wasn’t being met,’ Tigger explained, as our conversation moved on to Ann’s decision to set up a charitable trust, the Prison Ashram Project (now the Prison Phoenix Trust).

Founded in 1988, the organization was at first very small, comprising just Ann and three other volunteers, who wrote to prisoners, encouraging them to use their spiritual experiences as a springboard for future spiritual development.

‘You are more than you think you are’ was the project’s frequent message.

As the name suggests, the Prison Ashram Project had the central premise that a prison cell can be used as an ashram, a Hindi word that refers to a spiritual hermitage, a place to develop deeper spiritual understanding through quiet contemplation or ascetic devotion.

Hermitage is not only an Eastern practice: in the Western Christian tradition, a monastery is a place of hermitage, too, because it is partially removed from the world. Furthermore, the word ‘cell’ is used in monasteries as well as in prisons, and there are a surprising number of similarities between the living conditions of monks and prisoners. Both live ascetic lives filled with restriction and limitation. Both monks and prisoners are able to meet their basic needs (but little more), both desist from sensual pleasures and the accumulation of wealth, and both follow a strict daily schedule.

Despite these parallels, however, there is undeniably a big difference in how monks and prisoners come to live in their respective cells. For monks living communally in monasteries, as well as hermits who live alone, living ascetically is an intentional choice, aimed at enabling them to better focus on spiritual goals. But for prisoners withdrawing from the world is not their choice; rather, it is imposed upon them as punishment. Which leads to the question: can involuntary confinement really open a door to inner freedom and personal change? Ann Wetherall believed so.

Being confined to a cell for much of the day, even against one’s will, could be a catalyst for spiritual development. The conditions were conducive; all that anyone needed was a radical shift in thinking. Rather than punishment, incarceration could be reconceived as an opportunity for positive transformative experience. Prisoners had lost their physical liberty, but they could nevertheless gain spiritual freedom. Ann thought that meditation, requiring only body, mind and breath, was the ideal tool with which prisoners could cultivate spiritual growth.

So far, so good. But as Tigger talked, one thing struck me as a distinct obstacle to peaceful meditation behind bars: the undeniable fact that prisons are busy, noisy places. Granted, there might be some similarities between prisons, monasteries and spiritual retreats, I thought, but surely finding peace and quiet in a prison would be a bit of a mission impossible. Wouldn’t that render any attempt to meditate rather futile?

‘No.’ Tigger smiled. ‘Ann believed this actually increased the importance and worth of meditation practice; the practice would enable prisoners to find a sense of peace despite their surroundings.’

Crossing continents

As it turned out, Ann was not the first to think of encouraging prisoners’ spiritual development through in-cell meditation. A couple of years after setting up the Prison Ashram Project, she heard about Bo Lozoff, a spiritual leader and prison reform activist doing similar work in the USA.

Curiously, his organization was also called the Prison Ashram Project. Bo first had the idea that a prison cell could be a kind of ashram when his brother-in-law was sentenced to prison for drug smuggling. At the time Bo and his wife Sita were living at an ashram in North Carolina. There, their daily routine involved waking early, wearing all white, working all day without getting paid, abstaining from sex and eating communally. Visiting his brother-in-law in prison, Bo realized there were remarkable parallels between their day-to-day lives.

Around the same time he came across a book by renowned spiritual teacher Ram Dass, entitled Be Here Now. The combination of these two events inspired Bo and Sita to set up their own Prison Ashram Project in 1973, in cooperation with Ram Dass.

Just like Ann, they had begun corresponding with prisoners, offering encouragement and instruction in meditation and also in yoga. They also sent prisoners copies of Ram Dass’s book, along with the book that Bo himself went on to write: We’re All Doing Time: A Guide for Getting Free. The central concept of this book is that it’s not only prisoners who are imprisoned, but that we are all ‘doing time’ because we allow ourselves to be so restricted by hang-ups, blocks and tensions. The message is that through meditation and yoga we can all learn to become free.

The birth of the Prison Phoenix Trust

Not long after meeting Bo, Ann changed her charity’s name to the Prison Phoenix Trust (PPT), in part because she was concerned that the word ‘ashram’ might prove an obstacle for the prison service. She was keen to step things up a notch from written correspondence and start setting up meditation and yoga workshops in prisons themselves. However, even with the new name, prison governors and officers were wary of the charity’s efforts. The Trust tried to get into prisons through the Chaplaincy; however, here too there was a surprising amount of resistance.

It’s worth remembering that in the late 1980s, prison chaplains were almost all Anglican. At that time the Anglican Church was still suspicious of practices such as meditation, which, when compared with contemplation or silent prayer, seemed ‘un-Christian’. Many ministers thought that meditation centred on a spirituality that might be Hindu, Buddhist or even evil (stemming from the notion that to silence the mind also means making it available for the devil).

A 2011 article in the Daily Telegraph highlighted an extreme example of Christian opposition to yoga and meditation, reporting how a Catholic priest named Father Gabriele Amorth, appointed the Vatican’s chief exorcist in 1986, had publicly denounced yoga at a film festival where he had been invited to introduce The Rite (a film about exorcism, starring Anthony Hopkins): ‘Practising yoga is Satanic, it leads to evil just like reading Harry Potter,’ the priest is reported as stating, to an audience of bemused film fans.

Of course, not all devout Christians share such concerns that Christianity and Eastern spiritual practices are incompatible. Offering me another biscuit, Tigger revealed the next chapter of her sister’s tale, wherein Ann would join forces with ‘a very forceful and very amazing character’.


‘Spirituality is what you do with those fires that burn within you.’ Sister Elaine

Thousands of miles away from Oxford and Ann’s fledgling charity lived a Catholic nun. As well as being a nun, Sister Elaine was a Zen master. She grew up in Canada, where in her youth she became a professional classical musician for the Calgary Symphony Orchestra. At the age of thirty, however, she realized her true calling and joined the convent of Our Lady’s Missionaries in Toronto. In 1961, after several years at the convent, she was sent to Japan for her first assignment as a Catholic missionary. Her mission was to set up a Conservatory and Cultural Centre in Osaka, where she would teach English and music to Japanese people, as well as to baptise as many of them as possible.

In order to get to know the Japanese people better, she began to practise Zen Buddhism. She started zazen (sitting meditation) and koan study, under the guidance of Yamada Koun Roshi, a well-known Zen master from the Japanese Sanbo Kyodan order. Perhaps surprisingly, it did not matter to him that Sister Elaine was a Catholic nun with no intention of becoming a Buddhist. Yamada Koun Roshi did not draw a division between different people or religions, and similarly neither does Sister Elaine, who maintains, ‘There is no separation. We make separation.’

Devoted to her new discipline, Sister Elaine went on to spend some time living with Buddhist nuns in Kyoto, where the daily regime involved ten hours a day of sitting in silence.

To call the koan study lengthy would be an understatement; it took her nearly two decades of studying with her Zen teacher before she was made a roshi. This title, which translates literally as ‘old teacher’, marks the top echelon of Zen teachers. There are estimated to be only around 100 roshis worldwide. Very few of them are Westerners, but in 1980 Sister Elaine finally became one of them, an accredited Zen teacher of the Sanbo Kyodan order. Her achievement made her the first Canadian, and certainly the first Catholic nun, to be recognized as one of the world’s highest-ranking teachers of Zen.

In 1976, after 15 years in Japan, Our Lady’s Missionaries back in Toronto transferred Sister Elaine to the Philippines. This was during the worst years of the Marcos regime, and Sister Elaine was to be involved with animal husbandry. However, she did more than merely raise livestock. Once in the Philippines she set up a zendo (Zen meditation centre) for the Catholic Church in Manila. Word spread about her work and a leading dissident, Horacio ‘Boy’ Morales, who had headed the New People’s Army against the Marcos dictatorship, came to hear of her. Held as a political prisoner at the Bago Bantay detention centre, Morales asked Sister Elaine to come to prison to teach meditation to him and a group of fellow prisoners, each of whom had, like him, been tortured. His hope was that the practice could help them to cope with the stress of imprisonment and find inner peace.

Despite the hostility of the authorities and worrying reports of other prison visitors ‘vanishing’, Sister Elaine spent four-and-a-half years teaching meditation to those prisoners every week. During that time she witnessed a remarkable change: the prisoners transformed from being angry, tense men, trembling from torture, to being calm. This convinced her both of the therapeutic power of silent meditation and of the potential for prisoners to develop spiritually while incarcerated.

Sister Elaine’s life makes for quite an unusual story, and her work in the Philippines caught the attention of the media and subsequently of Ann Wetherall. Leaning forward in her seat, Ann’s sister, Tigger, told me of the unexpected events that would subsequently unfold.

Ann’s legacy

In 1992, four years after founding the Prison Phoenix Trust, Ann discovered she had terminal cancer. Coming to terms with this news, Ann felt fearful for the prisoners she was involved with; what would happen to her charity after she was gone? She had heard of Sister Elaine and wrote to her, asking if she would consider taking over as director after she died. Sister Elaine flew over from the Philippines to spend a week with Ann to try to come to a decision. Shortly after returning home, she phoned Ann to accept her offer, telling her ‘don’t die until I get there’.

Sadly, Ann passed away while Sister Elaine was on her way back to England. Over the six years Sister Elaine was director, the idea that yoga and meditation are beneficial for prisoners became increasingly accepted among prison governors and officers. They might not have been as interested in the potential spiritual development of prisoners, but many acknowledged the range of other, more down-to-earth benefits: prisoners doing yoga and meditation were reportedly calmer, slept better and felt less stressed and so were easier to work with.

Like Ann, Sister Elaine believed that meditation was the key to stilling the mind, but she also considered it important to incorporate yoga into the classes: when the body can be still, the mind can be still.

Aged 75, Sister Elaine left the Trust not to retire, but to return to her native Canada to found a similar organization called Freeing the Human Spirit, based in Toronto.

In the years since Sister Elaine’s departure, the Prison Phoenix Trust (PPT) has continued to develop its work, with classes now running in the majority of UK prisons. Reflecting on the Trust’s progress, Sandy Chubb, the PPT’s subsequent director, remarked to me with a smile, ‘Yes, gone are the days when yoga teachers were branded yoghurt pots.’

Hearing the stories about Ann and Sister Elaine, so vividly recounted to me by Tigger and others, including the Trust’s current director Sam Settle, I could see how yoga and meditation might lead to personal change in prisoners. Certainly the PPT had a whole lot of anecdotal evidence attesting to their benefits. Over the course of 25 years, PPT letter-writers have received more than 10,000 replies from prisoners reporting the positive effects of these techniques. The benefits range from increased self-esteem, better sleep and reduced dependence on drugs, medication or cigarettes, to improved emotional management and reduced stress.

Anecdote or evidence?

I was invited to come and have a look through the filing cabinets that contained these letters, and the amount of correspondence astounded me. Yet despite all those positive responses, as a psychologist I couldn’t help but be a little sceptical: testimonials are all very well, but what was the empirical evidence that yoga and meditation can help incarcerated criminals change for the better? Searching scientific databases, I discovered there was very little rigorous research out there into the measurable psychological effects of these practices on prison populations.

The majority of studies that did exist focused specifically on meditation, with some interesting results. Research into the effects of Transcendental Meditation (TM) on criminals had been taking place since the 1970s. For example, a study by US researchers Abrams and Siegel found that those prisoners who received a 14-week course of TM training showed a significant reduction in anxiety, neuroticism, hostility and insomnia compared with the control group. This would seemingly constitute early evidence for the rehabilitative effects of TM. However, the study was criticized on the grounds that it had inadequate controls, limiting the conclusions we can draw from the findings and calling into question the authors’ somewhat liberal interpretation of their statistical results.

More recent studies using other meditation techniques also yielded some promising evidence. In these studies, researchers concluded that meditation led to such positive results as improved psychosocial functioning, a reduction in substance abuse, and decreased recidivism rates.

However, while all that sounds promising, most of this research also had serious shortcomings. For example, sample sizes were usually very small, there was no control group, or the research drew evidence only from questionnaire measures.

I realized that if we were to draw any realistic conclusions about whether or not yoga and meditation are effective in bringing about measurable psychological changes in incarcerated criminals, we needed better research evidence. And so the seeds were sown for our Oxford Study, the journey and findings of which we reveal in Chapter 8. While this was in the planning, I wanted to gain a deeper understanding of the PPT’s rationale for encouraging prisoners to practise yoga and meditation, and their conceptualizations of personal change.


While the PPT does believe that yoga and meditation can lead to beneficial psychological effects in prisoners, what they’re really interested in is the possibility of a radical ‘self-change’. This involves a significant shift in perspective. Sandy Chubb told me that in her experience of teaching yoga in prisons, prisoners are lovely to work with. This didn’t surprise me all that much; we all tend to be co-operative when we’re getting to do something we want to do.

What did surprise me was the comment that followed: Sandy told me that ‘prisoners are all perfect’.

Perfect is certainly not the adjective most of us would choose to describe murderers, rapists and paedophiles; for many it’s perhaps even the antonym of the word they would use. I needed Sandy to clarify. ‘What’s perfect about them?’ I asked.

The answer appears to lie in Sandy’s spiritual worldview. Like many others who believe in a universal spirituality, Sandy recognizes the divine nature of each of us, criminals included, and is convinced of the interconnectedness of all things. She smiles serenely when she tells me what to her is a simple, obvious truth: ‘We are a whole creation that works dynamically.’

The concept of unity or non-duality is a central premise in some Eastern spiritual belief systems, and one that effectively eliminates the ‘us’ and ‘them’ mentality that most of us have in relation to convicted criminals. Early into my interview with Sam Settle, the current director of the PPT and a former Buddhist monk, I encountered the same belief: ‘If prisoners realized that we are all connected,’ Sam told me, ‘then they would not commit crimes.’

So while reducing re-offending is not a stated aim of the PPT, it is considered likely to occur as a side-effect of spiritual growth. The hypothesis is that it is criminals’ mistaken idea of separateness that allows them to act in a harmful way towards others. From Sandy and Sam’s perspective, there is no ‘other’, and there are no ‘bad’ people; we are all part of the same perfect whole, and meditation and yoga can help people to realize this.

Later in the book I will discuss how many people share this perspective, people who believe that not just individual but worldwide change is possible, if only there are enough people meditating.


While we could dismiss some of these ideas about the transformative potential of meditation and yoga for prisoners as utopian, Romantic, or LaLa-Land spirituality, we can also consider them in a purely secular sense, in terms of psychological and behavioural changes.

But, even if we cast aside, for now, the spiritual dimension, the notion that yoga and meditation can produce meaningful change in prisoners might still be considered somewhat ‘out there’. The very idea of the possibility of personal change is itself a loaded topic, especially in the context of prisons. Young repeat offenders are often labelled hopeless cases, written off by the time they have barely left their teens, undermining the ethos of rehabilitation that should be central to the prison system. However, for many offenders there are myriad factors that may obstruct attempts at rehabilitation: not only in terms of overcoming backgrounds of adversity, but also in terms of their perceived (lack of) prospects for the future.

The institution of home

For many who have lived in prisons from an early age, the prospect of going outside is daunting.

I once worked with a prisoner, ‘John’, who was serving his tenth prison sentence at the age of only 21. He attended every session of the offending behaviour program I was facilitating, only, in the final session, to suddenly become aggressive and disruptive to the point where he had to be removed from the group. Talking to him afterwards, trying to understand why he had sabotaged something that could have helped him towards securing an earlier release date, he admitted he was scared of being released. ‘There is nothing for me outside,’ he said, visibly upset.

When John was a young child, one of his parents murdered the other; he went on to spend the rest of his childhood in numerous short-term foster care placements. Angry and distrusting of people, he would repeatedly run away from them. He committed his first offence aged ten and received his first custodial sentence aged 15. The frequency of his impulsive crimes meant that he had spent the majority of the past six years behind bars. There were no family or friends waiting for him on the outside. The uncertainty of how to build a meaningful life, alone, in the ‘real world’ was overwhelming. Prison was all he felt he knew.


All staff members working in prisons, from officers to psychologists to governors, are acutely aware that changing prisoners can be extraordinarily difficult, but it’s not impossible. In my own work with young male offenders, I lost count of the number of times I heard ‘he’ll never change’ from prison officers, who generally would have little idea of that individual’s backstory and the factors that contributed to his offending behaviour. Often the prisoners in question were boys still in their teens, some of them coming from such difficult backgrounds that it would have been a miracle if they hadn’t ended up in prison.

The desire to reform is often unsupported, sometimes owing to budget restrictions, but other times owing to a lack of belief. Changing is hard. And it’s even harder without a helping hand.

The support of others, whether friend, therapist or institution, can be fundamental to whether or not we succeed in bringing about a desired change. Feeling that others believe in us can significantly boost our sense of self-efficacy. Feeling that others don’t believe in us at all undermines our self-belief, so that our confidence and motivation to try to change may wane dramatically.

Changing attitudes

It was a Thursday afternoon and I was on my lunch break, in between research interviews at a West Midlands prison. I was accompanied by an officer in his late fifties, who had been assigned to facilitate the interviews, escorting prisoners from the wings to the interview room. As our break drew to a close, the officer suddenly deviated from his impromptu monologue on the joys of pigeon fancying, my knowledge of which had substantially increased over the hour, to ask whether I really thought that yoga and meditation would do anything at all for prisoners.

‘Well,’ I replied, ‘we think it might. There’s evidence that it works outside of prisons to reduce stress and increase positive emotions. So it may help prisoners to manage their emotions better and improve their self-control, which might also reduce their aggression.’

‘Ha!’ said the officer. ‘I doubt it.’

‘Why?’ I asked.

‘I don’t think any of these can change,’ he told me. ‘I’m a firm believer that leopards never change their spots.’

It wasn’t just yoga and meditation the officer was dismissing as futile. He went on to say that he thought nothing could be done to change prisoners for the better; each and every one of them was a hopeless cause. ‘No matter what,’ he told me, ‘they will always revert back to what they are. It’s like a man who used to be a philanderer; he could get married to a woman and be faithful for, let’s say, ten years, but in the end, he’ll always cheat again.’

My attempts to debate failed miserably. When I maintained that I did think we could rehabilitate prisoners, he delivered his closing argument: ‘Well I’m older than you and I’ve met quite a lot of different people, so I think I know.’

Fortunately, this old-style officer is not representative of the majority of prison staff I have encountered. Over the last twenty years, a number of accredited offending behaviour programs (psychological group interventions that aim to reduce re-offending) have been developed that have been shown to be effective in bringing about improvements in prisoner behaviour, such as reducing aggression.

Despite this positive progress, with the reduction in recidivism generally around 10 per cent for program-completers, there is still clearly room for new and additional approaches, particularly as many prisoners are reluctant or unable to engage with psychological treatment at all.

Arriving at a recent meeting at HMP Shrewsbury, I was escorted by a female officer who gave me a quick overview of the prison. She told me that the population was mostly sex offenders and that it was the most overcrowded prison in the country, adding, ‘We’re full of bed blockers.’

‘Bed blockers?’ I asked.

She explained that these are prisoners who had been through the sex offenders treatment program, but for one reason or another hadn’t been moved on to a different prison. The result was that they were taking up spaces that other, as yet untreated, offenders could use.

However, the main problem at Shrewsbury was not the ‘bed blockers’, who had accepted their offences and received treatment, but the many sex offenders who were in denial, and so could not be treated. Owing to the nature of their offences, such prisoners may be limited in what activities they can undertake during their sentences. Typically, for their own protection, sex offenders are segregated from ‘mainstream’ prisoners and even with good behaviour are not deemed suitable for outside work.

HMP Shrewsbury was one of the prisons that participated in our own research study. This prison had by far the biggest number of prisoners keen to do yoga and meditation, many more than we could actually manage to interview during the time we had allocated there.

As I interviewed prisoner after prisoner, all expressing a desire to do the yoga classes, it seemed to me that these techniques, if effective, could represent an alternative way to encourage positive personal change in prisoners whom the system might otherwise not be able to reach. Why? Because practising meditation and yoga doesn’t involve asking probing questions about offences that prisoners may be deeply ashamed of, in denial about, or simply not yet ready to address.

Sandy confirmed the particular utility of yoga and meditation for this demographic: ‘Not only is silence therapeutic and inclusive, it’s also safe for people with addiction and sex-offending histories.’ On the surface yoga is a physical activity, with desirable physiological benefits; it’s unthreatening, non-blaming and doesn’t require the admission of guilt. In this way it is possible that prisoners who would otherwise avoid explicit attempts to ‘change’ their behaviour may nevertheless engage with a technique that could bring about deep personal transformation.


The concept of a prison cell as an ashram is an idea that captures the imagination, and the paradox of finding spiritual freedom through the loss of physical freedom is intriguing. Might there actually be truth in this unusual idea? Can daily yogic sun salutations and deep breathing really make convicted rapists and murderers less violent and impulsive?

While it’s unlikely that yoga and meditation could replace traditional rehabilitative approaches, it seems possible that they may have a unique ability to reach prisoners on a different level: to make them feel more at peace, and more valued and connected. Bo Lozoff summarizes the aim of organizations that teach contemplative techniques to prisoners worldwide when he says that we should ‘allow for transformation, not merely rehabilitation’.

In other words, the change that charities such as his and the PPT seek to encourage goes far beyond the cessation of offending behaviour; we are talking about a radical change in worldview. The PPT’s current director Sam Settle describes this transformation as ‘the forgetting of one’s self as one lives, the forgetting of me’. In essence, it means moving from focusing on oneself as a separate individual to seeing oneself as part of a larger whole.

Whether or not we share these ideas about the possibility of the transformation of convicted criminals from sinner to saint, from ‘monster’ to Buddha, on a theoretical and anecdotal level there does seem to be reason to think that yoga and meditation can bring about positive personal change in prisoners.

In Chapter 8 we reveal how we put that theory to the test, but first let’s take a look at what science can tell us about the potential of Eastern techniques for bringing about meaningful change, not just for prisoners, but for any of us.



‘Change is an odd process, almost contradictory: you want it, but don’t want it,’ said my clinical supervisor, playing with his curled beard and looking at me. What was he talking about? I had started my training in cognitive behavioural therapy (CBT) eight weeks earlier and was discussing my first client, ‘Mary’, a woman in her thirties, whose husband had died while on a family holiday. He had killed himself jumping off a cliff, right in front of his wife and their young child. Six months after the incident, Mary found herself depressed and sleepless.

‘I felt shock and disbelief,’ she told me, remembering. ‘I felt like I had been disembowelled and bricks sewn inside. I had to register his death the next day and felt terrible anger at having to describe myself as a widow, 24 hours after I had been a wife. Bureaucracy shouldn’t require that, you know?’ I nodded but felt tense, eager to show empathy. For the past eight weeks, I’d spent most . . .



The Buddha Pill: Can Meditation Change You?

by Dr Miguel Farias and Dr Catherine Wikholm


Dr Miguel Farias writes about the psychology of belief and spiritual practices, including meditation. He was a lecturer at the University of Oxford and is now the leader of the Brain, Belief and Behaviour group at Coventry University.

Dr Catherine Wikholm is a Clinical Psychologist registered with the Health and Care Professions Council (HCPC) and a Chartered Psychologist with the British Psychological Society (BPS). She completed her undergraduate degree in Philosophy and Theology at Oxford University before embarking on her psychology training, gaining a Postgraduate Diploma in Psychology, a Master’s in Forensic Psychology and a Doctorate in Clinical Psychology. Catherine was previously employed by HM Prison Service, where she worked with young offenders. She went on to work alongside Dr Miguel Farias at the Department of Experimental Psychology, Oxford University, on a randomised controlled trial that looked at the psychological effects of yoga and meditation in prisoners. The findings of this research study sparked the idea for ‘The Buddha Pill’, which she co-wrote while completing her doctorate. Catherine currently works in an NHS child and adolescent mental health service (CAMHS) in London, UK.

The Self-Acceptance Project. How to Be Kind & Compassionate Toward Yourself in Any Situation – Tami Simon.

It’s hard to be kind to yourself. At least that is my experience, especially when difficult things happen.

Early on in my life, I discovered that there was a part of me that turned against myself when something unfortunate happened that I perceived to be my fault, perhaps a misstep or something that felt like a failure. The first time this became painfully obvious was when I was twenty-two years old and had just launched Sounds True.

It’s a long story, but the gist of it is that I decided it would be a terrific idea to produce and host a Soviet-American citizens’ summit for public radio as an extension of Sounds True’s conference-recording service. The broadcast was a chance for public radio listeners to hear Russian citizens in dialogue with Americans, to illustrate how everyday people can become ambassadors of peace and goodwill. The broadcast itself was well done, and the content flowed seamlessly. There was only one problem, and it was a big problem: the translation feed did not come through to the broadcast audience. This meant that when participants spoke in Russian (which was about twenty-five minutes of the hour-long production), listeners did not hear any English translation; they heard only the original Russian language. In other words, the live broadcast to tens of thousands of people was largely incomprehensible.

As producer and host, I was devastated and humiliated. People tried to console me: “It is good for people to hear a language that is unfamiliar. You provided a public service.” But inside, I felt like I wanted to die. Yes, end my life right there on the spot. I couldn’t take a full breath. I wanted to crawl under a rock and never come out. Instead, I crawled into my hotel bed (if there had been room under the bed, I would have crawled there), and I squeezed myself into a tight ball for about twenty-four hours. There was no comfort there, only a terrible voice inside that said things like “You should kill yourself now.” I made a decision at that time to never produce and host a live broadcast event again. The potential pain of that type of failure was too much for me to bear. I only wanted to work on projects that I could polish and make perfect (“perfect” being the operative word). It was just too painful to do something that carried with it the risk of public humiliation.

Years later, this event from my young life as a producer and host receded into the background, and with it, the pain. However, I was left with two important discoveries: first, that I was determined to design my life to avoid such “failures” at all cost, and second, that I had a terribly mean voice inside that responded to difficult situations by punishing me and declaring that it would be useless for me to continue living. To say this voice was self-aggressive was an understatement. This voice spoke to me in a way that I could never imagine speaking to another person, yet it lived in me and had the potential to turn on me if things didn’t go well.

As the years passed, I started to recognize this inner voice as a type of sub-personality (some people call it “the inner critic” or “the judge”) that seemed to have its own life. It would become active and vocal when something seemingly went wrong. This critical voice would even tear me to shreds over small and insignificant things, like cooking a meal for friends and putting too much salt in the food, and then I would have a sleepless night listening to it berate me. “Really?” I thought. “Over something like this? You’ve got to be kidding me.”

With time, I started to take this voice less and less seriously; it was so out of step with the actual magnitude of situations. Through a lot of inner work, both in one-on-one therapy and on the meditation cushion, the voice gradually began to lose its power. I could still hear it, but it was no longer in charge of my state of being. Other capacities came on board, including the capacity to be kind to myself and offer myself comfort. I even became curious about this voice’s origin and purpose: What function might it be serving in the total ecology of my psyche? What were the emotions, and the accompanying physical sensations, that lay waiting for me underneath the voice? Could I turn toward those emotions and sensations with openness and curiosity?

I also became intensely curious about other people’s experiences with self-criticism and self-judgment. How was it that some people made mistakes and viewed the entire experience as a learning opportunity? How could I become more like those types of people?

In parallel to my own growing curiosity about self-acceptance, I began working with people as a meditation instructor. In private meetings, people shared their innermost struggles with me. What often impacted me the most was how hard people were on themselves, how negative self-talk was more the norm than the exception. Very often, people had an overlay of self-judgment when they were in the midst of a difficult experience. Again and again, I heard people say, “I am suffering in this way, and I feel like I am a terrible person because I am suffering in this way.”

From those conversations, I saw that people judged themselves for so many different kinds of things: for being too fat or too thin, for being too verbal or not verbal enough, for being closed-hearted or too open and porous. People judged themselves about their past: if only this or that had or hadn’t happened. People judged their sexual orientation or lack of a sexual orientation. People judged themselves for being too old, too this or too that, for not being “enough” of something or other. And people endlessly compared themselves to other people and mythic ideals. People had internalized voices of judgment about everything that they were and that they weren’t.

Working with meditation students, I also saw how self-judgment kept people from taking risks. It often felt like a lid that people used to keep themselves safe, small, contained, and under-potentiated. And this was painful to see: how sensitive, good-hearted human beings often focus on what they supposedly lack instead of their beauty, strength, possibility, and power to create.

I started to see “unconditional self-acceptance”, being kind to ourselves no matter what is happening in our lives, as an immensely powerful life skill that most of us have not been taught.

I started to see that being kind to ourselves is actually a human capacity that changes everything. It changes how we treat ourselves day to day, how we take risks, how we love, how we create, and how we make space for what seems “unacceptable” in others.

Over time, I came to see being kind to ourselves as quite an advanced practice. I call it an “advanced practice” because I found myself in conversation with people who had been on a path of personal growth for decades, people who had been meditating or in therapy for years who still found it quite challenging to treat themselves with kindness when confronted with certain situations. And I wanted to know more about what makes self-acceptance so difficult for so many of us and, more importantly, how we can develop this capacity widely and broadly, individually and collectively, as a way to release waves and waves of kindness.

The Self-Acceptance Project was born out of this inquiry. Originally created for online broadcast (hey, I started doing live broadcasts again!), The Self-Acceptance Project originated as a series of interviews with psychologists, dharma teachers, neurobiologists, writers, and educators on the essential keys to being kind and compassionate toward ourselves, especially on the spot in difficult situations. The book you are reading now is derived from this original series of interviews.

The very good news that The Self-Acceptance Project delivers is that there is a lot to learn about self-acceptance that can be intensely and immediately helpful: accepting the part of ourselves that is not self-accepting, understanding how our brains are wired to look for what is wrong (known as the negativity bias), learning to immediately respond and talk to ourselves in self-loving ways when in the midst of a challenge, and more. The Self-Acceptance Project also helps us realize that feeling inadequate at times is not something unique to us; it is a feeling that many, many of us share.

When the broadcast of the original interview series was complete, I received hundreds of letters from people who listened and found the interviews extraordinarily helpful. What I learned from these letters is that people were immensely grateful that some of their favorite authors and teachers were not just offering their advice and techniques for cultivating self-acceptance; they were also sharing their own struggles and the journeys that had unfolded in their lives as they developed self-acceptance first-hand. The series was tremendously normalizing for listeners, and I hope this book will have the same impact on you. When we learn how our difficulties are shared, even by the people we admire (and sometimes “pedestalize”), we embrace our humanness. We see that our struggles are shared human struggles, part of the human condition. We have the opportunity to relax with being human.

I am convinced that the more accepting we are of ourselves, the more accepting we will be of other people. If there are parts of ourselves that we disown, push away, and deem unacceptable, then we will be unwilling and unable to make room to receive and embrace those aspects of other people.

Ultimately, the work of The Self-Acceptance Project is not just about you and me learning to work with ourselves in a loving and kind way. It is about learning how to relate to, and be with, anyone, and I mean anyone, in a loving and kind way. When we are able to be with our own difficulties and intense experiences that are seemingly unwanted, then we can be with other people’s difficulties and their seemingly unwanted experiences.

The Self-Acceptance Project is about having the bravery to open our hearts to ourselves and to everyone and everything.

When we develop a strong sense of self-acceptance, we become capable of such bravery. We may still hear critical inner voices, but they no longer hold power over us. We move forward anyway. We develop the courage to take risks and to stand in our truth because we become more confident that we can handle it if our risk-taking leads to disappointment or disapproval. We so thoroughly befriend ourselves that we can risk receiving criticism, looking like a failure, or suffering loss.

Brave people create, brave people speak up, brave people call bullshit “bullshit,” brave people bring their hearts forward and put their hearts on the line, brave people love outrageously.

May The Self-Acceptance Project help you become such a brave person!


Tara Brach

Building a true sense of self-trust comes from making contact with the deeper parts of our being, such as the truth of our loving, even when we sometimes act in ways we don’t like.

Radical Self-Acceptance

Many years ago, I began to focus on the urgent need for self-acceptance. In fact, I called it radical self-acceptance, because the notion of holding oneself with love and compassion was still so foreign.

It had become clear to me that a key part of my emotional suffering was a sense of feeling “not enough,” which, at times, escalated into full-blown self-aversion.

As I witnessed similar patterns in my students and clients, I began to realize that the absence of self-acceptance is one of the most pervasive expressions of suffering in our society. We can spend huge swaths of our life living in what I call the “trance of unworthiness,” trapped in a chronic sense of falling short.

Though we’re rarely conscious of it, we continually evaluate ourselves. So often, we perceive a gap between the person we believe we should be and our actual moment-to-moment experience.

This gap makes us feel as if we’re always, in some way, not okay. As if we’re inherently deficient. A palliative caregiver who has worked with thousands of dying people once wrote that the deepest regret expressed by her patients is that they hadn’t been true to themselves. They’d lived according to the expectations of others, according to the “shoulds,” but not aligned with their own hearts.

That speaks volumes. We can move through our days so out of touch with ourselves that, at the end, we feel sorrow for not having expressed our own aliveness, creativity, and love.

So much of the time we’re simply unaware of just how pervasive that sense of “something’s wrong with me” is. Like an undetected toxin, it can infect every aspect of our lives. For example, in relationships, we may wear ourselves out trying to make others perceive us in a certain way: smart, beautiful, spiritual, powerful, whatever our personal ideal happens to be. We want them to approve of us, love us. Yet, it’s very hard to be intimate when, at some deep level, we feel flawed or deficient. It’s hard to be spontaneous or creative or take risks or even relax in the moment if we think that we’re falling short.

Negativity Bias

From an evolutionary perspective, a sense of vulnerability is natural. Fearing that something’s wrong or about to go wrong is part of the survival instinct that keeps us safe, a good thing when we’re being chased by a grizzly bear! Although this sense of vulnerability, of being threatened, is innately human, all too often we turn it in on ourselves. Our self-consciousness makes it personal. We move quickly from “Something is wrong or bad,” to “I’m the one who’s wrong or bad.” This is the nature of our unconscious, self-reflexive awareness; we automatically tend to identify with what’s deficient. In psychological parlance, this scanning for and fixating on what is wrong is described as “negativity bias.”

For most of us, our feelings of deficiency were underscored by messages we received in childhood. We were told how to behave and what kinds of looks, personality, and achievements would lead to success, approval, and love. Rarely do any of us grow up feeling truly loveable and worthy just as we are.

Our contemporary culture further exacerbates our feelings of inadequacy. There are few natural ways of belonging that help to reassure us about our basic goodness, few opportunities to connect to something larger than ourselves.

Ours is a fear-based society that over-consumes, is highly competitive, and sets standards valuing particular types of intelligence, body types, and achievements.

Because the standards are set by the dominant culture, the message of inferiority is especially painful for people of color and others who are continually faced with being considered “less than” due to appearance, religion, sexual or gender orientation, or socio-economic status.

When we believe that something is inherently wrong with us, we expect to be rejected, abandoned, and separated from others. In reaction, the more primitive parts of our brain devise strategies to defend or promote ourselves. We take on chronic self-improvement projects. We exaggerate, lie, or pretend to be something we’re not in order to cover our feelings of unworthiness. We judge and behave aggressively toward others. We turn on ourselves.

Although it’s natural to try to protect ourselves with such strategies, the more evolved parts of our brain offer another option: the capacity to tend and befriend. Despite our conditioning, we each have the potential for mindful presence and unconditional love. Once we see the trance of unworthiness, how we’re suffering because we’re at war with ourselves, we can commit to embracing the totality of our inner experience. This commitment, along with a purposeful training in mindfulness and compassion, can transform our relationship with all of life.

It’s helpful to understand that when we’re possessed by fearful reactivity, we can be hijacked by our primitive brain and disconnected from the neuro-circuitry that correlates with mindfulness and compassion. We become cut off from the very parts of ourselves that allow us to trust ourselves, to be more happy and free. The critical inquiry is what enables us to reconnect, to regain access to our most evolved, cherished human qualities.

The gateway is the direct experience of the suffering of fear and shame that have been driving us. Not long ago, one of my students revealed that she felt as if she could never be genuinely intimate with another person because she was afraid that if anyone really knew her, they’d reject her outright. This woman had spent her whole life believing, “I’ll be rejected if somebody sees who I am.” It wasn’t until she acknowledged her pain and viewed it as a wake-up call that she could begin to stop the war against herself.

Once we recognize our suffering, the first step toward healing is learning to pause. We might think, “I’m unworthy of my partner’s love because I’m a selfish person.” Or, “I’m unworthy because I’m not a fun or spontaneous person.” Or perhaps, “I don’t deserve love because I always let people down.”

We might experience feelings of shame or fear or hopelessness. Whatever our experience, learning to pause when we’re caught in our suffering is the critical first step.

As Holocaust survivor and psychiatrist Viktor Frankl famously said: “Between stimulus and response, there is a space. In that space is our power to choose our response. In our response lies our growth and freedom.” When we pause, we can respond to the prison of our beliefs and feelings in a healing way.

The second step toward healing is to deepen attention. It’s important to ask, “Beneath all of my negative thoughts, what’s going on in my body, in my heart, right now?” When we begin to bring awareness to the underlying pain, we touch what I sometimes call the sense of “ouch.” You might even ask how long it’s been going on and realize: “Wow. I’ve been feeling not enough for as long as I can remember.” If that happens, try placing your hand on your heart as a sign of your intention to be kind toward yourself and your suffering. You might even tell yourself, “I want to be able to be gentle with this place inside me that feels so bad.”

TAMI SIMON founded Sounds True in 1985 as a multimedia publishing house with a mission to disseminate spiritual wisdom. She hosts a popular weekly podcast called Insights at the Edge, where she has interviewed many of today’s leading teachers.


The Self-Acceptance Project



The Self-Acceptance Project. How to Be Kind and Compassionate Toward Yourself in Any Situation

by Tami Simon.


The Great Slump of 1930 – John Maynard Keynes.

The world has been slow to realize that we are living this year in the shadow of one of the greatest economic catastrophes of modern history.

But now that the man in the street has become aware of what is happening, he, not knowing the why and wherefore, is as full to-day of what may prove excessive fears as, previously, when the trouble was first coming on, he was lacking in what would have been a reasonable anxiety. He begins to doubt the future. Is he now awakening from a pleasant dream to face the darkness of facts? Or dropping off into a nightmare which will pass away?

He need not be doubtful. The other was not a dream. This is a nightmare, which will pass away with the morning. For the resources of nature and men’s devices are just as fertile and productive as they were. The rate of our progress towards solving the material problems of life is not less rapid. We are as capable as before of affording for everyone a high standard of life, high, I mean, compared with, say, twenty years ago, and will soon learn to afford a standard higher still.

We were not previously deceived. But to-day we have involved ourselves in a colossal muddle, having blundered in the control of a delicate machine, the working of which we do not understand. The result is that our possibilities of wealth may run to waste for a time, perhaps for a long time.

I doubt whether I can hope, in these articles, to bring what is in my mind into fully effective touch with the mind of the reader. I shall be saying too much for the layman, too little for the expert. For, though no one will believe it, economics is a technical and difficult subject. It is even becoming a science. However, I will do my best, at the cost of leaving out, because it is too complicated, much that is necessary to a complete understanding of contemporary events.

First of all, the extreme violence of the slump is to be noticed. In the three leading industrial countries of the world, the United States, Great Britain, and Germany, 10,000,000 workers stand idle. There is scarcely an important industry anywhere earning enough profit to make it expand, which is the test of progress. At the same time, in the countries of primary production the output of mining and of agriculture is selling, in the case of almost every important commodity, at a price which, for many or for the majority of producers, does not cover its cost. In 1921, when prices fell as heavily, the fall was from a boom level at which producers were making abnormal profits; and there is no example in modern history of so great and rapid a fall of prices from a normal figure as has occurred in the past year. Hence the magnitude of the catastrophe.

The time which elapses before production ceases and unemployment reaches its maximum is, for several reasons, much longer in the case of the primary products than in the case of manufacture. In most cases the production units are smaller and less well organized amongst themselves for enforcing a process of orderly contraction; the length of the production period, especially in agriculture, is longer; the costs of a temporary shut-down are greater; men are more often their own employers and so submit more readily to a contraction of the income for which they are willing to work; the social problems of throwing men out of employment are greater in more primitive communities; and the financial problems of a cessation of production of primary output are more serious in countries where such primary output is almost the whole sustenance of the people. Nevertheless we are fast approaching the phase in which the output of primary producers will be restricted almost as much as that of manufacturers; and this will have a further adverse reaction on manufacturers, since the primary producers will have no purchasing power wherewith to buy manufactured goods; and so on, in a vicious circle.

In this quandary individual producers base illusory hopes on courses of action which would benefit an individual producer or class of producers so long as they were alone in pursuing them, but which benefit no one if everyone pursues them. For example, to restrict the output of a particular primary commodity raises its price, so long as the output of the industries which use this commodity is unrestricted; but if output is restricted all round, then the demand for the primary commodity falls off by just as much as the supply, and no one is further forward. Or again, if a particular producer or a particular country cuts wages, then, so long as others do not follow suit, that producer or that country is able to get more of what trade is going. But if wages are cut all round, the purchasing power of the community as a whole is reduced by the same amount as the reduction of costs; and, again, no one is further forward.

Thus neither the restriction of output nor the reduction of wages serves in itself to restore equilibrium.

Moreover, even if we were to succeed eventually in re-establishing output at the lower level of money-wages appropriate to (say) the pre-war level of prices, our troubles would not be at an end. For since 1914 an immense burden of bonded debt, both national and international, has been contracted, which is fixed in terms of money. Thus every fall of prices increases the burden of this debt, because it increases the value of the money in which it is fixed.

For example, if we were to settle down to the prewar level of prices, the British National Debt would be nearly 40 per cent. greater than it was in 1924 and double what it was in 1920; the Young Plan would weigh on Germany much more heavily than the Dawes Plan, which it was agreed she could not support; the indebtedness to the United States of her associates in the Great War would represent 40-50 per cent. more goods and services than at the date when the settlements were made; the obligations of such debtor countries as those of South America and Australia would become insupportable without a reduction of their standard of life for the benefit of their creditors; agriculturists and householders throughout the world, who have borrowed on mortgage, would find themselves the victims of their creditors.
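
A purely illustrative calculation (the figures are mine, not Keynes’s) shows the mechanism. Write the real burden of a money debt D at price level P as D/P. If a debt of 100 was contracted when the price index stood at 150, and prices then fall to 100, the burden rises by

\[
\frac{D/P_1}{D/P_0} = \frac{P_0}{P_1} = \frac{150}{100} = 1.5,
\]

that is, by 50 per cent: the same money debt now commands half as much again in goods and services, although not a penny has been added to it.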

In such a situation it must be doubtful whether the necessary adjustments could be made in time to prevent a series of bankruptcies, defaults, and repudiations which would shake the capitalist order to its foundations.

Here would be a fertile soil for agitation, seditions, and revolution. It is so already in many quarters of the world. Yet, all the time, the resources of nature and men’s devices would be just as fertile and productive as they were. The machine would merely have been jammed as the result of a muddle. But because we have magneto trouble, we need not assume that we shall soon be back in a rumbling waggon and that motoring is over.

We have magneto trouble. How, then, can we start up again? Let us trace events backwards:

1. Why are workers and plant unemployed? Because industrialists do not expect to be able to sell without loss what would be produced if they were employed.

2. Why cannot industrialists expect to sell without loss? Because prices have fallen more than costs have fallen; indeed, costs have fallen very little.

3. How can it be that prices have fallen more than costs? For costs are what a business man pays out for the production of his commodity, and prices determine what he gets back when he sells it. It is easy to understand how for an individual business or an individual commodity these can be unequal. But surely for the community as a whole the business men get back the same amount as they pay out, since what the business men pay out in the course of production constitutes the incomes of the public which they pay back to the business men in exchange for the products of the latter? For this is what we understand by the normal circle of production, exchange, and consumption.

4. No! Unfortunately this is not so; and here is the root of the trouble.

It is not true that what the business men pay out as costs of production necessarily comes back to them as the sale-proceeds of what they produce. It is the characteristic of a boom that their sale-proceeds exceed their costs; and it is the characteristic of a slump that their costs exceed their sale-proceeds. Moreover, it is a delusion to suppose that they can necessarily restore equilibrium by reducing their total costs, whether it be by restricting their output or cutting rates of remuneration; for the reduction of their outgoings may, by reducing the purchasing power of the earners who are also their customers, diminish their sale-proceeds by a nearly equal amount.

5. How, then, can it be that the total costs of production for the world’s business as a whole can be unequal to the total sale-proceeds? Upon what does the inequality depend? I think that I know the answer. But it is too complicated and unfamiliar for me to expound it here satisfactorily. (Elsewhere I have tried to expound it accurately.) So I must be somewhat perfunctory.

Let us take, first of all, the consumption-goods which come on to the market for sale. Upon what do the profits (or losses) of the producers of such goods depend? The total costs of production, which are the same thing as the community’s total earnings looked at from another point of view, are divided in a certain proportion between the cost of consumption-goods and the cost of capital-goods. The incomes of the public, which are again the same thing as the community’s total earnings, are also divided in a certain proportion between expenditure on the purchase of consumption-goods and savings.

Now if the first proportion is larger than the second, producers of consumption-goods will lose money; for their sale proceeds, which are equal to the expenditure of the public on consumption-goods, will be less (as a little thought will show) than what these goods have cost them to produce. If, on the other hand, the second proportion is larger than the first, then the producers of consumption-goods will make exceptional gains. It follows that the profits of the producers of consumption goods can only be restored, either by the public spending a larger proportion of their incomes on such goods (which means saving less), or by a larger proportion of production taking the form of capital-goods (since this means a smaller proportionate output of consumption-goods).
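
A minimal worked example, with numbers chosen by me purely for illustration, may make the argument about the two proportions concrete. Let the community’s total earnings be E = 100, of which 80 is paid out producing consumption-goods and 20 producing capital-goods, so the first proportion is 0.8. Suppose the public spends 70 of its income on consumption-goods and saves 30, so the second proportion is 0.7. Then

\[
\text{sale-proceeds of consumption-goods} = 70 < 80 = \text{their costs of production},
\]

and their producers lose 10. If instead the public spent 90 and saved 10, the same producers would gain 10. Profits return only if the public saves a smaller share of its income, or if a larger share of production shifts to capital-goods.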

But capital-goods will not be produced on a larger scale unless the producers of such goods are making a profit. So we come to our second question, upon what do the profits of the producers of capital-goods depend? They depend on whether the public prefer to keep their savings liquid in the shape of money or its equivalent or to use them to buy capital-goods or the equivalent. If the public are reluctant to buy the latter, then the producers of capital-goods will make a loss; consequently less capital-goods will be produced; with the result that, for the reasons given above, producers of consumption goods will also make a loss. In other words, all classes of producers will tend to make a loss; and general unemployment will ensue. By this time a vicious circle will be set up, and, as the result of a series of actions and reactions, matters will get worse and worse until something happens to turn the tide.

This is an unduly simplified picture of a complicated phenomenon. But I believe that it contains the essential truth. Many variations and fugal embroideries and orchestrations can be superimposed; but this is the tune.

If, then, I am right, the fundamental cause of the trouble is the lack of new enterprise due to an unsatisfactory market for capital investment. Since trade is international, an insufficient output of new capital-goods in the world as a whole affects the prices of commodities everywhere and hence the profits of producers in all countries alike.

Why is there an insufficient output of new capital-goods in the world as a whole? It is due, in my opinion, to a conjunction of several causes. In the first instance, it was due to the attitude of lenders, for new capital-goods are produced to a large extent with borrowed money. Now it is due to the attitude of borrowers, just as much as to that of lenders.

For several reasons lenders were, and are, asking higher terms for loans than new enterprise can afford:

First, the fact that enterprise could afford high rates for some time after the war, whilst war wastage was being made good, accustomed lenders to expect much higher rates than before the war.

Second, the existence of political borrowers to meet Treaty obligations, of banking borrowers to support newly restored gold standards, of speculative borrowers to take part in Stock Exchange booms, and, latterly, of distress borrowers to meet the losses which they have incurred through the fall of prices, all of whom were ready if necessary to pay almost any terms, has hitherto enabled lenders to secure from these various classes of borrowers higher rates than it is possible for genuine new enterprise to support.

Third, the unsettled state of the world and national investment habits have restricted the countries in which many lenders are prepared to invest on any reasonable terms at all. A large proportion of the globe is, for one reason or another, distrusted by lenders, so that they exact a premium for risk so great as to strangle new enterprise altogether.

For the last two years, two out of the three principal creditor nations of the world, namely, France and the United States, have largely withdrawn their resources from the international market for long-term loans.

Meanwhile, the reluctant attitude of lenders has become matched by a hardly less reluctant attitude on the part of borrowers. For the fall of prices has been disastrous to those who have borrowed, and anyone who has postponed new enterprise has gained by his delay. Moreover, the risks that frighten lenders frighten borrowers too.

Finally, in the United States, the vast scale on which new capital enterprise has been undertaken in the last five years has somewhat exhausted for the time being, at any rate so long as the atmosphere of business depression continues, the profitable opportunities for yet further enterprise. By the middle of 1929 new capital undertakings were already on an inadequate scale in the world as a whole, outside the United States. The culminating blow has been the collapse of new investment inside the United States, which today is probably 20 to 30 per cent less than it was in 1928. Thus in certain countries the opportunity for new profitable investment is more limited than it was; whilst in others it is more risky.

A wide gulf, therefore, is set between the ideas of lenders and the ideas of borrowers for the purpose of genuine new capital investment; with the result that the savings of the lenders are being used up in financing business losses and distress borrowers, instead of financing new capital works.

At this moment the slump is probably a little overdone for psychological reasons. A modest upward reaction, therefore, may be due at any time. But there cannot be a real recovery, in my judgment, until the ideas of lenders and the ideas of productive borrowers are brought together again; partly by lenders becoming ready to lend on easier terms and over a wider geographical field, partly by borrowers recovering their good spirits and so becoming readier to borrow.

Seldom in modern history has the gap between the two been so wide and so difficult to bridge. Unless we bend our wills and our intelligences, energized by a conviction that this diagnosis is right, to find a solution along these lines, then, if the diagnosis is right, the slump may pass over into a depression, accompanied by a sagging price level, which might last for years, with untold damage to the material wealth and to the social stability of every country alike. Only if we seriously seek a solution, will the optimism of my opening sentences be confirmed, at least for the nearer future.

It is beyond the scope of this article to indicate lines of future policy. But no one can take the first step except the central banking authorities of the chief creditor countries; nor can any one Central Bank do enough acting in isolation. Resolute action by the Federal Reserve Banks of the United States, the Bank of France, and the Bank of England might do much more than most people, mistaking symptoms or aggravating circumstances for the disease itself, will readily believe.

In every way the more effective remedy would be that the Central Banks of these three great creditor nations should join together in a bold scheme to restore confidence to the international long-term loan market; which would serve to revive enterprise and activity everywhere, and to restore prices and profits, so that in due course the wheels of the world’s commerce would go round again. And even if France, hugging the supposed security of gold, prefers to stand aside from the adventure of creating new wealth, I am convinced that Great Britain and the United States, like-minded and acting together, could start the machine again within a reasonable time; if, that is to say, they were energized by a confident conviction as to what was wrong. For it is chiefly the lack of this conviction which today is paralyzing the hands of authority on both sides of the Channel and of the Atlantic.

How Economics Survived the Economic Crisis – Robert Skidelsky * Good enough for government work? Macroeconomics since the crisis – Paul Krugman.

Unlike the Great Depression of the 1930s, which produced Keynesian economics, and the stagflation of the 1970s, which gave rise to Milton Friedman’s monetarism, the Great Recession has elicited no such response from the economics profession. Why?

The tenth anniversary of the start of the Great Recession was the occasion for an elegant essay by the Nobel laureate economist Paul Krugman, who noted how little the debate about the causes and consequences of the crisis has changed over the last decade. Whereas the Great Depression of the 1930s produced Keynesian economics, and the stagflation of the 1970s produced Milton Friedman’s monetarism, the Great Recession has produced no similar intellectual shift.

This is deeply depressing to young students of economics, who hoped for a suitably challenging response from the profession.

Why has there been none?

Krugman’s answer is typically ingenious: the old macroeconomics was, as the saying goes, “good enough for government work.” It prevented another Great Depression. So students should lock up their dreams and learn their lessons.

A decade ago, two schools of macroeconomists contended for primacy: the New Classical, or “freshwater,” school, descended from Milton Friedman and Robert Lucas and headquartered at the University of Chicago, and the New Keynesian, or “saltwater,” school, descended from John Maynard Keynes and based at MIT and Harvard.

Freshwater types believed that budget deficits were always bad, whereas the saltwater camp believed that deficits were beneficial in a slump. Krugman is a New Keynesian, and his essay was intended to show that the Great Recession vindicated standard New Keynesian models.

But there are serious problems with Krugman’s narrative. For starters, there is his answer to Queen Elizabeth II’s now-famous question: “Why did no one see it coming?” Krugman’s cheerful response is that the New Keynesians were looking the other way. Theirs was a failure not of theory, but of “data collection.” They had “overlooked” crucial institutional changes in the financial system. While this was regrettable, it raised no “deep conceptual issue”; that is, it didn’t demand that they reconsider their theory.

Faced with the crisis itself, the New Keynesians had risen to the challenge. They dusted off their old sticky-price models from the 1950s and 1960s, which told them three things. First, very large budget deficits would not drive up near-zero interest rates. Second, even large increases in the monetary base would not lead to high inflation, or even to corresponding increases in broader monetary aggregates. And, third, there would be a positive national income multiplier, almost surely greater than one, from changes in government spending and taxation.

These propositions made the case for budget deficits in the aftermath of the collapse of 2008. Policies based on them were implemented and worked “remarkably well.” The success of New Keynesian policy had the ironic effect of allowing “the more inflexible members of our profession [the New Classicals from Chicago] to ignore events in a way they couldn’t in past episodes.” So neither school (sect might be the better word) was challenged to re-think first principles.

This clever history of pre- and post-crash economics leaves key questions unanswered.

First, if New Keynesian economics was “good enough,” why didn’t New Keynesian economists urge precautions against the collapse of 2007-2008? After all, they did not rule out the possibility of such a collapse a priori.

Krugman admits to a gap in “evidence collection.” But the choice of evidence is theory-driven. In my view, New Keynesian economists turned a blind eye to instabilities building up in the banking system, because their models told them that financial institutions could accurately price risk. So there was a “deep conceptual issue” involved in New Keynesian analysis: its failure to explain how banks might come to “underprice risk worldwide,” as Alan Greenspan put it.

Second, Krugman fails to explain why the Keynesian policies vindicated in 2008-2009 were so rapidly reversed and replaced by fiscal austerity. Why didn’t policymakers stick to their stodgy fixed-price models until they had done their work? Why abandon them in 2009, when Western economies were still 4-5% below their pre-crash levels?

The answer I would give is that when Keynes was briefly exhumed for six months in 2008-2009, it was for political, not intellectual, reasons. Because the New Keynesian models did not offer a sufficient basis for maintaining Keynesian policies once the economic emergency had been overcome, they were quickly abandoned.

Krugman comes close to acknowledging this: New Keynesians, he writes, “start with rational behavior and market equilibrium as a baseline, and try to get economic dysfunction by tweaking that baseline at the edges.” Such tweaks enable New Keynesian models to generate temporary real effects from nominal shocks, and thus justify quite radical intervention in times of emergency. But no tweaks can create a strong enough case to justify sustained interventionist policy.

The problem for New Keynesian macroeconomists is that they fail to acknowledge radical uncertainty in their models, leaving them without any theory of what to do in good times in order to avoid the bad times. Their focus on nominal wage and price rigidities implies that if these factors were absent, equilibrium would readily be achieved. They regard the financial sector as neutral, not as fundamental (capitalism’s “ephor,” as Joseph Schumpeter put it).

Without acknowledgement of uncertainty, saltwater economics is bound to collapse into its freshwater counterpart. New Keynesian “tweaking” will create limited political space for intervention, but not nearly enough to do a proper job. So Krugman’s argument, while provocative, is certainly not conclusive. Macroeconomics still needs to come up with a big new idea.


Robert Skidelsky, Professor Emeritus of Political Economy at Warwick University and a fellow of the British Academy in history and economics, is a member of the British House of Lords and the author of a three-volume biography of John Maynard Keynes.

Project Syndicate

Oxford Review of Economic Policy, 2018

Good enough for government work? Macroeconomics since the crisis

Paul Krugman


This paper argues that when the financial crisis came policy-makers relied on some version of the Hicksian sticky-price IS-LM as their default model; these models were ‘good enough for government work’.

While there have been many incremental changes suggested to the DSGE model, there has been no single ‘big new idea’, because the even simpler IS-LM type models were what worked well. In particular, the policy responses based on IS-LM were appropriate.

Specifically, these models generated the insights that large budget deficits would not drive up interest rates and, while the economy remained at the zero lower bound, that very large increases in the monetary base wouldn’t be inflationary, and that the multiplier on government spending was greater than 1.

The one big exception to this satisfactory understanding was in price behaviour. A large output gap was expected to lead to a large fall in inflation, but did not. If new research is necessary, it is on pricing behaviour. While there was a failure to forecast the crisis, it did not come down to a lack of understanding of possible mechanisms, or to a lack of data, but rather to a lack of attention to the right data.

I. Introduction

It’s somewhat startling, at least for those of us who bloviate about economics for a living, to realize just how much time has passed since the 2008 financial crisis. Indeed, the crisis and aftermath are starting to take on the status of an iconic historical episode, like the stagflation of the 1970s or the Great Depression itself, rather than that of freshly remembered experience. Younger colleagues sometimes ask me what it was like during the golden age of economics blogging, mainly concerned with macroeconomic debates, which they think of as an era that ended years ago.

Yet there is an odd, interesting difference, both among economists and with a wider audience, between the intellectual legacies of those previous episodes and what seems to be the state of macroeconomics now.

Each of those previous episodes of crisis was followed both by a major rethinking of macroeconomics and, eventually, by a clear victor in some of the fundamental debates. Thus, the Great Depression brought on Keynesian economics, which became the subject of fierce dispute, and everyone knew how those disputes turned out: Keynes, or Keynes as interpreted by and filtered through Hicks and Samuelson, won the argument.

In somewhat the same way, stagflation brought on the Friedman-Phelps natural rate hypothesis (yes, both men wrote their seminal papers before the 1970s, but the bad news brought their work to the top of the agenda). And everyone knew, up to a point anyway, how the debate over that hypothesis ended up: basically everyone accepted the natural rate idea, abandoning the notion of a long-run trade-off between inflation and unemployment. True, the profession then split into freshwater and saltwater camps over the effectiveness or lack thereof of short-run stabilization policies, a development that I think presaged some of what has happened since 2008. But I’ll get back to that.

For now, let me instead just focus on how different the economics profession’s response to the post-2008 crisis has been from the responses to depression and stagflation. For this time there hasn’t been a big new idea, let alone one that has taken the profession by storm. Yes, there are lots of proclamations about things researchers should or must do differently, many of them represented in this issue of the Oxford Review. We need to put finance into the heart of the models! We need to incorporate heterogeneous agents! We need to incorporate more behavioural economics! And so on.

But while many of these ideas are very interesting, none of them seems to have emerged as the idea we need to grapple with. The intellectual impact of the crisis just seems far more muted than the scale of crisis might have led one to expect. Why?

Well, I’m going to offer what I suspect will be a controversial answer: namely, macroeconomics hasn’t changed that much because it was, in two senses, what my father’s generation used to call ‘good enough for government work’. On one side, the basic models used by macroeconomists who either practise or comment frequently on policy have actually worked quite well, indeed remarkably well. On the other, the policy response to the crisis, while severely lacking in many ways, was sufficient to avert utter disaster, which in turn allowed the more inflexible members of our profession to ignore events in a way they couldn’t in past episodes.

In what follows I start with the lessons of the financial crisis and Great Recession, which economists obviously failed to predict. I then move on to the aftermath, the era of fiscal austerity and unorthodox monetary policy, in which I’ll argue that basic macroeconomics, at least in one version, performed extremely well. I follow up with some puzzles that remain. Finally, I turn to the policy response and its implications for the economics profession.

II. The Queen’s question

When all hell broke loose in financial markets, Queen Elizabeth II famously asked why nobody saw it coming. This was a good question but maybe not as devastating as many still seem to think.

Obviously, very few economists predicted the crisis of 2008-9; those who did, with few exceptions I can think of, also predicted multiple other crises that didn’t happen. And this failure to see what was coming can’t be brushed aside as inconsequential.

There are, however, two different ways a forecasting failure of this magnitude can happen, which have very different intellectual implications. Consider an example from a different field, meteorology. In 1987 the Met Office dismissed warnings that a severe hurricane might strike Britain; shortly afterwards, the Great Storm of 1987 arrived, wreaking widespread destruction. Meteorologists could have drawn the lesson that their fundamental understanding of weather was fatally flawed, which they would presumably have done if their models had insisted that no such storm was even possible. Instead, they concluded that while the models needed refinement, the problem mainly involved data collection: the network of weather stations, buoys, etc., had been inadequate, leaving them unaware of just how bad things were looking.

How does the global financial crisis compare in this respect? To be fair, the DSGE models that occupied a lot of shelf space in journals really had no room for anything like this crisis. But macroeconomists focused on international experience, one of the hats I personally wear, were very aware that crises triggered by loss of financial confidence do happen, and can be very severe. The Asian financial crisis of 1997-9, in particular, inspired not just a realization that severe 1930s-type downturns remain possible in the modern world, but a substantial amount of modelling of how such things can happen.

So the coming of the crisis didn’t reveal a fundamental conceptual gap. Did it reveal serious gaps in data collection? My answer would be, sort of, in the following sense: crucial data weren’t so much lacking as overlooked.

This was most obvious on the financial side. The panic and disruption of financial markets that began in 2007 and peaked after the fall of Lehman came as a huge surprise, but one can hardly accuse economists of having been unaware of the possibility of bank runs. If most of us considered such runs unlikely or impossible in modern advanced economies, the problem was not conceptual but empirical: failure to take on board the extent to which institutional changes had made conventional monetary data inadequate.

This is clearly true for the United States, where data on shadow banking (the repo market, asset-backed commercial paper, etc.) were available but mostly ignored. In a less obvious way, European economists failed to pay sufficient attention to the growth of interbank lending as a source of finance. In both cases the institutional changes undermined the existing financial safety net, especially deposit insurance. But this wasn’t a deep conceptual issue: when the crisis struck, I’m sure I wasn’t the only economist whose reaction was not ‘How can this be happening?’ but rather to yell at oneself, ‘Diamond-Dybvig, you idiot!’

(The Diamond-Dybvig model is an influential model of bank runs and related financial crises. The model shows how banks’ mix of illiquid assets (such as business or mortgage loans) and liquid liabilities (deposits which may be withdrawn at any time) may give rise to self-fulfilling panics among depositors.)
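
A minimal sketch of that logic, in notation of my own rather than the original paper’s: each depositor places 1 unit in the bank at date 0; the bank invests in a long asset worth R > 1 at date 2 but only 1 if liquidated at date 1, and the deposit contract promises early withdrawers r_1 > 1, with claims met in the order depositors arrive. If only the genuinely impatient withdraw early, the bank can pay everyone and patient depositors do better by waiting. But if a depositor expects all N depositors to withdraw at date 1, total claims exceed what early liquidation can raise,

\[
r_1 \, N > N \quad \text{whenever } r_1 > 1,
\]

so the last in line get nothing and withdrawing first becomes each depositor’s best response: the run is self-fulfilling and coexists with the good equilibrium. Deposit insurance works precisely by removing the incentive to join the queue.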

In a more subtle way, economists were also under-informed about the surge in housing prices that we now know represented a huge bubble, whose bursting was at the heart of the Great Recession. In this case, rising home prices were an unmistakable story. But most economists who looked at these prices focused on broad aggregates, say, national average home prices in the United States. And these aggregates, while up substantially, were still in a range that could seemingly be rationalized by appealing to factors like low interest rates. The trouble, it turned out, was that these aggregates masked the reality, because they averaged home prices in locations with elastic housing supply (say, Houston or Atlanta) with those in which supply was inelastic (Florida or Spain); looking at the latter clearly showed increases that could not be easily rationalized.

Let me add a third form of data that were available but largely ignored: it’s fairly remarkable that more wasn’t made of the sharp rise in household debt, which should have suggested something unsustainable about the growth of the 2001-7 era. And in the aftermath of the crisis macroeconomists, myself included (Eggertsson and Krugman, 2012), began taking private-sector leverage seriously in a way they should arguably have been doing before.

So did economists ignore warning signs they should have heeded? Yes. One way to summarize their (our) failure is that they ignored evidence that the private sector was engaged in financial overreach on multiple fronts, with financial institutions too vulnerable, housing prices in a bubble, and household debt unsustainable. But did this failure of observation indicate the need for a fundamental revision of how we do macroeconomics? That’s much less clear.

First, was the failure of prediction a consequence of failures in the economic framework that can be fixed by adopting a radically different framework? It’s true that a significant wing of both macroeconomists and financial economists were in the thrall of the efficient markets hypothesis, believing that financial overreach simply cannot happen, or at any rate that it can only be discovered after the fact, because markets know what they are doing better than any observer. But many macroeconomists, especially in policy institutions, knew better than to trust markets to always get it right, especially those who had studied or been involved with the Asian crisis of the 1990s. Yet they (we) also missed some or all of the signs of overreach. Why?

My answer may seem unsatisfying, but I believe it to be true: for the most part what happened was a demonstration of the old line that predictions are hard, especially about the future. It’s a complicated world out there, and one’s ability to track potential threats is limited. Almost nobody saw the Asian crisis coming, either. For that matter, how many people worried about political disruption of oil supplies before 1973? And so on. At any given time there tends to be a set of conventional indicators everyone looks at, determined less by fundamental theory than by recent events, and big, surprise crises almost by definition happen due to factors not on that list. If you like, it’s as if meteorologists with limited resources concentrated those resources in places that had helped track previous storms, leading to the occasional surprise when a storm comes from an unusual direction.

A different question is whether, now that we know whence the 2008 crisis came, it points to a need for deep changes in macroeconomic thinking. As I’ve already noted, bank runs have been fairly well understood for a long time; we just failed to note the changing definition of banks. The bursting of the housing bubble, with its effects on residential investment and wealth, was conceptually just a negative shock to aggregate demand.

The role of household leverage and forced deleveraging is a bigger break from conventional macroeconomics, even as done by saltwater economists who never bought into efficient markets and were aware of the risk of financial crises. That said, despite the impressive empirical work of Mian and Sufi (2011) and my own intellectual investment in the subject, I don’t think we can consider incorporating debt and leverage a fundamental new idea, as opposed to a refinement at the margin.

It’s true that introducing a role for household debt in spending behaviour makes the short-run equilibrium of the economy dependent on a stock variable, the level of debt. But this implicit role of stock variables in short-run outcomes isn’t new: after all, nobody has ever questioned the notion that investment flows depend in part on the existing capital stock, and I’m not aware that many macroeconomists consider this a difficult conceptual issue.

And I’m not even fully convinced that household debt played that large a role in the crisis. Did household spending fall that much more than one would have expected from the simple wealth effects of the housing bust?

My bottom line is that the failure of nearly all macroeconomists, even of the saltwater camp, to predict the 2008 crisis was similar in type to the Met Office failure in 1987, a failure of observation rather than a fundamental failure of concept. Neither the financial crisis nor the Great Recession that followed required a rethinking of basic ideas.

III. Not believing in (confidence) fairies

Once the Great Recession had happened, the advanced world found itself in a situation not seen since the 1930s, except in Japan, with policy interest rates close to zero everywhere. This raised the practical question of how governments and central banks should and would respond, of which more later.

For economists, it raised the question of what to expect as a result of those policy responses. And the predictions they made were, in a sense, out-of-sample tests of their theoretical framework: economists weren’t trying to reproduce the historical time-series behaviour of aggregates given historical policy regimes, they were trying to predict the effects of policies that hadn’t been applied in modern times in a situation that hadn’t occurred in modern times.

In making these predictions, the deep divide in macroeconomics came into play, making a mockery of those who imagined that time had narrowed the gap between saltwater and freshwater schools. But let me put the freshwater school on one side, again pending later discussion, and talk about the performance of the macroeconomists, many of them trained at MIT or Harvard in the 1970s, who had never abandoned their belief that activist policy can be effective in dealing with short-run fluctuations. I would include in this group Ben Bernanke, Olivier Blanchard, Christina Romer, Mario Draghi, and Larry Summers, among those close to actual policy, and a variety of academics and commentators, such as Simon Wren-Lewis, Martin Wolf, and, of course, yours truly, in supporting roles.

I think it’s fair to say that everyone in this group came into the crisis with some version of Hicksian sticky-price IS-LM as their default, back-of-the-envelope macroeconomic model. Many were at least somewhat willing to work with DSGE models, maybe even considering such models superior for many purposes. But when faced with what amounted to a regime change from normal conditions to an economy where policy interest rates couldn’t fall, they took as their starting point what the Hicksian approach predicted about policy in a liquidity trap. That is, they did not rush to develop new theories; they pretty much stuck with their existing models.
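
As a reminder of what that back-of-the-envelope framework looks like (a sketch in my own notation, not Krugman’s), the Hicksian model pairs an IS relation with an LM relation and adds the zero lower bound on the policy rate:

\[
\text{IS: } Y = C(Y - T) + I(r) + G, \qquad \text{LM: } \frac{M}{P} = L(r, Y), \qquad r \ge 0 .
\]

In normal times changes in G or M move r and Y along both curves. In a liquidity trap the floor r \ge 0 binds, so the LM relation is effectively flat: additional base money is absorbed into idle balances rather than pushing the interest rate lower, while a fiscal expansion shifts IS outward and raises Y without driving r up.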

These existing models made at least three strong predictions that were very much at odds with what many influential figures in the political and business worlds (backed by a few economists) were saying.

First, Hicksian macroeconomics said that very large budget deficits, which one might normally have expected to drive interest rates sharply higher, would not have that effect near the zero lower bound.

Second, the same approach predicted that even very large increases in the monetary base would not lead to high inflation, or even to corresponding increases in broader monetary aggregates.

Third, this approach predicted a positive multiplier, almost surely greater than 1, on changes in government spending and taxation.

These were not common-sense propositions. Non-economists were quite sure that the huge budget deficits the US ran in 2009-10 would bring on an attack by the ‘bond vigilantes’. Many financial commentators and political figures warned that the Fed’s expansion of its balance sheet would ‘debase the dollar’ and cause high inflation. And many political and policy figures rejected the Keynesian proposition that spending more would expand the economy and spending less would contract it.

In fact, if you’re looking for a post-2008 equivalent to the kinds of debate that raged in the 1930s and again in the 1970s, a conflict between old ideas based on pre-crisis thinking, and new ideas inspired by the crisis, your best candidate would be fiscal policy. The old guard clung to the traditional Keynesian notion of a government spending multiplier somewhat limited by automatic stabilizers, but still greater than 1. The new economic thinking that achieved actual real-world influence during the crisis and aftermath (as opposed, let’s be honest, to the kind of thinking found in this issue) mostly involved rejecting the Keynesian multiplier in favour of the doctrine of expansionary austerity, the argument that cutting public spending would crowd in large amounts of private spending by increasing confidence (Alesina and Ardagna, 2010). (The claim that bad things happen when public debt crosses a critical threshold also played an important real-world role, but was less a doctrine than a claimed empirical observation.)

So here, at least, there was something like a classic crisis-inspired confrontation between tired old ideas and a radical new doctrine. Sad to say, however, as an empirical matter the old ideas were proved right, at least insofar as anything in economics can be settled by experience, while the new ideas crashed and burned. Interest rates stayed low despite huge deficits. Massive expansion in the monetary base did not lead to inflation. And the experience of austerity in the euro area, coupled with the natural experiments created by some of the interregional aspects of the Obama stimulus, ended up strongly supporting a conventional, Keynesian view of fiscal policy. Even the magnitude of the multiplier now looks to be around 1.5, which was the number conventional wisdom suggested in advance of the crisis.
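
To put a number on that claim (illustrative arithmetic of my own, not a calculation from the paper): with a multiplier of about 1.5,

\[
\Delta Y \approx 1.5 \times \Delta G ,
\]

so a stimulus of 100 billion dollars raises GDP by roughly 150 billion and, by the same arithmetic, a fiscal consolidation of 1 per cent of GDP subtracts roughly 1.5 per cent from output.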

So the crisis and aftermath did indeed produce a confrontation between innovative new ideas and traditional views largely rooted in the 1930s. But the movie failed to follow the Hollywood script: the stodgy old ideas led to broadly accurate predictions, were indeed validated to a remarkable degree, while the new ideas proved embarrassingly wrong. Macroeconomics didn’t change radically in response to crisis because old-fashioned models, confronted with a new situation, did just fine.

IV. The case of the missing deflation

I’ve just argued that the lack of a major rethinking of macroeconomics in the aftermath of crisis was reasonable, given that conventional, off-the-shelf macroeconomics performed very well. But this optimistic assessment needs to be qualified in one important respect: while the demand side of the economy did just about what economists trained at MIT in the 1970s thought it would, the supply side didn’t.

As I said, the experience of stagflation effectively convinced the whole profession of the validity of the natural-rate hypothesis. Almost everyone agreed that there was no long-run inflation-unemployment trade-off. The great saltwater-freshwater divide was, instead, about whether there were usable short-run trade-offs.

But if the natural-rate hypothesis was correct, sustained high unemployment should have led not just to low inflation but to continually declining inflation, and eventually deflation. You can see a bit of this in some of the most severely depressed economies, notably Greece. But deflation fears generally failed to materialize.

Put slightly differently, even saltwater, activist-minded macroeconomists came into the crisis as ‘accelerationists’: they expected to see a downward-sloping relationship between unemployment and the rate of change of inflation. What we’ve seen instead is, at best, something like the 1960s version of the Phillips curve, a downward-sloping relationship between unemployment and the level of inflation, and even that relationship appears weak.
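
The distinction can be put in two lines (the notation is mine, a sketch rather than a quotation from any of the papers discussed). The accelerationist, natural-rate view has unemployment driving the change in inflation,

\[
\pi_t = \pi_{t-1} - \alpha\,(u_t - u^{*}) + \varepsilon_t ,
\]

so that unemployment held above u* should make inflation fall year after year and eventually turn into deflation. The 1960s-style relationship has unemployment driving the level of inflation,

\[
\pi_t = \bar{\pi} - \alpha\,(u_t - u^{*}) + \varepsilon_t ,
\]

under which persistent slack delivers low but roughly stable inflation, which is closer to what the post-2008 data show.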

Obviously this empirical failure has not gone unnoticed. Broadly, those attempting to explain price behaviour since 2008 have gone in two directions. One side, e.g. Blanchard (2016), invokes ‘anchored’ inflation expectations: the claim that after a long period of low, stable inflation, price-setters throughout the economy became insensitive to recent inflation history, and continued to build 2 per cent or so inflation into their decisions even after a number of years of falling below that target. The other side, e.g. Daly and Hobijn (2014), harking back to Tobin (1972) and Akerlof et al. (1996), invokes downward nominal wage rigidity to argue that the natural rate hypothesis loses validity at low inflation rates.

In a deep sense, I’d argue that these two explanations have more in common than they may seem to at first sight. The anchored-expectations story may preserve the outward form of an accelerationist Phillips curve, but it assumes that the process of expectations formation changes, for reasons not fully explained, at low inflation rates. The nominal rigidity story assumes that there is a form of money illusion, opposition to outright nominal wage cuts, that is also not fully explained but becomes significant at low overall inflation rates.

Both stories also seem to suggest the need for aggressive expansionary policy when inflation is below target: otherwise there’s the risk that expectations may become unanchored on the downward side, or simply that the economy will suffer persistent, unnecessary slack because the downward rigidity of wages is binding for too many workers.

Finally, I would argue that it is important to admit that both stories are ex post explanations of macroeconomic behaviour that was not widely predicted in advance of the post-2008 era. Pre-2008, the general view even on the saltwater side was that stable inflation was a sufficient indicator of an economy operating at potential output, that any persistent negative output gap would lead to steadily declining inflation and eventually outright deflation. This view was, in fact, a key part of the intellectual case for inflation targeting as the basis of monetary policy. If inflation will remain stable at, say, 1 per cent even in a persistently depressed economy, it’s all too easy to see how policymakers might give themselves high marks even while in reality failing at their job.

But while this is a subjective impression (I haven’t done a statistical analysis of recent literature), it does seem that surprisingly few calls for a major reconstruction of macroeconomics focus on the area in which old-fashioned macroeconomics did, in fact, perform badly post-crisis.

There have, for example, been many calls for making the financial sector and financial frictions much more integral to our models than they are, which is a reasonable thing to argue. But their absence from DSGE models wasn’t the source of any major predictive failures. Has there been any comparable chorus of demands that we rethink the inflation process, and reconsider the natural rate hypothesis? Of course there have been some papers along those lines, but none that have really resonated with the profession.

Why not? As someone who came of academic age just as the saltwater-freshwater divide was opening up, I think I can offer a still-relevant insight: understanding wage and price-setting is hard, basically just not amenable to the tools we as economists have in our kit. We start with rational behaviour and market equilibrium as a baseline, and try to get economic dysfunction by tweaking that baseline at the edges; this approach has generated big insights in many areas, but wages and prices isn’t one of them.

Consider the paths followed by the two schools of macroeconomics.

Freshwater theory began with the assumption that wage and price-setters were rational maximizers, but with imperfect information, and that this lack of information explained the apparent real effects of nominal shocks. But this approach became obviously untenable by the early 1980s, when inflation declined only gradually despite mass unemployment. Now what?

One possible route would have been to drop the assumption of fully rational behaviour, which was basically the New Keynesian response. For the most part, however, those who had bought into Lucas-type models chose to cling to the maximizing model, which was economics as they knew how to do it, despite attempts by the data to tell them it was wrong. Let me be blunt: real business cycle theory was always a faintly (or more than faintly) absurd enterprise, a desperate attempt to protect intellectual capital in the teeth of reality.

But the New Keynesian alternative, while far better, wasn’t especially satisfactory either. Clever modellers pointed out that in the face of imperfect competition the aggregate costs of departures from perfectly rational price-setting could be much larger than the individual costs. As a result, small menu costs or a bit of bounded rationality could be consistent with widespread price and wage stickiness.

To be blunt again, however, in practice this insight served as an excuse rather than a basis for deep understanding. Sticky prices could be made respectable just by allowing modellers to assume something like one-period-ahead price-setting, in turn letting models that were otherwise grounded in rationality and equilibrium produce something not too inconsistent with real-world observation. New Keynesian modelling thus acted as a kind of escape clause rather than a foundational building block.

But is that escape clause good enough to explain the failure of deflation to emerge despite multiple years of very high unemployment? Probably not. And yet we still lack a compelling alternative explanation, indeed any kind of big idea. At some level, wage and price behaviour in a depressed economy seems to be a subject for which our intellectual tools are badly fitted.

The good news is that if one simply assumed that prices and wages are sticky, appealing to the experience of the 1930s and Japan in the 1990s (which never experienced a true deflationary spiral), one did reasonably well on other fronts.

So my claim that basic macroeconomics worked very well after the crisis needs to be qualified by what looks like a big failure in our understanding of price dynamics. But this failure did not do too much damage in the way of bad advice, and it hasn’t led to big new ideas because nobody seems to have good ideas to offer.

V. The system sort of worked

In 2009 Barry Eichengreen and Kevin O’Rourke made a splash with a data comparison between the global slump to date and the early stages of the Great Depression; they showed that at the time of writing the world economy was in fact tracking quite close to the implosion that motivated Keynes’s famous essay ‘The Great Slump of 1930’ (Eichengreen and O’Rourke, 2009).

Subsequent updates, however, told a different story. Instead of continuing to plunge as it did in 1930, by the summer of 2009 the world economy first stabilized, then began to recover. Meanwhile, financial markets also began to normalize; by late 2009 many measures of financial stress were more or less back to pre-crisis levels.

So the world financial system and the world economy failed to implode. Why?

We shouldn’t give policy-makers all of the credit here. Much of what went right, or at least failed to go wrong, reflected institutional changes since the 1930s. Shadow banking and wholesale funding markets were deeply stressed, but deposit insurance still protected a good part of the banking system from runs. There never was much discretionary fiscal stimulus, but the automatic stabilizers associated with large welfare states kicked in, well, automatically: spending was sustained by government transfers, while disposable income was buffered by falling tax receipts.

That said, policy responses were clearly much better than they were in the 1930s. Central banks and fiscal authorities rushed to shore up the financial system through a combination of emergency lending and outright bailouts; international cooperation ensured that there were no sudden failures brought on by shortages of key currencies. As a result, disruption of credit markets was limited in both scope and duration. Measures of financial stress were back to pre-Lehman levels by June 2009.

Meanwhile, although fiscal stimulus was modest, peaking at about 2 per cent of GDP in the United States during 2008-9, governments at least refrained from drastic tightening of fiscal policy, allowing the automatic stabilizers, which, as I said, were far stronger than they had been in the 1930s, to do their work.

Overall, then, policy did a relatively adequate job of containing the crisis during its most acute phase. As Daniel Drezner (2012) argues, ‘the system worked’: well enough, anyway, to avert collapse.

So far, so good. Unfortunately, once the risk of catastrophic collapse was averted, the story of policy becomes much less happy. After practising more or less Keynesian policies in the acute phase of the crisis, governments reverted to type: in much of the advanced world, fiscal policy became Hellenized, that is, every nation was warned that it could become Greece any day now unless it turned to fiscal austerity. Given the validation of Keynesian multiplier analysis, we can confidently assert that this turn to austerity contributed to the sluggishness of the recovery in the United States and the even more disappointing, stuttering pace of recovery in Europe.
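To give a sense of the arithmetic behind that assertion, here is a deliberately simple sketch; the functional form and the numbers are illustrative assumptions of mine, not estimates from the post-crisis literature. In a textbook framework with no monetary offset at the zero lower bound, output falls by the multiplier times the fiscal consolidation:

\[
\Delta Y = \mu\,\Delta G, \qquad \mu = \frac{1}{1 - c\,(1-t) + m},
\]

where \(c\) is the marginal propensity to consume, \(t\) the tax rate, and \(m\) the marginal propensity to import. If, purely for illustration, \(\mu \approx 1.5\), then a consolidation worth 3 per cent of GDP leaves output roughly 4.5 per cent lower than it would otherwise have been; even a multiplier of 1 implies a one-for-one hit to demand.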

Figure 1 sums up the story by comparing real GDP per capita during two episodes: Western Europe after 1929 and the EU as a whole since 2007. In the modern episode, Europe avoided the catastrophic declines of the early 1930s, but its recovery has been so slow and uneven that at this point it is tracking below its performance in the Great Depression.

Now, even as major economies turned to fiscal austerity, they turned to unconventional monetary expansion. How much did this help? The literature is confusing enough to let one believe pretty much whatever one wants to. Clearly Mario Draghi’s ‘whatever it takes’ intervention (Draghi, 2012) had a dramatic effect on markets, heading off what might have been another acute crisis, but we never did get a clear test of how well outright monetary transactions would have worked in practice, and the evidence on the effectiveness of Fed policies is even less clear.

The purpose of this paper is not, however, to evaluate the decisions of policy-makers, but rather to ask what lessons macroeconomists should and did take from events. And the main lesson from 2010 onwards was that policy-makers don’t listen to us very much, except at moments of extreme stress.

This is clearest in the case of the turn to austerity, which was not at all grounded in conventional macroeconomic models. True, policy-makers were able to find some economists telling them what they wanted to hear, but the basic Hicksian approach that did pretty well over the whole period clearly said that depressed economies near the zero lower bound should not be engaging in fiscal contraction. Never mind, they did it anyway.

Even on monetary policy, where economists ended up running central banks to a degree I believe was unprecedented, the influence of macroeconomic models was limited at best. A basic Hicksian approach suggests that monetary policy is more or less irrelevant in a liquidity trap. Refinements (Krugman, 1998; Eggertsson and Woodford, 2003) suggested that central banks might be able to gain traction by raising their inflation targets, but that never happened.
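The logic behind those refinements can be sketched in a single line, in notation of my own choosing. Combine the Fisher relation with the floor on the nominal interest rate:

\[
r = i - \pi^{e}, \qquad i \ge 0 \;\Longrightarrow\; r \ge -\pi^{e},
\]

so once \(i\) is stuck at zero the real rate \(r\) cannot fall any further unless expected inflation \(\pi^{e}\) rises. That is why the proposed escape routes all run through expectations, whether a higher inflation target or a credible commitment to keep policy loose after the trap ends, and why central banks' unwillingness to move their targets left those proposals largely untested.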

The point, then, is that policy failures after 2010 tell us relatively little about the state of macroeconomics or the ways it needs to change, other than that it would be nice if people with actual power paid more attention. Macroeconomists aren’t, however, the only researchers with that problem; ask climate scientists how it’s going in their world.

Meanwhile, however, what happened in 2008-9, or more precisely what didn’t happen, namely utter disaster, did have an important impact on macroeconomics. By taking enough good advice from economists to avoid catastrophe, policy-makers in turn relieved what might otherwise have been severe pressure on economists to change their own views.

VI. That ’80s show

Why hasn’t macroeconomics been transformed by (relatively) recent events in the way it was by events in the 1930s or the 1970s? Maybe the key point to remember is that such transformations are rare in economics, or indeed in any field. ‘Science advances one funeral at a time,’ quipped Max Planck: researchers rarely change their views much in the light of experience or evidence. The 1930s and the 1970s, in which senior economists changed their minds, for example Lionel Robbins converting to Keynesianism, were therefore exceptional.

What made them exceptional? Each case was marked by developments that were both clearly inconsistent with widely held views and sustained enough that they couldn’t be written off as aberrations. Lionel Robbins published The Great Depression, a very classical/Austrian interpretation that prescribed a return to the gold standard, in 1934. Would he have become a Keynesian if the Depression had ended by the mid-1930s? The widespread acceptance of the natural-rate hypothesis came more easily, because it played into the neoclassical mindset, but still might not have happened as thoroughly if stagflation had been restricted to a few years in the early 1970s.

From an intellectual point of view, I’d argue, the Great Recession and aftermath bear much more resemblance to the 1979-82 Volcker double-dip recession and subsequent recovery in the United States than to either the 1930s or the 1970s. And here I can speak in part from personal recollection.

By the late 1970s the great division of macroeconomics into rival saltwater and freshwater schools had already happened, so the impact of the Volcker recession depended on which school you belonged to. But in both cases it changed remarkably few minds.

For saltwater macroeconomists, the recession and recovery came mainly as validation of their pre-existing beliefs. They believed that monetary policy has real effects, even if announced and anticipated; sure enough, monetary contraction was followed by a large real downturn. They believed that prices are sticky and inflation has a great deal of inertia, so that monetary tightening would produce a ‘clockwise spiral’ in unemployment and inflation: unemployment would eventually return to the NAIRU (non-accelerating inflation rate of unemployment) at a lower rate of inflation, but only after a transition period of high unemployment. And that’s exactly what we saw.
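The mechanism the saltwater side had in mind can be sketched with an accelerationist Phillips curve; the notation is mine and the equation is a stylized illustration, not anyone's estimated relationship:

\[
\pi_{t} = \pi_{t-1} - \alpha\,(u_{t} - u^{*}), \qquad \alpha > 0,
\]

where \(u^{*}\) is the NAIRU. Monetary tightening first pushes unemployment above \(u^{*}\); because each period's inflation inherits the previous period's, inflation then falls only gradually; once it has come down, policy can ease and unemployment returns to \(u^{*}\) at the new, lower inflation rate. Traced out in unemployment-inflation space, that path is exactly the clockwise spiral described above.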

Freshwater economists had a harder time: Lucas-type models said that monetary contraction could cause a recession only if unanticipated, and as long as economic agents couldn’t distinguish between individual shocks and an aggregate fall in demand. None of this was a tenable description of 1979-82. But recovery came soon enough and fast enough that their worldview could, in effect, ride out the storm. (I was at one conference where a freshwater economist, questioned about current events, snapped ‘I’m not interested in the latest residual.’)

What I see in the response to 2008 and after is much the same dynamic. Half the macroeconomics profession feels mainly validated by events (correctly, I’d say, although as part of that faction I would say that, wouldn’t I?). The other half should be reconsidering its views, but then it should have done that 30 years ago, and this crisis, like that one, was sufficiently well handled by policy-makers that there was no irresistible pressure for change. (Just to be clear, I’m not saying that it was well handled in an objective sense: in my view we suffered huge, unnecessary losses of output and employment because of the premature turn to austerity. But the world avoided descending into a full 1930s-style depression, which in effect left doctrinaire economists free to continue believing what they wanted to believe.)

If all this sounds highly cynical, well, I guess it is. There’s a lot of very good research being done in macroeconomics now, much of it taking advantage of the wealth of new data provided by bad events. Our understanding of both fiscal policy and price dynamics is, I believe, greatly improved. And funerals will continue to feed intellectual progress: younger macroeconomists seem to me to be much more flexible and willing to listen to the data than their counterparts were, say, 20 years ago.

But the quick transformation of macroeconomics many hoped for almost surely isn’t about to happen, because events haven’t forced that kind of transformation. Many economists, myself included, are actually feeling pretty good about our basic understanding of macro. Many others, absent real-world catastrophe, feel free to take the blue pill and keep believing what they want to believe.

Australia has lost its compass for the world. We should look to Jacinda Ardern for inspiration – Thom Woodroffe.

Australian values are neither clear nor consistent. If we want to make a difference in the world, we should follow New Zealand’s lead.

Of all the Generation X world leaders elected in the last few years (think Justin Trudeau in Canada, Emmanuel Macron in France, and even the 31-year-old Sebastian Kurz in Austria), it is New Zealand’s Jacinda Ardern who has the firmest sense of what kind of country she wants to lead on the world stage.

Just four months after Ardern took office following a decade of conservative rule, and while she carefully balances the views of three parties in government, New Zealand is already showing signs of regaining its trademark standing as a small but confident, principled and creative presence internationally. And Australia should take notice.

Foreign policy is perpetually a balance between interests and values. But too often it is easy to focus on the security and economic imperatives of the first and forget the second, or not to realise that the two are inextricably linked.

Australia used to be the gold standard for charting the right course. Our foreign policy in the 1980s and early 1990s was characterised by Gareth Evans’ concept of being “a good international citizen”, which he used to say was about “no more and no less than the pursuit of enlightened self-interest”. Our crafting of the Cambodia Peace Plan and the Canberra Commission on the Elimination of Nuclear Weapons were two clear examples, as was our opposition to apartheid. But these were pursued within a wider understanding that our future prosperity and security lay in Asia and that we also needed to cement a role for ourselves in the region, not least through our founding of Apec.

If anything, New Zealand wore its values even more prominently on its sleeve, famously sending a ship with a cabinet minister on board to protest at the edge of a French nuclear testing site in 1973, and then in 1985 refusing entry to the potentially nuclear-armed USS Buchanan (which unfortunately caused the breakdown of its involvement in ANZUS).

But it is important to remember this was driven by the people. These are clearly the days Ardern longs for. Speaking on Tuesday before travelling to Australia, she said: “Being a child of the 80s affected me in many ways and that included international events. Rather than just reading about the impact of apartheid in South Africa for instance, or nuclear testing in the Pacific, I saw instead each of these issues through the lens of our response. They weren’t history lessons, they were lessons in our values, what mattered to us, and that our size bore no relation to the impact our voice could have.”

Ardern also said that she wants the next generation of Kiwis to see their country standing up for what it believes in on the world stage. And her announcement this week that her foreign minister and deputy, Winston Peters, will also take up the ministerial title and mantle of nuclear disarmament is one of the first manifestations of that. He will push for the early entry into force of the landmark treaty on the prohibition of nuclear weapons, which Australia opposes despite a long history of leadership on the issue at all levels. The campaign group that just won the Nobel Peace Prize was, after all, founded in Melbourne.