The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century – Walter Scheidel


Are mass violence and catastrophes the only forces that can seriously decrease economic inequality? To judge by thousands of years of history, the answer is yes. Tracing the global history of inequality from the Stone Age to today, Walter Scheidel shows that it never dies peacefully. The Great Leveler is the first book to chart the crucial role of violent shocks in reducing inequality over the full sweep of human history around the world. The “Four Horsemen” of leveling (mass-mobilization warfare, transformative revolutions, state collapse, and catastrophic plagues) have repeatedly destroyed the fortunes of the rich.

Today, the violence that reduced inequality in the past seems to have diminished, and that is a good thing. But it casts serious doubt on the prospects for a more equal future. An essential contribution to the debate about inequality, The Great Leveler provides important new insights about why inequality is so persistent, and why it is unlikely to decline anytime soon.

THE CHALLENGE OF INEQUALITY

“A DANGEROUS AND GROWING INEQUALITY”

How many billionaires does it take to match the net worth of half of the world’s population? In 2015, the richest sixty-two persons on the planet owned as much private net wealth as the poorer half of humanity, more than 3.5 billion people. If they decided to go on a field trip together, they would comfortably fit into a large coach. The previous year, eighty-five billionaires were needed to clear that threshold, calling perhaps for a more commodious double-decker bus. And not so long ago, in 2010, no fewer than 388 of them had to pool their resources to offset the assets of the global other half, a turnout that would have required a small convoy of vehicles or filled up a typical Boeing 777 or Airbus A340.

But inequality is not created just by multibillionaires. The richest 1 percent of the world’s households now hold a little more than half of global private net wealth. Inclusion of the assets that some of them conceal in offshore accounts would skew the distribution even further. These disparities are not simply caused by the huge differences in average income between advanced and developing economies. Similar imbalances exist within societies. The wealthiest twenty Americans currently own as much as the bottom half of their country’s households taken together, and the top 1 percent of incomes account for about a fifth of the national total.

Inequality has been growing in much of the world. In recent decades, income and wealth have become more unevenly distributed in Europe and North America, in the former Soviet bloc, and in China, India, and elsewhere. And to the one who has, more will be given: in the United States, the best-earning 1 percent of the top 1 percent (those in the highest 0.01 percent income bracket) raised their share to almost six times what it had been in the 1970s even as the top tenth of the “1 percent” (the top 0.1 percent) quadrupled it. The remainder averaged gains of about three-quarters, nothing to frown at, but a far cry from the advances in higher tiers.

The “1 percent” may be a convenient moniker that smoothly rolls off the tongue, and one that I repeatedly use in this book, but it also serves to obscure the degree of wealth concentration in even fewer hands. In the 1850s, Nathaniel Parker Willis coined the term “Upper Ten Thousand” to describe New York high society. We may now be in need of a variant, the “Upper Ten Thousandth,” to do justice to those who contribute the most to widening inequality. And even within this rarefied group, those at the very top continue to outdistance all others. The largest American fortune currently equals about 1 million times the average annual household income, a multiple twenty times larger than it was in 1982. Even so, the United States may be losing out to China, now said to be home to an even larger number of dollar billionaires despite its considerably smaller nominal GDP.

All this has been greeted with growing anxiety. In 2013, President Barack Obama elevated rising inequality to a “defining challenge”:

“And that is a dangerous and growing inequality and lack of upward mobility that has jeopardized middle-class America’s basic bargain – that if you work hard, you have a chance to get ahead. I believe this is the defining challenge of our time: Making sure our economy works for every working American.”

Two years earlier, multibillionaire investor Warren Buffett had complained that he and his “mega-rich friends” did not pay enough taxes. These sentiments are widely shared. Within eighteen months of its publication in 2013, a 700-page academic tome on capitalist inequality had sold 1.5 million copies and risen to the top of the New York Times nonfiction hardcover bestseller list.

In the Democratic Party primaries for the 2016 presidential election, Senator Bernie Sanders’s relentless denunciation of the “billionaire class” roused large crowds and elicited millions of small donations from grassroots supporters. Even the leadership of the People’s Republic of China has publicly acknowledged the issue by endorsing a report on how to “reform the system of income distribution.” Any lingering doubts are dispelled by Google, one of the great money-spinning disequalizers in the San Francisco Bay Area, where I live, which allows us to track the growing prominence of income inequality in the public consciousness (Fig. 1.1).

Figure 1.1 Top 1 percent income share in the United States (per year) and references to “income inequality” (three-year moving averages), 1970-2008.

So have the rich simply kept getting richer? Not quite. For all the much-maligned rapacity of the “billionaire class” or, more broadly, the “1 percent,” American top income shares only very recently caught up with those reached back in 1929, and assets are less heavily concentrated now than they were then. In England on the eve of the First World War, the richest tenth of households held a staggering 92 percent of all private wealth, crowding out pretty much everybody else; today their share is a little more than half.

High inequality has an extremely long pedigree. Two thousand years ago, the largest Roman private fortunes equaled about 1.5 million times the average annual per capita income in the empire, roughly the same ratio as for Bill Gates and the average American today. For all we can tell, even the overall degree of Roman income inequality was not very different from that in the United States. Yet by the time of Pope Gregory the Great, around 600 CE, great estates had disappeared, and what little was left of the Roman aristocracy relied on papal handouts to stay afloat. Sometimes, as on that occasion, inequality declined because although many became poorer, the rich simply had more to lose. In other cases, workers became better off while returns on capital fell: western Europe after the Black Death, where real wages doubled or tripled and laborers dined on meat and beer while landlords struggled to keep up appearances, is a famous example.

How has the distribution of income and wealth developed over time, and why has it sometimes changed so much? Considering the enormous amount of attention that inequality has received in recent years, we still know much less about this than might be expected. A large and steadily growing body of often highly technical scholarship attends to the most pressing question: why income has frequently become more concentrated over the course of the last generation. Less has been written about the forces that caused inequality to fall across much of the world earlier in the twentieth century, and far less still about the distribution of material resources in the more distant past.

To be sure, concerns about growing income gaps in the world today have given momentum to the study of inequality in the longer run, just as contemporary climate change has encouraged analysis of pertinent historical data. But we still lack a proper sense of the big picture, a global survey that covers the broad sweep of observable history. A cross-cultural, comparative, and long-term perspective is essential for our understanding of the mechanisms that have shaped the distribution of income and wealth.

THE FOUR HORSEMEN

Material inequality requires access to resources beyond the minimum that is needed to keep us all alive. Surpluses already existed tens of thousands of years ago, and so did humans who were prepared to share them unevenly. Back in the last Ice Age, hunter-gatherers found the time and means to bury some individuals much more lavishly than others.

But it was food production, farming and herding, that created wealth on an entirely novel scale. Growing and persistent inequality became a defining feature of the Holocene. The domestication of plants and animals made it possible to accumulate and preserve productive resources. Social norms evolved to define rights to these assets, including the ability to pass them on to future generations. Under these conditions, the distribution of income and wealth came to be shaped by a variety of experiences: health, marital strategies and reproductive success, consumption and investment choices, bumper harvests, and plagues of locusts and rinderpest determined fortunes from one generation to the next. Adding up over time, the consequences of luck and effort favored unequal outcomes in the long term.

In principle, institutions could have flattened emerging disparities through interventions designed to rebalance the distribution of material resources and the fruits from labor, as some premodern societies are indeed reputed to have done. In practice, however, social evolution commonly had the opposite effect. Domestication of food sources also domesticated people. The formation of states as a highly competitive form of organization established steep hierarchies of power and coercive force that skewed access to income and wealth. Political inequality reinforced and amplified economic inequality. For most of the agrarian period, the state enriched the few at the expense of the many: gains from pay and benefactions for public service often paled next to those from corruption, extortion, and plunder. As a result, many premodern societies grew to be as unequal as they could possibly be, probing the limits of surplus appropriation by small elites under conditions of low per capita output and minimal growth. And when more benign institutions promoted more vigorous economic development, most notably in the emergent West, they continued to sustain high inequality. Urbanization, commercialization, financial sector innovation, trade on an increasingly global scale, and, finally, industrialization generated rich returns for holders of capital. As rents from the naked exercise of power declined, choking off a traditional source of elite enrichment, more secure property rights and state commitments strengthened the protection of hereditary private wealth. Even as economic structures, social norms, and political systems changed, income and wealth inequality remained high or found new ways to grow.

For thousands of years, civilization did not lend itself to peaceful equalization. Across a wide range of societies and different levels of development, stability favored economic inequality. This was as true of Pharaonic Egypt as it was of Victorian England, as true of the Roman Empire as of the United States. Violent shocks were of paramount importance in disrupting the established order, in compressing the distribution of income and wealth, in narrowing the gap between rich and poor. Throughout recorded history, the most powerful leveling invariably resulted from the most powerful shocks.

Four different kinds of violent ruptures have flattened inequality: mass mobilization warfare, transformative revolution, state failure, and lethal pandemics. I call these the Four Horsemen of Leveling.

Just like their biblical counterparts, they went forth to “take peace from the earth” and “kill with sword, and with hunger, and with death, and with the beasts of the earth.” Sometimes acting individually and sometimes in concert with one another, they produced outcomes that to contemporaries often seemed nothing short of apocalyptic. Hundreds of millions perished in their wake. And by the time the dust had settled, the gap between the haves and the have-nots had shrunk, sometimes dramatically.

Only specific types of violence have consistently forced down inequality. Most wars did not have any systematic effect on the distribution of resources: although archaic forms of conflict that thrived on conquest and plunder were likely to enrich victorious elites and impoverish those on the losing side, less clear-cut endings failed to have predictable consequences. For war to level disparities in income and wealth, it needed to penetrate society as a whole, to mobilize people and resources on a scale that was often only feasible in modern nation-states. This explains why the two world wars were among the greatest levelers in history. The physical destruction wrought by industrial-scale warfare, confiscatory taxation, government intervention in the economy, inflation, disruption to global flows of goods and capital, and other factors all combined to wipe out elites’ wealth and redistribute resources.

They also served as a uniquely powerful catalyst for equalizing policy change, providing powerful impetus to franchise extensions, unionization, and the expansion of the welfare state. The shocks of the world wars led to what is known as the “Great Compression,” massive attenuation of inequalities in income and wealth across developed countries. Mostly concentrated in the period from 1914 to 1945, it generally took several more decades fully to run its course.

Earlier mass mobilization warfare had lacked similar pervasive repercussions. The wars of the Napoleonic era or the American Civil War had produced mixed distributional outcomes, and the farther we go back in time, the less pertinent evidence there is. The ancient Greek city-state culture, represented by Athens and Sparta, arguably provides us with the earliest examples of how intense popular military mobilization and egalitarian institutions helped constrain material inequality, albeit with mixed success.

The world wars spawned the second major leveling force, transformative revolution. Internal conflicts have not normally reduced inequality: peasant revolts and urban risings were common in premodern history but usually failed, and civil war in developing countries tends to render the income distribution more unequal rather than less. Violent societal restructuring needs to be exceptionally intense if it is to reconfigure access to material resources. Similarly to equalizing mass mobilization warfare, this was primarily a phenomenon of the twentieth century. Communists who expropriated, redistributed, and then often collectivized leveled inequality on a dramatic scale. The most transformative of these revolutions were accompanied by extraordinary violence, in the end matching the world wars in terms of body count and human misery. Far less bloody ruptures such as the French Revolution leveled on a correspondingly smaller scale.

Violence might destroy states altogether. State failure or systems collapse used to be a particularly reliable means of leveling. For most of history, the rich were positioned either at or near the top of the political power hierarchy or were connected to those who were. Moreover, states provided a measure of protection, however modest by modern standards, for economic activity beyond the subsistence level. When states unraveled, these positions, connections, and protections came under pressure or were altogether lost. Although everybody might suffer when states unraveled, the rich simply had much more to lose: declining or collapsing elite income and wealth compressed the overall distribution of resources. This has happened for as long as there have been states. The earliest known examples reach back 4,000 years to the end of Old Kingdom Egypt and the Akkadian empire in Mesopotamia. Even today, the experience of Somalia suggests that this once potent equalizing force has not completely disappeared.

State failure takes the principle of leveling by violent means to its logical extremes: instead of achieving redistribution and rebalancing by reforming and restructuring existing polities, it wipes the slate clean in a more comprehensive manner. The first three horsemen represent different stages not in the sense that they are likely to appear in sequence (although the biggest revolutions were triggered by the biggest wars, state collapse does not normally require similarly strong pressures) but in terms of intensity. What they all have in common is that they rely on violence to remake the distribution of income and wealth alongside the political and social order.

Human-caused violence has long had competition. In the past, plague, smallpox, and measles ravaged whole continents more forcefully than even the largest armies or most fervent revolutionaries could hope to do. In agrarian societies, the loss of a sizeable share of the population to microbes, sometimes a third or even more, made labor scarce and raised its price relative to that of fixed assets and other nonhuman capital, which generally remained intact. As a result, workers gained and landlords and employers lost as real wages rose and rents fell. Institutions mediated the scale of these shifts: elites commonly attempted to preserve existing arrangements through fiat and force but often failed to hold equalizing market forces in check.

Pandemics complete the quartet of horsemen of violent leveling. But were there also other, more peaceful mechanisms of lowering inequality? If we think of leveling on a large scale, the answer must be no. Across the full sweep of history, every single one of the major compressions of material inequality we can observe in the record was driven by one or more of these four levelers. Moreover, mass wars and revolutions did not merely act on those societies that were directly involved in these events: the world wars and exposure to communist challengers also influenced economic conditions, social expectations, and policymaking among bystanders. These ripple effects further broadened the effects of leveling rooted in violent conflict. This makes it difficult to disentangle developments after 1945 in much of the world from the preceding shocks and their continuing reverberations. Although falling income inequality in Latin America in the early 2000s might be the most promising candidate for nonviolent equalization, this trend has remained relatively modest in scope, and its sustainability is uncertain.

Other factors have a mixed record. From antiquity to the present, land reform has tended to reduce inequality most when associated with violence or the threat of violence, and least when not. Macroeconomic crises have only short-lived effects on the distribution of income and wealth. Democracy does not of itself mitigate inequality. Although the interplay of education and technological change undoubtedly influences dispersion of incomes, returns on education and skills have historically proven highly sensitive to violent shocks.

Finally, there is no compelling empirical evidence to support the view that modern economic development, as such, narrows inequalities. There is no repertoire of benign means of compression that has ever achieved results that are even remotely comparable to those produced by the Four Horsemen.

Yet shocks abate. When states failed, others sooner or later took their place. Demographic contractions were reversed after plagues subsided, and renewed population growth gradually returned the balance of labor and capital to previous levels. The world wars were relatively short, and their aftereffects have faded over time: top tax rates and union density are down, globalization is up, communism is gone, the Cold War is over, and the risk of World War III has receded. All of this makes the recent resurgence of inequality easier to understand. The traditional violent levelers currently lie dormant and are unlikely to return in the foreseeable future. No similarly potent alternative mechanisms of equalization have emerged.

Even in the most progressive advanced economies, redistribution and education are already unable fully to absorb the pressure of widening income inequality before taxes and transfers. Lower-hanging fruits beckon in developing countries, but fiscal constraints remain strong. There does not seem to be an easy way to vote, regulate, or teach our way to significantly greater equality. From a global historical perspective, this should not come as a surprise. So far as we can tell, environments that were free from major violent shocks and their broader repercussions hardly ever witnessed major compressions of inequality. Will the future be different?

WHAT THIS BOOK IS NOT ABOUT

Disparities in the distribution of income and wealth are not the only type of inequality of social or historical relevance: so are inequalities that are rooted in gender and sexual orientation; in race and ethnicity; and in age, ability, and beliefs, and so are inequalities of education, health, political voice, and life chances. The title of this book is therefore not as precise as it could be. Then again, a subtitle such as “violent shocks and the global history of income and wealth inequality from the Stone Age to the present and beyond” would not only have stretched the publisher’s patience but would also have been needlessly exclusive. After all, power inequalities have always played a central role in determining access to material resources: a more detailed title would be at once more precise and too narrow.

I do not endeavor to cover all aspects even of economic inequality. I focus on the distribution of material resources within societies, leaving aside questions of economic inequality between countries, an important and much-discussed topic. I consider conditions within particular societies without explicit reference to the many other sources of inequality just mentioned, factors whose influence on the distribution of income and wealth would be hard, if not impossible, to track and compare in the very long run. I am primarily interested in answering the question of why inequality fell, in identifying the mechanisms of leveling. Very broadly speaking, after our species had embraced domesticated food production and its common corollaries, sedentism and state formation, and had acknowledged some form of hereditary property rights, upward pressure on material inequality effectively became a given, a fundamental feature of human social existence. Consideration of the finer points of how these pressures evolved over the course of centuries and millennia, especially the complex synergies between what we might crudely label coercion and market forces, would require a separate study of even greater length.

Finally, I discuss violent shocks (alongside alternative mechanisms) and their effects on material inequality but do not generally explore the inverse relationship, the question of whether, and if so, how, inequality helped generate these violent shocks. There are several reasons for my reluctance. Because high levels of inequality were a common feature of historical societies, it is not easy to explain specific shocks with reference to that contextual condition. Internal stability varied widely among contemporaneous societies having comparable levels of material inequality. Some societies that underwent violent ruptures were not particularly unequal: prerevolutionary China is one example.

Certain shocks were largely or entirely exogenous, most notably pandemics that leveled inequality by altering the balance of capital and labor. Even human-caused events such as the world wars profoundly affected societies that were not directly involved in these conflicts. Studies of the role of income inequality in precipitating civil war highlight the complexity of this relationship. None of this should be taken to suggest that domestic resource inequality did not have the potential to contribute to the outbreak of wars and revolutions or to state failure. It simply means that there is currently no compelling reason to assume a systematic causal connection between overall income and wealth inequality and the occurrence of violent shocks. As recent work has shown, analysis of more specific features that have a distributional dimension, such as competition within elite groups, may hold greater promise in accounting for violent conflict and breakdown.

For the purposes of this study, I treat violent shocks as discrete phenomena that act on material inequality. This approach is designed to evaluate the significance of such shocks as forces of leveling in the very long term, regardless of whether there is enough evidence to establish or deny a meaningful connection between these events and prior inequality. If my exclusive focus on one causal arrow, from shocks to inequality, encourages further engagement with the reverse, so much the better. It may never be feasible to produce a plausible account that fully endogenizes observable change in the distribution of income and wealth over time. Even so, possible feedback loops between inequality and violent shocks are certainly worth exploring in greater depth. My study can be no more than a building block for this larger project.

HOW IS IT DONE?

There are many ways of measuring inequality. In the following chapters, I generally use only the two most basic metrics, the Gini coefficient and percentage shares of total income or wealth. The Gini coefficient measures the extent to which the distribution of income or material assets deviates from perfect equality. If each member of a given population receives or holds exactly the same amount of resources, the Gini coefficient is 0; if one member controls everything and everybody else has nothing, it approximates 1. Thus the more unequal the distribution, the higher the Gini value. It can be expressed as a fraction of 1 or as a percentage; I prefer the former so as to distinguish it more clearly from income or wealth shares, which are generally given as percentages. Shares tell us which proportion of the total income or wealth in a given population is received or owned by a particular group that is defined by its position within the overall distribution. For example, the much-cited “1 percent” represent those units, often households, of a given population that enjoy higher incomes or dispose of greater assets than 99 percent of its units. Gini coefficients and income shares are complementary measures that emphasize different properties of a given distribution: whereas the former compute the overall degree of inequality, the latter provide much-needed insight into the shape of the distribution.
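To make these two metrics concrete, here is a minimal sketch in Python of how each might be computed from a list of household incomes; the function names and the closed-form Gini expression are illustrative choices of mine, not anything the book prescribes:

```python
def gini(incomes):
    """Gini coefficient of a list of positive incomes: 0 under perfect
    equality, approaching 1 as a single unit comes to control everything."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    # Closed form of the mean-absolute-difference definition,
    # using the rank-weighted sum of the sorted incomes.
    rank_weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * rank_weighted / (n * total) - (n + 1) / n


def top_share(incomes, fraction=0.01):
    """Share of total income received by the richest `fraction` of units,
    e.g. fraction=0.01 for the much-cited "1 percent"."""
    xs = sorted(incomes, reverse=True)
    k = max(1, round(len(xs) * fraction))
    return sum(xs[:k]) / sum(xs)
```

The sketch makes the complementarity noted above tangible: gini collapses the whole distribution into a single number, whereas top_share isolates what accrues to its upper tail.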

Both indices can be applied to different versions of the income distribution. Income prior to taxes and public transfers is known as “market” income, income after transfers is called “gross” income, and income net of all taxes and transfers is defined as “disposable” income. In the following, I refer only to market and disposable income. Whenever I use the term income inequality without further specification, I mean the former. For most of recorded history, market income inequality is the only type that can be known or estimated. Moreover, prior to the creation of extensive systems of fiscal redistribution in the modern West, differences in the distribution of market, gross, and disposable income were generally very small, much as in many developing countries today. In this book, income shares are invariably based on the distribution of market income. Both contemporary and historical data on income shares, especially those at the very top of the distribution, are usually derived from tax records that refer to income prior to fiscal intervention. On a few occasions, I also refer to ratios between shares or particular percentiles of the income distribution, an alternative measure of the relative weight of different brackets. More sophisticated indices of inequality exist but cannot normally be applied to long-term studies that range across highly diverse data sets.

The measurement of material inequality raises two kinds of problems: conceptual and evidential. Two major conceptual issues merit attention here. First, most available indices measure and express relative inequality based on the share of total resources captured by particular segments of the population. Absolute inequality, by contrast, focuses on the difference in the amount of resources that accrue to these segments. These two approaches tend to produce very different results. Consider a population in which the average household in the top decile of the income distribution earns ten times as much as an average household in the bottom decile, say, $100,000 versus $10,000. National income subsequently doubles while the distribution of income remains unchanged. The Gini coefficient and income shares remain the same as before. From this perspective, incomes have gone up without raising inequality in the process. Yet at the same time, the income gap between the top and bottom deciles has doubled, from $90,000 to $180,000, ensuring much greater gains for affluent than for low-income households. The same principle applies to the distribution of wealth. In fact, there is hardly any credible scenario in which economic growth will fail to cause absolute inequality to rise. Metrics of relative inequality can therefore be said to be more conservative in outlook as they serve to deflect attention from persistently growing income and wealth gaps in favor of smaller and multidirectional changes in the distribution of material resources. In this book, I follow convention in prioritizing standard measures of relative inequality such as the Gini coefficient and top income shares but draw attention to their limitations where appropriate.
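Continuing the illustrative sketch above, the worked example is easy to verify: doubling every income leaves relative inequality untouched while doubling the absolute gap between top and bottom.

```python
# Ten households: nine earning $10,000 and one earning $100,000.
before = [10_000] * 9 + [100_000]
after = [2 * x for x in before]  # national income doubles, shape unchanged

assert abs(gini(before) - gini(after)) < 1e-12  # relative inequality: identical
print(max(before) - min(before))  # 90000: absolute gap before growth
print(max(after) - min(after))    # 180000: absolute gap after growth
```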

A different problem stems from the sensitivity of income Gini coefficients to subsistence requirements and to levels of economic development. At least in theory, it is perfectly possible for a single person to own all the wealth that exists in a given population. However, nobody completely deprived of income would be able to survive. This means that the highest feasible Gini values for income are bound to fall short of the nominal ceiling of 1. More specifically, they are limited by the amount of resources in excess of those needed to meet minimum subsistence requirements. This constraint is particularly powerful in the low-income economies that were typical of most of human history and that still exist in parts of the world today. For instance, in a society having a GDP equivalent to twice minimal subsistence, the Gini coefficient could not rise above 0.5 even if a single individual somehow managed to monopolize all income beyond what everybody else needed for bare survival. At higher levels of output, the maximum degree of inequality is further held in check by changing definitions of what constitutes minimum subsistence and by largely impoverished populations’ inability to sustain advanced economies. Nominal Gini coefficients need to be adjusted accordingly to calculate what has been called the extraction rate, the extent to which the maximum amount of inequality that is theoretically possible in a given environment has been actualized. This is a complex issue that is particularly salient to any comparisons of inequality in the very long run but that has only very recently begun to attract attention. I address it in more detail in the appendix at the end of this book.
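The logic of this ceiling can be stated compactly: if a vanishingly small elite captures every unit of income above bare subsistence, the income Gini of a society whose mean income equals α times subsistence cannot exceed roughly (α − 1)/α, and dividing an observed Gini by that ceiling yields the extraction rate. A sketch under that large-population approximation (the function names are mine):

```python
def max_feasible_gini(alpha):
    """Approximate ceiling on the income Gini when mean income is `alpha`
    times bare subsistence: a tiny elite absorbs all surplus while
    everyone else stays at the subsistence minimum."""
    return (alpha - 1) / alpha


def extraction_rate(observed_gini, alpha):
    """Fraction of the theoretically feasible inequality actually realized."""
    return observed_gini / max_feasible_gini(alpha)


# The example from the text: GDP at twice minimal subsistence caps
# the Gini coefficient at 0.5, however rapacious the elite.
assert max_feasible_gini(2) == 0.5
```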

This brings me to the second category: problems related to the quality of the evidence. The Gini coefficient and top income shares are broadly congruent measures of inequality: they generally (though not invariably) move in the same direction as they change over time. Both are sensitive to the shortcomings of the underlying data sources. Modern Gini coefficients are usually derived from household surveys from which putative national distributions are extrapolated. This format is not particularly suitable for capturing the very largest incomes. Even in Western countries, nominal Ginis need to be adjusted upward to take full account of the actual contribution of top incomes. In many developing countries, moreover, surveys are often of insufficient quality to support reliable national estimates. In such cases, wide confidence intervals not only impede comparison between countries but also can make it hard to track change over time.

Attempts to measure the overall distribution of wealth face even greater challenges, not only in developing countries, where a sizeable share of elite assets is thought to be concealed offshore, but even in data-rich environments such as the United States. Income shares are usually computed from tax records, whose quality and characteristics vary greatly across countries and over time and that are vulnerable to distortions motivated by tax evasion. Low participation rates in lower-income countries and politically driven definitions of what constitutes taxable income introduce additional complexities. Despite these difficulties, the compilation and online publication of a growing amount of information on top income shares in the “World Wealth and Income Database” has put our understanding of income inequality on a more solid footing and redirected attention from somewhat opaque single-value metrics such as the Gini coefficient to more articulated indices of resource concentration.

All these problems pale in comparison to those we encounter once we seek to extend the study of income and wealth inequality farther back in time. Regular income taxes rarely predate the twentieth century. In the absence of household surveys, we have to rely on proxy data to calculate Gini coefficients. Prior to about 1800, income inequality across entire societies can be estimated only with the help of social tables, rough approximations of the incomes obtained by different parts of the population that were drawn up by contemporary observers or inferred, however tenuously, by later scholars. More rewardingly, a growing number of data sets that in parts of Europe reach back to the High Middle Ages have shed light on conditions in individual cities or regions. Surviving archival records of wealth taxes in French and Italian cities, taxes on housing rental values in the Netherlands, and income taxes in Portugal allow us to reconstruct the underlying distribution of assets and sometimes even incomes. So do early modern records of the dispersion of agricultural land in France and of the value of probate estates in England. In fact, Gini coefficients can fruitfully be applied to evidence that is much more remote in time. Patterns of landownership in late Roman Egypt; variation in the size of houses in ancient and early medieval Greece, Britain, Italy, and North Africa and in Aztec Mexico; the distribution of inheritance shares and dowries in Babylonian society; and even the dispersion of stone tools in Çatalhöyük, one of the earliest known proto-urban settlements in the world, established almost 10,000 years ago, have all been analyzed in this manner. Archaeology has enabled us to push back the boundaries of the study of material inequality into the Paleolithic at the time of the last Ice Age.

We also have access to a whole range of proxy data that do not directly document distributions but that are nevertheless known to be sensitive to changes in the level of income inequality. The ratio of land rents to wages is a good example. In predominantly agrarian societies, changes in the price of labor relative to the value of the most important type of capital tend to reflect changes in the relative gains that accrued to different classes: a rising index value suggests that landlords prospered at the expense of workers, causing inequality to grow. The same is true of a related measure, the ratio of mean per capita GDP to wages. The larger the nonlabor share in GDP, the higher the index, and the more unequal incomes were likely to be. To be sure, both methods have serious weaknesses. Rents and wages may be reliably reported for particular locales but need not be representative of larger populations or entire countries, and GDP guesstimates for any premodern society inevitably entail considerable margins of error. Nevertheless, such proxies are generally capable of giving us a sense of the contours of inequality trends over time. Real incomes represent a more widely available but somewhat less instructive proxy. In western Eurasia, real wages, expressed in grain equivalent, have now been traced back as far as 4,000 years. This very long-term perspective makes it possible to identify instances of unusually elevated real incomes for workers, a phenomenon plausibly associated with lowered inequality. Even so, information on real wages that cannot be contextualized with reference to capital values or GDP remains a very crude and not particularly reliable indicator of overall income inequality.

Recent years have witnessed considerable advances in the study of premodern tax records and the reconstruction of real wages, rent/wage ratios, and even GDP levels. It is not an exaggeration to say that much of this book could not have been written twenty or even ten years ago. The scale, scope, and pace of progress in the study of historical income and wealth inequality give us much hope for the future of this field. There is no denying that long stretches of human history do not admit even the most rudimentary quantitative analysis of the distribution of material resources. Yet even in these cases we may be able to identify signals of change over time. Elite displays of wealth are the most promising, and, indeed, often the only, marker of inequality. When archaeological evidence of lavish elite consumption in housing, diet, or burials gives way to more modest remains or signs of stratification fade altogether, we may reasonably infer a degree of equalization.

In traditional societies, members of the wealth and power elites were often the only ones who controlled enough income or assets to suffer large losses, losses that are visible in the material record. Variation in human stature and other physiological features can likewise be associated with the distribution of resources, although other factors, such as pathogen loads, also played an important role. The more we move away from data that document inequality in a more immediate manner, the more conjectural our readings are bound to become. Yet global history is simply impossible unless we are prepared to stretch. This book is an attempt to do just that.

In so doing we face an enormous gradient in documentation, from detailed statistics concerning the factors behind the recent rise in American income inequality to vague hints at resource imbalances at the dawn of civilization, with a wide array of diverse data sets in between. To join all this together in a reasonably coherent analytical narrative presents us with a formidable challenge: in no small measure, this is the true challenge of inequality invoked in the title of this introduction. I have chosen to structure each part of this book in what seems to me the best way to address this problem. The opening part follows the evolution of inequality from our primate beginnings to the early twentieth century and is thus organized in conventional chronological fashion (chapters 1-3).

This changes once we turn to the Four Horsemen, the principal drivers of violent leveling. In the parts devoted to the first two members of this quartet, war and revolution, my survey starts in the twentieth century and subsequently moves back in time. There is a simple reason for this. Leveling by means of mass mobilization warfare and transformative revolution has primarily been a feature of modernity. The “Great Compression” of the 1910s to 1940s not only produced by far the best evidence of this process but also represents and indeed constitutes it in paradigmatic form (chapters 4-5).

In a second step, I look for antecedents of these violent ruptures, moving from the American Civil War all the way back to the experience of ancient China, Rome, and Greece, as well as from the French Revolution to the countless revolts of the premodern era (chapters 6 and 8). I follow the same trajectory in my discussion of civil war in the final part of chapter 6, from the consequences of such conflicts in contemporary developing countries to the end of the Roman Republic. This approach allows me to establish models of violent leveling that are solidly grounded in modern data before I explore whether they can also be applied to the more distant past.

In Part V, on plagues, I employ a modified version of the same strategy by moving from the best-documented case, the Black Death of the Late Middle Ages (chapter 10), to progressively less well known examples, one of which (the Americas after 1492) happens to be somewhat more recent whereas the others are located in more ancient times (chapter 11). The rationale is the same: to establish the key mechanisms of violent leveling brought about by epidemic mass mortality with the help of the best available evidence before I search for analogous occurrences elsewhere.

Part IV, on state failure and systems collapse, takes this organizing principle to its logical conclusion. Chronology matters little in analyzing phenomena that were largely confined to premodern history, and there is nothing to be gained from following any particular time sequence. The dates of particular cases matter less than the nature of the evidence and the scope of modern scholarship, both of which vary considerably across space and time. I thus begin with a couple of well-attested examples before I move on to others that I discuss in less detail (chapter 9).

Part VI, on alternatives to violent leveling, is for the most part arranged by topic as I evaluate different factors (chapters 12-13) before I turn to counterfactual outcomes (chapter 14). The final part, which together with Part I frames my thematic survey, returns to a chronological format. Moving from the recent resurgence in inequality (chapter 15) to the prospects of leveling in the near and more distant future (chapter 16), it completes my evolutionary overview.

A study that brings together Hideki Tojo’s Japan and the Athens of Pericles or the Classic Lowland Maya and present-day Somalia may seem puzzling to some of my fellow historians, although less so, I hope, to readers from the social sciences. As I said, the challenge of exploring the global history of inequality is a serious one. If we want to identify forces of leveling across recorded history, we need to find ways to bridge the divide between different areas of specialization both within and beyond academic disciplines and to overcome huge disparities in the quality and quantity of the data. A long-term perspective calls for unorthodox solutions.

DOES IT MATTER?

All this raises a simple question. If it is so difficult to study the dynamics of inequality across very different cultures and in the very long run, why should we even try? Any answer to this question needs to address two separate but related issues: does economic inequality matter today, and why is its history worth exploring? Princeton philosopher Harry Frankfurt, best known for his earlier disquisition On Bullshit, opens his booklet On Inequality by disagreeing with Obama’s assessment quoted at the beginning of this introduction: “our most fundamental challenge is not the fact that the incomes of Americans are widely unequal. It is, rather, the fact that too many of our people are poor.” Poverty, to be sure, is a moving target: someone who counts as poor in the United States need not seem so in central Africa. Sometimes poverty is even defined as a function of inequality: in the United Kingdom, the official poverty line is set as a fraction of median income, although absolute standards are more common, such as the threshold of $1.25 in 2005 prices used by the World Bank or reference to the cost of a basket of consumer goods in America.

Nobody would disagree that poverty, however defined, is undesirable: the challenge lies in demonstrating that income and wealth inequality as such has negative effects on our lives, rather than the poverty or the great fortunes with which it may be associated.

The most hard-nosed approach concentrates on inequality’s effect on economic growth. Economists have repeatedly noted that it can be hard to evaluate this relationship and that the theoretical complexity of the problem has not always been matched by the empirical specification of existing research. Even so, a number of studies argue that higher levels of inequality are indeed associated with lower rates of growth. For instance, lower disposable income inequality has been found to lead not only to faster growth but also to longer growth phases. Inequality appears to be particularly harmful to growth in developed economies. There is even some support for the much-debated thesis that high levels of inequality among American households contributed to the credit bubble that helped trigger the Great Recession of 2008, as lower-income households drew on readily available credit (in part produced by wealth accumulation at the top) to borrow for the sake of keeping up with the consumption patterns of more affluent groups. Under more restrictive conditions of lending, by contrast, wealth inequality is thought to disadvantage low-income groups by blocking their access to credit.

Among developed countries, higher inequality is associated with less economic mobility across generations. Because parental income and wealth are strong indicators of educational attainment as well as earnings, inequality tends to perpetuate itself over time, and all the more so the higher it is. The disequalizing consequences of residential segregation by income are a related issue. In metropolitan areas in the United States since the 1970s, population growth in high- and low-income areas alongside shrinking middle-income areas has led to increasing polarization. Affluent neighborhoods in particular have become more isolated, a development likely to precipitate concentration of resources, including locally funded public services, which in turn affects the life chances of children and impedes intergenerational mobility.

In developing countries, at least certain kinds of income inequality increase the likelihood of internal conflict and civil war. High-income societies contend with less extreme consequences. In the United States, inequality has been said to act on the political process by making it easier for the wealthy to exert influence, although in this case we may wonder whether it is the presence of very large fortunes rather than inequality per se that accounts for this phenomenon. Some studies find that high levels of inequality are correlated with lower levels of self-reported happiness. Only health appears to be unaffected by the distribution of resources as such, as opposed to income levels: whereas health differences generate income inequality, the reverse remains unproven.

What all these studies have in common is that they focus on the practical consequences of material inequality, on instrumental reasons for why it might be deemed a problem. A different set of objections to a skewed distribution of resources is grounded in normative ethics and notions of social justice, a perspective well beyond the scope of my study but deserving of greater attention in a debate that is all too often dominated by economic concerns. Yet even on the more limited basis of purely instrumental reasoning there is no doubt that at least in certain contexts, high levels of inequality and growing disparities in income and wealth are detrimental to social and economic development.

But what constitutes a “high” level, and how do we know whether “growing” imbalances are a novel feature of contemporary society or merely bring us closer to historically common conditions? Is there, to use François Bourguignon’s term, a “normal” level of inequality to which countries that are experiencing widening inequality should aspire to return? And if, as in many developed economies, inequality is higher now than it was a few decades ago but is lower than a century ago, what does this mean for our understanding of the determinants of the distribution of income and wealth?

Inequality either grew or held fairly steady for much of recorded history, and significant reductions have been rare. Yet policy proposals designed to stem or reverse the rising tide of inequality tend to show little awareness or appreciation of this historical background. Is that as it should be? Perhaps our age has become so fundamentally different, so completely untethered from its agrarian and undemocratic foundations, that history has nothing left to teach us. And indeed, there is no question that much has changed: low-income groups in rich economies are generally better off than most people were in the past, and even the most disadvantaged residents of the least developed countries live longer than their ancestors lived. The experience of life at the receiving end of inequality is in many ways very different from what it used to be.

But it is not economic or more broadly human development that concerns us here, rather how the fruits of civilization are distributed, what causes them to be distributed the way they are, and what it would take to change these outcomes. I wrote this book to show that the forces that used to shape inequality have not in fact changed beyond recognition. If we seek to rebalance the current distribution of income and wealth in favor of greater equality, we cannot simply close our eyes to what it took to accomplish this goal in the past. We need to ask whether great inequality has ever been alleviated without great violence, how more benign influences compare to the power of this Great Leveler, and whether the future is likely to be very different, even if we may not like the answers.

Part I

A BRIEF HISTORY OF INEQUALITY

Chapter 1

THE RISE OF INEQUALITY

PRIMORDIAL LEVELING

Has inequality always been with us? Our closest nonhuman relatives in the world today, the African great apes (gorillas, chimpanzees, and bonobos), are intensely hierarchical creatures. Adult gorilla males divide into a dominant few endowed with harems of females and many others having no consorts at all. Silverbacks dominate not only the females in their groups but also any males who stay on after reaching maturity. Chimpanzees, especially but not only males, expend tremendous energy on status rivalry. Bullying and aggressive dominance displays are matched by a wide range of submission behaviors by those on the lower rungs of the pecking order. In groups of fifty or a hundred, ranking is a central and stressful fact of life, for each member occupies a specific place in the hierarchy but is always looking for ways to improve it. And there is no escape: because males who leave their group to avoid overbearing dominants run the risk of being killed by males in other groups, they tend to stay put and compete or submit. Echoing the phenomenon of social circumscription that has been invoked to explain the creation of hierarchy among humans, this powerful constraint serves to shore up inequality.

Their closest relatives, the bonobos, may present a gentler image to the world but likewise feature alpha males and females. Considerably less violent and less intent on bullying than chimpanzees, they nevertheless maintain clear hierarchical rankings. Although concealed ovulation and the lack of systematic domination of females by males reduce violent conflict over mating opportunities, hierarchy manifests itself in feeding competition among males.

Across these species, inequality is expressed in unequal access to food sources, the closest approximation of human-style income disparities, and, above all, in terms of reproductive success. Dominance hierarchy, topped by the biggest, strongest, and most aggressive males, which consume the most and have sexual relations with the most females, is the standard pattern.

It is unlikely that these shared characteristics evolved only after these three species had branched off from the ancestral line, a process that commenced about 11 million years ago with the emergence of gorillas and that continued 3 million years later with the split of the common ancestor of chimpanzees and bonobos from the earliest forerunners of what were to evolve into australopiths and, eventually, humans. Even so, marked social expressions of inequality may not always have been common among primates. Hierarchy is a function of group living, and our more distant primate relatives, who branched off earlier, are now less social and live either on their own or in very small or transient groups. This is true both of gibbons, whose ancestors split from those of the great apes some 22 million years ago, and of the orangutans, the first of the great apes to undergo speciation about 17 million years ago and now confined to Asia. Conversely, hierarchical sociality is typical of the African genera of this taxonomic family, including our own. This suggests that the most recent common ancestor of gorillas, chimpanzees, bonobos, and humans already displayed some version of this trait, whereas more distant precursors need not have done.

Analogy to other primate species may be a poor guide to inequality among earlier hominids and humans. The best proxy evidence we have is skeletal data on sexual size dimorphism, the extent to which mature members of one sex, in this case, males, are taller, heavier, and stronger than those of the other. Among gorillas, as among sea lions, intense inequality among males with and without harems as well as between males and females is associated with a high degree of male-biased size dimorphism. Judging from the fossil record, prehuman hominids, australopiths and paranthropi, reaching back more than 4 million years, appear to have been more dimorphic than humans. If the orthodox position, which has recently come under growing pressure, can be upheld, some of the earliest species, Australopithecus afarensis and anamensis, which emerged 3 to 4 million years ago, were defined by a male body mass advantage of more than 50 percent, whereas later species occupied an intermediate position between them and humans. With the advent of larger-brained Homo erectus more than 2 million years ago, sexual size dimorphism had already declined to the relatively modest amount we still observe today. Insofar as the degree of dimorphism was correlated with the prevalence of agonistic male-on-male competition for females or shaped by female sexual selection, reduced sex differences may be a sign of lesser reproductive variance among males. On this reading, evolution attenuated inequality both among males and between the sexes. Even so, higher rates of reproductive inequality for men than for women have persisted alongside moderate levels of reproductive polygyny.

Other developments that may have begun as long as 2 million years ago are also thought to have fostered greater equality. Changes in the brain and in physiology that promoted cooperative breeding and feeding would have countered aggression by dominants and would have softened hierarchies in larger groups. Innovations in the application of violence may have contributed to this process. Anything that helped subalterns resist dominants would have curtailed the powers of the latter and thus diminished overall inequality. Coalition-building among lower-status men was one means to this end, use of projectile weapons another. Fights at close quarters, whether with hands and teeth or with sticks and rocks, favored stronger and more aggressive men. Weapons began to play an equalizing role after they could be deployed over a greater distance.

Some 2 million years ago, anatomical changes in the shoulder made it possible for the first time to throw stones and other objects in an effective manner, a skill unavailable to earlier species and to nonhuman primates today. This adaptation not only improved hunting abilities but also made it easier for gammas to challenge alphas. The manufacturing of spears was the next step, and enhancements such as fire-hardened tips and, later, stone tips followed. Controlled use of fire dates back perhaps 800,000 years, and heat treatment technology is at least 160,000 years old. The appearance of darts or arrow tips made of stone, first attested about 70,000 years ago in South Africa, was merely the latest phase in a drawn-out process of projectile weapons development. No matter how primitive they may seem to modern observers, such tools privileged skill over size, strength, and aggressiveness and encouraged first strikes and ambushes as well as cooperation among weaker individuals. The evolution of cognitive skills was a vital complement necessary for more accurate throwing, improved weapons design, and more reliable coalition building. Full language capabilities, which would have facilitated more elaborate alliances and reinforced notions of morality, may date back as few as 100,000 or as many as 300,000 years. Much of the chronology of these social changes remains unclear: they may have been strung out over the better part of the last 2 million years or may have been more concentrated among anatomically modern humans, our own species of Homo sapiens, which arose in Africa at least 200,000 years ago.

What matters most in the present context is the cumulative outcome, the improved ability of lower-status individuals to confront alpha males in ways that are not feasible among nonhuman primates. When dominants became embedded in groups whose members were armed with projectiles and capable of balancing their influence by forming coalitions, overt dominance through brute force and intimidation was no longer a viable option. If this conjecture, for this is all it can be, is correct, then violence and, more specifically, novel strategies of organizing and threatening violent action, played an important and perhaps even critical role in the first great leveling in human history. By that time, human biological and social evolution had given rise to an egalitarian equilibrium. Groups were not yet large enough, productive capabilities not yet differentiated enough, and intergroup conflict and territoriality not yet developed enough to make submission to the few seem the least bad option for the many. Whereas animalian forms of domination and hierarchy had been eroded, they had not yet been replaced by new forms of inequality based on domestication, property, and war. That world has been largely but not completely lost. Defined by low levels of resource inequality and a strong egalitarian ethos, the few remaining foraging populations in the world today give us a sense, however limited, of what the dynamics of equality in the Middle and Upper Paleolithic may have looked like.

Powerful logistical and infrastructural constraints help contain inequality among hunter-gatherers. A nomadic lifestyle that does not feature pack animals severely limits the accumulation of material possessions, and the small size and fluid and flexible composition of foraging groups are not conducive to stable asymmetric relationships beyond basic power disparities of age and gender. Moreover, forager egalitarianism is predicated on the deliberate rejection of attempts to dominate. This attitude serves as a crucial check to the natural human propensity to form hierarchies: active equalization is employed to maintain a level playing field. Numerous means of enforcing egalitarian values have been documented by anthropologists, graduated by severity. Begging, scrounging, and stealing help ensure a more equal distribution of resources. Sanctions against authoritarian behavior and self-aggrandizement range from gossip, criticism, ridicule, and disobedience to ostracism and even physical violence, including homicide. Leadership consequently tends to be subtle, dispersed among multiple group members, and transient; the least assertive have the best chances to influence others. This distinctive moral economy has been called “reverse dominance hierarchy”: operative among adult men (who commonly dominate women and children), it represents the ongoing and preemptive neutralization of authority.

Among the Hadza, a group of a few hundred hunter-gatherers in Tanzania, camp members forage individually and strongly favor their own households when distributing the acquired food. At the same time, food sharing beyond one’s own household is expected and common, especially when resources can readily be spotted by others. Hadza may try to conceal honey, which is easier to hide than other foods, but if found out, they are compelled to share. Scrounging is tolerated and widespread. Thus even though individuals clearly prefer to keep more for themselves and their immediate kin, norms interfere: sharing is common because demands to share are hard to resist in the absence of domination. Large perishable items such as big game may even be shared beyond the camp group. Saving is not valued, to the extent that available resources tend to be consumed without delay and not even shared with people who happen to be absent at that moment. As a result, the Hadza have only minimal private possessions: jewelry, clothes, a digging stick, and sometimes a cooking pot for women; a bow and arrows, clothes and jewelry, and perhaps a few tools for men. Many of these goods are not particularly durable, and owners do not form strong attachments to them. Property beyond these basic items does not exist, and territory is not defended. The lack or dispersion of authority makes it hard to arrive at group decisions, let alone enforce them. In all these respects, the Hadza are quite representative of extant foraging groups more generally.

A foraging mode of subsistence and an egalitarian moral economy combine into a formidable obstacle to any form of development for the simple reason that economic growth requires some degree of inequality in income and consumption to encourage innovation and surplus production. Without growth, there was hardly any surplus to appropriate and pass on. The moral economy prevented growth, and the lack of growth prevented the production and concentration of surplus. This must not be taken to suggest that foragers practice some form of communism: consumption is not equalized, and individuals differ not just in terms of their somatic endowments but also with respect to their access to support networks and material resources. As I show in the next section, forager inequality is not nonexistent but merely very low compared to inequality in societies that rely on other modes of subsistence.

We also need to allow for the possibility that contemporary hunter-gatherers may differ in important ways from our pre-agrarian ancestors. Surviving forager groups are utterly marginalized and confined to areas that are beyond the reach of, or of little interest to, farmers and herders, environments that are well suited to a lifestyle that eschews the accumulation of material resources and firm claims to territory. Prior to the domestication of plants and animals for food production, foragers were much more widely spread out across the globe and had access to more abundant natural resources. In some cases, moreover, contemporary foraging groups may respond to a dominant world of more hierarchical farmers and pastoralists, defining themselves in contradistinction to outside norms. Remaining foragers are not timeless or “living fossils,” and their practices need to be understood within specific historical contexts.

For this reason, prehistoric populations need not always have been as egalitarian as the experience of contemporary hunter-gatherers might suggest. Observable material inequalities in burial contexts that date from before the onset of the Holocene, which began about 11,700 years ago, are rare but do exist. The most famous example of unearned status and inequality comes from Sungir, a Pleistocene site 120 miles north of Moscow whose remains date from about 30,000 to 34,000 years ago, a time corresponding to a relatively mild phase of the last Ice Age. It contains the remains of a group of hunters and foragers who killed and consumed large mammals such as bison, horse, reindeer, antelope, and especially mammoth alongside wolf, fox, brown bear, and cave lion.

Three human burials stand out. One features an adult man who was buried with some 3,000 beads made of mammoth ivory that had probably been sewn onto his fur clothing as well as around twenty pendants and twenty-five mammoth ivory rings. A separate grave was the final resting place of a girl of about ten years and a roughly twelve-year-old boy. Both children’s clothing was adorned with an even larger number of ivory beads, about 10,000 overall, and their grave goods included a wide range of prestige items such as spears made of straightened mammoth tusk and various art objects.

Massive effort must have been expended on these deposits: modern scholars have estimated that it would have taken anywhere from fifteen to forty-five minutes to carve a single bead, which translates to a total of 1.6 to 4.7 years of work for one person carving forty hours a week. A minimum of seventy-five arctic foxes would have had to be caught to supply the 300 canines attached to a belt and headgear in the children’s grave, and considering the difficulty of extracting the teeth intact, the actual number may well have been higher. Although a substantial spell of relative sedentism would have given the members of this group enough spare time to accomplish all this, the question remains why they would have wished to do so in the first place. These three persons do not appear to have been buried with everyday clothing and objects. That the beads for the children were smaller than those for the man implies that these beads had been manufactured specifically for the children, whether in life or, more likely, just for their burial. For reasons unknown to us, these individuals were considered special. Yet the two children were too young to have earned their privileged treatment: perhaps they owed it to family ties to someone who mattered more than others. The presence of possibly fatal injuries in both the man and the boy and of femoral shortening that would have disabled the girl in life merely adds to the mystery.
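The arithmetic behind that estimate is easy to verify. Here is a back-of-the-envelope check (a sketch of our own; the roughly 13,000 beads and the per-bead carving times come from the paragraph above, and the work year is simply forty hours times fifty-two weeks):

```python
# Sanity check of the Sungir bead-carving estimate.
# Figures from the text: ~13,000 beads in total (some 3,000 for the man,
# about 10,000 for the two children), at 15-45 minutes per bead.
beads = 3_000 + 10_000
hours_per_year = 40 * 52  # a 40-hour week, year round

for minutes_per_bead in (15, 45):
    total_hours = beads * minutes_per_bead / 60
    print(f"{minutes_per_bead} min/bead -> {total_hours / hours_per_year:.1f} person-years")

# Prints 1.6 person-years at the low estimate and 4.7 at the high one,
# matching the range quoted in the text.
```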

Although the splendor of the Sungir burials has so far remained without parallel in the Paleolithic record, other rich graves have been found farther west. In Dolní Věstonice in Moravia, at roughly the same time, three individuals were buried with intricate headgear and resting on ocher-stained ground. Later examples are somewhat more numerous. The cave of Arene Candide on the Ligurian coast housed a deep pit grave for a lavishly adorned adolescent male put to rest on a bed of red ocher about 28,000 or 29,000 years ago. Hundreds of perforated shells and deer canines found around his head would originally have been attached to some organic headgear. Pendants made of mammoth ivory, four batons made of elk antlers, and an exceptionally long blade made of exotic flint that had been placed in his right hand added to the assemblage.

A young woman buried in Saint-Germain-la-Rivière some 16,000 years ago bore ornaments of shell and teeth: the latter, about seventy perforated red deer canines, must have been imported from 200 miles away. About 10,000 years ago, in the early Holocene but in a foraging context, a three-year-old child was laid to rest with 1,500 shell beads at the La Madeleine rock shelter in the Dordogne.

It is tempting to interpret these findings as the earliest harbingers of inequalities to come. Evidence of advanced and standardized craft production, time investment in highly repetitive tasks, and the use of raw materials sourced from far away offers us a glimpse of economic activities more advanced than those found among contemporary hunter-gatherers. It also hints at social disparities not normally associated with a foraging existence: lavish graves for children and adolescents point to ascribed and perhaps even inherited status. The existence of hierarchical relations is more difficult to infer from this material but is at least a plausible option.

But there is no sign of durable inequalities. Increases in complexity and status differentiation appear to have been temporary in nature. Egalitarianism need not be a stable category: social behavior could vary depending on changing circumstances or even recurring seasonal pressures. And although the earliest coastal adaptations, cradles of social evolution in which access to maritime food resources such as shellfish encouraged territoriality and more effective leadership, may reach back as far as 100,000 years, there is, at least as yet, no related evidence of emergent hierarchy and consumption disparities. For all we can tell, social or economic inequality in the Paleolithic remained sporadic and transient.

THE GREAT DISEQUALIZATION

Inequality took off only after the last Ice Age had come to an end and climatic conditions entered a period of unusual stability. The Holocene, the first interglacial warm period for more than 100,000 years, created an environment that was more favorable to economic and social development. As these improvements allowed humans to extract more energy and grow in numbers, they also laid the ground for an increasingly unequal distribution of power and material resources. This led to what I call the “Great Disequalization,” a transition to new modes of subsistence and new forms of social organization that eroded forager egalitarianism and replaced it with durable hierarchies and disparities in income and wealth.

For these developments to occur, there had to be productive assets that could be defended against encroachment and from which owners could draw a surplus in a predictable manner. Food production by means of farming and herding fulfills both requirements and came to be the principal driver of economic, social, and political change.

However, domestication of plants and animals was not an indispensable prerequisite. Under certain conditions, foragers were also able to exploit undomesticated natural resources in an analogous fashion. Territoriality, hierarchy, and inequality could arise where fishing was feasible or particularly productive only in certain locations. This phenomenon, which is known as maritime or riverine adaptation, is well documented in the ethnographic record. From about 500 CE, pressure on fish stocks as a result of population growth along the West Coast of North America from Alaska to California encouraged foraging populations to establish control over highly localized salmon streams. This was sometimes accompanied by a shift from mostly uniform dwellings to stratified societies that featured large houses for chiefly families, clients, and slaves.

Detailed case studies have drawn attention to the close connection between resource scarcity and the emergence of inequality. From about 400 to 900 CE, the site of Keatley Creek in British Columbia housed a community of a few hundred members near the Fraser River that capitalized on the local salmon runs. Judging from the archaeological remains, salmon consumption declined around 800, and mammalian meat took its place. At this time, signs of inequality appear in the record. A large share of the fish bone recovered from the pits of the largest houses comes from mature Chinook and sockeye salmon, a prize catch rich in fat and calories. Prestige items such as rare types of stone are found there. Two of the smallest houses, by contrast, contained bones of only younger and less nutritious fish. As in many other societies at this level of complexity, inequality was both celebrated and mitigated by ceremonial redistribution: roasting pits that were large enough to prepare food for sizable crowds suggest that the rich and powerful organized feasts for the community. A thousand years later, potlatch rituals in which leaders competed among themselves through displays of generosity were a common feature across the Pacific Northwest. Similar changes took place at the Bridge River site in the same area: from about 800, as the owners of large buildings began to accumulate prestige goods and abandoned communal food preparation outdoors, poorer residents attached themselves to these households, and inequality became institutionalized.

On other occasions, it was technological progress that precipitated disequalizing social and economic change. For thousands of years, the Chumash on the Californian coast, in what is now Santa Barbara and Ventura counties, had lived as egalitarian foragers who used simple boats and gathered acorns. Around 500 to 700, the introduction of large oceangoing plank canoes that could carry a dozen men and venture more than sixty miles out to sea allowed the Chumash to catch larger fish and to establish themselves as middlemen in the shell trade along the coast. They sold flint obtained from the Channel Islands to inland groups in exchange for acorns, nuts, and edible grasses. This generated a hierarchical order in which polygamous chiefs controlled canoes and access to territory, led their men in war, and presided over ritual ceremonies. In return, they received payments of food and shells from their followers.

In such environments, foraging societies could attain relatively high levels of complexity. As reliance on concentrated local resources grew, mobility declined, and occupational specialization, strictly defined ownership of assets, perimeter defense, and intense competition between neighboring groups that commonly involved the enslavement of captives fostered hierarchy and inequality.

Among foragers, adaptations of this kind were possible only in specific ecological niches and did not normally spread beyond them. Only the domestication of food resources had the potential to transform economic activity and social relations on a global scale: in its absence, stark inequalities might have remained confined to small pockets along coasts and rivers, surrounded by a whole world of more egalitarian foragers. But this was not to be.

A variety of edible plants began to be domesticated on different continents, first in Southwest Asia about 11,500 years ago, then in China and South America 10,000 years ago, in Mexico 9,000 years ago, in New Guinea more than 7,000 years ago, and in South Asia, Africa, and North America some 5,000 years ago. The domestication of animals, when it did occur, sometimes preceded and sometimes followed these innovations. The shift from foraging to farming could be a drawn-out process that did not always follow a linear trajectory.

This was especially true of the Natufian culture and its Pre-Pottery Neolithic successors in the Levant, the first to witness this transition. From about 14,500 years ago, warmer and wetter weather allowed regional forager groups to grow in size and to operate from more permanent settlements, hunting abundant game and collecting wild cereal grains in sufficient quantities to require at least small storage facilities. The material evidence is very limited but shows signs of what leading experts have called an “incipient social hierarchy.” Archaeologists have discovered one larger building that might have served communal uses and a few special basalt mortars that would have taken great effort to manufacture. According to one count, about 8 percent of the recovered skeletons from the Early Natufian period, about 14,500 to 12,800 years ago, wore seashells, sometimes brought in from hundreds of miles away, and decorations made of bone or teeth. At one site, three males were buried with shell headdresses, one of them fringed with shells four rows deep. Only a few graves contained stone tools and figurines. The presence of large roasting pits and hearths may point to redistributive feasts of the type held much later in the American Northwest.

Yet whatever degree of social stratification and inequality had developed under these benign environmental conditions faded during a cold phase from about 12,800 to 11,700 years ago known as the Younger Dryas, when the remaining foragers returned to a more mobile lifestyle as local resources dwindled or became less predictable. The return to climatic stability around 11,700 years ago coincided with the earliest evidence for the cultivation of wild crops such as einkorn and emmer wheat and barley. During what is known as the early Pre-Pottery Neolithic (about 11,500 to 10,500 years ago), settlements expanded and food eventually came to be stored in individual households, a practice that points to changing concepts of ownership. That some exotic materials such as obsidian appeared for the first time may reflect a desire to express and shore up elevated status.

The later Pre-Pottery Neolithic (about 10,500 to 8,300 years ago) has yielded more specific information. About 9,000 years ago, the village of Çayönü in southeastern Turkey comprised different zones whose buildings and finds differed in size and quality. Larger and better-built structures feature unusual and exotic artifacts and tend to be located in close proximity to a plaza and a temple. Whereas only a small share of graves include obsidian, beads, or tools, three of the four richest in-house burials in Çayönü took place in houses next to the plaza.

All of this may be regarded as markers of elite standing. There can be no doubt that most of the inequality we observe in the following millennia was made possible by farming. But other paths existed. I have already mentioned aquatic adaptations that allowed substantial political and economic disparities to arise in the absence of food domestication. In other cases, the introduction of the domesticated horse as a conveyance could have disequalizing effects even in the absence of food production. In the eighteenth and nineteenth centuries, the Comanche in the borderlands of the American Southwest formed a warrior culture that relied on horses of European origin to conduct warfare and raids over long distances. Buffalo and other wild mammals were their principal food source, complemented by gathered wild plants and maize obtained via trade or plunder. These arrangements supported high levels of inequality: captive boys were employed to tend to the horses of the rich, and the number of horses owned divided Comanche households rather sharply into the “rich” (tsaanaakatu), the “poor” (tahkapu), and the “very poor” (tubitsi tahkapu).

More generally, foraging, horticultural, and agricultural societies were not always systematically associated with different levels of inequality: some foraging groups could be more unequal than some farming communities. A survey of 258 Native American societies in North America suggests that the size of the surplus, not domestication as such, was the key determinant of levels of material inequality: whereas two-thirds of societies that had no or hardly any surplus did not manifest resource inequality, four in five of those that generated moderate or large surpluses did. This correlation is much stronger than that between mode of subsistence and inequality.

A collaborative study of twenty-one small-scale societies at different levels of development (hunter-gatherers, horticulturalists, herders, and farmers) and in different parts of the world identifies two crucial determinants of inequality: ownership rights in land and livestock and the ability to transmit wealth from one generation to the next.

Researchers looked at three different types of wealth: embodied (mostly body strength and reproductive success), relational (exemplified by partners in labor), and material (household goods, land, and livestock). In their sample, embodied endowments were the most important wealth category among foragers and horticulturalists, and material wealth was the least important one, whereas the opposite was true of herders and farmers. The relative weight of different wealth classes is an important factor mediating the overall degree of inequality. Physical constraints on embodied wealth are relatively stringent, especially for body size and somewhat less so for strength, hunting returns, and reproductive success. Relational wealth, though more flexible, was also more unevenly distributed among farmers and pastoralists, and measures of inequality in land and livestock in these two groups reached higher levels than those for utensils or boat shares among foragers and horticulturalists. The combination of diverse inequality constraints that apply to different types of wealth and the relative significance of particular types of wealth accounts for observed differences by mode of subsistence. Average composite wealth Gini coefficients were as low as 0.25 to 0.27 for hunter-gatherers and horticulturalists but were much higher for herders (0.42) and agriculturalists (0.48). For material wealth alone, the main divide appears to lie between foragers (0.36) and all others (0.51 to 0.57).
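For readers unfamiliar with the measure, a Gini coefficient runs from 0 (every household holds exactly the same wealth) to 1 (a single household holds everything). The sketch below, with invented livestock figures purely for illustration, shows how such a coefficient is computed:

```python
def gini(values):
    """Gini coefficient of a list of non-negative wealth values.

    Uses the standard rank-based formula on the sorted values:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    rank_weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * rank_weighted / (n * total) - (n + 1) / n

# Hypothetical herd sizes across ten households:
print(round(gini([0, 1, 1, 2, 3, 5, 8, 12, 20, 48]), 2))
# A skewed distribution like this yields about 0.64, above the
# 0.42-0.48 composite averages reported for herders and farmers.
```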

Transmissibility of wealth is another crucial variable. The degree of intergenerational wealth transmission was about twice as high for farmers and herders as for the others, and the material possessions available to them were much more suitable for transmission than were the assets of foragers and horticulturalists. These systematic differences exercise a strong influence on the inequality of life chances, measured in terms of the likelihood that a child of parents in the top composite wealth decile ends up in the same decile compared to that of a child of parents in the poorest decile. Defined in this way, intergenerational mobility was generally moderate: even among foragers and horticulturalists, offspring of the top decile were at least three times as likely to reproduce this standing as those of the bottom decile were to ascend to it.

For farmers, however, the odds were much better (about eleven times), and they were better still for herders (about twenty times). These discrepancies can be attributed to two factors. About half of this effect is explained by technology, which determines the relative importance and characteristics of different wealth types. Institutions governing the mode of wealth transmission account for the other half, as agrarian and pastoralist norms favor vertical transmission to kin.

According to this analysis, inequality and its persistence over time has been the result of a combination of three factors: the relative importance of different classes of assets, how suitable they are for passing on to others, and actual rates of transmission. Thus groups in which material wealth plays a minor role and does not readily lend itself to transmission and in which inheritance is discouraged are bound to experience lower levels of overall inequality than groups in which material wealth is the dominant asset class, is highly transmissible, and is permitted to be left to the next generation.

In the long run, transmissibility is critical: if wealth is passed on between generations, random shocks related to health, parity, and returns on capital and labor that create inequality will be preserved and accumulate over time instead of allowing distributional outcomes to regress to the mean.
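This dynamic is easy to make concrete with a toy simulation of our own devising (an illustration, not the study’s model): give every household identical starting wealth, apply random multiplicative shocks each generation, and either let heirs keep what the previous generation accumulated or reset each generation to fresh draws. Under these assumptions, inherited shocks compound while non-inherited ones do not:

```python
import random

def gini(values):
    # Same rank-based Gini formula as in the sketch above.
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    return 2 * sum((i + 1) * x for i, x in enumerate(xs)) / (n * total) - (n + 1) / n

def simulate(generations, inherit, households=1000, shock=0.4, seed=42):
    """Wealth Gini after repeated random shocks, with or without inheritance.

    The shock size and household count are arbitrary choices for illustration.
    """
    rng = random.Random(seed)
    wealth = [1.0] * households
    for _ in range(generations):
        if inherit:
            # Shocks compound on top of what the previous generation left behind.
            wealth = [w * rng.lognormvariate(0, shock) for w in wealth]
        else:
            # Each generation starts over: shocks never accumulate.
            wealth = [rng.lognormvariate(0, shock) for _ in wealth]
    return gini(wealth)

for g in (1, 5, 20):
    print(g, round(simulate(g, inherit=True), 2), round(simulate(g, inherit=False), 2))
# With inheritance the Gini climbs generation after generation;
# without it, the Gini hovers at the same modest level throughout.
```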

In keeping with the observations made in the aforementioned survey of Native American societies, the empirical findings derived from this sample of twenty-one small-scale societies likewise suggest that domestication is not a sufficient precondition for significant disequalization. Reliance on defensible natural resources appears to be a more critical factor, because these can generally be bequeathed to the next generation. The same is true of investments such as plowing, terracing, and irrigation. The heritability of such productive assets and their improvements fosters inequality in two ways: by enabling it to increase over time and by reducing intergenerational variance and mobility.

A much broader survey of more than a thousand societies at different levels of development confirms the central role of transmission. According to this global data set, about a third of simple forager societies have inheritance rules for movable property, but only one in twelve recognizes the transmission of real estate. By contrast, almost all societies that practice intensive forms of agriculture are equipped with rules that cover both. Complex foragers and horticulturalists occupy an intermediate position.

Inheritance presupposes the existence of property rights. We can only conjecture the circumstances of their creation: Samuel Bowles has argued that farming favored rights in property that were impractical or unfeasible for foragers because farm resources such as crops, buildings, and animals could easily be delimited and defended, prerequisites not shared by the dispersed natural resources on which foragers tend to rely. Exceptions such as aquatic adaptations and horse cultures are fully consistent with this explanation.

. . .

from

The Great Leveler. Violence and the History of Inequality from the Stone Age to the Twenty-First Century

by Walter Scheidel


AWAKENING. THE SCIENCE OF MEDITATION. How to Change Your Brain, Mind and Body – Daniel Goleman and Richard J. Davidson.

“To alleviate suffering and promote flourishing by integrating science with contemplative practice.”

An altered trait, a new characteristic that arises from a meditation practice, endures apart from meditation itself. Altered traits shape how we behave in our daily lives, not just during or immediately after we meditate. As meditation trains the mind, it reshapes the brain.

The most compelling impacts of meditation are not better health or sharper business performance but, rather, a further reach toward our better nature. These deep changes are external signs of strikingly different brain function.

Now we can share scientific confirmation of these profound alterations of being, a transformation that dramatically ups the limits on psychological science’s ideas of human possibility. We offer a clear-eyed view based on hard science, sifting out results that are not nearly as compelling as the claims made for them.

As with gaining skill in a given sport, finding a meditation practice that appeals to you and sticking with it will have the greatest benefits. Just find one to try, decide on the amount of time you can realistically practice daily, even as short as a few minutes, try it for a month, and see how you feel after those thirty days.

More than forty years ago, two friends and collaborators at Harvard, Daniel Goleman and Richard Davidson were unusual in arguing for the benefits of meditation. Now, as mindfulness and other brands of meditation become ever more popular, promising to fix everything from our weight to our relationship to our professional career, these two bestselling authors sweep away the misconceptions around these practices and show how smart practice can change our personal traits and even our genome for the better.

Drawing on cutting-edge research, Goleman and Davidson expertly reveal what we can learn from a one-of-a-kind data pool that includes world-class meditators. They share for the first time remarkable findings that show how meditation without drugs or high expense can cultivate qualities such as selflessness, equanimity, love and compassion, and redesign our neural circuitry.

Demonstrating two master thinkers at work, The Science of Meditation explains precisely how mind training benefits us. More than daily doses or sheer hours, we need smart practice, including crucial ingredients such as targeted feedback from a master teacher and a more spacious worldview.

Gripping in its storytelling and based on a lifetime of thought and action, this is one of those rare books that has the power to change us at the deepest level.

Chapter 1

The Deep Path and the Wide

One bright fall morning, Steve Z, a lieutenant colonel working in the Pentagon, heard a “crazy, loud noise,” and instantly was covered in debris as the ceiling caved in, knocking him to the floor, unconscious. It was September 11, 2001, and a passenger jet had smashed into the huge building, very near to Steve’s office.

The debris that buried Steve saved his life as the plane’s fuselage exploded, a fireball of flames scouring the open office. Despite a concussion, Steve returned to work four days later, laboring through feverish nights, 6:00 p.m. to 6:00 a.m., because those were daytime hours in Afghanistan. Soon after, he volunteered for a year in Iraq.

“I mainly went to Iraq because I couldn’t walk around the Mall without being hypervigilant, wary of how people looked at me, totally on guard,” Steve recalls. “I couldn’t get on an elevator, I felt trapped in my car in traffic.”

His symptoms were classic post-traumatic stress disorder. Then came the day he realized he couldn’t handle this on his own. Steve ended up with a psychotherapist he still sees. She led him, very gently, to try mindfulness.

Mindfulness, he recalls, “gave me something I could do to help feel more calm, less stressed, not be so reactive.” As he practiced more, added loving-kindness to the mix, and went on retreats, his PTSD symptoms gradually became less frequent, less intense. Although his irritability and restlessness still came, he could see them coming.

Tales like Steve’s offer encouraging news about meditation. We have been meditators all our adult lives, and, like Steve, know for ourselves that the practice has countless benefits.

But our scientific backgrounds give us pause, too. Not everything chalked up to meditation’s magic actually stands up to rigorous tests. And so we have set out to make clear what works and what does not.

Some of what you know about meditation may be wrong. But what is true about meditation you may not know.

Take Steve’s story. The tale has been repeated in endless variations by countless others who claim to have found relief in meditation methods like mindfulness, not just from PTSD but from virtually the entire range of emotional disorders.

Yet mindfulness, part of an ancient meditation tradition, was not intended to be such a cure; this method was only recently adapted as a balm for our modern forms of angst. The original aim, embraced in some circles to this day, focuses on a deep exploration of the mind toward a profound alteration of our very being.

On the other hand, the pragmatic applications of meditation, like the mindfulness that helped Steve recover from trauma, appeal widely but do not go so deep. Because this wide approach has easy access, multitudes have found a way to include at least a bit of meditation in their day.

There are, then, two paths: the deep and the wide. Those two paths are often confused with each other, though they differ greatly.

We see the deep path embodied at two levels: in a pure form, for example, in the ancient lineages of Theravada Buddhism as practiced in Southeast Asia, or among Tibetan yogis (for whom we’ll see some remarkable data in chapter eleven, “A Yogi’s Brain”). We’ll call this most intensive type of practice Level 1.

At Level 2, these traditions have been removed from being part of a total lifestyle, that of a monk or yogi, for example, and adapted into forms more palatable for the West. Meditation at this level comes in forms that leave behind parts of the original Asian source that might not make the cross-cultural journey so easily.

Then there are the wide approaches. At Level 3, a further remove takes these same meditation practices out of their spiritual context and distributes them ever more widely, as is the case with mindfulness-based stress reduction (better known as MBSR), founded by our good friend Jon Kabat-Zinn and taught now in thousands of clinics and medical centers, and far beyond. Or Transcendental Meditation (TM), which offers classic Sanskrit mantras to the modern world in a user-friendly format.

The even more widely accessible forms of meditation at Level 4 are, of necessity, the most watered-down, all the better to render them handy for the largest number of people. The current vogues of mindfulness-at-your-desk, or via minutes-long meditation apps, exemplify this level.

We foresee also a Level 5, one that exists now only in bits and pieces, but which may well increase in number and reach with time. At Level 5, the lessons scientists have learned in studying all the other levels will lead to innovations and adaptations that can be of widest benefit, a potential we explore in the final chapter, “A Healthy Mind.”

The deep transformations of Level 1 fascinated us when we originally encountered meditation. Dan studied ancient texts and practiced the methods they describe, particularly during the two years he lived in India and Sri Lanka in his grad school days and just afterward. Richie (as everyone calls him) followed Dan to Asia for a lengthy visit, likewise practicing on retreat there, meeting with meditation scholars, and more recently has scanned the brains of Olympic-level meditators in his lab at the University of Wisconsin.

Our own meditation practice has been mainly at Level 2. But from the start, the wide path, Levels 3 and 4, has also been important to us. Our Asian teachers said if any aspect of meditation could help alleviate suffering, it should be offered to all, not just those on a spiritual search. Our doctoral dissertations applied that advice by studying ways meditation could have cognitive and emotional payoffs.

The story we tell here mirrors our own personal and professional journey. We have been close friends and collaborators on the science of meditation since the 1970s, when we met at Harvard during graduate school, and we have both been practitioners of this inner art over all these years (although we are nowhere near mastery).

While we were both trained as psychologists, we bring complementary skills to telling this story. Dan is a seasoned science journalist who wrote for the New York Times for more than a decade. Richie, a neuroscientist, founded and heads the University of Wisconsin’s Center for Healthy Minds, in addition to directing the brain imaging laboratory at the Waisman Center there, replete with its own fMRI, PET scanner, and a battery of cutting-edge data analysis programs, along with hundreds of servers for the heavy-duty computing required for this work. His research group numbers more than a hundred experts, who range from physicists, statisticians, and computer scientists to neuroscientists and psychologists, as well as scholars of meditative traditions.

Coauthoring a book can be awkward. We’ve had some of that, to be sure, but whatever drawbacks coauthorship brought us have been vastly overshadowed by the sheer delight we find in working together. We’ve been best friends for decades but labored separately over most of our careers. This book has brought us together again, always a joy.

You are holding the book we had always wanted to write but could not. The science and the data we needed to support our ideas have only recently matured. Now that both have reached a critical mass, we are delighted to share this.

Our joy also comes from our sense of a shared, meaningful mission: we aim to shift the conversation with a radical reinterpretation of what the actual benefits of meditation are, and are not, and what the true aim of practice has always been.

THE DEEP PATH

After his return from India in the fall of 1974, Richie was in a seminar on psychopathology back at Harvard. Richie, with long hair and attire in keeping with the zeitgeist of Cambridge in those times, including a colorful woven sash that he wore as a belt, was startled when his professor said, “One clue to schizophrenia is the bizarre way a person dresses,” giving Richie a meaningful glance.

And when Richie told one of his Harvard professors that he wanted to focus his dissertation on meditation, the blunt response came immediately: that would be a career-ending move.

Dan set out to research the impacts of meditation that uses a mantra. On hearing this, one of his clinical psychology professors asked with suspicion, “How is a mantra any different from my obsessive patients who can’t stop saying ‘shit-shit-shit’?” The explanation that the expletives are involuntary in the psychopathology, while the silent mantra repetition is a voluntary and intentional focusing device, did little to placate him.

These reactions were typical of the opposition we faced from our department heads, who were still responding with knee-jerk negativity toward anything to do with consciousness, perhaps a mild form of PTSD after the notorious debacle involving Timothy Leary and Richard Alpert. Leary and Alpert had been very publicly ousted from our department in a brouhaha over letting Harvard undergrads experiment with psychedelics. This was some five years before we arrived, but the echoes lingered.

Despite our academic mentors’ seeing our meditation research as a blind alley, our hearts told us this was of compelling import. We had a big idea: beyond the pleasant states meditation can produce, the real payoffs are the lasting traits that can result.

An altered trait, a new characteristic that arises from a meditation practice, endures apart from meditation itself. Altered traits shape how we behave in our daily lives, not just during or immediately after we meditate.

The concept of altered traits has been a lifelong pursuit, each of us playing synergistic roles in the unfolding of this story. There were Dan’s years in India as an early participant-observer in the Asian roots of these mind-altering methods. And on Dan’s return to America he was a not-so-successful transmitter to contemporary psychology of beneficial changes from meditation and the ancient working models for achieving them.

Richie’s own experiences with meditation led to decades pursuing the science that supports our theory of altered traits. His research group has now generated the data that lend credence to what could otherwise seem mere fanciful tales. And by leading the creation of a fledgling research field, contemplative neuroscience, he has been grooming a coming generation of scientists whose work builds on and adds to this evidence.

In the wake of the tsunami of excitement over the wide path, the alternate route so often gets missed: that is, the deep path, which has always been the true goal of meditation. As we see it, the most compelling impacts of meditation are not better health or sharper business performance but, rather, a further reach toward our better nature.

A stream of findings from the deep path markedly boosts science’s models of the upper limits of our positive potential. The further reaches of the deep path cultivate enduring qualities like selflessness, equanimity, a loving presence, and impartial compassion, highly positive altered traits.

When we began, this seemed big news for modern psychology, if it would listen. Admittedly, at first the concept of altered traits had scant backing save for the gut feelings we had from meeting highly seasoned practitioners in Asia, the claims of ancient meditation texts, and our own fledgling tries at this inner art. Now, after decades of silence and disregard, the last few years have seen ample findings that bear out our early hunch. Only of late have the scientific data reached critical mass, confirming what our intuition and the texts told us: these deep changes are external signs of strikingly different brain function.

Much of that data comes from Richie’s lab, the only scientific center that has gathered findings on dozens of contemplative masters, mainly Tibetan yogis, the largest pool of deep practitioners studied anywhere.

These unlikely research partners have been crucial in building a scientific case for the existence of a way of being that has eluded modern thought, though it was hiding in plain sight as a goal of the world’s major spiritual traditions. Now we can share scientific confirmation of these profound alterations of being, a transformation that dramatically ups the limits on psychological science’s ideas of human possibility.

The very idea of “awakening”, the goal of the deep path, seems a quaint fairy tale to a modern sensibility. Yet data from Richie’s lab, some just being published in journals as this book goes to press, confirm that remarkable, positive alterations in brain and behavior along the lines of those long described for the deep path are not a myth but a reality.

THE WIDE PATH

We have both been longtime board members of the Mind and Life Institute, formed initially to create intensive dialogues between the Dalai Lama and scientists on wide-ranging topics. In 2000 we organized one on “destructive emotions,” with several top experts on emotions, including Richie. Midway through that dialogue the Dalai Lama, turning to Richie, made a provocative challenge.

His own tradition, the Dalai Lama observed, had a wide array of time-tested practices for taming destructive emotions. So, he urged, take these methods into the laboratory in forms freed from religious trappings, test them rigorously, and if they can help people lessen their destructive emotions, then spread them widely to all who might benefit.

That fired us up. Over dinner that night, and several nights following, we began to plot the general course of the research we report in this book.

The Dalai Lama’s challenge led Richie to refocus the formidable power of his lab to assess both the deep and the wide paths. And, as founding director of the Center for Healthy Minds, Richie has spurred work on useful, evidence-based applications suitable for schools, clinics, businesses, even for cops, for anyone, anywhere, ranging from a kindness program for preschoolers to treatments for veterans with PTSD.

The Dalai Lama’s urging catalyzed studies that support the wide path in scientific terms, a vernacular welcomed around the globe. Meanwhile the wide way has gone viral, becoming the stuff of blogs, tweets, and snappy apps. For instance, as we write this, a wave of enthusiasm surrounds mindfulness, and hundreds of thousands, maybe millions, now practice the method.

But viewing mindfulness (or any variety of meditation) through a scientific lens starts with questions like: When does it work, and when does it not? Will this method help everyone? Are its benefits any different from, say, exercise? These are among the questions that brought us to write this book.

Meditation is a catch-all word for myriad varieties of contemplative practice, just as sports refers to a wide range of athletic activities. For both sports and meditation, the end results vary depending on what you actually do.

Some practical advice: for those about to start a meditation practice, or who have been grazing among several, keep in mind that as with gaining skill in a given sport, finding a meditation practice that appeals to you and sticking with it will have the greatest benefits. Just find one to try, decide on the amount of time you can realistically practice daily, even as short as a few minutes, try it for a month, and see how you feel after those thirty days.

Just as regular workouts give you better physical fitness, most any type of meditation will enhance mental fitness to some degree. As we’ll see, the specific benefits from one or another type get stronger the more total hours of practice you put in.

A CAUTIONARY TALE

Swami X, as we’ll call him, was at the tip of the wave of meditation teachers from Asia who swarmed to America in the mid-1970s, during our Harvard days. The swami reached out to us saying he was eager to have his yogic prowess studied by scientists at Harvard who could confirm his remarkable abilities.

It was the height of excitement about a then new technology, biofeedback, which fed people instant information about their physiology, blood pressure, for instance, which otherwise was beyond their conscious control. With that new incoming signal, people were able to nudge their body’s operations in healthier directions. Swami X claimed he had such control without the need for feedback.

Happy to stumble on a seemingly accomplished subject for research, we were able to finagle the use of a physiology lab at Harvard Medical School’s Massachusetts Mental Health Center.

But come the day of testing the swami’s prowess, when we asked him to lower his blood pressure, he raised it. When asked to raise it, he lowered it. And when we told him this, the swami berated us for serving him “toxic tea” that supposedly sabotaged his gifts.

Our physiological tracings revealed he could do none of the mental feats he had boasted about. He did, however, manage to put his heart into atrial fibrillation, a high-risk biotalent, with a method he called “dog samadhi,” a name that mystifies us to this day.

From time to time the swami disappeared into the men’s room to smoke a bidi (these cheap cigarettes, a few flakes of tobacco wrapped in a plant leaf, are popular throughout India). A telegram from friends in India soon after revealed that the “swami” was actually the former manager of a shoe factory who had abandoned his wife and two children and come to America to make his fortune.

No doubt Swami X was seeking a marketing edge to attract disciples. In his subsequent appearances he made sure to mention that “scientists at Harvard” had studied his meditative prowess. This was an early harbinger of what has become a bountiful harvest of data refried into sales hype.

With such cautionary incidents in mind, we bring open but skeptical minds, the scientist’s mind-set, to the current wave of meditation research. For the most part we view with satisfaction the rise of the mindfulness movement and its rapidly growing reach in schools, business, and our private lives, the wide approach. But we bemoan how the data all too often is distorted or exaggerated when science gets used as a sales hook.

The mix of meditation and monetizing has a sorry track record as a recipe for hucksterism, disappointment, even scandal. All too often, gross misrepresentations, questionable claims, or distortions of scientific studies are used to sell meditation. A business website, for instance, features a blog post called “How Mindfulness Fixes Your Brain, Reduces Stress, and Boosts Performance.” Are these claims justified by solid scientific findings? Yes and no, though the “no” too easily gets overlooked.

Among the iffy findings gone viral with enthusiastic claims: that meditation thickens the brain’s executive center, the prefrontal cortex, while shrinking the amygdala, the trigger for our freeze-fight-or-flight response; that meditation shifts our brain’s set point for emotions into a more positive range; that meditation slows aging; and that meditation can be used to treat diseases ranging from diabetes to attention deficit hyperactivity disorder.

On closer look, each of the studies on which these claims are based has problems with the methods used; they need more testing and corroboration to make firm claims. Such findings may well stand up to further scrutiny, or maybe not.

The research reporting amygdala shrinkage, for instance, used a method to estimate amygdala volume that may not be very accurate. And one widely cited study describing slower aging used a very complex treatment that included some meditation but was mixed with a special diet and intensive exercise as well; the impact of meditation per se was impossible to decipher.

Still, social media are rife with such claims, and hyperbolic ad copy can be enticing. So we offer a clear-eyed view based on hard science, sifting out results that are not nearly as compelling as the claims made for them.

Even well-meaning proponents have little guidance in distinguishing between what’s sound and what’s questionable, or just sheer nonsense. Given the rising tide of enthusiasm, our more sober-minded take comes not a moment too soon.

A note to readers.

The first three chapters cover our initial forays into meditation, and the scientific hunch that motivated our quest.

Chapters four through twelve narrate the scientific journey, with each chapter devoted to a particular topic like attention or compassion; each of these has an “In a Nutshell” summary at the end for those who are more interested in what we found than how we got there.

In chapters eleven and twelve we arrive at our long-sought destination, sharing the remarkable findings on the most advanced meditators ever studied.

In chapter thirteen, “Altering Traits,” we lay out the benefits of meditation at three levels: beginner, long-term, and “Olympic.”

In our final chapter we speculate on what the future might bring, and how these findings might be of greater benefit not just to each of us individually but to society.

THE ACCELERATION

As early as the 1830s, Thoreau and Emerson, along with their fellow American Transcendentalists, flirted with these Eastern inner arts. They were spurred by the first English-language translations of ancient spiritual texts from Asia, but had no instruction in the practices that supported those texts. Almost a century later, Sigmund Freud advised psychoanalysts to adopt an “even-hovering attention” while listening to their clients, but again, offered no method.

The West’s more serious engagement took hold mere decades ago, as teachers from the East arrived, and as a generation of Westerners traveled to study meditation in Asia, some returning as teachers. These forays paved the way for the current acceleration of the wide path, along with fresh possibilities for those few who choose to pursue the deep way.

In the 1970s, when we began publishing our research on meditation, there were just a handful of scientific articles on the topic. At last count, there were 6,838 such articles, with a notable acceleration of late. For 2014 the annual number was 925, in 2015 the total was 1,098, and in 2016 there were 1,113 such publications in the English-language scientific literature.

PRIMING THE FIELD

It was April 2001, on the top floor of the Fluno Center on the campus of the University of Wisconsin-Madison, and we were convening with the Dalai Lama for an afternoon of scientific dialogue on meditation research findings. Missing from the room was Francisco Varela, a Chilean-born neuroscientist and head of a cognitive neuroscience laboratory at the French National Center for Scientific Research in Paris. His remarkable career included cofounding the Mind and Life Institute, which had organized this very gathering.

As a serious meditation practitioner, Francisco could see the promise for a full collaboration between seasoned meditators and the scientists studying them. That model became standard practice in Richie’s lab, as well as others.

Francisco had been scheduled to participate, but he was fighting liver cancer and a severe downturn meant he could not travel. He was in his bed at home in Paris, close to dying.

This was in the days before Skype and videoconferencing, but Richie’s group managed a two-way video hookup between our meeting room and Francisco’s bedroom in his Paris apartment. The Dalai Lama addressed him very directly, looking closely into the camera. They both knew that this would be the very last time they would see each other in this lifetime.

The Dalai Lama thanked Francisco for all he had done for science and for the greater good, told him to be strong, and said that they would remain connected forever. Richie and many others in the room had tears streaming down, appreciating the import of the moment. Just days after the meeting, Francisco passed away.

Three years later, in 2004, an event occurred that made real a dream Francisco had often talked about. At the Garrison Institute, an hour up the Hudson River from New York City, one hundred scientists, graduate students, and postdocs had gathered for the first in what has become a yearly series of events, the Summer Research Institute (SRI), a gathering devoted to furthering the rigorous study of meditation.

The meetings are organized by the Mind and Life Institute, itself formed in 1987 by the Dalai Lama, Francisco, and Adam Engle, a lawyer turned businessman. We were founding board members. The mission of Mind and Life is “to alleviate suffering and promote flourishing by integrating science with contemplative practice.”

Mind and Life’s summer institute, we felt, could offer a more welcoming reality for those who, like us in our grad school days, wanted to do research on meditation. While we had been isolated pioneers, we wanted to knit together a community of like-minded scholars and scientists who shared this quest. They could be supportive of each other’s work at a distance, even if they were alone in their interests at their own institution.

Details of the SRI were hatched over the kitchen table in Richie’s home in Madison, in a conversation with Adam Engle. Richie and a handful of scientists and scholars then organized the first summer program and served as faculty for the week, featuring topics like the cognitive neuroscience of attention and mental imagery. As of this writing, thirteen more meetings have followed (with two so far in Europe, and possibly future meetings in Asia and South America).

Beginning with the very first SRI, the Mind and Life Institute began a program of small grants named in honor of Francisco. These few dozen, very modest Varela research awards (up to $25,000, though most research of this kind takes far more in funding) have leveraged more than $60 million in follow-on funding from foundations and US federal granting agencies. And the initiative has borne plentiful fruit: fifty or so graduates of the SRI have published several hundred papers on meditation.

As these young scientists entered academic posts, they swelled the numbers of researchers doing such studies. They have driven in no small part the ever-growing numbers of scientific studies on meditation.

At the same time, more established scientists have shifted their focus toward this area as results showed valuable yield. The findings rolling out of Richie’s brain lab at the University of Wisconsin, and from the labs of other scientists at the medical schools of Stanford and Emory, Yale and Harvard, and far beyond, routinely make headlines.

Given meditation’s booming popularity, we feel a need for a hard-nosed look. The neural and biological benefits best documented by sound science are not necessarily the ones we hear about in the press, on Facebook, or from email marketing blasts. And some of those trumpeted far and wide have little scientific merit.

Many reports boil down to the ways a short daily dose of meditation alters our biology and emotional life for the better. This news, gone viral, has drawn millions worldwide to find a slot in their daily routine for meditation.

But there are far greater possibilities, and some perils. The moment has come to tell the bigger tale the headlines are missing.

There are several threads in the tapestry we weave here. One can be seen in the story of our decades-long friendship and our shared sense of a greater purpose, at first a distant and unlikely goal but one in which we persisted despite obstacles. Another traces the emergence of neuroscience’s evidence that our experiences shape our brains, a platform supporting our theory that as meditation trains the mind, it reshapes the brain. Then there’s the flood of data we’ve mined to show the gradient of this change.

At the outset, mere minutes a day of practice have surprising benefits (though not all those that are claimed). Beyond such payoffs at the beginning, we can now show that the more hours you practice, the greater the benefits you reap. And at the highest levels of practice we find true altered traits, changes in the brain that science has never observed before, but which we proposed decades ago.

Chapter 2

Ancient Clues

Our story starts one early November morning in 1970, when the spire of the stupa in Bodh Gaya was lost to view, enveloped in the ethereal mist rising from the Niranjan River nearby. Next to the stupa stood a descendant of the very Bodhi Tree under which, legend has it, Buddha sat in meditation as he became enlightened.

Through the mist that morning, Dan glimpsed an elderly Tibetan monk amble by as he made his postdawn rounds, circumambulating the holy site. With short-cropped gray hair and eyeglasses as thick as the bottoms of Coke bottles, he fingered his mala beads while softly mumbling a mantra praising the Buddha as a sage, or muni in Sanskrit: “Muni, muni, mahamuni, mahamuniya swaha!”

A few days later, friends happened to bring Dan to visit that very monk, Khunu Lama. He inhabited a sparse, unheated cell, its concrete walls radiating the late-fall chill. A wooden-plank tucket served as both bed and day couch, with a small stand alongside for perching texts to read, and little else. As befits a monk, the room was empty of any private belongings.

From the early-morning hours until late into the night, Khunu Lama would sit on that bed, a text always open in front of him. Whenever a visitor would pop in, and in the Tibetan world that could be at just about any time, he would invariably welcome them with a kindly gaze and warm words.

Khunu’s qualities, a loving attention to whoever came to see him, an ease of being, and a gentle presence, struck Dan as quite unlike, and far more positive than, the personality traits he had been studying for his degree in clinical psychology at Harvard. That training focused on negatives: neurotic patterns, overpowering burdensome feelings, and outright psychopathology.

Khunu, on the other hand, quietly exuded the better side of human nature. His humility, for instance, was fabled. The story goes that the abbot of the monastery, in recognition of Khunu’s spiritual status, offered him as living quarters a suite of rooms on the monastery’s top floor, with a monk to serve as an attendant. Khunu declined, preferring the simplicity of his small, bare monk’s cell.

Khunu Lama was one of those rare masters revered by all schools of Tibetan practice. Even the Dalai Lama sought him out for teachings, receiving instructions on Shantideva’s Bodhicharyavatara, a guide to the compassion-filled life of a bodhisattva. To this day, whenever the Dalai Lama teaches this text, one of his favorites, he credits Khunu as his mentor on the topic.

Before meeting Khunu Lama, Dan had spent months with an Indian yogi, Neem Karoli Baba, who had drawn him to India in the first place. Neem Karoli, known by the honorific Maharaji, was newly famous in the West as the guru of Ram Dass, who in those years toured the country with mesmerizing accounts of his transformation from Richard Alpert (the Harvard professor fired for experimenting with psychedelics, along with his colleague Timothy Leary) to a devotee of this old yogi. By accident, during Christmas break from his Harvard classes in 1968, Dan met Ram Dass, who had just returned from being with Neem Karoli in India, and that encounter eventually propelled Dan’s journey to India.

Dan managed to get a Harvard Predoctoral Traveling Fellowship to India in fall 1970, and located Neem Karoli Baba at a small ashram in the Himalayan foothills. Maharaji lived the life of a sadhu; his only worldly possessions seemed to be the white cotton dhoti he wore on hot days and the heavy woolen plaid blanket he wrapped around himself on cold ones. He kept no particular schedule, had no organization, and offered no fixed program of yogic poses or meditations. Like most sadhus, he was itinerant, unpredictably on the move. He mainly hung out on a tucket on the porch of whatever ashram, temple, or home he was visiting at the time.

Maharaji seemed always to be absorbed in some state of ongoing quiet rapture, and, paradoxically, at the same time was attentive to whoever was with him. What struck Dan was how utterly at peace and how kind Maharaji was. Like Khunu, he took an equal interest in everyone who came, and his visitors ranged from the highest-ranking government officials to beggars.

There was something about his ineffable state of mind that Dan had never sensed in anyone before meeting Maharaji. No matter what he was doing, he seemed to remain effortlessly in a blissful, loving space, perpetually at ease. Whatever state Maharaji was in seemed not some temporary oasis in the mind, but a lasting way of being: a trait of utter wellness.

BEYOND THE PARADIGM

After two months or so making daily visits to Maharaji at the ashram, Dan and his friend Jeff (now widely known as the devotional singer Krishna Das) went traveling with another Westerner who was desperate to renew his visa after spending seven years in India living as a sadhu. That journey ended for Dan at Bodh Gaya, where he was soon to meet Khunu Lama.

Bodh Gaya, in the North Indian state of Bihar, is a pilgrimage site for Buddhists the world over, and most every Buddhist country has a building in the town where its pilgrims can stay. The Burmese vihara, or pilgrim’s rest house, had been built before the takeover by a military dictatorship that forbade Burma’s citizens to travel. The vihara had lots of rooms but few pilgrims, and soon became an overnight stop for the ragged band of roaming Westerners who wandered through town.

When Dan arrived there in November 1970, he met the sole long-term American resident, Joseph Goldstein, a former Peace Corps worker in Thailand. Joseph had spent more than four years studying at the vihara with Anagarika Munindra, a meditation master. Munindra, of slight build and always clad in white, belonged to the Barua caste in Bengal, whose members had been Buddhist since the time of Gautama himself.

Munindra had studied vipassana (the Theravadan meditation and root source of many now-popular forms of mindfulness) under Burmese masters of great repute. Munindra, who became Dan’s first instructor in the method, had just invited his friend S. N. Goenka, a jovial, paunchy former businessman recently turned meditation teacher, to come to the vihara to lead a series of ten-day retreats.

Goenka had become a meditation teacher in a tradition established by Ledi Sayadaw, a Burmese monk who, as part of a cultural renaissance in the early twentieth century meant to counter British colonial influence, revolutionized meditation by making it widely available to laypeople. While meditation in that culture had for centuries been the exclusive province of monks and nuns, Goenka learned vipassana from U Ba Khin (U is an honorific in Burmese), at one time Burma’s accountant general, who had been taught the method by a farmer, who was in turn taught by Ledi Sayadaw.

Dan took five of Goenka’s ten-day courses in a row, immersing himself in this rich meditation method. He was joined by about a hundred fellow travelers. This gathering in the winter of 1970-71 was a seminal moment in the transfer of mindfulness from an esoteric practice in Asian countries to its current widespread adoption around the world. A handful of the students there, with Joseph Goldstein leading the way, later became instrumental in bringing mindfulness to the West.

Starting in his college years Dan had developed a twice-daily habit of twenty-minute meditation sessions, but this immersion in ten days of continual practice brought him to new levels. Goenka’s method started with simply noting the sensations of breathing in and out, not for just twenty minutes but for hours and hours a day. This cultivation of concentration then morphed into a systematic whole-body scan of whatever sensations were occurring anywhere in the body. What had been “my body, my knee” became a sea of shifting sensation, a radical shift in awareness.

. . .

from

The Science of Meditation. How to Change Your Brain, Mind and Body

by Daniel Goleman and Richard J. Davidson

get it at Amazon.com

ADVERTISING AND ACADEMIA ARE CONTROLLING OUR THOUGHTS. Didn’t you know? – George Monbiot * A typology of consumer strategies for resisting advertising, and a review of mechanisms for countering them – Marieke L. Fransen, Peeter W.J. Verlegh, Amna Kirmani, Edith G. Smit.

“We have the ability to twiddle some knobs in a machine learning dashboard we build, and around the world hundreds of thousands of people are going to quietly change their behaviour in ways that, unbeknownst to them, feel second-nature but are really by design.”

By abetting the ad industry, universities are leading us into temptation, when they should be enlightening us.

“Our ACE typology distinguishes three types of resistance strategies: Avoiding, Contesting, and Empowering. We introduce these strategies, and present research describing advertising tactics that may be used to neutralize each of them.”

We are subject to constant influence, some of which we see, much of which we don’t. And there is one major industry that seeks to decide on our behalf. Its techniques get more sophisticated every year, drawing on the latest findings in neuroscience and psychology. It is called advertising.

To what extent do we decide? We tell ourselves we choose our own life course, but is this ever true? If you or I had lived 500 years ago, our worldview, and the decisions we made as a result, would have been utterly different. Our minds are shaped by our social environment, in particular the belief systems projected by those in power: monarchs, aristocrats and theologians then; corporations, billionaires and the media today.

Humans, the supremely social mammals, are ethical and intellectual sponges. We unconsciously absorb, for good or ill, the influences that surround us. Indeed, the very notion that we might form our own minds is a received idea that would have been quite alien to most people five centuries ago. This is not to suggest we have no capacity for independent thought. But to exercise it, we must, consciously and with great effort, swim against the social current that sweeps us along, mostly without our knowledge.

Surely, though, even if we are broadly shaped by the social environment, we control the small decisions we make? Sometimes. Perhaps. But here, too, we are subject to constant influence, some of which we see, much of which we don’t. And there is one major industry that seeks to decide on our behalf. Its techniques get more sophisticated every year, drawing on the latest findings in neuroscience and psychology. It is called advertising.

But what puzzles and disgusts me even more than this failure is the willingness of universities to host research that helps advertisers hack our minds. The Enlightenment ideal, which all universities claim to endorse, is that everyone should think for themselves. So why do they run departments in which researchers explore new means of blocking this capacity?

. . .

The Guardian

“The literature does not provide a clear overview of the different ways in which consumers may resist advertising, and the tactics that can be used to counter or avoid such resistance. This article fills this gap by providing an overview of the different types of resistance that consumers may show, and by discussing the ways in which resistance may be countered.”

A typology of consumer strategies for resisting advertising, and a review of mechanisms for countering them.

Marieke L. Fransen, Peeter W.J. Verlegh, Amna Kirmani, Edith G. Smit.

This article presents a typology of the different ways in which consumers resist advertising, and the tactics that can be used to counter or avoid such resistance. It brings together literatures from different fields of study, including advertising, marketing, communication science and psychology. Although researchers in these subfields have shown a substantial interest in (consumer) resistance, these streams of literature are poorly connected. This article aims to facilitate the exchange of knowledge, and serve as a starting point for future research.

Our ACE typology distinguishes three types of resistance strategies: Avoiding, Contesting, and Empowering. We introduce these strategies, and present research describing advertising tactics that may be used to neutralize each of them.

Keywords: persuasion; resistance; reactance; knowledge

Introduction

Advertising is designed to persuade consumers by creating brand and product awareness, or by communicating social, emotional or functional product benefits. But consumers are not always open to advertising, and often resist its attempts at persuasion. This resistance is nothing new: 20 years ago, Calfee and Ringold (1994) reviewed six decades of research on consumers’ opinions about advertising; they showed that scepticism abides, and that the majority of consumers (about 70%) feel that advertising tries to persuade people to buy things they do not want or need.

This defensive response to advertising has been studied in several streams of research. In marketing and consumer research, for example, Friestad and Wright (1994) developed the persuasion knowledge model to describe consumers’ responses to persuasive attempts. The model has become one of the key theories in marketing research, and is widely applied to understand when and how consumers respond defensively to marketing communications, ranging from traditional TV ads to advergames and social media applications (Panic, Cauberghe, and De Pelsmacker 2013; Van Noort, Antheunis, and Verlegh 2014).

In addition to the persuasion knowledge model, there has been a substantial amount of work focusing on topics such as scepticism, selective exposure, and reactance, which may all be classified as resistance to advertising. Unfortunately the literature does not provide a clear overview of the different ways in which consumers may resist advertising, and the tactics that can be used to counter or avoid such resistance. This article fills this gap by providing an overview of the different types of resistance that consumers may show, and by discussing the ways in which resistance may be countered.

This article should be of interest not only to practitioners but also to academics, as it brings together literatures from different fields of study, including advertising, marketing, communication science and psychology. Although researchers in these subfields have shown a substantial interest in (consumer) resistance, these streams of literature are poorly connected, and this paper aims to facilitate the exchange of knowledge between these subfields. The presented framework for organizing the different types of strategies provides further integration of different findings, and should serve as a starting point for further exploration of the defensive strategies employed by consumers.

This paper develops a typology of the main types of consumer resistance and provides some (evidence-based) strategies for coping with this resistance. We refer to this as the ACE typology, since it distinguishes among Avoiding, Contesting, and Empowering types of resistance strategies that consumers can use. We first introduce these strategies, and then suggest some advertising tactics that may be used to neutralize each of these types of resistance. The typology is summarized in Figure 1.

ACE: a typology of resistance strategies

Knowles and Linn (2004) emphasize that resistance is a motivational state in which people have the goal of reducing attitudinal or behavioural change, or of retaining their current attitude. Following their conceptualization, we view the mitigation of attitudinal or behavioural change as a (possible) outcome of the strategies that are employed by consumers who are motivated to resist persuasion. In this section, we will define the Avoidance, Contesting and Empowerment strategies. Further elaboration can be found in Fransen, Smit, and Verlegh (2014).

Avoidance strategies

Advertising avoidance is a well-studied phenomenon. Speck and Elliot (1997) investigated advertising avoidance in magazines, newspapers, radio and television. They identified several ways that people avoid advertising: (a) physical avoidance; (b) mechanical avoidance; and (c) cognitive avoidance. Physical avoidance entails a variety of strategies aimed at not seeing or hearing the ad. These include leaving the room or skipping the advertising section in a newspaper. In an insightful ethnographic study, Brodin (2007) found that TV viewers use commercial breaks to talk to others, go to the bathroom, or engage in other behaviours that purposefully or accidentally lead to advertising avoidance. Using an eye-tracking methodology, Dreze and Hussherr (2003) found that consumers actively avoid looking at banners when using the Internet. Consumers can also employ modern methods of physical avoidance, such as blocking online ads, filtering email, or subscribing to ‘do not email’, ‘do not call’ or ‘do not track’ programs (Johnson 2013).

Mechanical avoidance includes zapping, zipping, or muting the television or radio when the commercials start. The literature shows that a high percentage of television viewers zap (Tse and Lee 2001) or zip (Sternberg 1987) during commercial breaks. ‘Block zipping’, zipping through two or more commercials at a time, seems to be the most prevalent form of zipping (Cronin and Menelly 1992). Stafford and Stafford (1996) adopted the uses-and-gratifications perspective from communication theory to explain why people engage in mechanical avoidance: boredom was found to explain both zipping and zapping behaviour, whereas curiosity predicted only zapping behaviour.

Cognitive ad avoidance means not paying attention to specific advertisements. Consumers may engage in ‘selective exposure’ and ‘selective attention’: the tendency to avoid, or devote less attention to, persuasive communications that are likely to contain messages that contradict existing beliefs or opinions (Freedman and Sears 1965; Knobloch-Westerwick and Meng 2009). In other words, people are motivated to seek information that is consonant with their beliefs and attitudes and to avoid information that is dissonant with them. Most research on selective exposure has been conducted in the fields of political and health communication (for a review see Smith, Fabrigar, and Norris 2008).

Research on the determinants of avoidance behaviour demonstrates that viewers are less inclined to avoid commercial messages that are emotional and entertaining, and more inclined to avoid messages that are informational (Olney, Holbrook, and Batra 1991; Woltman, Wedel, and Pieters 2003). In addition, viewers are less likely to avoid advertisements for regularly purchased products (Siddarth and Chattopadhyay 1998).

An interesting question is whether there are differences between active (conscious) avoidance and passive (unconscious) avoidance. To show active avoidance, consumers have to be aware of the fact that an ad is there, but have to somehow force themselves not to see or hear it. Passive avoidance on the other hand does not necessarily require such action, and might therefore call for different types of neutralizing strategies.

Contesting strategies

In addition to avoiding advertising messages, consumers may resist advertising by using a contesting strategy. Contesting strategies involve actively refuting the ad by challenging it. An ad can be countered by challenging different characteristics of the ad: (a) the advertising message itself (the content), (b) the source of the ad, or (c) the persuasive tactics that are used in the ad.

In the persuasion literature, contesting the content of persuasive messages has been referred to as counter-arguing (e.g., Buller 1986; Wright 1975; Jacks and Cameron 2003). Defined as a thought process that decreases agreement with a counter-attitudinal message, counter-arguing is often described as a mediating variable between a persuasive message and outcomes such as attitudes and behaviour (Festinger and Maccoby 1964; Silvia 2006). People who engage in counter-arguing scrutinize the arguments presented, and subsequently try to generate reasons to refute them.

Contesting the source of a message, referred to as source derogation, occurs when individuals dismiss the validity of the source. For instance, consumers may question the source’s expertise, trustworthiness, or motives (Jacks and Cameron 2003). As a consequence, the message will lose credibility, which reduces its impact. Source derogation is often used when the source can be construed as biased (Wright 1973). Batinic and Appel (2013) demonstrated that information from commercial sources (i.e., advertising) is perceived to be less trustworthy than information from non-commercial sources, such as consumer recommendations or word of mouth.

Contesting the persuasive tactics used in a message has often been examined in the context of the Persuasion Knowledge Model (Friestad and Wright 1994). When consumers become suspicious of the advertiser’s manipulative intent, they resist the advertising message. For instance, Campbell (1995) finds that borrowed-interest appeals, whereby marketers use consumers’ interest in an (unrelated) topic (e.g., celebrities or puppies) to trigger interest in their product or service, can lead to negative attitudes towards the advertiser. Similarly, consumers are more likely to become suspicious of advertisers’ motives when ads feature negative comparisons to the competition (Jain and Posavac 2014) or incomplete comparisons (Kirmani and Zhu 2007). Finally, consumers may counter-argue the ad and derogate the source when the advertiser is perceived as spending too much money, such as when the ad is repeated often (Kirmani 1997).

Empowering strategies

Empowering strategies are related to the recipients themselves, not to the content of the persuasive message. They involve reassuring the self or one’s existing attitude. Three types of empowering strategies have been described in the literature: attitude bolstering, social validation, and self-assertion.

Consumers who engage in attitude bolstering focus on defending their existing attitudes and behaviours rather than refuting or challenging a message. To achieve this, they generate thoughts that are supportive of those attitudes and behaviours when they are exposed to a persuasive message that challenges them (Lydon, Zanna, and Ross 1988; Meirick 2002). For example, a person who is ‘pro-choice’ might resist a message against abortion by actively thinking about arguments that are in support of their own position, rather than considering the arguments presented in the message.

A second empowering strategy is social validation, which entails validating one’s attitude with significant others (Jacks and Cameron 2003). Consumers who use this strategy will actively look for (significant) others who share their existing beliefs, in order to confirm their current attitudes or behaviours. Social validation is related to the concept of ‘social proof’; when uncertain about how to behave, people have the tendency to look at the behaviour of others (Cialdini 2001). Jacks and Cameron (2003) argue that people may use a similar heuristic when they seek to defend themselves against an unwanted persuasion attempt. They demonstrated that people who are presented with a persuasive message that is incongruent with their existing attitude think of others who share their existing beliefs. Their current attitude or behaviour is validated in this way, which makes them less susceptible to the influence of dissonant messages.

In their research on resistance strategies, Jacks and Cameron (2003) observed a third empowerment strategy: asserting the self. When using self-assertions, people remind themselves that they are confident about their attitudes and behaviours, and that nothing can be done to change these. Self-assertion provides a boost to one’s self-esteem, which reduces susceptibility to persuasive messages (Rhodes and Wood 1992; Leary and Baumeister 2000). In addition to boosting confidence in one’s own opinions, this strategy reduces the extent to which consumers feel social pressure to conform to the norms that are imposed by others (Levine and Moreland 1990).

Now that we have introduced our typology of Avoidance, Contesting and Empowering resistance strategies, the next section examines tactics that can be used by advertisers to neutralize these three types of resistance strategies.

Resistance-neutralizing persuasion tactics

Advertisers have available to them a range of persuasion techniques to create successful advertisements. These tactics often focus on making a message more attractive by using, for example, humour, celebrities, or music. Knowles and Linn (2004) refer to these traditional persuasion techniques as ‘alpha strategies’, strategies that focus on increasing approach towards the attitudinal object. In contrast, they propose the term ‘omega strategies’ for tactics that are aimed specifically at reducing consumer resistance to persuasion. These strategies explicitly focus on reducing avoidance forces, in other words: decreasing the motivation to move away from the attitudinal object. Hence, omega strategies aim to neutralize resistance that people may experience when exposed to an ad.

We argue that such resistance-neutralizing tactics should be more effective when they are tailored to the specific resistance strategy that is adopted by consumers. In this section, we therefore describe, for each of the ACE strategies, the advertising tactics that are most likely to reduce resistance and enhance effectiveness.

Neutralizing avoidance strategies

By nature, avoidance-type resistance strategies are perhaps the most difficult to counter, because the avoidance behaviour itself cuts off the possibility of communication. One obvious strategy for preventing avoidance is the use of forced exposure. For example, in an online context people are often forced to view or hear commercials when they watch a video stream or listen to a radio channel. Hegner, Kusse, and Pruyn (2014) found that consumers perceive such ads to be intrusive, although this perception is weaker when the ad has a (positive) emotional appeal (a finding reminiscent of the observation that TV ads are less likely to be avoided if they are emotional rather than informational; Olney, Holbrook, and Batra 1991). Another form of forced exposure is the so-called horizontal advertising block, in which television stations broadcast advertisements simultaneously. Research by Nam, Kwon, and Lee (2010) demonstrated that such horizontal advertising blocks are effective in reducing zapping behaviour. This tactic is, however, also perceived as intrusive and may lead to a negative image.

Although some research demonstrates that forced exposure may lead to negative responses and negative associations with the advertiser (e.g., Edwards, Li, and Lee 2002), there are also studies suggesting that ‘any’ advertising exposure can be beneficial. Greyser’s (1973) classic work on irritation in advertising suggested, for example, that marketers often believe that irritating ads help raise brand awareness. Skurnik and colleagues (2005) found that consumers may forget the valence of previously encountered information about a brand, while (positive effects of) familiarity remain. It therefore remains to be investigated how consumers respond to such forced exposure. One interesting possibility is that, while consumers may have a negative explicit response to forced exposure, they could still have a positive (implicit) response to the advertised product. It should be noted, however, that consumers who cannot avoid advertising may also adopt different resistance strategies.

Rather than forcing exposure to advertising, marketers may choose to prevent avoidance by disguising the persuasive intent or the sender of the message. Marketers have developed a wide range of strategies to achieve this (cf., Kaikati and Kaikati 2004). One strategy that seeks to downplay the persuasive nature of marketing messages is to embed branded messages into the editorial content of a medium, so that consumers are less likely to recognize these messages as persuasive attempts. Such brand placements may occur in magazines, TV and radio shows, movies and games (van Reijmersdal, Smit, and Neijens 2010). In response to rising ethical concerns about this practice, the FTC and FCC have formally expressed concern, and the European Union has even developed regulation that requires marketers to inform consumers of the commercial intent of such messages. Several recent studies have examined consumers’ responses to such disclosures. In general, this research seems to suggest that such information often activates persuasion knowledge and has negative consequences for consumers’ evaluations of the advertised brands (Boerman et al. in press; Campbell, Mohr, and Verlegh 2013).

Marketers may also counter avoidance by enlisting consumers to share brand-related messages with others. Typically, consumers have greater trust in information provided by their peers than in information provided by marketers. Consumers may share brand-related information via online or offline word of mouth, which can be stimulated through word-of-mouth marketing programs. The power of word of mouth lies in the fact that messages received from friends are not perceived as persuasive attempts, reducing the motivation to avoid such messages. The effectiveness of word-of-mouth marketing depends on the extent to which consumers attribute the message to enthusiasm about the brand or product rather than to ulterior motives (Verlegh et al. 2013). Marketers who make use of such strategies should thus take care to avoid such attributions, and seek to maintain the informal and friendly character of word of mouth as an exchange of information among friends (Tuk et al. 2009).

In addition, viral marketing may stimulate consumers to share branded content. In crafting viral campaigns, marketers often use humorous, surprising, sexual or otherwise appealing content (cf., Golan and Zaidner 2008). It is important, however, to keep in mind that such campaigns should also convey brand-relevant information in order to achieve marketing communication goals such as enhancing brand awareness or attitude (Akpinar and Berger 2014).

Neutralizing contesting strategies

Several techniques are available to advertisers seeking to reduce consumer contesting of their messages. A direct and well-established strategy for coping with counterarguments is two-sided advertising. A two-sided advertisement includes both positive and negative elements. When people are also exposed to negative features of a product or service, they are less likely to come up with counterarguments themselves. Often, marketers directly refute the negative elements or diminish their importance in the ad. Moreover, advertising is perceived as more trustworthy when it includes (some) negative information, so that the overall impact of the ad increases (Eisend 2006). In a classic paper on one- versus two-sided advertising, Kamins and Assael (1987) found that two-sidedness is effective in reducing source derogation. In practice, however, the use of two-sided advertising is not very common, as marketers are wary of spreading negative information about their products. One exception is product failure, where brands often acknowledge their mistake (i.e., the negative element) and then present their solution (i.e., the positive element). Doing so prevents consumers from generating (perhaps more persuasive) negative elements themselves (Fennis and Stroebe 2013).

There are also more indirect ways of coping with contesting strategies, which reduce the ability, opportunity, or motivation to generate counterarguments or engage in other contesting strategies (cf., Burkley 2008). Knowles and Linn (2004) demonstrated, for example, that participants generated significantly fewer counterarguments to a target message when it was presented at the end (versus the beginning) of a series of (seven) persuasive messages. Their finding illustrates the possibility of using cognitive depletion as a tactic for reducing consumers’ ability to contest messages. Recently, similar results were obtained by Janssen et al. (2014), who demonstrated that mentally depleted consumers were less able to resist advertising, even when they received a forewarning that informed them of the persuasive intent of the message.

In addition to cognitive depletion, marketers may use distraction to reduce consumers’ opportunity to engage in contesting strategies. An example is the ‘disrupt then reframe’ technique, which is often used in personal selling (Fennis, Das, and Pruyn 2004). In this technique, a subtle, unexpected twist (i.e., the disruption) in the sales script, which distracts people’s attention, is followed by the persuasive conclusion of the message (i.e., the reframe). For example, when selling apples one could say ‘these apples are 250 cents, that is only 2.5 dollars, it is a bargain!’ This simple disruption (i.e., ‘250 cents’) in combination with the reframe (i.e., ‘it is a bargain!’) distracts people and thereby reduces their efforts to contest the message.

Finally, to reduce the motivation to use contesting strategies, marketers may offer safety cues and warrants to minimize the perceived risk associated with a purchase. Research by van Noort, Kerkhof, and Fennis (2008) demonstrated that the presence of safety cues on websites gives people a feeling of safety. When people feel safe, they are less inclined to contest the information on the website. Another way of providing a sense of safety is by postponing payment, e.g., ‘Buy now, pay later’. Such offers will reduce resistance and counter-arguing, especially as the distance between purchase and payment increases (Knowles and Linn 2004).

Neutralizing empowerment strategies

To neutralize resistance strategies that involve asserting the self or an existing attitude, marketers need to focus on the consumer rather than the message. Interestingly, Jacks and O’Brien (2004) found that people who are self-affirmed are actually more open to persuasive messages, suggesting that self-affirmation may also be used to enhance rather than reduce persuasion. Take, for example, an ad that urges consumers to stop smoking. Smokers may perceive such an ad as threatening to their self-view, because it reminds them of their unhealthy behaviour. This threat may be mitigated, however, by reminding them of their previous successes or important values (Steele 1988).

When people are self-affirmed, they are more open to messages that are dissonant with their attitudes and behaviour because they do not feel the need to protect their self-view. Pursuing this logic, it might be possible for advertisers to focus on enhancing consumers’ self-esteem and self-efficacy. One strategy could be to emphasize the experience and knowledge of consumers when addressing them: ‘As a mother, you know that. . .’. Indeed, several studies have shown that assigning expertise and affirming people’s positive self-views may reduce the perceptions of persuasive intent and reduce resistance (Dolinski, Nawrat, and Rudak 2001).

A second way to neutralize the motivation to adopt empowering strategies is to provide consumers with control over the situation; for example, by having consumers decide which ads they want to watch. This strategy may also reduce other forms of resistance, of course. The online television platform Hulu, for example, offers viewers the opportunity to select the ads they want to watch. Permission-based advertising is another way to provide consumers with more freedom. Tsang, Ho, and Liang (2004) demonstrated that advertisements that are received with permission are evaluated more positively than advertisements that are received without permission (e.g., spam). Asking consumers for permission gives them control, which fosters acceptance and reduces resistance.
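
Taken together, the sections above pair each ACE resistance strategy with a family of neutralizing tactics. As a purely illustrative sketch, assuming one wanted to encode this pairing, for instance to code advertisements in a content analysis, the typology can be written as a simple lookup table in Python. The snippet below is our own construction, not anything from the paper; the data structure and function names are hypothetical, and only the strategy and tactic labels are drawn from the text.

# Illustrative Python sketch of the ACE typology and the neutralizing
# tactics discussed above. The structure and names are hypothetical;
# only the labels come from the paper's text.

ACE_TYPOLOGY = {
    "Avoiding": {
        "consumer_strategies": [
            "physical avoidance", "mechanical avoidance", "cognitive avoidance",
        ],
        "neutralizing_tactics": [
            "forced exposure",
            "disguising persuasive intent (e.g., brand placement)",
            "word-of-mouth and viral marketing",
        ],
    },
    "Contesting": {
        "consumer_strategies": [
            "counter-arguing the content", "source derogation",
            "contesting persuasive tactics",
        ],
        "neutralizing_tactics": [
            "two-sided advertising",
            "cognitive depletion",
            "distraction (disrupt-then-reframe)",
            "safety cues and warrants",
        ],
    },
    "Empowering": {
        "consumer_strategies": [
            "attitude bolstering", "social validation", "self-assertion",
        ],
        "neutralizing_tactics": [
            "self-affirmation appeals",
            "consumer control over ad exposure",
            "permission-based advertising",
        ],
    },
}

def neutralizing_tactics(strategy):
    # Look up the counter-tactics the paper pairs with one ACE strategy type.
    return ACE_TYPOLOGY[strategy]["neutralizing_tactics"]

for strategy in ACE_TYPOLOGY:
    print(strategy, "->", "; ".join(neutralizing_tactics(strategy)))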

Conclusion

Advertisers can use a wide range of tactics to counter consumers’ resistance to persuasion. Knowles and Linn (2004) suggested using the term ‘omega strategies’ for persuasion strategies that explicitly deal with resistance that consumers may experience when exposed to (unwanted) advertising. In this paper, we argue that such resistance-neutralizing tactics should be more effective when they are tailored to the specific resistance strategy that is adopted by consumers.

We have introduced the ACE typology, and have discussed specific tactics for addressing the different strategies that consumers use to resist persuasion. This overview should be helpful for marketers who are interested in applying communication strategies that enhance persuasion by reducing consumer resistance.

To further the development of such strategies, more research is needed to better understand the various ways in which consumers resist persuasive messages. We see a particular need for research that goes beyond the study of individual strategies, and tries to establish the personal and situational characteristics that favour one strategy over another. Such research could ultimately help to predict which types of resistance are likely to be triggered by a specific message, or in a specific market context. This knowledge, in turn, allows marketers to design communications that avoid these types of resistance. To facilitate this, we need research that establishes the extent to which specific marketing tactics can effectively counter the avoidance, contesting and empowering strategies that are distinguished in our typology.

HAPPINESS. Lessons from a New Science – Richard Layard.

Human beings have largely conquered nature, but they have still to conquer themselves. We have grown no happier in the last fifty years. What’s going on?

We have more food, more clothes, more cars, bigger houses, more central heating, more foreign holidays, a shorter working week, nicer work and, above all, better health. Yet we are not happier.

The best society is one where the citizens are happiest. So the best public policy is that which produces the greatest happiness.

That is what this book is about, the causes of happiness and the means we have to affect it. I hope this book will hasten the shift to a new perspective, where people’s feelings are treated as paramount. That shift is overdue.

In this new edition of his landmark book, Richard Layard shows that there is a paradox at the heart of our lives. Most people want more income. Yet as societies become richer, they do not become happier. This is not just anecdotally true: it is the story told by countless pieces of scientific research. We now have sophisticated ways of measuring how happy people are, and all the evidence shows that on average people have grown no happier in the last fifty years, even as average incomes have more than doubled. In fact, the First World has more depression, more alcoholism and more crime than fifty years ago. This paradox is true of Britain, the United States, continental Europe, and Japan. What is going on?

Now fully revised and updated to include developments since first publication, Layard answers his critics in what is still the key book in ‘happiness studies’.

Richard Layard is a leading economist who believes that the happiness of society does not necessarily equate to its income. He is best known for his work on unemployment and inequality, which provided the intellectual basis for Britain’s improved unemployment policies. He founded the Centre for Economic Performance at the London School of Economics, and since 2000 he has been a member of the House of Lords. His research into the subject of happiness brings together findings from such diverse areas as psychology, neuroscience, economics, sociology and philosophy.

I am an economist, I love the subject and it has served me well. But economics equates changes in the happiness of a society with changes in its purchasing power, or roughly so. I have never accepted that view, and the history of the last fifty years has disproved it. Instead, the new psychology of happiness makes it possible to construct an alternative view, based on evidence rather than assertion. From this we can develop a new vision of what lifestyles and what policies are sensible, drawing on the new psychology, as well as on economics, brain science, sociology and philosophy.

The time has come to have a go, to rush in where angels fear to tread. So here is my effort at a new evidence-based vision of how we can live better. It will need massive refinement as our knowledge accumulates. But I hope it will hasten the shift to a new perspective, where people’s feelings are treated as paramount. That shift is overdue.

So many people have helped with this book, and helped so generously, that I describe their role in a separate note at the end. I have been helped by psychologists, neuroscientists, sociologists, philosophers and of course economists, all sharing a desire for human betterment. If the book does anything, I hope it creates a bit more happiness.

Preface to the second edition

This book was first published six years ago. The wellbeing movement was already well under way and is now in full flood. Policy-makers worldwide are questioning whether wealth is a proper measure of welfare. And it has become quite respectable to say that what matters is how people experience life, inside themselves. Not everyone agrees with that, but talking about the happiness and misery which people feel no longer provokes an amused smile. The debate is on, at all levels in our society.

So this is a good moment for a second edition. In it I set out my own views in the debate, review some key new evidence, and record some major successes of the well-being movement. I have not rewritten the main text of the book; instead I have added an extra final Part.

There is a second reason for a new edition. When the book came out, I received thousands of letters, some of them touching and mostly appreciative. Many asked, “Are you founding a movement?” For some time I thought “No.” But many things have made me change my mind. Public opinion is changing, but far too slowly. There is still so much unnecessary misery that goes unaddressed, while less important issues attract enormous attention. And technology now makes it much easier than before to mobilise people in a good cause.

So a group of us, including two multi-talented friends, Geoff Mulgan and Anthony Seldon, are launching a movement called Action for Happiness, which I discuss briefly in the final chapter. Our hope is that it may become a worldwide force for good. I have no doubt that we can have a happier world, and with your help we will.

Richard Layard, January 2011

What’s the problem?

“Nought’s had, all’s spent, Where our desire is got without content.” LADY MACBETH

There is a paradox at the heart of our lives. Most people want more income and strive for it. Yet as Western societies have got richer, their people have become no happier.

This is no old wives’ tale. It is a fact proven by many pieces of scientific research. As I’ll show, we have good ways to measure how happy people are, and all the evidence says that on average people are no happier today than people were fifty years ago. Yet at the same time average incomes have more than doubled. This paradox is equally true for the United States and Britain and Japan.

But aren’t our lives infinitely more comfortable? Indeed: we have more food, more clothes, more cars, bigger houses, more central heating, more foreign holidays, a shorter working week, nicer work and, above all, better health. Yet we are not happier. Despite all the efforts of governments, teachers, doctors and businessmen, human happiness has not improved.

This devastating fact should be the starting point for all discussion of how to improve our lot. It should cause each government to reappraise its objectives, and every one of us to rethink our goals.

One thing is clear: once subsistence income is guaranteed, making people happier is not easy. If we want people to be happier, we really have to know what conditions generate happiness and how to cultivate them. That is what this book is about, the causes of happiness and the means we have to affect it.

If we really wanted to be happier, what would we do differently? We do not yet know all the answers, or even half of them. But we have a lot of evidence, enough to rethink government policy and to reappraise our personal choices and philosophy of life.

The main evidence comes from the new psychology of happiness, but neuroscience, sociology, economics and philosophy all play their part. By bringing them together, we can produce a new vision of how we can live better, both as social beings and in terms of our inner spirit.

What Philosophy?

The philosophy is that of the eighteenth-century Enlightenment, as articulated by Jeremy Bentham. If you pass below the fine classical portico of University College London, you will find him there near the entrance hall, an elderly man dressed in eighteenth-century clothes, sitting in a glass case. The clothes are his and so is the body, except for the head, which is a wax replica. He is there because he inspired the founding of the college, and as he requested, he still attends the meetings of the College Council, being carried in for the purpose. A shy and kindly man, he never married, and he gave his money to good causes. He was also one of the first intellectuals to go jogging, or “trotting” as he called it, which he did until near his death. But despite his quirks, Bentham was one of the greatest thinkers of the Enlightenment.

The best society, he said, is one where the citizens are happiest. So the best public policy is that which produces the greatest happiness. And when it comes to private behaviour, the right moral action is that which produces the most happiness for the people it affects. This is the Greatest Happiness principle. It is fundamentally egalitarian, because everybody’s happiness is to count equally. It is also fundamentally humane, because it says that what matters ultimately is what people feel. It is close in spirit to the opening passages of the American Declaration of Independence.

This noble ideal has driven much of the social progress that has occurred in the last two hundred years. But it was never easy to apply, because so little was known about the nature and causes of happiness. This left it vulnerable to philosophies that questioned the ideal itself.

In the nineteenth century these alternative philosophies were often linked to religious conceptions of morality. But in the twentieth century religious belief diminished, and so eventually did belief in the secular religion of socialism. In consequence there remained no widely accepted system of ethical belief. Into the void stepped the non-philosophy of rampant individualism.

At its best this individualism offered an ideal of “self-realisation.” But that gospel failed. It did not increase happiness, because it made each individual too anxious about what he could get for himself. If we really want to be happy, we need some concept of a common good, towards which we all contribute.

So now the tide is turning. People are calling out for a concept of the common good, and that is exactly what the Enlightenment ideal provides. It defines the common good as the greatest happiness of all, requiring us to care for others as well as for ourselves. And it advocates a kind of fellow-feeling for others that in itself increases our happiness and reduces our isolation.

What Psychology?

At the same time, the new psychology now gives us real insight into the nature of happiness and what brings it about. So the Enlightenment philosophy can now at last be applied using evidence instead of speculation.

Happiness is feeling good, and misery is feeling bad. At every moment we feel somewhere between wonderful and half-dead, and that feeling can now be measured by asking people or by monitoring their brains. Once that is done, we can go on to explain a person’s underlying level of happiness, the quality of his life as he experiences it. Every life is complicated, but it is vital to separate out the factors that really count.

Some factors come from outside us, from our society: some societies really are happier. Other factors work from inside us, from our inner life. In part 1 of the book I sort out how these key factors affect us. Then, in part 2, I focus on what kind of society and what personal practices would help us lead happier lives. The last chapter summarises my conclusions.

What Social Message?

So how, as a society, can we influence whether people are happy? One approach is to proceed by theoretical reasoning, using elementary economics. This concludes that selfish behaviour is all right, provided markets are allowed to function: through the invisible hand, perfect markets will lead us to the greatest happiness that is possible, given our wants and our resources. Since people’s wants are taken as given, national income becomes a proxy for national happiness. Government’s role is to correct market imperfections and to remove all barriers to labour mobility and flexible employment. This view of national happiness is the one that dominates the thinking and pronouncements of leaders of Western governments.

The alternative is to look at what actually makes people happy. People certainly hate absolute poverty, and they hated Communism. But there is more to life than prosperity and freedom.

In this book we shall look at other key facts about human nature, and how we should respond to them:

Our wants are not given, in the way that elementary economics assumes. In fact they depend heavily on what other people have, and on what we ourselves have got accustomed to. They are also affected by education, advertising and television. We are heavily driven by the desire to keep up with other people. This leads to a status race, which is self-defeating since if I do better, someone else must do worse. What can we do about this?

People desperately want security, at work, in the family and in their neighbourhoods. They hate unemployment, family break-up and crime in the streets. But the individual cannot, entirely on his own, determine whether he loses his job, his spouse or his wallet. It depends in part on external forces beyond his control. So how can the community promote a way of life that is more secure?

People want to trust other people. But in the United States and in Britain (though not in continental Europe), levels of trust have plummeted in recent decades. How is it possible to maintain trust when society is increasingly mobile and anonymous?

In the seventeenth century the individualist philosopher Thomas Hobbes proposed that we should think about human problems by considering men “as if but even now sprung out of the earth, and suddenly (like mushrooms) come to full maturity, without any kind of engagement with each other.”

But people are not like mushrooms. We are inherently social, and our happiness depends above all on the quality of our relationships with other people. We have to develop public policies that take this “relationship factor” into account.

What Personal Message?

There is also an inner, personal factor. Happiness depends not only on our external situation and relationships; it depends on our attitudes as well. From his experiences in Auschwitz, Viktor Frankl concluded that in the last resort “everything can be taken from a man but one thing, the last of human freedoms, to choose one’s attitude in any given set of circumstances.”

Our thoughts do affect our feelings. As we shall see, people are happier if they are compassionate; and they are happier if they are thankful for what they have. When life gets rough, these qualities become ever more important.

Throughout the centuries parents, teachers and priests have striven to instil these traits of compassion and acceptance. Today we know more than ever about how to develop them. Modern cognitive therapy was developed in the last thirty years as a forward-looking substitute for backward-looking psychoanalysis. Through systematic experimentation, it has found ways to promote positive thinking and to systematically dispel the negative thoughts that afflict us all. In recent years these insights have been generalised by “positive psychology,” to offer a means by which all of us, depressed or otherwise, can find meaning and increase our enjoyment of life. What are these insights?

Many of the ideas are as old as Buddhism and have recurred throughout the ages in all the religious traditions that focus on the inner life. In every case techniques are offered for liberating the positive force in each of us, which religious people call divine. These techniques could well become the psychological basis of twenty-first-century culture.

Even so, our nature is recalcitrant, and for some people it seems impossible to be positive without some physical help. Until fifty years ago there was no effective treatment for mental illness. But in the 1950s drugs were found that, despite side effects, could provide relief to many who suffer from schizophrenia, depression or anxiety. This, followed by the development of cognitive and behavioural therapy, has given new life to millions of people who would otherwise have been half-dead. But how much further can this process go in the relief of misery?

Human beings have largely conquered nature, but they have still to conquer themselves. In the last fifty years we have eliminated absolute material scarcity in the West. With good policies and Western help, the same could happen throughout the world within a hundred years. But in the meantime we in the West are no happier. Changing this is the new challenge and the new frontier, and much more difficult than traditional wealth-creation. Fortunately, enough tools are already available to fill this small book.

What is happiness?

“If not actually disgruntled, he was far from being gruntled.” – P. G. Wodehouse

In the late nineteenth century doctors noticed something strange about people with brain injuries. If the damage was on the left side of the brain, they were more likely to become depressed than if it was on the right. As time passed, the evidence built up, and it was even found that damage on the right side of the brain could sometimes produce elation. From these dim beginnings, a new science has emerged that measures what happens in the brain when people experience positive and negative feelings.

The broad picture is this. Good feelings are experienced through activity in the brain’s left-hand side behind the forehead; people feel depressed if that part of their brain goes dead. Bad feelings are connected with brain activity behind the right-hand side of the forehead; when that part of the brain is out of action, people can feel elated.

Such scientific breakthroughs have transformed the way we think about happiness. Until recently, if people said they were happy, sceptics would hold that this was just a subjective statement. There was no good way to show that it had any objective content at all. But now we know that what people say about how they feel corresponds closely to the actual levels of activity in different parts of the brain, which can be measured in standard scientific ways.

The Feeling of Happiness

So what is the feeling of happiness? Is there a state of “feeling good” or “feeling bad” that is a dimension of all our waking life? Can people say at any moment how they feel? Indeed, is your happiness something, a bit like your temperature, that is always there, fluctuating away whether you think about it or not? If so, can I compare my happiness with yours?

The answer to all these questions is essentially yes. This may surprise those of a sceptical disposition. But it would not surprise most people, past or present. They have always been aware of how they felt and have used their introspection to infer how others feel. Since they themselves smile when they are happy, they infer that when others smile, they are happy too. Likewise when they see others frown, or see them weep. It is through their feelings of imaginative sympathy that people have been able to respond to one another’s joys and sorrows throughout history.

So by happiness I mean feeling good, enjoying life, and wanting the feeling to be maintained. By unhappiness I mean feeling bad and wishing things were different.

There are countless sources of happiness, and countless sources of pain and misery. But all our experience has in it a dimension that corresponds to how good or bad we feel. In fact most people find it easy to say how good they are feeling, and in social surveys such questions get very high response rates, much higher than the average survey question. The scarcity of “Don’t knows” shows that people do know how they feel, and recognise the validity of the question.

When it comes to how we feel, most of us take a long view. We accept the ups and downs and care mainly about our average happiness over an extended period of time. But that average is made up from a whole series of moments. At each moment of waking life we feel more or less happy, just as we experience more or less noise. There are many different sources of noise, from a trombone to a pneumatic drill, but we can feel how loud each noise is. In the same way there are many different sources of enjoyment, but we can compare the intensity of each. There are also many types of suffering, from toothache to a stomach ulcer to depression, but we can compare the pain of each. Moreover, as we shall see, happiness begins where unhappiness ends.

So how can we find out how happy or unhappy people are, both in general and from moment to moment? Both psychology and brain science are beginning to give us the tools to arrive at precise answers.

Asking People

The most obvious way to find out whether people are happy in general is to survey individuals in a random sample of households and to ask them. A typical question is, “Taking all things together, would you say you are very happy, quite happy, or not very happy?” People in the United States and in Britain reply very similarly, and, interestingly, men and women reply very much the same.

But is everyone who answers the question using the words in the same way? Fortunately, their replies can be independently verified. In many cases friends or colleagues of the individual have been asked separately to rate the person’s happiness. These independent ratings turn out to correlate well with the way the people rated themselves. The same is true of ratings made by an interviewer who has never met the person before.

Feelings Fluctuate

Of course our feelings fluctuate from hour to hour, and from day to day. Psychologists have recently begun to study how people’s mood varies from activity to activity. I will give only one example, from a study of around nine hundred working women in Texas. They were asked to divide the previous working day into episodes, like a film: typically they identified about fourteen episodes. They then reported what they were doing in each episode and who they were doing it with. Finally, they were asked how they felt in each episode, along twelve dimensions that can be combined into a single index of good or bad feeling.
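To make the episode method concrete, here is a minimal sketch, in Python, of how per-episode ratings might be combined into a single duration-weighted index for a day. The episode data, rating scale and weighting scheme are hypothetical illustrations, not the actual instrument used in the Texas study.

```python
# A minimal sketch of the episode-scoring idea described above.
# All data, dimension choices and weights here are hypothetical.

def net_affect(pos: list[float], neg: list[float]) -> float:
    """One good-or-bad index per episode: mean positive minus mean negative rating."""
    return sum(pos) / len(pos) - sum(neg) / len(neg)

def day_average(episodes: list[dict]) -> float:
    """Duration-weighted average of net affect across a day's episodes."""
    total = sum(e["minutes"] for e in episodes)
    return sum(net_affect(e["pos"], e["neg"]) * e["minutes"] for e in episodes) / total

# Three invented episodes of a working day (ratings on a 0-6 scale):
day = [
    {"minutes": 60,  "pos": [3.0, 2.5], "neg": [2.0, 3.5]},  # commuting
    {"minutes": 480, "pos": [4.0, 3.5], "neg": [1.0, 1.5]},  # working
    {"minutes": 120, "pos": [5.5, 5.0], "neg": [0.5, 0.5]},  # evening with friends
]
print(round(day_average(day), 2))  # prints 2.68: net feeling for the whole day
```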

Of all their activities, what they liked most was sex, and what they liked least was commuting.

As for company, they are highly gregarious, preferring almost any company to being alone. Only the boss’s company is worse than being alone.

We can also use these reports to measure how feelings change as the day goes on: these people feel better as time passes, with an extra blip up at lunchtime.

I have presented these findings to stress the point that happiness is a feeling, and that feelings occur continuously over time throughout our waking life. Feelings at any particular moment are of course influenced by memories of past experiences and anticipations of future ones. Memories and anticipations are very important parts of our mental life, but they pose no conceptual problems in measuring our happiness, be it instantaneous or averaged over a longer period of time.

It is the long-term average happiness of each individual that this book is about, rather than the fluctuations from moment to moment. Though our average happiness may be influenced by the pattern of our activities, it is mainly affected by our basic temperament and attitudes and by key features of our life situation: our relationships, our health, our worries about money.

Brainwaves

Sceptics may still question whether happiness is really an objective feeling that can be properly compared between people. To reassure doubters, we can turn to modern brain physiology with its sensational new insights into what is happening when a person feels happy or unhappy. This work is currently being led by Richard Davidson of the University of Wisconsin.

In most of his studies Davidson measures activity in different parts of the brain by putting electrodes all over the scalp and reading the electrical activity. These EEG measurements are then related to the feelings people report. When people experience positive feelings, there is more electrical activity in the left front of the brain; when they experience negative feelings, there is more activity in the right front of the brain. For example, when someone is shown funny film clips, his left side becomes more active and his right side less so; he also smiles and gives positive reports on his mood. When frightening or distasteful film clips are shown, the opposite happens.

Similar findings come from direct scans of what is going on inside the brain. For instance, people can be put inside an MRI or PET scanner and then shown nice or unpleasant pictures. The chart gives an example.

People are shown pictures, first of a happy baby and then of a baby that is deformed. The PET scanner picks up the corresponding changes in glucose usage in the brain and records them as light patches in the photographs. The nice picture activates the left side of the brain, and the horrendous picture activates the right side.

So there is a direct connection between brain activity and mood. Both can be altered by an external experience like looking at pictures. Both can also be altered directly by physical means. By using very powerful magnets it is possible to stimulate activity in the left side of the forebrain, and this automatically produces a better mood. Indeed, this method has even been used to alleviate depression. Even more remarkable, it has been found to improve the immune system, which is heavily influenced by a person’s mood.

So we have clear physical measures of how feelings vary over time. We can also use physical measures to compare the happiness of different people. People differ in the pattern of their EEGs, even when they are at rest. People whose left side is especially active (“left-siders”) report more positive feelings and memories than “right-siders” do. Left-siders smile more, and their friends assess them as happier. By contrast, people who are especially active on the right side report more negative thoughts and memories, smile less and are assessed as less happy by their friends.

So a natural measure of happiness is the difference in activity between the left and right sides of the forebrain. This varies closely with many measures of self-reported mood. And one further finding is interesting. When different people are exposed to good experiences (like pleasant film clips), those who are naturally happy when at rest experience the greatest gain in happiness. And when they are exposed to nasty experiences, they experience the least increase in discomfort.
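As a toy illustration of this left-minus-right measure, here is a short Python sketch. Treating the asymmetry score as a log-ratio of per-hemisphere activity is an assumption made for illustration; real EEG studies involve band-power estimation and far more careful preprocessing.

```python
import math

def asymmetry_index(left_activity: float, right_activity: float) -> float:
    """Hypothetical happiness index: log-ratio of left to right frontal activity.
    Positive values mean relatively more left-sided activity (the 'left-sider' pattern)."""
    return math.log(left_activity) - math.log(right_activity)

# Invented resting activity estimates for two people:
print(asymmetry_index(1.4, 1.0))  # > 0: a "left-sider", predicted to report being happier
print(asymmetry_index(0.9, 1.3))  # < 0: a "right-sider", predicted to report being less happy
```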

The EEG approach works even on newly born babies. When they are given something nice to suck, their left forebrain starts humming, while a sour taste sets off activity in the right brain. At ten months old, a baby’s brain activity at rest predicts how well it will respond if its mother disappears for a minute. Babies who are more active on the right side tend to howl, while the left-siders remain upbeat. At two and a half years old, left-sided youngsters are much more exploratory, while right-siders cling more to their mothers. However, up to their teens children change a good deal, in both character traits and brainwaves. Among adults the differences are more stable.

The frontal lobes are not the only part of the brain involved in emotion. For example, one seat of raw emotions is the amygdala, which is deeper in the brain. It triggers the command centre that mobilises the body to respond to a frightening stimulus, the fight-or-flight syndrome. But the amygdala in humans is not that different from the amygdala of the lowest mammals, and works unconsciously. Our conscious experience, however, is specially linked to the frontal lobes, which are highly developed in man.

So brain science confirms the objective character of happiness. It also confirms the objective character of pain. Here is a fascinating experiment, performed on a number of people. A very hot pad is applied to each person’s leg, the same temperature for all of them. The people then report the pain. They give widely varying reports, but these different reports are highly correlated with the different levels of brain activity in the relevant part of the cortex. This confirms the link between what people report and objective brain activity. There is no difference between what people think they feel and what they “really” feel, whatever some social philosophers would have us believe.

A Single Dimension

But isn’t this all a bit simplistic? Surely there are many types of happiness, and of pain? And in what sense is happiness the opposite of pain?

There are indeed many types of good and bad feeling. On the positive side there is loving and being loved, achievement, discovery, comfort, tranquillity, joy and many others. On the negative side there is fear, anger, sadness, guilt, boredom and many others again. But, as I have said, this is no different from the situation with pains and pleasures that are purely “physical”: one pain can be compared with another, and one pleasure can be compared with another. Similarly, mental pain and physical pain can be compared, and so can mental and physical enjoyment.

But is happiness really a single dimension of experience running from extreme misery to extreme joy? Or is it possible to be both happy and unhappy at the same time? The broad answer to this is no; it is not possible to be happy and unhappy at the same time. Positive feelings damp down negative feelings and vice versa. So we have just one dimension, running from the extreme negative to the extreme positive.

Lest this seem very mechanical, we should immediately note that happiness can be excited or tranquil, and misery can be agitated or leaden. These are important distinctions, which correspond to different levels of “arousal.” The range of possibilities, from aroused joy to quiet contentment, dispels any impression that happiness can only be exciting or hedonistic.

One of the most enjoyable forms of aroused experience is when you are so engrossed in something that you lose yourself in it. These experiences of “flow” can be wonderful, both at the time and in retrospect.

Qualities of Happiness

The concept of happiness I have described is essentially the one developed by the eighteenth-century Enlightenment. It relates to how we feel as we live our lives. It famously inspired the authors of the American Declaration of Independence, and it has become central to our Western heritage.

It differs, for example, from the approach taken by Aristotle and his many followers. Aristotle believed that the object of life was eudaimonia, or a type of happiness associated with virtuous conduct and philosophic reflection. This idea of types of happiness, of higher and lower pleasures, was revived in the nineteenth century by John Stuart Mill and it survives to this day. Mill believed that the happiness of different experiences could vary both in quantity and quality. (He could not accept that a given amount of satisfaction derived from the game of “pushpin” was as valuable as the same amount of satisfaction derived from poetry.)

Mill’s intuition was right but his formulation was wrong. People who achieve a sense of meaning in their lives are happier than those who live from one pleasure to another. Carol Ryff of the University of Wisconsin has provided ample evidence of this. She has compiled refined measures of such things as purpose in life, autonomy, positive relationships, personal growth and self-acceptance and used them to construct an index of psychological well-being. In a sample of US adults this index is very highly correlated with standard self-reported measures of happiness and life satisfaction.

Thus Mill was right in his intuition about the true sources of lasting happiness, but he was wrong to argue that some types of happiness are intrinsically better than others. In fact to do so is essentially paternalistic. It is of course obvious that some enjoyments, like those provided by cocaine, cannot in their nature last long: they work against a person’s long-term happiness, which means that we should avoid them. Similarly, some unhealthy enjoyments, like those of a sadist, should be avoided because they decrease the happiness of others. But no good feeling is bad in itself; it can only be bad because of its consequences.

Happiness Improves Your Health

In September 1932 the mother superior of the American School Sisters of Notre Dame decided that all new nuns should be asked to write an autobiographical sketch. These sketches were kept, and they have recently been independently rated by psychologists to show the amount of positive feeling which they revealed. These ratings have then been compared with how long each nun lived. Remarkably, the amount of positive feeling that a nun revealed in her twenties was an excellent predictor of how long she would live.

Of the nuns who were still alive in 1991, only 21% of the most cheerful quarter died in the following nine years, compared with 55% of the least cheerful quarter. This shows how happiness can increase a person’s length of life.

In fact most sustained forms of good feeling are good for you. However we measure happiness, it appears to be conducive to physical health (other things being equal). Happy people tend to have more robust immune systems and lower levels of stress-causing cortisol. If artificially exposed to the flu virus, they are less likely to contract the disease. They are also more likely to recover from major surgery.

Equally, when a person has a happy experience, the body chemistry improves, and blood pressure and heart rate tend to fall. Especially good experiences can have long-lasting effects on our health. If we take the 750 actors and actresses who were ever nominated for Oscars, we can assume that before the award panel’s decision the winners and losers were equally healthy on average. Yet those who got the Oscars went on to live four years longer, on average, than the losers. Such was the gain in morale from winning.

The Function of Happiness

I hope I have now persuaded you that happiness exists and is generally good for your physical health. But that does not make it supremely important. It is supremely important because it is our overall motivational device. We seek to feel good and to avoid pain (not moment by moment but overall).

Without this drive we humans would have perished long ago. For what makes us feel good (sex, food, love, friendship and so on) is also generally good for our survival. And what causes us pain is bad for our survival (fire, dehydration, poison, ostracism).

So by seeking to feel good and to avoid pain, we seek what is good for us and avoid what is bad for us, and thus we have survived as a species. The search for good feeling is the mechanism that has preserved and multiplied the human race.

Some people question whether we have any overall system of motivation. They say we have separate drives for sex, feeding and so on, and that we respond to these drives independently of their effect on our general sense of well-being. The evidence is otherwise. For we often have to choose between satisfying different drives, and our choices vary according to how easy it is to satisfy one drive compared with another. So there must be some overall evaluation going on that compares how different drives contribute to our overall satisfaction.

When one source of satisfaction becomes more costly relative to another, we choose less of it. This is the so-called law of demand, which has been confirmed throughout human life and among many species of animals. It is not uniquely human and probably applies to most living things, all of which have a tendency to pursue their own good as best they can. In lower animals the process is unconscious, and even in humans it is mostly so, since consciousness could not possibly handle the whole of this huge task. However, we do have massive frontal lobes that other mammals lack, and that is probably where the conscious part of the balancing operation is performed.

Experiments show that at every moment we are evaluating our situation, often unconsciously. We are attracted to those elements of our situation that we like and repelled by the elements we dislike. It is this pattern of “approach” and “avoidance” that is central to our behaviour.

Here are two ingenious experiments by the psychologist John Bargh that illustrate the workings of this approach-avoidance mechanism. His technique is to flash good or bad words on a screen and observe how people respond. In the first experiment he flashed the words subliminally and recorded the impact on the person’s mood. The good words (like “music”) improved mood, and the bad ones (like “worm”) worsened mood. He next examined approach and avoidance behaviour by making the words on the screen legible, and asking the person to remove them with a lever. The human instinct is to pull towards you that which you like, and to push away that which you wish to avoid. So Bargh split his subjects into two groups. Group A was told to behave in the natural way: to pull the lever for the good words, and to push it for the bad ones. Group B was told to behave “unnaturally”: to pull for the bad words and to push for the good. Group A did the job much more quickly, confirming how basic are our mechanisms of approach and avoidance.

So there is an evaluative faculty in each of us that tells us how happy we are with our situation, and then directs us to approach what makes us happy and avoid what does not. From the various possibilities open to us, we choose whichever combination of activities will make us feel best. In doing this we are more than purely reactive: we plan for the future, which sometimes involves denying ourselves today for the sake of future gratification.

This overall psychological model is similar to what economists have used from Adam Smith onwards. We want to be happy, and we act to promote our present and future happiness, given the opportunities open to us.

Of course we can make mistakes. Some things that people do are bad for survival, like cigarette smoking and the self-starvation of anorexia nervosa. Also, people are often short-sighted and bad at forecasting their future feelings. Natural selection has not produced perfect bodies, and neither has it produced perfect psyches. Yet we are clearly selected to be healthy, though we sometimes get sick. Similarly, we are selected to feel good, even if we sometimes make mistakes: it is impossible to explain human action and human survival except by the desire to achieve good feelings.

This raises the obvious issue of why, in that case, we are not happier than we are. Why is there so much anxiety and depression? Have anxiety and depression played any role in explaining our survival? Almost certainly, yes. Even today, it is a good idea to be anxious while driving a car, or while writing a book. A heavy dose of self-criticism will save you from some nasty mistakes. And it is often best to be sceptical about much of what you hear from other people, until it is independently confirmed.

It was even more important to be on guard when man first evolved on the African savannah. When you are in danger of being eaten by a lion, it is a good idea to be extremely cautious. (Better to have a smoke detector that goes off when you burn the toast than one that stays silent while the house burns down.) Even depression may have had some function. When confronted with an unbeatable opponent, dogs show signs of depression that turn off the opponent’s will to attack. The same may have been true of humans.

. . .

from

Happiness. Lessons from a New Science

by Richard Layard

get it at Amazon.com

SANCTIONS are increasingly popular, but do they actually work? – Madeline Grant * BLOCKING PROGRESS. The damaging side effects of economic sanctions – Dr. Nima Sanandaji.

“If goods don’t cross borders, soldiers will.”

Have we really given sufficient thought to whether such measures actually work?

Reliance on sanctions is a mistake. Sanctions generally do not achieve their underlying objectives. Not only do sanctions undermine the well-being of those living in targeted countries, they also create substantial costs for the world economy. In addition, sanctions reduce economic and civil liberties, and by disrupting global value chains undermine peaceful relations, leaving everyone worse off.

If the Iraqis had been able to trade with the world, it is doubtful if groups such as ISIS would have found a breeding ground in the country. The US, which has been the main diplomatic force pushing for sanctions, only bears a small share of the cost, just 0.6 per cent of the Western trade loss.

Shutting out countries from the global marketplace is not conducive to free markets or free societies. Linking the world together in advanced global value chains is the best strategy for future peace and prosperity.

Around the world, growing numbers of governments are using economic sanctions as a tool to influence the behaviour of other countries. Their tactics are nothing new. Sanctions and embargoes have a long and chequered past, dating back to antiquity, when the Athenian statesman Pericles issued the so-called “Megarian decree” in response to the abduction of three local women in 432 BC. Yet, as Gary Hufbauer and Jeffrey Schott note in their study of the topic, rather than preventing conflict, Pericles’s sanctions in Ancient Greece brought a number of unintended consequences, ultimately helping to prolong and intensify the Peloponnesian War.

This might be the first instance of sanctions being tried, and failing, but we have many more recent cases to choose from. Veterans of GCSE history may remember the League of Nations and the failure of its sanctions to protect Abyssinia from Fascist Italy. Draconian regimes still rule countries like Iran, largely under American embargo since 1979 – not to mention Cuba, whose sanctions date back to 1962.

Fast forward to 2018, and the global appetite for sanctions looks as strong as ever, with President Trump edging ever closer to a full-scale trade war with China. Rarely a week seems to go by without news of fresh sanctions against Russia from the Western world. Citizens, horrified by extra-judicial killings and cyber warfare, might well favour such penalties. In times of public outrage, it may feel and look good for policy-makers to be “doing something”. But have we given sufficient thought to whether such measures actually work?

Trade sanctions do occasionally achieve their strategic or foreign policy goals. Yet far more often, they are ineffective blunt instruments.

Policy-makers should aim to promote free trade on a global level, to secure peace and prosperity.

Those that fail to learn from history are doomed to repeat it, in Churchill’s famous words. Unfortunately, the long and largely fruitless history of sanctions suggests we’ve learnt very little.

CapX

BLOCKING PROGRESS. The damaging side effects of economic sanctions

Dr. Nima Sanandaji.

Dr. Nima Sanandaji is a Kurdish-Swedish author of 25 books and the president of the European Centre for Entrepreneurship and Policy Reform.

Executive summary

During the twentieth century, economic sanctions became more prevalent. In the twenty-first century they have become a frequently used tool for governments seeking to change the behaviour of other countries.

An extensive research literature exists on the effectiveness of sanctions. Overall the research shows that sanctions very rarely achieve foreign policy goals. At the same time, sanctions create negative externalities.

Sanctions limit the economic well-being of people in targeted countries, in some cases leading to malnourishment or even starvation. They also undermine economic and civil liberties, instead encouraging centralised state control.

While sanctions are often aimed at destabilising governments, people in sanctioned countries often turn to their government when the country is isolated from the global marketplace. The sanctions on Russia in early 2014 coincided with Vladimir Putin’s popularity rising from an all-time low to an all-time high point.

The sanctions against Russia have led to a trade loss estimated at US$114 billion, with US$44 billion borne by the sanctioning Western countries. In percentage terms, Germany bears almost 40 per cent of the Western trade loss, compared with just 0.6 per cent incurred by the United States.

Two wealthy countries that are neutral in the sanctions against Russia, Israel and Switzerland, have experienced a trade loss of 25 per cent between 2014 and 2016. This is nearly as high as the 30 per cent trade loss of the four largest sanctioning economies. Since sanctions undermine global value chains, neutral third-party countries are also hurt.

Fostering global value chains is a better strategy for promoting security, since economic interdependency makes peace a more attractive alternative than conflict. Market exchange is typically a better option than sanctions if the objective is a free, peaceful and prosperous world.

Introduction

Economic sanctions have become an increasingly popular tool in foreign affairs since the end of the Cold War. The concept of economic sanctions is not new. In fact, 2,400 years ago Athens declared a trade embargo on the neighbouring city state of Megara, strangling the city’s trade. Powers with naval dominance, such as the British Empire, used trade blockades during times of war. However, while sanctions were a known policy tool, they were seldom systematically used until modern times. During the twentieth century sanctions became more prevalent, and in the twenty-first century their position as a popular foreign policy tool has solidified.

This paper argues that this reliance on sanctions is a mistake. Sanctions generally do not achieve the underlying objectives, while they create substantial costs for the world economy. In addition, sanctions reduce economic and civil liberties, and by disrupting global value chains undermine peaceful relations.

Economic sanctions usually aim either to signal dissatisfaction with particular policies, to constrain the sanctioned nation or its leaders from further action, or to coerce a government into reversing its actions. Sanctions can severely undermine prosperity in countries when the ‘international community’ joins together in isolating them. In 1966, the United Nations for the first time introduced comprehensive sanctions against Rhodesia. Eleven years later similar measures were enforced against South Africa. These policies were directed at undermining white supremacy rule, an aim which seems to have been accomplished. These sanctions policies were successful due to the context in which they were introduced. Rhodesia and South Africa were countries governed by apartheid rule, and the large majority of the population were discriminated against due to the colour of their skin. Many whites also strongly objected to apartheid. A similarity can be drawn to Ronald Reagan’s escalation of the Cold War, which arguably accelerated the fall of the Soviet Union. In both cases, pressure was put on systems already on the brink of collapse.

During the Cold War period, sanctions were still relatively uncommon. If the West isolated a nation economically, it ran the risk of turning that nation over to the Soviet bloc. Rhodesia and South Africa were obviously the exception, since they were rejected by both blocs due to their racist policies. When the Cold War ended, Western powers gained both military and economic dominance and hence could apply sanctions policies more frequently without as many geopolitical risks. However, contrary to the early experience with apartheid states, sanctions overall proved to be less than effective.

Sanctions rarely achieve their goals

Extensive research has been carried out on the outcome and impact of economic sanctions, with different claims over their results. The Oxford Reference overview article on economic sanctions states that ‘There is considerable disagreement over their effectiveness. Critics point out that they are easily evaded and often inflict more pain on those they are designed to help than on the governments they are meant to influence’. The first major wave of research on the effects of economic sanctions was published during the 1960s and 1970s. The consensus of these papers, as summarised by Baldwin (1985: 373), was that sanctions were not as effective as military force.

The debate is not one-sided, as for some time there was academic enthusiasm about sanctions. According to Rogers (1996: 72), ‘Economic sanctions are more effective than most analysts suggest. Their efficacy is underrated in part because unlike other foreign policy instruments sanctions have no natural advocate or constituency’. An influential study by Hufbauer, Schott and Elliott (1990) was for some time seen as proof that sanctions were an effective tool to achieve policy change in foreign countries. The researchers examined 115 identified cases of sanctions between 1914 and 1990, and concluded that sanctions achieved their foreign policy goals in 40 of them.

In a widely cited study, Pape (1997) examined these 40 cases and concluded that only five of them involved a success for sanctions policy. Thus about four per cent of the 115 cases examined, rather than 35 per cent, were a success for sanctions policy. Of the remainder, eighteen were determined by force (military defeats, governments being overthrown, etc.) rather than sanctions, eight were failures in which the target state did not concede to the coercer’s demands, six were trade disputes, and three remained undetermined.

For example, the sanctions against Germany during World War I and against Germany and Japan during World War II had been counted as having achieved their goals in the Hufbauer et al. study. However, Pape argues that both cases were won by military force. During World War I, for example, the food shortage linked to the British blockade led to the starvation of around 500,000 Germans. But the country continued to fight until militarily defeated. Another example is Rafael Trujillo, the president of the Dominican Republic, who was a protégé of the United States. His regime was seen as an embarrassment due to its repressive actions, and the US acted to remove him from power. As part of this policy, tariffs were imposed on Dominican sugar, while oil, trucks and military spare parts were embargoed. Pape challenges the conclusion of Hufbauer et al. that this was a successful case for sanctions, since the issue was resolved when the president was assassinated and his family driven out of the country. Pape concedes that sanctions in themselves have occasionally achieved foreign policy goals, such as when India imposed sanctions on Nepal in 1989 and when the US imposed sanctions against Poland in 1981. However, these are rare cases.

Although rare, the successes of sanctions policies are worth exploring. In 1989, India imposed a trade blockade on Nepal over a dispute about transit treaties and uneasiness over Nepal’s increased closeness with China. Since Nepal is a landlocked nation, it imports all of its petroleum supplies from India. The urgent fuel crisis brought on by the sanctions forced Nepal to introduce the policy changes desired by India. In 2015 Nepal accused India of having imposed a new undeclared blockade, which cut off fuel supplies and thus caused an economic and humanitarian crisis. The blockade forced Nepal to introduce constitutional amendments relating to the minority community of Indian origin in the country. Thus, it seems that India has achieved its aims through sanctions more than once. This is not surprising since the conditions and aims of the sanctions were similar in both cases.

Another case is the sanctions that the US and other Western countries imposed on Poland in 1981, in order to push for political change. Specifically, the sanctions were imposed after the martial-law crackdown of the Polish state on the Solidarity trade union. The sanctions had a major effect on Poland’s economy and seem to have influenced politics. The Solidarity movement was ultimately successful in helping to transform Poland from Marxism to democracy and a market economy.

There are also some new studies in favour of sanctions, though the consensus is still against them. Marinov (2005: 564) concludes that: ‘There is much pessimism on whether [sanctions] ever work. This article shows that economic pressure works in at least one respect: it destabilizes the leaders it targets’. In an empirical analysis, Dashti-Gibson, Davis and Radcliff (1997) reach a similar conclusion. According to this study, sanctions are able to destabilise countries, and financial sanctions in particular may achieve other goals. However, even with this form of more successful sanctions policy, the authors find a modest downward trend over time in the relative effectiveness. Drezner (2003) notes that most scholars consider sanctions an ineffective tool of statecraft. By taking into account unrealised threats of sanctions, Drezner shows that the bulk of successful economic coercion episodes are those in which the threat of sanctions leads to a policy change.

Sanctions limit economic and social liberty, instead encouraging state control

On the other hand, one must also consider that sanctions not only limit the economic well-being of people in the targeted country (in some cases leading to malnourishment or even starvation), but may also reduce economic and civil liberties. By doing so, they undermine the free exchange which breeds global prosperity and peaceful relations.

Peksen and Drury (2010) used a time-series cross-national dataset of sanctions over the period 1972 to 2000 to study the effectiveness of sanctions in reaching their goals. The authors concluded that ‘both the immediate and longer-term effects of economic sanctions significantly reduce the level of democratic freedoms in the target’ (ibid: 240). This occurs through reduced political rights as well as reduced civil liberties in the sanctioned state.

One illustrative example is the sanctions policy imposed on North Korea. World powers have relied on economic and financial sanctions to isolate the North Korean regime and force it into denuclearisation discussions. However, as the Council on Foreign Relations explains, it is doubtful whether sanctions have reached their goals and whether they ever will (Albert 2018). In fact, these policies have pushed North Korea to stick to a centrally planned command economy. Fortunately, there have been some openings for North Korea to trade with China and to a limited degree also South Korea. Gradually the North Korean state has incorporated some elements of free markets into its economic model, a change which has brought about a quiet social revolution (Kranz 2017). North Korea is still an authoritarian and brutal state, but the shift towards a market economy is nonetheless positive; it has, for example, reduced starvation.

Recently, North and South Korea signed the Panmunjom Declaration for Peace, Prosperity and Unification of the Korean Peninsula. This historic document represents a move towards peace in one of the longest global conflicts; a conflict which could result in nuclear war. An important part of the deal between the two Korean states is about fostering trade links. A question worth asking is: what if North Korea had not been exposed to international sanctions? It is likely that the state would have pushed for market integration at an earlier stage and also to a greater extent. It is also likely that the leadership of the country would have been less rather than more hostile towards the rest of the world.

Sometimes sanctions achieve certain goals, for example undermining the finances of a regime, while also creating massive unintended effects. A famous example is the economic sanctions directed against Saddam Hussein’s Baathist regime in Iraq. A near-total trade and financial embargo was imposed by the UN Security Council four days after Iraq’s invasion of neighbouring Kuwait. There is a general consensus that the sanctions achieved their goal of limiting the military development of Iraq, but also that the sanctions created poverty and malnutrition among the civilian population. According to UNICEF, per capita income in Iraq dropped from $3,510 in 1989 to $450 in 1996 (Sen 2003). People’s living standards collapsed.

Free exchange fosters peace

Some 4,000 years ago, the first tamkarum entrepreneurs of the world emerged in Iraq and neighbouring Syria. During the early Middle Ages, the free-market renaissance of the Islamic Golden Age was focused on Baghdad. In part, this tradition of enterprise lived on even into modern times.

Before the UN sanctions were introduced, Iraq still had elements of a developed economy and a well-educated middle class. The country could have built upon this, and its entrepreneurial culture, to become more prosperous. Instead, due to global isolation the country’s economy collapsed. Educated people left Iraq as job opportunities became scarce. So, the sanctions did not topple Saddam Hussein, but did significantly limit the ability of people to benefit from market forces.

Iran also has a millennia-long history of enterprise. The first known account of specialisation in a marketplace was given by Xenophon two thousand years before Adam Smith, based on the marketplaces of ancient Persia. In the sixteenth century, a Portuguese account describes the impressive range of sophisticated agricultural and industrial goods for sale at the port of Hormuz, described as a free marketplace. Iran, Iraq and Syria all have deep traditions of enterprise and global exchange that could be tapped, but for this to happen trade routes must be open.

The importance of market commerce for long-term stability is often neglected. Yet, trade and commerce are often the alternative to conflict. Sanctions can break the link of the targeted nation to the global marketplace. Goods that used to be imported are suddenly in short supply, and those who work in exporting firms might lose their jobs. The government therefore intervenes to ensure that the immediate crisis is addressed. The country turns away from market freedom towards state intervention, and the people begin to view the rest of the world with suspicion. In the case of Iraq, the people ultimately turned not only to state reliance but also to tribal society and feuding militias. Sanctions thus induced future instability.

If the Iraqis had been able to trade with the world, it is doubtful if groups such as ISIS would have found a breeding ground in the country.

Putin’s popularity increased when Russia was sanctioned

One aim of sanctions is to destabilise governments, inspired by the regime changes in Rhodesia and South Africa. However, these were unusual cases, in which the vast majority of the populations suffered from white supremacy rule and naturally viewed the state with suspicion. In countries where the bond between the ruling classes and the population is stronger, sanctions can have the opposite effect by expanding the rulers’ grip over society.

A topical case is the sanctions introduced against Russia in early 2014, which have since expanded, at least from the US. These sanctions were implemented after Russia intervened in Ukraine. One concern raised in a report from the Centre for European Policy Studies is that the sanctions actually facilitate what they are designed to combat: they make Putin more popular, not less (Dolidze 2015). The mechanism through which this happens is that average Russians deem the sanctions imposed by the rest of the world to be unfair, siding with their own government’s position. The report states: ‘it seems that the “unfair” western sanctions have had the perverse effect of increasing Putin’s popularity: from the start of the Ukraine crisis in November 2013 to the present, his ratings have risen from an ever-low to an ever-high point’.

In the last Presidential elections, held in March this year, Putin won re-election for his second consecutive term in office with 77 per cent of the vote. Although these numbers are not reliable, and some opposition candidates were blocked, it still seems that Putin currently holds strong approval ratings. The support comes as no surprise. One should remember that people above all else are motivated by self-interest, for themselves and their families. If the US imposes sanctions which significantly increase the cost of putting food on the table for your family, you are not likely to hold a positive view of US policies.

The US recently began to target businesspersons as a way of broadening the scope of sanctions. Earlier this year, the US Treasury published a list of 96 businessmen of Russian origin. The unusual element was that this list was not focused on political or criminal activity; it was compiled according to wealth, based on the yearly wealth index published by Forbes. The list even includes businesspersons living in exile and in fear of persecution after falling out with the Russian state.

In theory, the sanctions against Russia are targeted at a few sectors and at the firms owned by the political elite of the country. The reality is, however, far from the intended design of the sanctions. The inherent complexity of a world economy made up of global value chains has resulted in significant unintended consequences, which not only hurt the Russian population, but also European economies, and even those Western economies which have not participated in the sanctions policies.

Trade losses from sanctions against Russia

Crozet and Hinz (2017) analyse the friendly-fire effect of the Russian sanctions and the counter-sanctions imposed by Russia. The authors study monthly trade data from 78 countries, as well as firm-level data, to estimate the actual impact of the sanctions. They find that the sanctions have led to a total trade loss of US$114 billion, with US$44 billion borne by sanctioning Western countries. Out of the loss borne by the sanctioning countries, 90 per cent is incurred by EU member states. Germany is particularly badly affected, while the US, which has been the main diplomatic force pushing for the sanctions, only bears a small share of the cost. In percentage terms, Germany bears almost 40 per cent of the Western trade loss, compared with just 0.6 per cent incurred by the US.
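For a rough sense of scale, the percentage shares quoted above can be turned into dollar figures. The back-of-the-envelope sketch below uses only the numbers given in the text; the rounding is mine.

```python
# Shares of the US$44 billion Western trade loss, as quoted in the text.
western_loss_busd = 44.0
shares = {"EU member states": 0.90, "Germany": 0.40, "United States": 0.006}

for party, share in shares.items():
    print(f"{party}: about US${western_loss_busd * share:.1f} billion")
# EU member states: about US$39.6 billion
# Germany: about US$17.6 billion
# United States: about US$0.3 billion
```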

In a recent study, Dennis Avorin and I look more closely at the friendly-fire effect of sanctions policy. We focus on the two Western economies that did not participate in the policy to impose sanctions on Russia (Sanandaji and Avorin 2018). One might imagine that the two countries, Switzerland and Israel, would have massively increased their trade with Russia since the sanctions hinder Russia from trading with other Western economies. The trade data between 2014 and 2016 suggest that the opposite is true. Exports to Russia fell by around 25 per cent in the two non-sanctioning economies. This is nearly as high as the 30 per cent drop in exports experienced on average by the four largest economies engaged in the sanctions (US, Japan, Germany and UK). Between February 2014 and December 2016, we estimate that Israel had a trade loss with Russia amounting to US$680 million, while the loss for Switzerland was US$2.38 billion.

Of course, correlation and causation are two different things. It is difficult to separate the effect of reduced trade brought on by sanctions from the effect brought on by the fall in the ruble (which in turn does reflect sanctions, but also other important economic drivers such as lower oil prices). Yet the observation that the loss in trade was almost of the same magnitude in sanctioning and non-sanctioning economies is still important, not least because one might have expected Russia to turn to trading with Switzerland and Israel as an alternative to the other Western countries. Third parties are obviously hurt by unintended consequences.

This provides an important lesson. When the global value chains that connect people and businesses together in the modern world economy are disrupted, massive unintended losses are created. Countries that in theory are neutral are also significantly affected. As a tool for foreign policy, sanctions may have their use. But their cost in practice is much higher than was originally intended.

As the twentieth-century economist Otto T. Mallery wrote: ‘If goods don’t cross borders, soldiers will’. This is, of course, even more relevant in the modern global economy in which global value chains create substantial interdependency between nations. Sanctions policies which exclude countries from trade with Western economies reduce peaceful interdependence through unintended consequences and thus undermine long-term global security.

A greater understanding of the history of capitalism as an institution might be useful in this regard. A commonly held view today is that the market economy is a recent invention of the Western world. In fact, for much of the last four millennia, the Middle East has, alongside China and India, been a free-market centre of the world, with advanced manufacturing, financial institutions and global trade. The periods characterised by market exchange have also been quite peaceful. Peaceful market exchange between the East and the West continued until the beginning of the eighteenth century, when the British Empire introduced sanctions against the industrial goods of Persia, India and China.

The motive was to foster Britain’s own industrial development. Instead of peaceful market exchange, a more aggressive form of colonial capitalism was to dominate. When later the same countries turned towards state planning, this was in large part motivated by the fact that the market economy had become associated with foreign colonialism. These embargoes, associated with the British industrial revolution, moved economic policies in the great eastern civilisations away from the market economy and thus had a significant effect on world politics. Shutting out countries from the global marketplace is not conducive to free markets or free societies.

Russia, likewise, is today associated in the West with state planning and the Soviet period. Yet, the country has a long history of peaceful trade. The Novgorod Republic, a predecessor to modern Russia, was a merchant republic. Until the communist revolution, Russia had deep trade relations with Europe and even the US. After the fall of communism, the country could have moved towards a market-friendly model. Relatively recently, there was interest in implementing market reforms inspired by Chicago School economists. The personal income tax rate in Russia is a flat 13 per cent, while the top corporate tax rate is 20 per cent. In these regards, at least, the country is quite market-oriented. However, corruption and bad governance hindered moves towards a market economy, and an oligarch-dominated economy developed instead. We cannot, however, disregard the effect of sanctions. When sanctions are imposed on a country, it is likely to turn away from economic freedom and towards central planning. In fact, even the threat of future sanctions will favour central planning. The simple reason is that an economy is in great trouble if it is reliant on foreign goods and sanctions are introduced. Better then to rely on state enterprises or enterprises run by oligarchs with close links to the state leadership.

There is still hope for countries such as Russia. The Index of Economic Freedom finds that Russia is a relatively free-market country when it comes to business freedom, trade freedom, tax burden and fiscal health. The weaknesses of the system are, amongst others, lack of protection for private property and low freedom for investors. The government of Russia would have stronger incentives to improve these weaknesses if the country were more integrated into global trade and investment networks.

A last point about sanctions is that they became popular when the Soviet bloc fell. The Western world gained economic dominance, and the US in particular started using this dominance to pursue foreign policy goals. Today, however, China, India and other countries are rising as prosperous world economies. If the West pushes countries away through sanctions, they will become more dependent on trade with China and India instead. The West ultimately isolates itself, not only the sanctioned economies.

The point is not that sanctions are always the wrong policy, but that they should be used with regard for their considerable friendly-fire effects. In addition, a key aim of foreign policy should be to include more and more countries in free global trade.

Linking the world together in advanced global value chains is the best strategy for future peace and prosperity.

DESTINY DISRUPTED. A History of the World Through Islamic Eyes – Tamim Ansary.

How, on the eve of 9/11, could anyone have failed to consider Islam a major player at the table of world history?

Has the “Islamic world” not been a considerable geographical fact throughout its many centuries? Does it not remain one to this day, straddling the Asian-African landmass and forming an enormous buffer between Europe and East Asia?

In the United States the presumption holds that world history leads to the birth of its founding ideals of liberty and equality and to its resultant rise as a superpower leading the planet into the future.

My aim is to convey what Muslims think happened, because that’s what has motivated Muslims over the ages and what makes their role in world history intelligible.

The Western narrative of world history largely omits a whole civilization. Destiny Disrupted tells the history of the world from the Islamic point of view, and restores the centrality of the Muslim perspective, ignored for a thousand years.

In Destiny Disrupted, Tamim Ansary tells the rich story of world history as it looks from a new perspective: with the evolution of the Muslim community at the center. His story moves from the lifetime of Mohammed through a succession of far-flung empires, to the tangle of modern conflicts that culminated in the events of 9/11. He introduces the key people, events, ideas, legends, religious disputes, and turning points of world history, imparting not only what happened but how it is understood from the Muslim perspective.

He clarifies why two great civilizations, Western and Muslim, grew up oblivious to each other, what happened when they intersected, and how the Islamic world was affected by its slow recognition that Europe, a place it long perceived as primitive, had somehow hijacked destiny.

With storytelling brio, humor, and evenhanded sympathy to all sides of the story, Ansary illuminates a fascinating parallel to the world narrative usually heard in the West. Destiny Disrupted offers a vital perspective on world conflicts many now find so puzzling.

The Islamic world today.

INTRODUCTION

Growing up as I did in Muslim Afghanistan, I was exposed early on to a narrative of world history quite different from the one that schoolchildren in Europe and the Americas routinely hear. At the time, however, it didn’t shape my thinking, because I read history for fun, and in Farsi there wasn’t much to read except boring textbooks. At my reading level, all the good stuff was in English.

My earliest favorite was the highly entertaining A Child’s History of the World by a man named V. M. Hillyer. It wasn’t till I reread that book as an adult, many years later, that I realized how shockingly Eurocentric it was, how riddled with casual racism. I failed to notice these features as a child because Hillyer told a good story.

When I was nine or ten, the historian Arnold Toynbee passed through our tiny town of Lashkargah on a journey, and someone told him of a history-loving little bookworm of an Afghan kid living there. Toynbee was interested and invited me to tea, so I sat with the florid, old British gentleman, giving shy, monosyllabic answers to his kindly questions. The only thing I noticed about the great historian was his curious habit of keeping his handkerchief in his sleeve.

When we parted, however, Toynbee gave me a gift: Hendrik Willem van Loon’s The Story of Mankind. The title alone thrilled me, the idea that all of “mankind” had a single story. Why, I was part of “mankind” myself, so this might be my story, in a sense, or at least might situate me in the one big story shared by all! I gulped that book down and loved it, and the Western narrative of world history became my framework ever after. All the history and historical fiction I read from then on just added flesh to those bones. I still studied the pedantic Farsi history texts assigned to us in school but read them only to pass tests and forgot them soon after.

Faint echoes of the other narrative must have lingered in me, however, because forty years later, in the fall of 2000, when I was working as a textbook editor in the United States, it welled back up. A school publisher in Texas had hired me to develop a new high school world-history textbook from scratch, and my first task was to draw up a table of contents, which entailed formulating an opinion about the overall shape of human history. The only given was the structure of the book. To fit the rhythm of the school year, the publisher ordained that it be divided into ten units, each consisting of three chapters.

But into what ten (or thirty) parts does all of time naturally divide? World history, after all, is not a chronological list of every damn thing that ever happened; it’s a chain of only the most consequential events, selected and arranged to reveal the arc of the story, and it’s the arc that counts.

I tied into this intellectual puzzle with gusto, but my decisions had to pass through a phalanx of advisors: curriculum specialists, history teachers, sales executives, state education officials, professional scholars, and other such worthies. This is quite normal in elementary and high school textbook publishing, and quite proper I think, because the function of these books is to convey, not challenge, society’s most up-to-date consensus of what’s true. A chorus of advisors empanelled to second-guess a development editor’s decisions helps to ensure that the finished product reflects the current curriculum, absent which the book will not even be saleable.

As we went through the process, however, I noticed an interesting tug and pull between my advisors and me. We agreed on almost everything, except that I kept wanting to give more coverage to Islam in world history, and they kept wanting to pull it back, scale it down, parse it out as sidebars in units devoted mainly to other topics. None of us was speaking out of parochial loyalty to “our own civilization.” No one was saying Islam was better or worse than “the West.” All of us were simply expressing our best sense of which events had been most consequential in the story of humankind.

Mine was so much the minority opinion that it was indistinguishable from error, so we ended up with a table of contents in which Islam constituted the central topic of just one out of thirty chapters. The other two chapters in that unit were “Pre-Columbian Civilizations of the Americas” and “Ancient Empires of Africa.”

Even this, incidentally, represented expanded coverage. The best-selling world history program of the previous textbook cycle, the 1997 edition of Perspectives on the Past, addressed Islam in just one chapter out of thirty-seven, and half of that chapter (part of a unit called “The Middle Ages”) was given over to the Byzantine Empire.

In short, less than a year before September 11, 2001, the consensus of expert opinion was telling me that Islam was a relatively minor phenomenon whose impact had ended long before the Renaissance. If you went strictly by our table of contents, you would never guess Islam still existed.

At the time, I accepted that my judgment might be skewed. After all, I had a personal preoccupation with Islam that was part of sorting out my own identity. Not only had I grown up in a Muslim country, but I was born into a family whose one-time high social status in Afghanistan was based entirely on our reputed piety and religious learning. Our last name indicates our supposed descent from the Ansars, “the Helpers,” those first Muslim converts of Medina who helped the Prophet Mohammed escape assassination in Mecca and thereby ensured the survival of his mission.

More recently, my grandfather’s great-grandfather was a locally revered Muslim mystic whose tomb remains a shrine for hundreds of his devotees to this day, and his legacy percolated down to my father’s time, instilling in our clan a generalized sense of obligation to know this stuff better than the average guy. Growing up, I heard the buzz of Muslim anecdotes, commentary, and speculation in my environment and some of it sank in, even though my own temperament somehow turned resolutely secular.

And it remained secular after I moved to the United States; yet I found myself more interested in Islam here than I ever had been while living in the Muslim world. My interest deepened after 1979, when my brother embraced “fundamentalist” Islam. I began delving into the philosophy of Islam through writers such as Fazlur Rahman and Syed Hussein Nasr as well as its history through academics such as Ernst Grunebaum and Albert Hourani, just trying to fathom where my brother and I were coming from or, in his case, moving toward.

GROWTH OF ISLAM

Given my personal stake, I could concede that I might be overestimating the importance of Islam. And yet . . . a niggling doubt remained. Was my assessment wholly without objective basis? Take a look at these six maps, snapshots of the Islamic world at six different dates:

When I say “Islamic world,” I mean societies with Muslim majorities and/or Muslim rulers. There are, of course, Muslims in England, France, the United States, and nearly every other part of the globe, but it would be misleading, on that basis, to call London or Paris or New York a part of the Islamic world. Even by my limited definition, however, has the “Islamic world” not been a considerable geographical fact throughout its many centuries? Does it not remain one to this day, straddling the Asian-African landmass and forming an enormous buffer between Europe and East Asia? Physically, it spans more space than Europe and the United States combined. In the past, it has been a single political entity, and notions of its singleness and political unity resonate among some Muslims even now. Looking at these six maps, I still have to wonder how, on the eve of 9/11, anyone could have failed to consider Islam a major player at the table of world history!

After 9/11, perceptions changed. Non-Muslims in the West began to ask what Islam was all about, who these people were, and what was going on over there. The same questions began to bombinate with new urgency for me too. That year, visiting Pakistan and Afghanistan for the first time in thirty-eight years, I took along a book that I had found in a used bookstore in London, Islam in Modern History by the late Wilfred Cantwell Smith, a professor of religion at McGill and Harvard. Smith published his book in 1957, so the “modern history” of which he spoke had ended more than forty years earlier, and yet his analyses struck me as remarkably, in fact disturbingly, pertinent to the history unfolding in 2002.

Smith shone new light on the information I possessed from childhood and from later reading. For example, during my school days in Kabul, I was quite aware of a man named Sayyid Jamaluddin-i-Afghan. Like “everyone,” I knew he was a towering figure in modern Islamic history; but frankly I never fathomed how he had earned his acclaim, beyond the fact that he espoused “pan-Islamism,” which seemed like mere pallid Muslim chauvinism to me. Now, reading Smith, I realized that the basic tenets of “Islamism,” the political ideology making such a clatter around us in 2001, had been hammered out a hundred-plus years earlier by this intellectual Karl Marx of “Islamism.” How could his very name be unknown to most non-Muslims?

I plowed back into Islamic history, no longer in a quest for personal identity but in an effort to make sense of the alarming developments among Muslims of my time: the horror stories in Afghanistan; the tumult in Iran; the insurgencies in Algeria, the Philippines, and elsewhere; the hijackings and suicide bombings in the Middle East; the hardening extremism of political Islam; and now the emergence of the Taliban. Surely, a close look at history would reveal how on Earth it had come to this.

And gradually, I came to realize how it had come to this. I came to perceive that, unlike the history of France or Malta or South America, the history of the Islamic lands “over there” was not a subset of some single world history shared by all. It was more like a whole alternative world history unto itself, competing with and mirroring the one I had tried to create for that Texas publisher, or the one published by McDougal Littell, for which I had written “the Islam chapters.”

The two histories had begun in the same place, between the Tigris and Euphrates Rivers of ancient Iraq, and they had come to the same place, this global struggle in which the West and the Islamic world seemed to be the major players. In between, however, they had passed through different, and yet strangely parallel, landscapes.

Yes, strangely parallel: looking back, for example, from within the Western world-historical framework, one sees a single big empire towering above all others back there in ancient times: it is Rome, where the dream of a universal political state was born.

Looking back from anywhere in the Islamic world, one also sees a single definitive empire looming back there, embodying the vision of a universal state, but it isn’t Rome. It is the khalifate of early Islam.

In both histories, the great early empire fragments because it simply grows too big. The decaying empire is then attacked by nomadic barbarians from the north, but in the Islamic world, “the north” refers to the steppes of Central Asia and in that world the nomadic barbarians are not the Germans but the Turks. In both, the invaders dismember the big state into a patchwork of smaller kingdoms permeated throughout by a single, unifying religious orthodoxy: Catholicism in the West, Sunni Islam in the East.

World history is always the story of how “we” got to the here and now, so the shape of the narrative inherently depends on who we mean by “we” and what we mean by “here and now.” Western world history traditionally presumes that here and now is democratic industrial (and postindustrial) civilization. In the United States the further presumption holds that world history leads to the birth of its founding ideals of liberty and equality and to its resultant rise as a superpower leading the planet into the future. This premise establishes a direction for history and places the endpoint somewhere down the road we’re traveling now. It renders us vulnerable to the supposition that all people are moving in this same direction, though some are not quite so far along, either because they started late, or because they’re moving more slowly, for which reason we call their nations “developing countries.”

When the ideal future envisioned by postindustrialized, Western democratic society is taken as the endpoint of history, the shape of the narrative leading to here-and-now features something like the following stages:

  1. Birth of civilization (Egypt and Mesopotamia)
  2. Classical age (Greece and Rome)
  3. The Dark Ages (rise of Christianity)
  4. The Rebirth: Renaissance and Reformation
  5. The Enlightenment (exploration and science)
  6. The Revolutions (democratic, industrial, technological)
  7. Rise of Nation-States: The Struggle for Empire
  8. World Wars I and II
  9. The Cold War
  10. The Triumph of Democratic Capitalism

But what if we look at world history through Islamic eyes? Are we apt to regard ourselves as stunted versions of the West, developing toward the same endpoint, but less effectually? I think not. For one thing, we would see a different threshold dividing all of time into “before” and “after”: the year zero for us would be the year of Prophet Mohammed’s migration from Mecca to Medina, his Hijra, which gave birth to the Muslim community. For us, this community would embody the meaning of “civilized,” and perfecting this ideal would look like the impulse that had given history its shape and direction.

But in recent centuries, we would feel that something had gone awry with the flow. We would know the community had stopped expanding, had grown confused, had found itself permeated by a disruptive crosscurrent, a competing historical direction. As heirs to Muslim tradition, we would be forced to look for the meaning of history in defeat instead of triumph. We would feel conflicted between two impulses: changing our notion of “civilized” to align with the flow of history or fighting the flow of history to realign it with our notion of “civilized.”

If the stunted present experienced by Islamic society is taken as the here-and-now to be explained by the narrative of world history, then the story might break down to something like the following stages:

  1. Ancient Times: Mesopotamia and Persia
  2. Birth of Islam
  3. The Khalifate: Quest for Universal Unity
  4. Fragmentation: Age of the Sultanates
  5. Catastrophe: Crusaders and Mongols
  6. Rebirth: The Three-Empires Era
  7. Permeation of East by West
  8. The Reform Movements
  9. Triumph of the Secular Modernists
  10. The Islamist Reaction

Literary critic Edward Said has argued that over the centuries, the West has constructed an “Orientalist” fantasy of the Islamic world, in which a sinister sense of “otherness” is mingled with envious images of decadent opulence. Well, yes, to the extent that Islam has entered the Western imagination, that has more or less been the depiction.

But more intriguing to me is the relative absence of any depictions at all. In Shakespeare’s day, for example, preeminent world power was centered in three Islamic empires. Where are all the Muslims in his canon? Missing. If you didn’t know Moors were Muslims, you wouldn’t learn it from Othello.

Here are two enormous worlds side by side; what’s remarkable is how little notice they have taken of each other. If the Western and Islamic worlds were two individual human beings, we might see symptoms of repression here. We might ask, “What happened between these two? Were they lovers once? Is there some history of abuse?”

But there is, I think, another less sensational explanation. Throughout much of history, the West and the core of what is now the Islamic world have been like two separate universes, each preoccupied with its own internal affairs, each assuming itself to be the center of human history, each living out a different narrative, until the late seventeenth century when the two narratives began to intersect. At that point, one or the other had to give way because the two narratives were crosscurrents to each other. The West being more powerful, its current prevailed and churned the other one under.

But the superseded history never really ended. It kept on flowing beneath the surface, like a riptide, and it is flowing down there still. When you chart the hot spots of the world, Kashmir, Iraq, Chechnya, the Balkans, Israel and Palestine, you’re staking out the borders of some entity that has vanished from the maps but still thrashes and flails in its effort not to die.

This is the story I tell in the pages that follow, and I emphasize “story.” Destiny Disrupted is neither a textbook nor a scholarly thesis. It’s more like what I’d tell you if we met in a coffeehouse and you said, “What’s all this about a parallel world history?” The argument I make can be found in numerous books now on the shelves of university libraries. Read it there if you don’t mind academic language and footnotes.

Read it here if you want the story arc. Although I am not a scholar, I have drawn on the work of scholars who sift the raw material of history to draw conclusions, and of academics who sift the work of scholarly researchers to draw meta-conclusions.

In a history spanning several thousand years, I devote what may seem like inordinate space to a brief half century long ago, but I linger here because this period spans the career of Prophet Mohammed and his first four successors, the founding narrative of Islam. I recount this story as an intimate human drama, because this is the way that Muslims know it. Academics approach this story more skeptically, crediting non-Muslim sources above supposedly less objective Muslim accounts, because they are mainly concerned to dig up what “really happened.” My aim is mainly to convey what Muslims think happened, because that’s what has motivated Muslims over the ages and what makes their role in world history intelligible.

I will, however, assert one caveat here about the origins of Islam. Unlike the adherents of older religions, such as Judaism, Buddhism, Hinduism, even Christianity, Muslims began to collect, memorize, recite, and preserve their history as soon as it happened, and they didn’t just preserve it but embedded each anecdote in a nest of sources, naming witnesses to each event and listing all persons who transmitted the account down through time to the one who first wrote it down, references that function like the chain of custody validating a piece of evidence in a court case.

This implies only that the core Muslim stories are not best approached as parables. With a parable, we don’t ask for proof that the events occurred; that’s not the point. We don’t care if the story is true; we want the lesson to be true. The Muslim stories don’t encapsulate lessons of that sort: they’re not stories about ideal people in an ideal realm. They come to us, rather, as accounts of real people wrestling with practical issues in the mud and murk of actual history, and we take from them what lessons we will.

Which is not to deny that the Muslim stories are allegorical, nor that some were invented, nor that many or even all were modified by tellers along the way to suit agendas of the person or moment. It is only to say that the Muslims have transmitted their foundational narrative in the same spirit as historical accounts, and we know about these people and events in much the same way that we know what happened between Sulla and Marius in ancient Rome. These tales lie somewhere between history and myth, and telling them stripped of human drama falsifies the meaning they have had for Muslims, rendering less intelligible the things Muslims have done over the centuries. This then is how I plan to tell the story, and if you’re on board with me, buckle in and let’s begin.

Chapter 1

The Middle World

LONG BEFORE ISLAM was born, two worlds took shape between the Atlantic Ocean and the Bay of Bengal. Each coalesced around a different network of trade and travel routes: one of them mainly sea routes; the other, land routes.

If you look at ancient sea traffic, the Mediterranean emerges as the obvious center of world history, for it was here that the Mycenaeans, Cretans, Phoenicians, Lydians, Greeks, Romans, and so many other vigorous early cultures met and mingled. People who lived within striking distance of the Mediterranean could easily hear about and interact with anyone else who lived within striking distance of the Mediterranean, and so this great sea itself became an organizing force drawing diverse people into one another’s narratives and weaving their destinies together to form the germ of a world history, and out of this came “Western civilization.”

If you look at ancient overland traffic, however, the Grand Central Station of the world was the nexus of roads and routes connecting the Indian subcontinent, Central Asia, the Iranian highlands, Mesopotamia, and Egypt, roads that ran within a territory ringed by rivers and seas: the Persian Gulf; the Indus and Oxus rivers; the Aral, Caspian, and Black seas; the Mediterranean, the Nile, and the Red Sea. This eventually became the Islamic world.

THE MEDITERRANEAN (Defined by Sea Routes)

THE MIDDLE WORLD (Defined by Land Routes)

Unfortunately, common usage assigns no single label to this second area. A portion of it is typically called the Middle East, but giving one part of it a name obscures the connectedness of the whole, and besides, the phrase Middle East assumes that one is standing in western Europe; if you’re standing in the Persian highlands, for example, the so-called Middle East is actually the Middle West. Therefore, I prefer to call this whole area from the Indus to Istanbul the Middle World, because it lies between the Mediterranean world and the Chinese world.

The Chinese world was, of course, its own universe and had little to do with the other two; and that’s to be expected on the basis of geography alone. China was cut off from the Mediterranean world by sheer distance and from the Middle World by the Himalayas, the Gobi Desert, and the jungles of southeast Asia, a nearly impenetrable barrier, which is why China and its satellites and rivals barely enter the “world history” centered in the Middle World, and why they come in for rare mention in this book. The same is true of sub-Saharan Africa, cut off from Eurasia by the world’s biggest desert. For that matter, the Americas formed yet another distinct universe with a world history of its own, which, for geographic reasons, is even more to be expected.

Geography, however, did not separate the Mediterranean and Middle worlds as radically as it isolated China or the Americas. These two regions coalesced as different worlds because they were what historian Philip D. Curtin has called “intercommunicating zones”: each had more interaction internally than it had with the other. From anywhere near the Mediterranean coast, it was easier to get to some other place near the Mediterranean coast than to Persepolis or the Indus River. Similarly, caravans on the overland routes crisscrossing the Middle World in ancient times could strike off in any direction at any intersection, and there were many such intersections. As they traveled west, however, into Asia Minor (what we now call Turkey), the very shape of the land gradually funneled them down into the world’s narrowest bottleneck, the bridge (if there happened to be one at the given time) across the Bosporus Strait. This tended to choke overland traffic down to a trickle and turn the caravans back toward the center or south along the Mediterranean coast.

Gossip, stories, jokes, rumors, historical impressions, religious mythologies, products, and other detritus of culture flow along with traders, travelers, and conquerors. Trade and travel routes thus function like capillaries, carrying civilizational blood. Societies permeated by a network of such capillaries are apt to become characters in one another’s narratives, even if they disagree about who the good guys and the bad guys are.

Thus it was that the Mediterranean and Middle worlds developed somewhat distinct narratives of world history. People living around the Mediterranean had good reason to think of themselves as standing at the center of human history, but people living in the Middle World had equally good reason to think they were situated at the heart of it all.

These two world histories overlapped, however, in the strip of territory where you now find Israel, where you now find Lebanon, where you now find Syria and Jordan: where you now, in short, find so much trouble.

This was the eastern edge of the world defined by sea-lanes and the western edge of the world defined by land routes. From the Mediterranean perspective, this area has always been part of the world history that has the Mediterranean as its seed and core. From the other perspective, it has always been part of the Middle World that has Mesopotamia and Persia at its core. Is there not now and has there not often been some intractable argument about this patch of land: whose world is this a part of?

The Dome of the Rock on the Temple Mount in Jerusalem.

THE MIDDLE WORLD BEFORE ISLAM

The first civilizations emerged along the banks of various big slow-moving rivers subject to annual floods. The Huang Ho valley in China, the Indus River valley in India, the Nile Valley in Africa: these are places where, some six thousand years ago or more, nomadic hunters and herders settled down, built villages, and became farmers.

Perhaps the most dynamic petri dish of early human culture was that fertile wedge of land between the Tigris and Euphrates known as Mesopotamia, which means, in fact, “between the rivers.” Incidentally, the narrow strip of land flanked by these two rivers almost exactly bisects the modern-day nation of Iraq. When we speak of “the fertile crescent” as “the cradle of civilization,” we’re talking about Iraq; this is where it all began.

One key geographical feature sets Mesopotamia apart from some of the other early hotbeds of culture. Its two defining rivers flow through flat, habitable plains and can be approached from any direction. Geography provides no natural defenses to the people living here, unlike the Nile, for example, which is flanked by marshes on its eastern side, by the uninhabitable Sahara on the west, and by rugged cliffs at its upper end. Geography gave Egypt continuity but also reduced its interactions with other cultures, giving it a certain stasis.

Not so, Mesopotamia. Here, early on, a pattern took hold that was repeated many times over the course of a thousand-plus years, a complex struggle between nomads and city dwellers, which kept spawning bigger empires. The pattern went like this:

Settled farmers would build irrigation systems supporting prosperous villages and towns. Eventually some tough guy, some well-organized priest, or some alliance of the two would bring a number of these urban centers under the rule of a single power, thereby forging a larger political unit, a confederation, a kingdom, an empire. Then a tribe of hardy nomads would come along, conquer the monarch of the moment, seize all his holdings, and in the process expand their empire. Eventually the hardy nomads would become soft, luxury-loving city dwellers, exactly the sort of people they had conquered, at which point another tribe of hardy nomads would come along, conquer them, and take over their empire.

Conquest, consolidation, expansion, degeneration, conquest: this was the pattern. It was codified in the fourteenth century by the great Muslim historian Ibn Khaldun, based on his observations of the world he lived in. Ibn Khaldun felt that in this pattern he had discovered the underlying pulse of history.

At any given time, this process was happening in more than one place, one empire developing here, another sprouting there, both empires expanding until they bumped up against each other, at which point one would conquer the other, forging a single new and bigger empire.

About fifty-five hundred years ago, a dozen or so cities along the Euphrates coalesced into a single network called Sumer. Here writing was invented, along with the wheel, the cart, the potter’s wheel, and an early number system. Then the Akkadians, rougher fellows from upriver, conquered Sumer. Their leader, Sargon, was the first notable conqueror known to history by name, a ferocious fellow by all accounts and the ultimate self-made man, for he started out poor and unknown but left records of his deeds in the form of clay documents stamped with cuneiform, which basically said, “This one rose up and I smote him; that one rose up and I smote him.”

Sargon led his armies so far south they were able to wash their weapons in the sea. There he said, “Now, any king who wants to call himself my equal, wherever I went, let him go!” meaning, “Let’s just see anyone else conquer as much as I have.” His empire was smaller than New Jersey.

In time, a fresh wave of nomadic ruffians from the highlands came down and conquered Akkad, and they were conquered by others, and they by others: Guttians, Kassites, Hurrians, Amorites; the pattern kept repeating. Look closely and you’ll see new rulers presiding over basically the same territory, but always more of it.

The Amorites marked a crucial moment in this cycle when they built the famous city of Babylon and from this capital ruled the (first) Babylonian Empire. The Babylonians gave way to the Assyrians, who ruled from the even bigger and grander city of Nineveh. Their empire stretched from Iraq to Egypt, and you can imagine how enormous such a realm must have seemed at a time when the fastest way to get from one place to another was by horse. The Assyrians acquired a nasty reputation in history as merciless tyrants. It’s hard to say if they were really worse than others of their time, but they did practice a strategy Stalin made infamous in the twentieth century: they uprooted whole populations and moved them to other places, on the theory that people who had lost their homes and lived among strangers, cut off from familiar resources, would be too confused and unhappy to organize rebellion.

It worked for a while, but not forever. The Assyrians fell at last to one of their subject peoples, the Chaldeans, who rebuilt Babylon and won a lustrous place in history for their intellectual achievements in astronomy, medicine, and mathematics. They used a base-12 system (as opposed to our base-10 system) and were pioneers in the measurement and division of time, which is why the year has twelve months, the hour has sixty minutes (five times twelve), and the minute has sixty seconds. They were terrific urban planners and architects; it was a Chaldean king who built those Hanging Gardens of Babylon, which the ancients ranked among the seven wonders of the world.

But the Chaldeans followed the Assyrian strategy of uprooting whole populations in order to divide and rule. Their king Nebuchadnezzar was the one who first smashed Jerusalem and dragged the Hebrews into captivity. It was also a Chaldean king of Babylonia, Belshazzar, who, while feasting in his palace one night, saw a disembodied hand write on his wall in letters of fire, “Mene mene tekel upharsin.”

His sycophants couldn’t make heads or tails of these words, probably because they were blind drunk, but also because the words were written in some strange tongue (Aramaic, as it happens). They sent for the Hebrew captive Daniel, who said the words meant “Your days are numbered; you’ve been weighed and found wanting; your kingdom will be divided.” At least so goes the Old Testament story in the book of Daniel.

Belshazzar barely had time to ponder the prophecy before it came true. A sudden blistering bloodbath was unleashed upon Babylon by the newest gang of ruffians from the highlands, an alliance of Persians and Medes. These two Indo-European tribes put an end to the second Babylonia and replaced it with the Persian Empire.

At this point, the recurrent pattern of ever-bigger empires in the heart of the Middle World came to an end or at least to a long pause. For one thing, by the time the Persians were done, there wasn’t much left to conquer. Both “cradles of civilization,” Egypt and Mesopotamia, ended up as part of their realm. Their suzerainty stretched west into Asia Minor, south to the Nile, and east through the Iranian highlands and Afghanistan to the Indus River. The perfumed and polished Persians probably saw no point in further conquest: south of the Indus lay steaming jungles, and north of Afghanistan stretched harsh steppes raked by bitter winds and roamed by Turkish nomads eking out a bare existence with their herds and flocks. Who even wanted to rule that? The Persians therefore contented themselves with building a string of forts to keep the barbarians out, so that decent folks might pursue the arts of civilized living on the settled side of the fence.

By the time the Persians took charge, around 550 BCE, a lot of consolidation had already been done: in each region, earlier conquerors had drawn various local tribes and towns into single systems ruled by one monarch from a central capital, whether Elam, Ur, Nineveh, or Babylon. The Persians profited from the work (and bloodshed) of their predecessors.

Yet the Persian Empire stands out for several reasons. First, the Persians were the counter-Assyrians. They developed a completely opposite idea of how to rule a vast realm. Instead of uprooting whole nations, they resettled them. They set the Hebrews free from captivity and helped them get back to Canaan. The Persian emperors pursued a multicultural, many-people-under-one-big-tent strategy. They controlled their enormous realm by letting all the different constituent people live their own lives according to their own folkways and mores, under the rule of their own leaders, provided they paid their taxes and submitted to a few of the emperor’s mandates and demands. The Muslims later picked up on this idea, and it persisted through Ottoman times.

Second, the Persians saw communication as a key to unifying, and thus controlling, their realm. They promulgated a coherent set of tax laws and issued a single currency for their realm, currency being the medium of communication in business. They built a tremendous network of roads and studded it with hostels to make travel easy. They developed an efficient postal system, too, an early version of the Pony Express. That quote you sometimes see associated with the US Postal Service, “Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds,” comes from ancient Persia.

The Persians also employed a lot of translators. You couldn’t get away with saying, “But, officer, I didn’t know it was against the law; I don’t speak Persian.” Translators enabled the emperors to broadcast written descriptions of their splendor and greatness in various languages so that all their subjects could admire them. Darius (“the Great”), who brought the Persian Empire to one of its several peaks, had his life story carved into a rock at a place called Behistun. He had it inscribed in three languages: Old Persian, Elamite, and Babylonian, fifteen thousand characters devoted to Darius’s deeds and conquests, detailing the rebels who had tried and failed to topple him and the punishments he had meted out to them, essentially communicating that you did not want to mess with this emperor: he’d cut off your nose, and worse. Nonetheless, citizens of the empire found Persian rule basically benign. The well-oiled imperial machinery kept the peace, which let ordinary folks get on with the business of raising families, growing crops, and making useful goods.

The part of Darius’s Behistun inscription written in Old Persian was decipherable from modern Persian, so after it was rediscovered in the nineteenth century, scholars were able to use it to unlock the other two languages and thus gain access to the cuneiform libraries of ancient Mesopotamia, libraries so extensive that we know more about daily life in this area three thousand years ago than we know about daily life in western Europe twelve hundred years ago.

Religion permeated the Persian world. It wasn’t the million-gods idea of Hinduism, nor was it anything like the Egyptian pantheon of magical creatures with half-human and half-animal shapes, nor was it like Greek paganism, which saw every little thing in nature as having its own god, a god who looked human and had human frailties. No, in the Persian universe, Zoroastrianism held pride of place. Zoroaster lived about a thousand years before Christ, perhaps earlier or perhaps later; no one really knows. He hailed from northern Iran, or maybe northern Afghanistan, or maybe somewhere east of that; no one really knows that, either. Zoroaster never claimed to be a prophet or channeler of divine energy, much less a divinity or deity. He considered himself a philosopher and seeker. But his followers considered him a holy man.

Zoroaster preached that the universe was divided between darkness and light, between good and evil, between truth and falsehood, between life and death. The universe had split into these opposing camps at the moment of creation; they had been locked in struggle ever since, and the contest would endure to the end of time.

People, said Zoroaster, contain both principles within themselves. They choose freely whether to go this way or that. By choosing good, people promote the forces of light and life. By choosing evil, they give strength to the forces of darkness and death. There is no predestination in the Zoroastrian universe. The outcome of the great contest is always in doubt, and not only is every human being free to make moral choices, but every moral choice affects that cosmic outcome.

Zoroaster saw the drama of the universe vested in two divinities, not one, not thousands, but two. Ahura Mazda embodied the principle of good, Ahriman the principle of evil. Fire served as an iconic representation of Ahura Mazda, which has led some to characterize Zoroastrians as fire worshippers, but what they worship is not fire per se, it’s Ahura Mazda. Zoroaster spoke of an afterlife but suggested that the good go there not as a reward for being good but as a consequence of having chosen that direction. You might say they lift themselves to heaven by the bootstraps of their choices. The Persian Zoroastrians rejected religious statues, imagery, and icons, laying the basis for the hostility toward representation in religious art that reemerged forcefully in Islam.

Sometimes Zoroaster, or at least his followers, called Ahura Mazda “the Wise Lord” and spoke as if he was actually the creator of the entire universe and as if it was he who had divided all of creation into two opposing aspects a short time after the moment of creation. Thus, Zoroaster’s dualism inched toward monotheism, but it never quite arrived there. In the end, for the ancient Persian Zoroastrians, two deities with equal power inhabited the universe, and human beings were the rope in a tug of war between them.

A Zoroastrian priest was called a magus, the plural of which is magi: the three “wise men of the East” who, according to the Christian story, brought myrrh and frankincense to the infant Jesus in his stable were Zoroastrian priests. The word magician also derives from magi. These priests were thought by others (and sometimes themselves claimed) to possess miraculous powers.

In the late days of the empire, the Persians broke into the Mediterranean world and made a brief, big splash in Western world history. Persian emperor Darius sallied west to punish the Greeks. I say “punish,” not “invade” or “conquer,” because from the Persian point of view the so-called Persian Wars were not some seminal clash between two civilizations. The Persians saw the Greeks as the primitive inhabitants of some small cities on the far western edges of the civilized world, cities that implicitly belonged to the Persians, even though they were too far away to rule directly. Emperor Darius wanted the Greeks merely to confirm that they were his subjects by sending him a jar of water and a box of soil in symbolic tribute. The Greeks refused. Darius collected an army to go teach the Greeks a lesson they would never forget, but the very size of his army was as much a liability as an asset: How do you direct so many men at such a distance? How do you keep them supplied? Darius had ignored the first principle of military strategy: never fight a land war in Europe. In the end, it was the Greeks who taught the Persians an unforgettable lesson, a lesson that they quickly forgot, however, for less than one generation later, Darius’s dimwitted son Xerxes decided to avenge his father by repeating and compounding his mistakes. Xerxes, too, came limping home, and that was the end of Persia’s European adventure.

It didn’t end there, however. About 150 years later, Alexander the Great took the battle the other way. We often hear of Alexander the Great conquering the world, but what he really conquered was Persia, which had already conquered “the world.”

With Alexander, the Mediterranean narrative broke forcefully in upon the Middle World one. Alexander dreamed of blending the two into one: of uniting Europe and Asia. He was planning to locate his capital at Babylon. Alexander cut deep and made a mark. He appears in many Persian myths and stories, which give him an outsize heroic quality, though not an altogether positive one (but not entirely villainous, either). A number of cities in the Muslim world are named after him. Alexandria is the obvious example, but a less obvious one is Kandahar, famous now because the Taliban consider it their capital. Kandahar was originally called “Iskandar,” which is how “Alexander” was pronounced in the east, but the “Is” dropped away, and “Kandar” softened into “Kandahar.”

. . .

from

Destiny Disrupted. A History of the World Through Islamic Eyes

by Tamim Ansary

get it at Amazon.com

SOME WEIRD SHIT. THE CARTOON VERSION OF FASCISM. How Democracy Ends – David Runciman.

If Trump is the answer, we are no longer asking the right question.

Here we are, barely two decades into the twenty-first century, and almost from nowhere the question is upon us: is this how democracy ends?

Trump’s arrival in the White House poses a direct challenge: What would democratic failure in a country like the United States actually involve? What are the things that an established democracy could not survive? We now know we ought to start asking these questions. But we don’t know how to answer them.

When democracy ends, we are likely to be surprised by the form it takes. We may not even notice that it is happening because we are looking in the wrong places.

The inauguration of President Trump was not the moment at which democracy came to an end. But it was a good moment to start thinking about what the end of democracy might mean.

Democracy has died hundreds of times, all over the world. We think we know what that looks like: chaos descends and the military arrives to restore order, until the people can be trusted to look after their own affairs again. However, there is a danger that this picture is out of date.

Until very recently, most citizens of Western democracies would have imagined that the end was a long way off, and very few would have thought it might be happening before their eyes as Trump, Brexit and paranoid populism have become a reality.

David Runciman, one of the UK’s leading professors of politics, answers all this and more as he surveys the political landscape of the West, helping us to spot the new signs of a collapsing democracy and advising us on what could come next.

David Runciman is Professor of Politics at Cambridge University and Head of the Department of Politics and International Studies.

Thinking the unthinkable

NOTHING LASTS FOREVER. At some point democracy was always going to pass into the pages of history. No one, not even Francis Fukuyama, who announced the end of history back in 1989, has believed that its virtues make it immortal. But until very recently, most citizens of Western democracies would have imagined that the end was a long way off. They would not have expected it to happen in their lifetimes. Very few would have thought it might be taking place before their eyes. Yet here we are, barely two decades into the twenty-first century, and almost from nowhere the question is upon us: is this how democracy ends?

Like many people, I first found myself confronting this question after the election of Donald Trump to the presidency of the United States. To borrow a phrase from philosophy, it looked like the reductio ad absurdum of democratic politics: any process that produces such a ridiculous conclusion must have gone seriously wrong somewhere along the way. If Trump is the answer, we are no longer asking the right question. But it’s not just Trump. His election is symptomatic of an overheated political climate that appears increasingly unstable, riven with mistrust and mutual intolerance, fuelled by wild accusations and online bullying, a dialogue of the deaf drowning each other out with noise. In many places, not just the United States, democracy is starting to look unhinged.

Let me make it clear at the outset: I don’t believe that Trump’s arrival in the White House spells the end of democracy. America’s democratic institutions are designed to withstand all kinds of bumps along the road and Trump’s strange, erratic presidency is not outside the bounds of what can be survived. It is more likely that his administration will be followed by something relatively routine than by something even more outlandish. However, Trump’s arrival in the White House poses a direct challenge: What would democratic failure in a country like the United States actually involve? What are the things that an established democracy could not survive? We now know we ought to start asking these questions. But we don’t know how to answer them.

Our political imaginations are stuck with outdated images of what democratic failure looks like. We are trapped in the landscape of the twentieth century. We reach back to the 1930s or to the 1970s for pictures of what happens when democracy falls apart: tanks in the streets; tin-pot dictators barking out messages of national unity, with violence and repression in tow. Trump’s presidency has drawn widespread comparison with tyrannies of the past. We have been warned not to be complacent in thinking it couldn’t happen again.

But what of the other danger: that while we are looking out for the familiar signs of failure, our democracies are going wrong in ways with which we are unfamiliar? This strikes me as the greater threat. I do not think there is much chance that we are going back to the 1930s. We are not at a second pre-dawn of fascism, violence and world war. Our societies are too different: too affluent, too elderly, too networked; and our collective historical knowledge of what went wrong then is too entrenched. When democracy ends, we are likely to be surprised by the form it takes. We may not even notice that it is happening because we are looking in the wrong places.

Contemporary political science has little to say about new ways that democracy might fail because it is preoccupied with a different question: how democracy gets going in the first place. This is understandable. During the period that democracy has spread around the world the process has often been two steps forward, one step back. Democracy might get tentatively established in parts of Africa or Latin America or Asia and then a coup or military takeover would snuff it out, before someone tried again. This has happened in places from Chile to South Korea to Kenya. One of the central puzzles of political science is what causes democracy to stick. It is fundamentally a question of trust: people with something to lose from the results of an election have to believe it is worth persevering until the next time. The rich need to trust that the poor won’t take their money. The soldiers need to trust that the civilians won’t take their guns. Often, that trust breaks down. Then democracy falls apart.

As a result, political scientists tend to think of democratic failure in terms of what they call ‘backsliding’. A democracy reverts to the point before lasting confidence in its institutions could be established. This is why we look for earlier examples of democratic failure to illuminate what might go wrong in the present. We assume that the end of democracy takes us back to the beginning. The process of creation goes into reverse.

In this book I want to offer a different perspective. What would political failure look like in societies where confidence in democracy is so firmly established that it is hard to shake? The question for the twenty-first century is how long we can persist with institutional arrangements we have grown so used to trusting that we no longer notice when they have ceased to work. These arrangements include regular elections, which remain the bedrock of democratic politics. But they also encompass democratic legislatures, independent law courts and a free press. All can continue to function as they ought while failing to deliver what they should. A hollowed-out version of democracy risks lulling us into a false sense of security. We might continue to trust in it and to look to it for rescue, even as we seethe with irritation at its inability to answer the call. Democracy could fail while remaining intact.

This analysis might seem at odds with the frequent talk about the loss of trust in democratic politics and politicians across Western societies. It is true that many voters dislike and distrust their elected representatives now more than ever. But it is not the kind of loss of trust that leads people to take up arms against democracy. Instead, it is the kind that leads them to throw up their arms in despair. Democracy can survive that sort of behaviour for a long time. Where it ends up is an open question and one I will try to answer. But it does not end up in the 1930s.

We should try to avoid the Benjamin Button view of history, which imagines that old things become young again, even as they acquire more experience. History does not go into reverse. It is true that contemporary Western democracy is behaving in ways that seem to echo some of the darkest moments in our past: anyone who watched protestors with swastikas demonstrating on the streets of Charlottesville, Virginia, and then heard the president of the United States managing to find fault on both sides, could be forgiven for fearing the worst. However, grim though these events are, they are not the precursors of a return to something we thought we’d left behind. We really have left the twentieth century behind. We need another frame of reference.

So let me offer a different analogy. It is not perfect, but I hope it helps make sense of the argument of this book. Western democracy is going through a mid-life crisis. That is not to trivialise what’s happening: mid-life crises can be disastrous and even fatal. And this is a full-blown crisis. But it needs to be understood in relation to the exhaustion of democracy as well as to its volatility, and to the complacency that is currently on display as well as to the anger. The symptoms of a mid-life crisis include behaviour we might associate with someone much younger. But it would be a mistake to assume that the way to understand what’s going on is to study how young people behave.

When a miserable middle-aged man buys a motorbike on impulse, it can be dangerous. If he is really unlucky it all ends in a fireball. But it is nothing like as dangerous as when a seventeen-year-old buys a motorbike. More often, it is simply embarrassing. The mid-life motorbike gets ridden a few times and ends up parked in the street. Maybe it gets sold. The crisis will need to be resolved in some other way, if it can be resolved at all.

American democracy is in miserable middle age. Donald Trump is its motorbike. It could still end in a fireball. More likely, the crisis will continue and it will need to be resolved in some other way, if it can be resolved at all.

I am conscious that talking about the crisis of democracy in these terms might sound self-indulgent, especially coming from a privileged, middle-aged white man. Acting out like this is a luxury many people around the world cannot afford. These are first world problems. The crisis is real but it is also a bit of a joke. That’s what makes it so hard to know how it might end.

To suffer a crisis that comes neither at the beginning nor at the end but somewhere in the middle of a life is to be pulled forwards and backwards at the same time. What pulls us forwards is our wish for something better. What pulls us back is our reluctance to let go of something that has got us this far. The reluctance is understandable: democracy has served us well. The appeal of modern democracy lies in its ability to deliver long-term benefits for societies while providing their individual citizens with a voice. This is a formidable combination. It is easy to see why we don’t want to give up on it, at least not yet. However, the choice might not simply be between the whole democratic package and some alternative, anti-democratic package. It may be that the elements that make democracy so attractive continue to operate but that they no longer work together. The package starts to come apart. When an individual starts to unravel, we sometimes say that he or she is in pieces. At present democracy looks like it is in pieces. That does not mean it is unmendable. Not yet.

So what are the factors that make the current crisis in democracy unlike those it has faced in the past, when it was younger? I believe there are three fundamental differences.

First, political violence is not what it was for earlier generations, either in scale or in character. Western democracies are fundamentally peaceful societies, which means that our most destructive impulses manifest themselves in other ways. There is still violence, of course. But it stalks the fringes of our politics and the recesses of our imaginations, without ever arriving centre stage. It is the ghost in this story.

Second, the threat of catastrophe has changed. Where the prospect of disaster once had a galvanising effect, now it tends to be stultifying. We freeze in the face of our fears.

Third, the information technology revolution has completely altered the terms on which democracy must operate. We have become dependent on forms of communication and information-sharing that we neither control nor fully understand. All of these features of our democracy are consistent with its getting older.

I have organised this book around these three themes: coup; catastrophe; technological takeover. I start with coups, the standard markers of democratic failure, to ask whether an armed takeover of democratic institutions is still a realistic possibility. If not, how could democracy be subverted without the use of force being required? Would we even know it was happening? The spread of conspiracy theories is a symptom of our growing uncertainty about where the threat really lies. Coups require conspiracies because they need to be plotted by small groups in secret, or else they don’t work. Without them, we are just left with the conspiracy theories, which settle nothing.

Next I explore the risk of catastrophe. Democracy will fail if everything else falls apart: nuclear war, calamitous climate change, bioterrorism, the rise of the killer robots could all finish off democratic politics, though that would be the least of our worries. If something goes truly, terribly wrong, the people who are left will be too busy scrabbling for survival to care much about voting for change. But how big is the risk that, if confronted with these threats, the life drains out of democracy anyway, as we find ourselves paralysed by indecision?

Then I discuss the possibility of technological takeover. Intelligent robots are still some way off. But low-level, semi-intelligent machines that mine data for us and stealthily take the decisions we are too busy to make are gradually infiltrating much of our lives. We now have technology that promises greater efficiency than anything we’ve ever seen before, controlled by corporations that are less accountable than any in modern political history. Will we abdicate democratic responsibility to these new forces without even saying goodbye?

Finally, I ask whether it makes sense to look to replace democracy with something better. A midlife crisis can be a sign that we really do need to change. If we are stuck in a rut, why don’t we make a clean break from what’s making us so miserable? Churchill famously called democracy the worst system of government apart from all the others that have been tried from time to time. He said it back in 1947. That was a long time ago. Has there really been nothing better to try since then? I review some of the alternatives, from twenty-first century authoritarianism to twenty-first century anarchism.

To conclude, I consider how the story of democracy might actually wind up. In my view, it will not have a single endpoint. Given their very different life experiences, democracies will continue to follow different paths in different parts of the world. Just because American democracy can survive Trump doesn’t mean that Turkish democracy can survive Erdogan. Democracy could thrive in Africa even as it starts to fail in parts of Europe. What happens to democracy in the West is not necessarily going to determine the fate of democracy everywhere. But Western democracy is still the flagship model for democratic progress. Its failure would have enormous implications for the future of politics.

Whatever happens, unless the end of the world comes first, this will be a drawn-out demise. The current American experience of democracy is at the heart of the story that I tell, but it needs to be understood against the wider experience of democracy in other times and other places. In arguing that we ought to get away from our current fixation with the 1930s, I am not suggesting that history is unimportant. Quite the opposite: our obsession with a few traumatic moments in our past can blind us to the many lessons to be drawn from other points in time. For there is as much to learn from the 1890s as from the 1930s. I go further back: to the 1650s and to the democracy of the ancient world. We need history to help us break free from our unhealthy fixations with our own immediate back story. It is therapy for the middle-aged.

The future will be different from the past. The past is longer than we think. America is not the whole world. Nevertheless, the immediate American past is where I begin, with the inauguration of President Trump. That was not the moment at which democracy came to an end. But it was a good moment to start thinking about what the end of democracy might mean.

INTRODUCTION

20 January 2017

I WATCHED THE INAUGURATION of Donald Trump as president of the United States on a large screen in a lecture hall in Cambridge, England. The room was full of international students, wrapped up against the cold (public rooms in Cambridge are not always well heated), and there were as many people in coats and scarves inside the hall as there were on the podium in Washington, DC. But the atmosphere among the students was not chilly. Many were laughing and joking. The mood felt quite festive, like at any public funeral.

When Trump began to speak, the laughing soon stopped. Up on the big screen, against a backdrop of pillars and draped American flags, he looked forbidding and strange. We were scared. Trump’s barking delivery and his crudely effective hand gestures (slicing the thin air with his stubby fingers, raising a clenched fist at the climax of his address) had many of us thinking the same thing: this is what the cartoon version of fascism looks like. The resemblance to a scene in a Batman movie (the Joker addressing the cowed citizens of Gotham) was so strong it seemed like a cliché. That doesn’t make it the wrong analogy. Clichés are where the truth goes to die.

The speech Trump gave was shocking. He used apocalyptic turns of phrase that echoed the wild, angry fringes of democratic politics where democracy can start to turn into its opposite. He bemoaned ‘the rusted-out factories scattered like tombstones across the landscape of our nation … the crime and gangs and drugs’. In calling for a rebirth of national pride, he reminded his audience that ‘we all bleed the same red blood of patriots’. It sounded like a thinly veiled threat. Above all, he cast doubt on the basic idea of representative government, which is that the citizens entrust elected politicians to take decisions on their behalf. Trump lambasted professional politicians for having betrayed the American people and forfeited their trust:

“For too long, a small group in our nation’s capital has reaped the rewards of government while the people have borne the cost.

Washington flourished but the people did not share its wealth.

Politicians prospered but the jobs left, and the factories closed.”

He insisted that his election marked the moment when power passed not just from president to president or from party to party, but from Washington, DC back to the people. Was he going to mobilise popular anger against any professionals who now stood in his way? Who would be able to stop him? When he had finished speaking, he was greeted in our lecture hall back in Cambridge by a stunned silence. We weren’t the only ones taken aback. Trump’s predecessor but one in the presidency, George W. Bush, was heard to mutter as he left the stage: ‘That was some weird shit.’

Then, because we live in an age when everything that’s been consumed can be instantly reconsumed, we decided to watch it again. Second time around was different. I found the speech less shocking, once I knew what was coming. I felt that I had overreacted. Just because Trump said all these things didn’t make them true. His fearsome talk was at odds with the basic civility of the scene. Wouldn’t a country that was as fractured as he said have found it hard to sit politely through his inauguration? It was also at odds with what I knew about America. It is not a broken society, certainly not by any historical standards.

Notwithstanding some recent blips, violence is in overall decline. Prosperity is rising, though it remains very unequally distributed. If people had really believed what Trump said, would they have voted for him? That would have been a very brave act, given the risks of total civil breakdown. Maybe they voted for him because they didn’t really believe him?

It took me about fifteen minutes to acclimatise to the idea that this rhetoric was the new normal. Trump’s speechwriters, Steve Bannon and Stephen Miller, had put no words in his mouth that were explicitly anti-democratic. It was a populist speech, but populism does not oppose democracy. Rather, it tries to reclaim it from the elites who have betrayed it. Nothing Trump said disputed the fundamental premise of representative democracy, which is that at the allotted time the people get to say when they have had enough of the politicians who have been making decisions for them. Trump was echoing what those who voted for him clearly believed: enough was enough.

Watching the speech over again, I found myself focusing less on Trump and more on the people arrayed alongside him. Melania Trump looked alarmed to be on the stage with her husband. President Obama merely looked uncomfortable. Hillary Clinton, off to the side, looked dazed. The joint chiefs were stony-faced and stoical. The truth is that there is little Trump could have said after taking the oath of office that would have posed a direct threat to American democracy. These were just words. What matters in politics is when words become deeds. The only people with the power to end American democracy on 20 January 2017 were the ones sitting beside him. And they were doing nothing.

How might it have been different? The minimal definition of democracy says simply that the losers of an election accept that they have lost. They hand over power without resort to violence. In other words, they grin and bear it. If that happens once, you have the makings of a democracy. If it happens twice, you have a democracy that’s built to last. In America, the losers in a presidential election have accepted the result fifty-seven times, though occasionally it has been touch and go (notably in the much-disputed 1876 election and in 2000, when the loser of the popular vote, as with Trump, went on to win the presidency). On twenty-one occasions the US has seen a peaceful transfer of power from one party to another. Only once, in 1861, has American democracy failed this test, when a group of Southern states could not endure the idea of Abraham Lincoln as their legitimate president, and fought against it for four years.

To put it another way: democracy is civil war without the fighting. Failure comes when proxy battles turn into real ones. The biggest single danger to American democracy following Trump’s victory would have come if either President Obama or Hillary Clinton had refused to accept the result. Clinton won the popular vote by a large margin (2.9 million votes, more than any defeated candidate in US history), and she ended up the loser thanks to the archaic rules of the Electoral College. On the night of the election, Clinton was having difficulty accepting that she had been beaten, as defeated candidates often do. Obama called her to insist that she acknowledge the outcome as soon as possible. The future of American democracy depended on it.

In that respect, a more significant speech than Trump’s inaugural was the one Obama gave on the lawn of the White House on 9 November, the day after the election. He had arrived to find many of his staffers in tears, aghast at the thought that eight years of hard work were about to be undone by a man who seemed completely unqualified for the office to which he had been elected. It was only hours after the result had been declared and angry Democrats were already questioning Trump’s legitimacy. Obama took the opposite tack:

“You know, the path this country has taken has never been a straight line. We zig and zag and sometimes we move in ways that some people think is forwards and others think is moving back, and that’s OK. The point is that we all go forward with a presumption of good faith in our fellow citizens, because that presumption of good faith is essential to a vibrant and functioning democracy. And that’s why I’m confident that this incredible journey that we’re on as Americans will go on. And I’m looking forward to doing everything I can to make sure the next president is successful in that.”

It is easy to see why Obama felt he had no choice except to say what he did. Anything else would have thrown the workings of democracy into doubt. But it is worth asking: What are the circumstances in which a sitting president might feel compelled to say something different? When does faith in the zig and zag of democratic politics stop being a precondition of progress and start to become a hostage to fortune?

Had Clinton won the 2016 election, especially if she had somehow contrived to win the Electoral College while losing the popular vote, it is unlikely Trump would have been so magnanimous. He made it clear throughout the campaign that his willingness to accept the result depended on whether or not he was the winner. A defeated Trump could well have challenged the core premise of democratic politics: that, as Obama put it, ‘if we lose, we learn from our mistakes, we do some reflection, we lick our wounds, we brush ourselves off, we get back in the arena’. Licking his wounds is not Trump’s style. If the worst-case scenario for a democracy is an election in which the two sides disagree about whether the result holds, then American democracy dodged a bullet in 2016.

It is easy to imagine that Trump might have chosen to boycott the inauguration of Hillary Clinton, had he lost. That scenario would have been ugly, and petty, and it could have turned violent, but it need not have been fatal to constitutional government. The republic could have muddled through. On the other hand, had Obama refused to permit Trump’s inauguration, on the grounds that he was still occupying the White House, or that he was planning to install Clinton there, then democracy in America would have been done for, at least for now.

There is another shorthand for the minimal definition of a functioning democracy: the people with guns don’t use them. Trump’s supporters have plenty of guns and, had he lost, some of these people might have been tempted to use them. Nevertheless, there is a big difference between an opposition candidate refusing to accept defeat and an incumbent refusing to leave office. No matter how much firepower the supporters of the aggrieved loser might have at their disposal, the state always has more. If it doesn’t, it is no longer a functioning state. The ‘people with guns’ in the minimal definition of democracy refers to the politicians who control the armed forces. Democracy fails when elected officials who have the authority to tell the generals what to do refuse to give it up. Or when the generals refuse to listen.

This means that the other players who had the capacity to deal democracy a fatal blow on 20 January were also sitting beside Trump: America’s military chiefs. If they had declined to accept the orders of their new commander-in-chief (for instance, if they had decided he could not be trusted with the nuclear codes), then no amount of ceremony would have hidden the fact that the inauguration was an empty charade. One reason for the air of mild hilarity in our lecture hall in Cambridge was that the rumour quickly passed around that Trump had been in possession of the nuclear football since breakfast time. The joke was that we were lucky still to be here. But none of us would have been smiling if the joint chiefs had decided that the new president was best kept in the dark. Even more alarming than an erratic new president in possession of the power to unleash destruction is the prospect of the generals deciding to keep that power for themselves.

Yet it is worth asking the same questions of the generals as of the sitting president: When is it appropriate to refuse to obey the orders of a duly elected commander-in-chief? Trump came into office surrounded by rumours that he was under the influence of a foreign power. He was certainly inexperienced, likely irresponsible and possibly compromised. American democracy has survived worse: if inexperience and irresponsibility in international affairs were a barrier to the highest office, then the history of the presidency would be very different. It is the knowledge that American democracy has survived worse that makes it so hard to know how to respond now. In Cambridge, we laughed for a bit, and then we sat in glum silence. In Washington, they did the same.

. . .

from

How Democracy Ends

by David Runciman

get it at Amazon.com

BASIC INCOME AND DEPRESSION. Restoring the Future – Johann Hari.

Giving people back time, and a sense of confidence in the future.

The point of a welfare state is to establish a safety net below which nobody should ever be allowed to fall. The poorer you are, the more likely you are to become depressed or anxious, and the more likely you are to become sick in almost every way.

There is a direct relationship between poverty and the number of mood-altering drugs that people take, the antidepressants they take just to get through the day. If we want to really treat these problems, we need to deal with poverty.

Instead of using a net to catch people when they fall, Basic Income raises the floor on which everyone stands.

The world has changed fundamentally. We won’t regain security by going backward, especially as robots and technology render more and more jobs obsolete, but we can go forward, to a basic income for everyone.

There was one more obstacle hanging over my attempts to overcome depression and anxiety, and it seemed larger than anything I had addressed up to now. If you’re going to try to reconnect in the ways I’ve been describing, if you’re going to (say) develop a community, democratize your workplace, or set up groups to explore your intrinsic values, you will need time, and you will need confidence.

But we are being constantly drained of both. Most people are working all the time, and they are insecure about the future. They are exhausted, and they feel as if the pressure is being ratcheted up every year. It’s hard to join a big struggle when it feels like a struggle to make it to the end of the day. Asking people to take on more, when they’re already run down, seems almost like a taunt.

But as I researched this book, I learned about an experiment that is designed to give people back time, and a sense of confidence in the future.

In the middle of the 1970s, a group of Canadian government officials chose, apparently at random, a small town called Dauphin in the rural province of Manitoba. It was, they knew, nothing special to look at. The nearest city, Winnipeg, was a four-hour drive away. It lay in the middle of the prairies, and most of the people living there were farmers growing a crop called canola. Its seventeen thousand people worked as hard as they could, but they were still struggling. When the canola crop was good, everyone did well, the local car dealership sold cars, and the bar sold booze. When the canola crop was bad, everyone suffered.

And then one day the people of Dauphin were told they had been chosen to be part of an experiment, due to a bold decision by the country’s Liberal government. For a long time, Canadians had been wondering if the welfare state they had been developing, in fits and starts over the years, was too clunky and inefficient and didn’t cover enough people. The point of a welfare state is to establish a safety net below which nobody should ever be allowed to fall: a baseline of security that would prevent people from becoming poor and prevent anxiety. But it turned out there was still a lot of poverty, and a lot of insecurity, in Canada. Something wasn’t working.

So somebody had what seemed like an almost stupidly simple idea. Up to now, the welfare state had worked by trying to plug gaps, by catching the people who fell below a certain level and nudging them back up. But if insecurity is about not having enough money to live on, they wondered, what would happen if we just gave everyone enough, with no strings attached? What if we simply mailed every single Canadian citizen, young, old, all of them, a check every year that was enough for them to live on? It would be set at a carefully chosen rate. You’d get enough to survive, but not enough to have luxuries. They called it a universal basic income. Instead of using a net to catch people when they fall, they proposed to raise the floor on which everyone stands.

This idea had even been mooted by right-wing politicians like Richard Nixon, but it had never been tried before. So the Canadians decided to do it, in one place. That’s how, for several years, the people of Dauphin were given a guarantee: Each of you will be unconditionally given the equivalent of US$19,000 (in today’s money) by the government. You don’t have to worry. There’s nothing you can do that will take away this basic income. It’s yours by right. And then they stood back to see what would happen.

At that time, over in Toronto, there was a young economics student named Evelyn Forget, and one day, one of her professors told the class about this experiment. She was fascinated. But then, three years into the experiment, power in Canada was transferred to a Conservative government, and the program was abruptly shut down. The guaranteed income vanished. To everyone except the people who got the checks, and one other person, it was quickly forgotten.

Thirty years later, that young economics student, Evelyn, had become a professor at the medical school of the University of Manitoba, and she kept bumping up against some disturbing evidence. It is a well-established fact that the poorer you are, the more likely you are to become depressed or anxious, and the more likely you are to become sick in almost every way. In the United States, if you have an income below $20,000, you are more than twice as likely to become depressed as somebody who makes $70,000 or more. And if you receive a regular income from property you own, you are ten times less likely to develop an anxiety disorder than if you don’t get any income from property. “One of the things I find just astonishing,” she told me, “is the direct relationship between poverty and the number of mood-altering drugs that people take, the antidepressants they take just to get through the day.” If you want to really treat these problems, Evelyn believed, you need to deal with these questions.

And so Evelyn found herself wondering about that old experiment that had taken place decades earlier. What were the results? Did the people who were given that guaranteed income get healthier? What else might have changed in their lives? She began to search for academic studies written back then. She found nothing. So she began to write letters and make calls. She knew that the experiment was being studied carefully at the time, that mountains of data were gathered. That was the whole point: it was a study. Where did it go?

After a lot of detective work, stretching over five years, she finally got an answer. She was told that the data gathered during the experiment was hidden away in the National Archives, on the verge of being thrown in the trash. “I got there, and found most of it in paper. It was actually sitting in boxes,” she told me. “There were eighteen hundred cubic feet. That’s eighteen hundred bankers’ boxes, full of paper.” Nobody had ever added up the results. When the Conservatives came to power, they didn’t want anyone to look further; they believed the experiment was a waste of time and contrary to their moral values.

So Evelyn and a team of researchers began the long task of figuring out what the basic income experiment had actually achieved, all those years before. At the same time, they started to track down the people who had lived through it, to discover the long-term effects.

The first thing that struck Evelyn, as she spoke to the people who’d been through the program, was how vividly they remembered it. Everyone had a story about how it had affected their lives. They told her that, primarily, “the money acted as an insurance policy. It just sort of removed the stress of worrying about whether or not you can afford to keep your kids in school for another year, whether or not you could afford to pay for the things that you had to pay for.”

This had been a conservative farming community, and one of the biggest changes was how women saw themselves. Evelyn met with one woman who had taken her check and used it to become the first female in her family to get a postsecondary education. She trained to be a librarian and rose to be one of the most respected people in the community. She showed Evelyn pictures of her two daughters graduating, and she talked about how proud she was she had been able to become a role model for them.

Other people talked about how it lifted their heads above constant insecurity for the first time in their lives. One woman had a disabled husband and six kids, and she made a living by cutting people’s hair in her front room. She explained that the universal income meant for the first time the family had “some cream in the coffee”, small things that made life a little better.

These were moving stories, but the hard facts lay in the number crunching. After years of compiling the data, here are some of the key effects Evelyn discovered:

  • Students stayed at school longer, and performed better there.
  • The number of low-birth-weight babies declined, as more women delayed having children until they were ready.
  • Parents with newborn babies stayed at home longer to care for them, and didn’t hurry back to work.
  • Overall work hours fell modestly, as people spent more time with their kids, or learning.

But there was one result that struck me as particularly important.

Evelyn went through the medical records of the people taking part, and she found that, as she explained to me, there were “fewer people showing up at their doctor’s office complaining about mood disorders.” Depression and anxiety in the community fell significantly. When it came to severe depression and other mental health disorders that were so bad the patient had to be hospitalized, there was a drop of 9 percent in just three years.

Why was that? “It just removed the stress, or reduced the stress, that people dealt with in their everyday lives,” Evelyn concludes. You knew you’d have a secure income next month, and next year, so you could create a picture of yourself in the future that was stable.

It had another unanticipated effect, she told me. If you know you have enough money to live on securely, no matter what happens, you can turn down a job that treats you badly, or that you find humiliating. “It makes you less of a hostage to the job you have, and some of the jobs that people work just in order to survive are terrible, demeaning jobs,” she says. It gave you “that little bit of power to say, I don’t need to stay here.” That meant that employers had to make work more appealing. And over time, it was poised to reduce inequality in the town, which we would expect to reduce the depression caused by extreme status differences.

For Evelyn, all this tells us something fundamental about the nature of depression. “If it were just a brain disorder,” she told me, “if it was just a physical ailment, you wouldn’t expect to see such a strong correlation with poverty,” and you wouldn’t see such a significant reduction from granting a guaranteed basic income. “Certainly,” she said, “it makes the lives of individuals who receive it more comfortable, which works as an antidepressant.”

As Evelyn looks out over the world today, and how it has changed from the Dauphin of the mid-1970s, she thinks the need for a program like this, across all societies, has only grown. Back then, “people still expected to graduate from high school and to go get a job and work at the same company [or] at least in the same industry until they’d be sixty-five, and then they’d be retired with a nice gold watch and a nice pension.” But “people are struggling to find that kind of stability in labor today, I don’t think those days are ever coming back. We live in a globalized world. The world has changed, fundamentally.” We won’t regain security by going backward, especially as robots and technology render more and more jobs obsolete, but we can go forward, to a basic income for everyone. As Barack Obama suggested in an interview late in his presidency, a universal income may be the best tool we have for recreating security, not with bogus promises to rebuild a lost world, but by doing something distinctively new.

Buried in those dusty boxes of data in the Canadian national archives, Evelyn might have found one of the most important antidepressants for the twenty-first century.

I wanted to understand the implications of this more, and to explore my own concerns and questions about it, so I went to see a brilliant Dutch economic historian named Rutger Bregman. He is the leading European champion of the idea of a universal basic income. We ate burgers and inhaled caffeinated drinks and ended up talking late into the night, discussing the implications of all this. “Time and again,” he said, “we blame a collective problem on the individual. So you’re depressed? You should get a pill. You don’t have a job? Go to a job coach, we’ll teach you how to write a résumé or [to join] LinkedIn. But obviously, that doesn’t go to the root of the problem. Not many people are thinking about what’s actually happened to our labor market, and our society, that these [forms of despair] are popping up everywhere.”

Even middle-class people are living with a chronic “lack of certainty” about what their lives will be like in even a few months’ time, he says. The alternative approach, a guaranteed income, is partly about removing this humiliation and replacing it with security. It has now been tried in many places on a small scale, as he documents in his book Utopia for Realists. There’s always a pattern, he shows. When it’s first proposed, people say, what, just give out money? That will destroy the work ethic. People will just spend it on alcohol and drugs and watching TV. And then the results come in.

For example, in the Great Smoky Mountains, there’s a Native American tribal group of eight thousand people who decided to open a casino. But they did it a little differently. They decided they were going to split the profits equally among everyone in the group: they’d all get a check for (as it turned out) $6,000 a year, rising to $9,000 later. It was, in effect, a universal basic income for everyone. Outsiders told them they were crazy. But when the program was studied in detail by social scientists, it turned out that this guaranteed income triggered one big change. Parents chose to spend a lot more time with their children, and because they were less stressed, they were more able to be present with their kids. The result? Behavioral problems like ADHD and childhood depression fell by 40 percent. I couldn’t find any other instance of a reduction in psychiatric problems in children by that amount in a comparable period of time. They did it by freeing up the space for parents to connect with their kids.

All over the world, from Brazil to India, these experiments keep finding the same result. Rutger told me: “When I ask people, ‘What would you personally do with a basic income?’ about 99 percent of people say, ‘I have dreams, I have ambitions, I’m going to do something ambitious and useful.’” But when he asks them what they think other people would do with a basic income, they say, oh, they’ll become lifeless zombies, they’ll binge-watch Netflix all day.

This program does trigger a big change, he says, but not the one most people imagine. The biggest change, Rutger believes, will be in how people think about work. When Rutger asks people what they actually do at work, and whether they think it is worthwhile, he is amazed by how many people readily volunteer that the work they do is pointless and adds nothing to the world. The key to a guaranteed income, Rutger says, is that it empowers people to say no. For the first time, they will be able to leave jobs that are degrading, or humiliating, or excruciating. Obviously, some boring things will still have to be done. That means those employers will have to offer either better wages, or better working conditions. In one swoop, the worst jobs, the ones that cause the most depression and anxiety, will have to radically improve, to attract workers.

People will be free to create businesses based on things they believe in, to run projects to improve their community, to look after their kids and their elderly relatives. Those are all real work, but much of the time, the market doesn’t reward this kind of work. When people are free to say no, Rutger says, “I think the definition of work would become: to add something of value, to make the world a little more interesting, or a bit more beautiful.”

This is, we have to be candid, an expensive proposal: a real guaranteed income would take a big slice of the national wealth of any developed country. At the moment, it’s a distant goal. But every civilizing proposal started off as a utopian dream, from the welfare state, to women’s rights, to gay equality. President Obama said it could happen in the next twenty years. If we start to argue and campaign for it now, as an antidepressant, as a way of dealing with the pervasive stress that is dragging so many of us down, it will, over time, also help us to see one of the factors that are causing all this despair in the first place. It’s a way, Rutger explained to me, of restoring a secure future to people who are losing the ability to see one for themselves; a way of restoring to all of us the breathing space to change our lives, and our culture.

I was conscious, as I thought back over these seven provisional hints at solutions to our depression and anxiety, that they require huge changes, in ourselves, and in our societies. When I felt that way, a niggling voice would come into my head. It said, nothing will ever change. The forms of social change you’re arguing for are just a fantasy. We’re stuck here. Have you watched the news? You think positive changes are a-coming?

When these thoughts came to me, I always thought of one of my closest friends.

In 1993, the journalist Andrew Sullivan was diagnosed as HIV-positive. It was the height of the AIDS crisis. Gay men were dying all over the world. There was no treatment in sight. Andrew’s first thought was: I deserve this. I brought it on myself. He had been raised in a Catholic family in a homophobic culture in which, as a child, he thought he was the only gay person in the whole world, because he never saw anyone like him on TV, or on the streets, or in books. He lived in a world where if you were lucky, being gay was a punchline, and if you were unlucky, it got you a punch in the face.

So now he thought, ‘I had it coming. This fatal disease is the punishment I deserve.’

For Andrew, being told he was going to die of AIDS made him think of an image. He had once gone to see a movie, and something went wrong with the projector; the picture displayed at a weird, unwatchable angle. It stayed like that for a few minutes. His life now, he realized, was like sitting in that cinema, except this picture would never be right again.

Not long after, he left his job as editor of one of the leading magazines in the United States, the New Republic. His closest friend, Patrick, was dying of AIDS, the fate Andrew was now sure awaited him.

So Andrew went to Provincetown, the gay enclave at the tip of Cape Cod in Massachusetts, to die. That summer, in a small house near the beach, he began to write a book. He knew it would be the last thing he ever did, so he decided to write something advocating a crazy, preposterous idea, one so outlandish that nobody had ever written a book about it before. He was going to propose that gay people should be allowed to get married, just like straight people. He thought this would be the only way to free gay people from the self-hatred and shame that had trapped Andrew himself. It’s too late for me, he thought, but maybe it will help the people who come after me.

When the book, Virtually Normal, came out a year later, Patrick died; it had been in the bookstores for only a few days. Andrew was widely ridiculed for suggesting something as absurd as gay marriage. He was attacked not just by right-wingers, but by many gay left-wingers, who said he was a sellout, a wannabe heterosexual, a freak, for believing in marriage. A group called the Lesbian Avengers turned up to protest at his events with his face in the crosshairs of a gun. Andrew looked out at the crowd and despaired. This mad idea, his last gesture before dying, was clearly going to come to nothing.

When I hear people saying that the changes we need to make in order to deal with depression and anxiety can’t happen, I imagine going back in time, to the summer of 1993, to that beach house in Provincetown, and telling Andrew something:

Okay, Andrew, you’re not going to believe me, but this is what’s going to happen next. Twenty-five years from now, you’ll be alive. I know; it’s amazing; but wait, that’s not the best part. This book you’ve written, it’s going to spark a movement. And this book, it’s going to be quoted in a key Supreme Court ruling declaring marriage equality for gay people. And I’m going to be with you and your future husband the day after you receive a letter from the president of the United States telling you that this fight for gay marriage that you started has succeeded in part because of you. He’s going to light up the White House like the rainbow flag that day. He’s going to invite you to have dinner there, to thank you for what you’ve done. Oh, and by the way, that president? He’s going to be black.

It would have seemed like science fiction. But it happened. It’s not a small thing to overturn two thousand years of gay people being jailed and scorned and beaten and burned. It happened for one reason only. Because enough brave people banded together and demanded it.

Every single person reading this is the beneficiary of big civilizing social changes that seemed impossible when somebody first proposed them. Are you a woman? My grandmothers weren’t even allowed to have their own bank accounts until they were in their forties, by law. Are you a worker? The weekend was mocked as a utopian idea when labor unions first began to fight for it. Are you black, or Asian, or disabled? You don’t need me to fill in this list.

So I told myself: if you hear a thought in your head telling you that we can’t deal with the social causes of depression and anxiety, you should stop and realize that’s a symptom of the depression and anxiety itself.

Yes, the changes we need now are huge. They’re about the size of the revolution in how gay people were treated. But that revolution happened.

There’s a huge fight ahead of us to really deal with these problems. But that’s because it’s a huge crisis. We can deny that, but then we’ll stay trapped in the problem. Andrew taught me: The response to a huge crisis isn’t to go home and weep. It’s to go big. It’s to demand something that seems impossible, and not rest until you’ve achieved it.

Every now and then, Rutger, the leading European campaigner for a universal basic income, will read a news story about somebody who has made a radical career choice. A fifty-year-old man realizes he’s unfulfilled as a manager so he quits, and becomes an opera singer. A forty-five-year-old woman quits Goldman Sachs and goes to work for a charity. “It is always framed as something heroic,” Rutger told me, as we drank our tenth Diet Coke between us. People ask them, in awe: “Are you really going to do what you want to do?” Are you really going to change your life, so you are doing something that fulfills you?

It’s a sign, Rutger says, of how badly off track we’ve gone, that having fulfilling work is seen as a freakish exception, like winning the lottery, instead of how we should all be living. Giving everyone a guaranteed basic income, he says, “is actually all about making it so we tell everyone, ‘Of course you’re going to do what you want to do. You’re a human being. You only live once. What would you want to do instead, something you don’t want to do?’”

. . .

from

Lost Connections. Uncovering the Real Causes of Depression and the Unexpected Solutions

by Johann Hari

get it at Amazon.com

CHILDHOOD TRAUMA AND MENTAL ILLNESS. Overcoming Childhood Trauma. Beyond the smoke – Johann Hari.

Depression isn’t a disease; depression is a normal response to abnormal life experiences.

For every category of traumatic experience you go through as a kid, you are radically more likely to become depressed as an adult. The greater the trauma, the greater your risk of depression, anxiety, or suicide.

Chronic adversities change the architecture of a child’s brain, altering the expression of genes that control stress hormone output, triggering an overactive inflammatory stress response for life, and predisposing the child to adult disease.

Emotional abuse, especially, is more likely to cause depression than any other kind of trauma, even sexual molestation. Being treated cruelly by your parents is the biggest driver of depression, out of all categories.

Vincent Felitti didn’t want to discover just a sad fact, he wanted to discover a solution. He was the doctor who uncovered the startling evidence about the role childhood trauma plays in causing depression and anxiety later in life. He proved that childhood trauma makes you far more likely to be depressed or severely anxious as an adult. He traveled across the United States explaining the science, and there is now a broad scientific consensus that he was right. But for Vincent, that wasn’t the point. He didn’t want to tell people who’d survived trauma that they were broken and doomed to a diminished life because they were not properly protected as kids. He wanted to help them out of this pain. But how?

He had established these facts partly by sending a questionnaire to every single person who received health care from the insurance company Kaiser Permanente. It asked about ten traumatic things that can happen to you as a kid, and then matched them against your current health. It was only after he had been doing this for more than a year, and the data was clear, that Vincent had an idea.

What if, when a patient checked that they had suffered a trauma in childhood, the doctor waited until they next came in for health care of any kind, and asked the patient about it? Would that make any difference?

So they began an experiment. Every doctor providing help to a Kaiser Permanente patient, for anything from hemorrhoids to eczema to schizophrenia, was told to look at the patient’s trauma questionnaire, and if the patient had suffered a childhood trauma, the doctors were given a simple instruction. They were told to say something like: “I see you had to survive X or Y in your childhood. I’m sorry that happened to you, it shouldn’t have. Would you like to talk about those experiences?” If the patient said she did, the doctor was told to express sympathy, and to ask: Do you feel it had negative long-term effects on you? Is it relevant to your health today?

The goal was to offer the patient two things at the same time. The first was an opportunity to describe the traumatic experience, to craft a story about it, so the patient could make sense of it. As this experiment began, one of the things they discovered almost immediately is that many of the patients had literally never before acknowledged what happened to them to another human being.

The second, just as crucial, was to show them that they wouldn’t be judged. On the contrary, as Vincent explained to me, the purpose was for them to see that an authority figure, who they trusted, would offer them real compassion for what they’d gone through.

So the doctors started to ask the questions. While some patients didn’t want to talk about it, many of them did. Some started to explain about being neglected, or sexually assaulted, or beaten by their parents. Most, it turned out, had never asked themselves if these experiences were relevant to their health today. Prompted in this way, they began to think about it.

What Vincent wanted to know was, would this help? Or would it be harmful, stirring up old traumas? He waited anxiously for the results to be compiled from tens of thousands of these consultations.

Finally, the figures came in. In the months and years that followed, the patients who had their trauma compassionately acknowledged by an authority figure seemed to show a significant reduction in their illnesses: they were 35 percent less likely to return for medical help for any condition.

At first, the doctors feared that this might be because they had upset the patients and they had felt shamed. But literally nobody complained; and in follow-ups, a large number of patients said they were glad to have been asked. For example, one elderly woman, who had described being raped as a child for the first time, wrote them a letter: “Thank you for asking,” it said simply. “I feared I would die, and no one would ever know what had happened.”

In a smaller pilot study, after being asked these questions, the patients were given the option of discussing what had happened in a session with a psychoanalyst. Those patients were 50 percent less likely to come back to the doctor saying they felt physically ill, or seeking drugs, in the following year.

So it appeared that they were visiting the doctor less because they were actually getting less anxious, and less unwell. These were startling results. How could that be? The answer, Vincent suspects, has to do with shame. “In that very brief process,” he told me, “one person tells somebody else who’s important to them something [they regard as] deeply shameful about themselves, typically for the first time in their life. And she comes out of that with the realization, ‘I still seem to be accepted by this person.’ It’s potentially transformative.”

What this suggests is that it’s not just the childhood trauma in itself that causes these problems, including depression and anxiety; it’s hiding away the childhood trauma. It’s not telling anyone because you’re ashamed. When you lock it away in your mind, it festers, and the sense of shame grows. As a doctor, Vincent can’t (alas) invent time machines to go back and prevent the abuse. But he can help his patients to stop hiding, and to stop feeling ashamed.

There is a great deal of evidence that a sense of humiliation plays a big role in depression. I wondered whether this was relevant here, and Vincent told me: “I believe that what we’re doing is very efficiently providing a massive reduction in humiliation and poor self-concept.” He started to see it as a secular version of confession in the Catholic Church. “I’m not saying this as a religious person, because I’m not [religious], but confession has been in use for eighteen hundred years. Maybe it meets some basic human need if it’s lasted that long.”

You need to tell somebody what has happened to you, and you need to know they don’t regard you as being worth less than them. This evidence suggests that by reconnecting a person with his childhood trauma, and showing him that an outside observer doesn’t see it as shameful, you go a significant way toward helping to set him free from some of its negative effects.

“Now, is that all that needs to be done?” Vincent asked me. “No. But it’s a hell of a big step forward.”

Can this be right? There is evidence, from other scientific studies, that shame makes people sick. For example, closeted gay men, during the AIDS crisis, died on average two to three years earlier than openly gay men, even when they got health care at the same point in their illness. Sealing off a part of yourself and thinking it’s disgusting poisons your life. Could the same dynamic be at work here?

The scientists involved are the first to stress that more research needs to be done to find out how to build on this encouraging first step. This should only be the start. “Right now, I think that is waiting to happen, in terms of the science of it,” Vincent’s scientific partner, Robert Anda, told me. “What you’ve asked about is going to require a whole new thinking, and a generation of studies that has to put all this together. It hasn’t been done yet.”

I didn’t talk at all about the violence and abuse I survived as a child until I was in my mid-twenties, when I had a brilliant therapist. I was describing the course of my childhood to him, and I told him the story I had told myself my whole life: that I had experienced these things because I had done something wrong, and therefore I deserved it.

“Listen to what you’re saying,” he said to me. At first I didn’t understand what he meant. But then he repeated it back to me. “Do you think any child should be treated like that? What would you say if you saw an adult saying that to a ten-year-old now?”

Because I had kept these memories locked away, I had never questioned the narrative I had developed back then. It seemed natural to me. So I found his question startling.

At first I defended the adults who had behaved this way. I attacked the memory of my childhood self. It was only slowly, over time, that I came to see what he was saying.

And I felt a real release of shame.

. . .

from

Lost Connections. Uncovering the Real Causes of Depression and the Unexpected Solutions

by Johann Hari

get it at Amazon.com

ADVERTISING SHITS IN YOUR HEAD. Reconnecting to Meaningful Values. JUNK VALUES. Consumerism literally is depressing – Johann Hari.

Advertising is the PR team for an economic system, Neoliberal Globalisation, that operates by making us feel inadequate and telling us the solution is to constantly spend.

We are constantly bombarded with messages that we will feel better only if we buy some specific product; and then buy something more; and buy again, and on and on, until finally your family buys your coffin.

Can we turn off the autopilot, and take back control for ourselves?

Spending often isn’t about the object itself. It is about getting to a psychological state that makes you feel better.

When there is pollution in the air that makes us feel worse, we ban the source of the pollution.

Advertising is a form of mental pollution.

When I was trying to apply everything I had learned to change, in order to be less depressed, I felt a dull, insistent tug on me. I kept getting signals that the way to be happy is simple. Buy stuff. Show it off. Display your status. Acquire things. These impulses called to me, from every advertisement, and from so many social interactions. I had learned from Tim Kasser that these are junk values, a trap that leads only to greater anxiety and depression. But what is the way beyond them? I could understand the arguments against them very well. I was persuaded. But there they were, in my head, and all around me, trying to pull me back down.

But Tim, I learned, has been proposing two ways, as starters, to wriggle free. The first is defensive. And the second is proactive, a way to stir our different values.

When there is pollution in the air that makes us feel worse, we ban the source of the pollution: we don’t allow factories to pump lead into our air. Advertising, he says, is a form of mental pollution. So there’s an obvious solution. Restrict or ban mental pollution, just like we restrict or ban physical pollution.

This isn’t an abstract idea. It has already been tried in many places. For example, the city of São Paulo, in Brazil, was being slowly smothered by billboards. They covered every possible space; gaudy logos and brands dominated the skyline wherever you looked. It had made the city look ugly, and made people feel ugly, by telling them everywhere they looked that they had to consume.

So in 2007 the city’s government took a bold step: they banned all outdoor advertising, everything. They called it the Clean City Law. As the signs were removed one by one, people began to see beautiful old buildings that had long been hidden. The constant ego-irritation of being told to spend was taken away, and was replaced with works of public art. Some 70 percent of the city’s residents say the change has made it a better place. I went there to see it, and almost everyone says the city seems somehow psychologically cleaner and clearer than it did before.

We could take this insight and go further. Several countries, including Sweden and Greece, have banned advertising directed at children. While I was writing this book, there was a controversy after a company marketing diet products put advertisements in the London Underground asking, ARE YOU BEACH BODY READY? next to a picture of an impossibly lithe woman. The implication was that if you are one of the 99.99 percent of humans who look less buff than this, you are not “ready” to show your flesh on the beach. There was a big backlash, and the posters were eventually banned. It prompted a wave of protests across London, where people defaced ads with the words “Advertising shits in your head.”

It made me think: Imagine if we had a tough advertising regulator who wouldn’t permit ads designed to make us feel bad in any way. How many ads would survive? That’s an achievable goal, and it would clear a lot of mental pollution from our minds.

This has some value in itself, but I think the fight for it could spur a deeper conversation. Advertising is only the PR team for an economic system that operates by making us feel inadequate and telling us the solution is to constantly spend. My hunch is that, if we start to really talk about how this affects our emotional health, we will begin to see the need for more radical changes.

There was a hint of how this might start in an experiment that tried to go deeper, not just to block bad messages that divert our desires onto junk, but to see if we can draw out our positive values. This led to the second, and most exciting, path back that Tim has explored.

The kids were telling Nathan Dungan one thing, over and over again. They needed stuff. They needed consumer objects. And they were frustrated, outright angry, that they weren’t getting them. Their parents were refusing to buy the sneakers or designer clothes or latest gadgets that they needed to have, and it was throwing them into an existential panic. Didn’t their parents know how important it is to have all this?

Nathan didn’t expect to be having these conversations. He was a middle-aged man who had worked in financial services in Pennsylvania for years, advising people on investments. One day, he was talking to an educator at a middle school and she explained that the kids she was working with, middle-class, not rich, had a problem. They thought satisfaction and meaning came from buying objects. When their parents couldn’t afford them, they seemed genuinely distressed. She asked, could Nathan come in and talk to the kids about financial realities?

He agreed cautiously. But that decision was going to set him on a steep learning curve, and lead him to challenge a lot of what he took for granted.

Nathan went in believing his task was obvious. He was there to educate the kids, and their parents, about how to budget, and how to live within their financial means. But then he hit this wall of need, this ravenous hunger for stuff. To him, it was baffling. Why do they want it so badly? What’s the difference between the sneakers with the Nike swoosh and the sneakers without? Why would that gap be so significant that it would send kids into a panic?

He began to wonder if he should be talking not about how to budget, but why the teenagers wanted these things in the first place. And it went deeper than that. There was something about seeing teenagers craving apparently meaningless material objects that got Nathan to think, as adults, are we so different?

Nathan had no idea how to start that conversation, so he began to wing it. And it led to a striking scientific experiment, where he teamed up with Tim Kasser.

A short time later, in a conference room in Minneapolis, Nathan met with the families who were going to be the focus of his experiment. They were a group of sixty parents and their teenage kids, sitting in front of him on chairs. He was going to have a series of long sessions with them over three months to explore these issues and the alternatives. (At the same time, the experiment followed a separate group of the same size who didn’t meet with Nathan or get any other help. They were the experiment’s control group.)

Nathan started the conversation by handing everyone worksheets with a list of open-ended questions. He explained there was no right answer: he just wanted them to start to think about these questions. One of them said: “For me, money is …” and you had to fill in the blank.

At first, people were confused. They’d never been asked a question like this before. Lots of the participants wrote that money is scarce. Or a source of stress. Or something they try not to think about. They then broke into groups of eight, and began to contemplate their answers, haltingly. Many of the kids had never heard their parents talk about money worries before.

Then the groups began to discuss the question, why do I spend? They began to list the reasons why they buy necessities (which are obvious: you’ve got to eat), and then the reasons why they buy the things that aren’t necessities. Sometimes, people would say, they bought nonessential stuff when they felt down. Often, the teenagers would say, they craved this stuff so badly because they wanted to belong, the branded clothes meant you were accepted by the group, or got a sense of status.

As they explored this in the conversation, it became clear quite quickly, without any prompting from Nathan, that spending often isn’t about the object itself. It is about getting to a psychological state that makes you feel better. These insights weren’t deeply buried. People offered them quite quickly, although when they said them out loud, they seemed a little surprised. They knew it just below the surface, but they’d never been asked to articulate that latent feeling before.

Then Nathan asked people to list what they really value, the things they think are most important in life. Many people said it was looking after your family, or telling the truth, or helping other people. One fourteen-year-old boy wrote simply “love,” and when he read it out, the room stopped for a moment, and “you could hear a pin drop,” Nathan told me. “What he was speaking to was, how important is it for me to be connected?”

Just asking these two questions, “What do you spend your money on?” and “What do you really value?”, made most people see a gap between their two sets of answers, a gap they then began to discuss. They were accumulating and spending money on things that were not, in the end, the things they believed in their hearts mattered. Why would that be?

Nathan had been reading up on the evidence about how we come to crave all this stuff. He learned that the average American is exposed to up to five thousand advertising impressions a day, from billboards to logos on T-shirts to TV advertisements. It is the sea in which we swim. And “the narrative is that if you [buy] this thing, it’ll yield more happiness, and so thousands of times a day you’re just surrounded with that message,” he told me. He began to ask: “Who’s shaping that narrative?” It’s not people who have actually figured out what will make us happy and who are charitably spreading the good news. It’s people who have one motive only, to make us buy their product.

In our culture, Nathan was starting to believe, we end up on a materialistic autopilot. We are constantly bombarded with messages that we will feel better (and less stinky, and less disgustingly shaped, and less all-around worthless) only if we buy some specific product; and then buy something more; and buy again, and on and on, until finally your family buys your coffin. What he wondered was: if people stopped to think about this and discussed alternatives, as his group was doing, could we turn off the autopilot and take back control for ourselves?

At the next session, he asked the people in the experiment to do a short exercise in which everyone had to list a consumer item they felt they had to have right away. They had to describe what it was, how they first heard about it, why they craved it, how they felt when they got it, and how they felt after they’d had it for a while. For many people, as they talked this through, something became obvious. The pleasure was often in the craving and anticipation. We’ve all had the experience of finally getting the thing we want, getting it home, and feeling oddly deflated, only to find that before long, the craving cycle starts again.

People began to talk about how they had been spending, and they were slowly seeing what it was really all about. Often, not always, it was about “filling a hole. It fills some sort of loneliness gap.” But by pushing them toward that quick, rapidly evaporating high, it was also nudging them away from the things they really valued and that would make them feel satisfied in the long run. They felt they were becoming hollow.

There were some people, both teens and adults, who rejected this fiercely. They said that the stuff made them happy, and they wanted to stick with it. But most people in the group were eager to think differently.

They began to talk about advertising. At first, almost everyone declared that ads might affect other people but didn’t hold much sway over them. “Everyone wants to be smarter than the ad,” Nathan said to me later. But he guided them back to the consumer objects they had longed for. Before long, members of the group were explaining to each other: “There’s no way they’re spending billions of dollars if it’s not having an impact. They’re just not doing that. No company is going to do that.”

So far, it had been about getting people to question the junk values we have been fed for so long.

But then came the most important part of this experiment.

Nathan explained the difference that I talked about before between extrinsic and intrinsic values. He asked people to draw up a list of their intrinsic values, the things they thought were important as ends in themselves, not because of what you get out of them. Then he asked: How would you live differently if you acted on these other values? Members of the groups discussed it.

They were surprised. We are constantly encouraged to talk about extrinsic values, but the moments when we are asked to speak our intrinsic values out loud are rare. Some said, for example, they would work less and spend more time with the people they loved. Nathan wasn’t making the case for any of this. Just asking a few open questions took most of the group there spontaneously.

Our intrinsic motivations are always there, Nathan realized, lying “dormant. It was brought out into the light,” he said. Conversations like this, Nathan was realizing, don’t just happen “in our culture today. We don’t allow space or create space for these really critical conversations to take place, so it just creates more and more isolation.”

Now that they had identified how they had been duped by junk values, and identified their intrinsic values, Nathan wanted to know: could the group choose, together, to start to follow their intrinsic goals? Instead of being accountable to advertising, could they make themselves accountable to their own most important values, and to a group that was trying to do the same thing? Could they consciously nurture meaningful values?

Now that each person had figured out his or her own intrinsic goals, they would report back at the next series of meetings about what they’d done to start moving toward them. They held each other accountable. They now had a space in which they could think about what they really wanted in life, and how to achieve it. They would talk about how they had found a way to work less and see their kids more, for example, or how they had taken up a musical instrument, or how they had started to write.

Nobody knew whether all this would have any real effect, though. Could these conversations really reduce people’s materialism and increase their intrinsic values?

Independent social scientists measured the levels of materialism of the participants at the start of the experiment, and they measured them again at the end. As he waited for the results, Nathan was nervous. This was a small intervention, in the middle of a lifetime of constant consumerist bombardment. Would it make any difference at all?

When the results came through, both Nathan and Tim were thrilled. Tim had shown before that materialism correlates strongly with increased depression and anxiety. This experiment showed, for the first time, that it was possible to intervene in people’s lives in a way that would significantly reduce their levels of materialism. The people who had gone through this experiment had significantly lower materialism and significantly higher self-esteem. It was a big and measurable effect.

It was an early piece of proof that a determined effort to reverse the values that are making us so unhappy can work.

The people who took part in the study could never have made these changes alone, Nathan believes. “There was a lot of power in that connection and that community for people, removing the isolation and the fear. There’s a lot of fear around this topic.” It was only together, as a group, that they were able to “peel those layers away, so you could actually get to the meaning, to the heart: their sense of purpose.”

I asked Nathan if we could integrate this into our ordinary lives, if we all need to form and take part in a kind of Alcoholics Anonymous for junk values, a space where we can all meet to challenge the depression-generating ideas we’ve been taught and learn to listen instead to our intrinsic values. “I would say, without question,” he said. Most of us sense we have been valuing the wrong things for too long. We need to create, he told me, a “counter-rhythm” to the junk values that have been making us mentally sick.

From his bare conference room in Minneapolis, Nathan has proven something: that we are not imprisoned in the values that have been making us feel so lousy for so long. By coming together with other people, thinking deeply, and reconnecting with what really matters, we can begin to dig a tunnel back to meaningful values.

Also on TPPA = CRISIS

JUNK VALUES. CONSUMERISM LITERALLY IS DEPRESSING

Johann Hari

Just as we have shifted en masse from eating food to eating junk food, we have also shifted from having meaningful values to having junk values.

All this mass-produced fried chicken looks like food, and it appeals to the part of us that evolved to need food; yet it doesn’t give us what we need from food, nutrition. Instead, it fills us with toxins.

In the same way, all these materialistic values, telling us to spend our way to happiness, look like real values; they appeal to the part of us that has evolved to need some basic principles to guide us through life; yet they don’t give us what we need from values, a path to a satisfying life.

Studies show that materialistic people are having a worse time, day by day, on all sorts of fronts. They feel sicker, and they are angrier. Something about a strong desire for materialistic pursuits actually affects their day-to-day lives and decreases the quality of their daily experience. They experience less joy, and more despair.

For thousands of years, philosophers have been suggesting that if you overvalue money and possessions, or if you think about life mainly in terms of how you look to other people, you will be unhappy.

Modern research indicates that materialistic people, who think happiness comes from accumulating stuff and a superior status, have much higher levels of depression and anxiety. The more our kids value getting things and being seen to have things, the more likely they are to be suffering from depression and anxiety.

The pressure, in our culture, runs overwhelmingly one way: spend more, work more. We live under a system that constantly distracts us from what’s really good about life. We are being propagandized to live in a way that doesn’t meet our basic psychological needs, so we are left with a permanent, puzzling sense of dissatisfaction.

The more materialistic and extrinsically motivated you become, the more depressed you will be.


. . .

from

Lost Connections. Uncovering the Real Causes of Depression and the Unexpected Solutions

by Johann Hari

get it at Amazon.com

The Spirit Level. Why equality is better for everyone – Richard Wilkinson and Kate Pickett.

“For the first time in history, the poor are on average fatter than the rich.”

How is it that we have created so much mental and emotional suffering despite levels of wealth and comfort unprecedented in human history? The luxury and extravagance of our lives is so great that it threatens the planet.

At the pinnacle of human material and technical achievement, we find ourselves anxiety-ridden, prone to depression, worried about how others see us, unsure of our friendships, driven to consume and with little or no community life. Our societies are, despite their material success, increasingly burdened by their social failings.

If we are to gain further improvements in the real quality of life, we need to shift attention from material standards and economic growth to ways of improving the psychological and social wellbeing of whole societies. It is possible to improve the quality of life for everyone. We shall set out the evidence and our reasons for interpreting it the way we do, so that you can judge for yourself.

Social theories are partly theories about ourselves; indeed, they might almost be regarded as part of our self-awareness or self-consciousness of societies. The knowledge that we cannot carry on as we have, that change is necessary, is perhaps grounds for optimism: maybe we do, at last, have the chance to make a better world.

The truth is that both our broken society and broken economy resulted from the growth of inequality. The problems in rich countries are not caused by the society not being rich enough (or even by being too rich) but by the scale of material differences between people within each society being too big. What matters is where we stand in relation to others in our own society.

Why do we mistrust people more in the UK than in Japan? Why do Americans have higher rates of teenage pregnancy than the French? What makes the Swedes thinner than the Greeks? The answer: inequality.

This groundbreaking book, based on years of research, provides hard evidence to show:

  • How almost everything from life expectancy to depression levels, violence to illiteracy is affected not by how wealthy a society is, but how equal it is.
  • That societies with a bigger gap between rich and poor are bad for everyone in them including the well-off.
  • How we can find positive solutions and move towards a happier, fairer future.

Urgent, provocative and genuinely uplifting, The Spirit Level has been heralded as providing a new way of thinking about ourselves and our communities, and could change the way you see the world.

Richard Wilkinson has played a formative role in international research on the social determinants of health. He studied economic history at the London School of Economics before training in epidemiology and is Professor Emeritus at the University of Nottingham Medical School, Honorary Professor at University College London and Visiting Professor at the University of York.

Kate Pickett is Professor of Epidemiology at the University of York and a National Institute for Health Research Career Scientist. She studied physical anthropology at Cambridge, nutritional sciences at Cornell and epidemiology at the University of California Berkeley.

People usually exaggerate the importance of their own work and we worry about claiming too much. But this book is not just another set of nostrums and prejudices about how to put the world to rights. The work we describe here comes out of a very long period of research (over fifty person-years between us) devoted, initially, to trying to understand the causes of the big differences in life expectancy, the ‘health inequalities’ between people at different levels in the social hierarchy in modern societies. The focal problem initially was to understand why health gets worse at every step down the social ladder, so that the poor are less healthy than those in the middle, who in turn are less healthy than those further up.

Like others who work on the social determinants of health, our training in epidemiology means that our methods are those used to trace the causes of diseases in populations, trying to find out why one group of people gets a particular disease while another group doesn’t, or to explain why some disease is becoming more common. The same methods can, however, also be used to understand the causes of other kinds of problems, not just health.

Epidemiology is the study and analysis of the distribution (who, when, and where) and determinants of health and disease conditions in defined populations.

Just as the term ‘evidence-based medicine’ is used to describe current efforts to ensure that medical treatment is based on the best scientific evidence of what works and what does not, we thought of calling this book ‘Evidence-based Politics’. The research which underpins what we describe comes from a great many research teams in different universities and research organizations. Replicable methods have been used to study observable and objective outcomes, and peer-reviewed research reports have been published in academic, scientific journals.

This does not mean that there is no guesswork. Results always have to be interpreted, but there are usually good reasons for favouring one interpretation over another. Initial theories and expectations are often called into question by later research findings which make it necessary to think again. We would like to take you on the journey we have travelled, signposted by crucial bits of evidence and leaving out only the various culs-de-sac and wrong turnings that wasted so much time, to arrive at a better understanding of how we believe it is possible to improve the quality of life for everyone in modern societies. We shall set out the evidence and our reasons for interpreting it the way we do, so that you can judge for yourself.

At an intuitive level people have always recognized that inequality is socially corrosive. But there seemed little reason to think that levels of inequality in developed societies differed enough to expect any measurable effects. The reasons which first led one of us to look for effects seem now largely irrelevant to the striking picture which has emerged. Many discoveries owe as much to luck as judgement.

The reason why the picture we present has not been put together until now is probably that much of the data has only become available in recent years. With internationally comparable information not only on incomes and income distribution but also on different health and social problems, it could only have been a matter of time before someone came up with findings like ours. The emerging data have allowed us, and other researchers, to analyse how societies differ, to discover how one factor is related to another, and to test theories more rigorously.

It is easy to imagine that discoveries are more rapidly accepted in the natural than in the social sciences, as if physical theories are somehow less controversial than theories about the social world. But the history of the natural sciences is littered with painful personal disputes, which started off as theoretical disagreements but often lasted for the rest of people’s lives. Controversies in the natural sciences are usually confined to the experts: most people do not have strong views on rival theories in particle physics. But they do have views on how society works. Social theories are partly theories about ourselves; indeed, they might almost be regarded as part of our self-awareness or self-consciousness of societies. While natural scientists do not have to convince individual cells or atoms to accept their theories, social theorists are up against a plethora of individual views and powerful vested interests.

In 1847, Ignaz Semmelweis discovered that if doctors washed their hands before attending women in childbirth, it dramatically reduced deaths from puerperal fever. But before his work could have much benefit he had to persuade people, principally his medical colleagues, to change their behaviour. His real battle was not his initial discovery but what followed from it. His views were ridiculed, and he was eventually driven to insanity and suicide. Much of the medical profession did not take his work seriously until Louis Pasteur and Joseph Lister had developed the germ theory of disease, which explained why hygiene was important.

We live in a pessimistic period. As well as being worried by the likely consequences of global warming, it is easy to feel that many societies are, despite their material success, increasingly burdened by their social failings. And now, as if to add to our woes, we have the economic recession and its aftermath of high unemployment. But the knowledge that we cannot carry on as we have, that change is necessary, is perhaps grounds for optimism: maybe we do, at last, have the chance to make a better world. The extraordinarily positive reception of the hardback edition of this book confirms that there is a widespread appetite for change and a desire to find positive solutions to our problems.

We have made only minor changes to this edition. Details of the statistical sources, methods and results, from which we thought most readers would want to be spared, are now provided in an appendix for those with a taste for data. Chapter 13, which is substantially about causation, has been slightly reorganized and strengthened. We have also expanded our discussion of what has made societies substantially more or less equal in the past. Because we conclude that these changes have been driven by changes in political attitudes, we think it is a mistake to discuss policy as if it were a matter of finding the right technical fix. As there are really hundreds of ways that societies can become more equal if they choose to, we have not nailed our colours to one or other set of policies. What we need is not so much a clever solution as a society which recognizes the benefits of greater equality.

If correct, the theory and evidence set out in this book tells us how to make substantial improvements in the quality of life for the vast majority of the population. Yet unless it is possible to change the way most people see the societies they live in, the theory will be stillborn. Public opinion will only support the necessary political changes if something like the perspective we outline in this book permeates the public mind.

We have therefore set up a not-for-profit organization called The Equality Trust (described at the end of this book) to make the kind of evidence set out in the following pages better known and to suggest that there is a way out of the woods for us all.

PART ONE

Material Success, Social Failure

1 The end of an era

“I care for riches, to make gifts to friends, or lead a sick man back to health with ease and plenty. Else small aid is wealth for daily gladness; once a man be done with hunger, rich and poor are all as one.” Euripides, Electra

It is a remarkable paradox that, at the pinnacle of human material and technical achievement, we find ourselves anxiety-ridden, prone to depression, worried about how others see us, unsure of our friendships, driven to consume and with little or no community life. Lacking the relaxed social contact and emotional satisfaction we all need, we seek comfort in overeating, obsessive shopping and spending, or become prey to excessive alcohol, psychoactive medicines and illegal drugs.

How is it that we have created so much mental and emotional suffering despite levels of wealth and comfort unprecedented in human history? Often what we feel is missing is little more than time enjoying the company of friends, yet even that can seem beyond us. We talk as if our lives were a constant battle for psychological survival, struggling against stress and emotional exhaustion, but the truth is that the luxury and extravagance of our lives is so great that it threatens the planet.

Research from the Harwood Institute for Public Innovation (commissioned by the Merck Family Fund) in the USA shows that people feel that ‘materialism’ somehow comes between them and the satisfaction of their social needs. A report entitled Yearning for Balance, based on a nationwide survey of Americans, concluded that they were ‘deeply ambivalent about wealth and material gain’. A large majority of people wanted society to ‘move away from greed and excess toward a way of life more centred on values, community, and family’. But they also felt that these priorities were not shared by most of their fellow Americans, who, they believed, had become ‘increasingly atomized, selfish, and irresponsible’. As a result they often felt isolated. However, the report says, when brought together in focus groups to discuss these issues, people were ‘surprised and excited to find that others share[d] their views’. Rather than uniting us with others in a common cause, the unease we feel about the loss of social values and the way we are drawn into the pursuit of material gain is often experienced as if it were a purely private ambivalence which cuts us off from others.

Mainstream politics no longer taps into these issues and has abandoned the attempt to provide a shared vision capable of inspiring us to create a better society. As voters, we have lost sight of any collective belief that society could be different.

Instead of a better society, the only thing almost everyone strives for is to better their own position as individuals within the existing society.

The contrast between the material success and social failure of many rich countries is an important signpost. It suggests that, if we are to gain further improvements in the real quality of life, we need to shift attention from material standards and economic growth to ways of improving the psychological and social wellbeing of whole societies. However, as soon as anything psychological is mentioned, discussion tends to focus almost exclusively on individual remedies and treatments. Political thinking seems to run into the sand.

It is now possible to piece together a new, compelling and coherent picture of how we can release societies from the grip of so much dysfunctional behaviour. A proper understanding of what is going on could transform politics and the quality of life for all of us. It would change our experience of the world around us, change what we vote for, and change what we demand from our politicians.

In this book we show that the quality of social relations in a society is built on material foundations. The scale of income differences has a powerful effect on how we relate to each other. Rather than blaming parents, religion, values, education or the penal system, we will show that the scale of inequality provides a powerful policy lever on the psychological wellbeing of all of us. Just as it once took studies of weight gain in babies to show that interacting with a loving care-giver is crucial to child development, so it has taken studies of death rates and of income distribution to show the social needs of adults and to demonstrate how societies can meet them.

Long before the financial crisis which gathered pace in the later part of 2008, British politicians, commenting on the decline of community or the rise of various forms of anti-social behaviour, would sometimes refer to our ‘broken society’. The financial collapse shifted attention to the broken economy, and while the broken society was sometimes blamed on the behaviour of the poor, the broken economy was widely attributed to the rich.

Stimulated by the prospects of ever bigger salaries and bonuses, those in charge of some of the most trusted financial institutions threw caution to the wind and built houses of cards which could stand only within the protection of a thin speculative bubble. But the truth is that both the broken society and the broken economy resulted from the growth of inequality.

WHERE THE EVIDENCE LEADS

We shall start by outlining the evidence which shows that we have got close to the end of what economic growth can do for us. For thousands of years the best way of improving the quality of human life was to raise material living standards. When the wolf was never far from the door, good times were simply times of plenty. But for the vast majority of people in affluent countries the difficulties of life are no longer about filling our stomachs, having clean water and keeping warm. Most of us now wish we could eat less rather than more. And, for the first time in history, the poor are on average fatter than the rich.

Economic growth, for so long the great engine of progress, has, in the rich countries, largely finished its work. Not only have measures of wellbeing and happiness ceased to rise with economic growth but, as affluent societies have grown richer, there have been long-term rises in rates of anxiety, depression and numerous other social problems. The populations of rich countries have got to the end of a long historical journey.

Figure 1.1 Only in its early stages does economic development boost life expectancy.

The course of the journey we have made can be seen in Figure 1.1. It shows the trends in life expectancy in relation to Gross National Income per head in countries at various stages of economic development. Among poorer countries, life expectancy increases rapidly during the early stages of economic development, but then, starting among the middle-income countries, the rate of improvement slows down. As living standards rise and countries get richer and richer, the relationship between economic growth and life expectancy weakens. Eventually it disappears entirely, and the rising curve in Figure 1.1 becomes horizontal, showing that for rich countries to get richer adds nothing further to their life expectancy. That has already happened in the richest thirty or so countries, nearest the top right-hand corner of Figure 1.1.

The reason why the curve in Figure 1.1 levels out is not because we have reached the limits of life expectancy. Even the richest countries go on enjoying substantial improvements in health as time goes by. What has changed is that the improvements have ceased to be related to average living standards. With every ten years that passes, life expectancy among the rich countries increases by between two and three years. This happens regardless of economic growth, so that a country as rich as the USA no longer does better than Greece or New Zealand, although they are not much more than half as rich. Rather than moving out along the curve in Figure 1.1, what happens as time goes by is that the curve shifts upwards: the same levels of income are associated with higher life expectancy. Looking at the data, you cannot help but conclude that as countries get richer, further increases in average living standards do less and less for health.

While good health and longevity are important, there are other components of the quality of life. But just as the relationship between health and economic growth has levelled off, so too has the relationship with happiness. Like health, how happy people are rises in the early stages of economic growth and then levels off. This is a point made strongly by the economist Richard Layard, in his book on happiness.

Figure 1.2 Happiness and average incomes (data for UK unavailable).

Figures on happiness in different countries are probably strongly affected by culture. In some societies not saying you are happy may sound like an admission of failure, while in another claiming to be happy may sound self-satisfied and smug. But, despite the difficulties, Figure 1.2 shows the ‘happiness curve’ levelling off in the richest countries in much the same way as life expectancy. In both cases the important gains are made in the earlier stages of economic growth, but the richer a country gets, the less getting still richer adds to the population’s happiness. In these graphs the curves for both happiness and life expectancy flatten off at around $25,000 per capita, but there is some evidence that the income level at which this occurs may rise over time.

The evidence that happiness levels fail to rise further as rich countries get still richer does not come only from comparisons of different countries at a single point in time (as shown in Figure 1.2). In a few countries, such as Japan, the USA and Britain, it is possible to look at changes in happiness over sufficiently long periods of time to see whether they rise as a country gets richer. The evidence shows that happiness has not increased even over periods long enough for real incomes to have doubled. The same pattern has also been found by researchers using other indicators of wellbeing such as the ‘measure of economic welfare’ or the ‘genuine progress indicator’, which try to calculate net benefits of growth after removing costs like traffic congestion and pollution.

So whether we look at health, happiness or other measures of wellbeing there is a consistent picture. In poorer countries, economic development continues to be very important for human wellbeing. Increases in their material living standards result in substantial improvements both in objective measures of wellbeing like life expectancy, and in subjective ones like happiness. But as nations join the ranks of the affluent developed countries, further rises in income count for less and less.

This is a predictable pattern. As you get more and more of anything, each addition to what you have, whether loaves of bread or cars, contributes less and less to your wellbeing. If you are hungry, a loaf of bread is everything, but when your hunger is satisfied, many more loaves don’t particularly help you and might become a nuisance as they go stale.
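
In economists’ terms, this is diminishing marginal utility. As a purely illustrative gloss (a standard textbook formulation, not the authors’ own), a logarithmic utility function captures the idea:

```latex
% Logarithmic utility: each extra unit of income adds less than the last.
u(x) = \ln x \quad\Rightarrow\quad u'(x) = \frac{1}{x}
% Going from 1 loaf to 2 adds u(2) - u(1) = \ln 2 of utility, and so does
% going from 10 loaves to 20: the same gain requires ten times the bread.
```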

Sooner or later in the long history of economic growth, countries inevitably reach a level of affluence where ‘diminishing returns’ set in and additional income buys less and less additional health, happiness or wellbeing. A number of developed countries have now had almost continuous rises in average incomes for over 150 years and additional wealth is not as beneficial as it once was.

The trends in different causes of death confirm this interpretation. It is the diseases of poverty which first decline as countries start to get richer. The great infectious diseases, such as tuberculosis, cholera or measles, which are still common in the poorest countries today, gradually cease to be the most important causes of death. As they disappear, we are left with the so-called diseases of affluence, the degenerative cardiovascular diseases and cancers. While the infectious diseases of poverty are particularly common in childhood and frequently kill even in the prime of life, the diseases of affluence are very largely diseases of later life.

One other piece of evidence confirms that the reason why the curves in Figures 1.1 and 1.2 level off is because countries have reached a threshold of material living standards after which the benefits of further economic growth are less substantial. It is that the diseases which used to be called the ‘diseases of affluence’ became the diseases of the poor in affluent societies. Diseases like heart disease, stroke and obesity used to be more common among the rich. Heart disease was regarded as a businessman’s disease and it used to be the rich who were fat and the poor who were thin. But from about the 1950s onwards, in one developed country after another, these patterns reversed. Diseases which had been most common among the better-off in each society reversed their social distribution to become more common among the poor.

THE ENVIRONMENTAL LIMITS TO GROWTH

At the same time as the rich countries reach the end of the real benefits of economic growth, we have also had to recognize the problems of global warming and the environmental limits to growth. The dramatic reductions in carbon emissions needed to prevent runaway climate change and rises in sea levels may mean that even present levels of consumption are unsustainable, particularly if living standards in the poorer, developing world are to rise as they need to. In Chapter 15 we shall discuss the ways in which the perspective outlined in this book fits in with policies designed to reduce global warming.

INCOME DIFFERENCES WITHIN AND BETWEEN SOCIETIES

We are the first generation to have to find new answers to the question of how we can make further improvements to the real quality of human life. What should we turn to if not to economic growth? One of the most powerful clues to the answer to this question comes from the fact that we are affected very differently by the income differences within our own society from the way we are affected by the differences in average income between one rich society and another.

In Chapters 4-12 we focus on a series of health and social problems like violence, mental illness, teenage births and educational failure, which within each country are all more common among the poor than the rich. As a result, it often looks as if the effect of higher incomes and living standards is to lift people out of these problems. However, when we make comparisons between different societies, we find that these social problems have little or no relation to levels of average incomes in a society.

Take health as an example. Instead of looking at life expectancy across both rich and poor countries as in Figure 1.1, look just at the richest countries. Figure 1.3 shows just the rich countries and confirms that among them some countries can be almost twice as rich as others without any benefit to life expectancy. Yet within any of them death rates are closely and systematically related to income.

Figure 1.3 Life expectancy is unrelated to differences in average income between rich countries.

Figure 1.4 shows the relation between death rates and income levels within the USA. The death rates are for people in zip code areas classified by the typical household income of the area in which they live. On the right are the richer zip code areas with lower death rates, and on the left are the poorer ones with higher death rates. Although we use American data to illustrate this, similar health gradients, of varying steepness, run across almost every society. Higher incomes are related to lower death rates at every level in society.

Figure 1.4 Death rates are closely related to differences in income within societies.

Note that this is not simply a matter of the poor having worse health than everyone else. What is so striking about Figure 1.4 is how regular the health gradient is right across society: it is a gradient which affects us all.

Within each country, people’s health and happiness are related to their incomes. Richer people tend, on average, to be healthier and happier than poorer people in the same society. But comparing rich countries it makes no difference whether on average people in one society are almost twice as rich as people in another.

What sense can we make of this paradox that differences in average income or living standards between whole populations or countries don’t matter at all, but income differences within those same populations matter very much indeed? There are two plausible explanations. One is that what matters in rich countries may not be your actual income level and living standard, but how you compare with other people in the same society. Perhaps average standards don’t matter and what does is simply whether you are doing better or worse than other people, where you come in the social pecking order.

The other possibility is that the social gradient in health shown in Figure 1.4 results not from the effects of relative income or social status on health, but from the effects of social mobility, sorting the healthy from the unhealthy. Perhaps the healthy tend to move up the social ladder and the unhealthy end up at the bottom.

This issue will be resolved in the next chapter. We shall see whether compressing, or stretching out, the income differences in a society matters. Do more and less equal societies suffer the same overall burden of health and social problems?

2 Poverty or inequality?

“Poverty is not a certain small amount of goods, nor is it just a relation between means and ends; above all it is a relation between people. Poverty is a social status. It has grown as an invidious distinction between classes.”

Marshall Sahlins, Stone Age Economics

HOW MUCH INEQUALITY?

In the last chapter we saw that economic growth and increases in average incomes have ceased to contribute much to wellbeing in the rich countries. But we also saw that within societies health and social problems remain strongly associated with incomes. In this chapter we will see whether the amount of income inequality in a society makes any difference.

Figure 2.1 How much richer are the richest 20 per cent than the poorest 20 per cent in each country?

Figure 2.1 shows how the size of income differences varies from one developed country to another. At the top are the most equal countries and at the bottom are the most unequal. The length of the horizontal bars shows how much richer the richest 20 per cent of the population is in each country compared to the poorest 20 per cent.

Within countries such as Japan and some of the Scandinavian countries at the top of the chart, the richest 20 per cent are less than four times as rich as the poorest 20 per cent. At the bottom of the chart are countries in which these differences are at least twice as big, including two in which the richest 20 per cent get about nine times as much as the poorest. Among the most unequal are Singapore, USA, Portugal and the United Kingdom. (The figures are for household income, after taxes and benefits, adjusted for the number of people in each household.)

There are lots of ways of measuring income inequality and they are all so closely related to each other that it doesn’t usually make much difference which you use. Instead of the top and bottom 20 per cent, we could compare the top and bottom 10 or 30 per cent. Or we could have looked at the proportion of all incomes which go to the poorer half of the population. Typically, the poorest half of the population get something like 20 or 25 per cent of all incomes and the richest half get the remaining 75 or 80 per cent.

Other more sophisticated measures include one called the Gini coefficient. It measures inequality across the whole society rather than simply comparing the extremes. If all income went to one person (maximum inequality) and everyone else got nothing, the Gini coefficient would be equal to 1. If income was shared equally and everyone got exactly the same (perfect equality), the Gini would equal 0. The lower its value, the more equal a society is. The most common values tend to be between 0.3 and 0.5. Another measure of inequality is called the Robin Hood Index because it tells you what proportion of a society’s income would have to be taken from the rich and given to the poor to get complete equality.
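
To make these measures concrete, here is a minimal sketch in Python (our illustration, not taken from the book; the household incomes below are invented) computing the 20:20 ratio and the Gini coefficient from the same list of incomes:

```python
# Minimal sketch: the 20:20 ratio and the Gini coefficient computed
# from a list of equivalized household incomes. Invented data, purely
# for illustration; not the book's code or its data sources.

def ratio_20_20(incomes):
    """Average income of the richest 20% divided by that of the poorest 20%."""
    ranked = sorted(incomes)
    k = max(1, len(ranked) // 5)      # size of each 20% tail
    return (sum(ranked[-k:]) / k) / (sum(ranked[:k]) / k)

def gini(incomes):
    """Gini coefficient: 0 = perfect equality, 1 = all income to one person."""
    ranked = sorted(incomes)
    n, total = len(ranked), sum(ranked)
    # Standard rank-weighted formula on sorted incomes.
    weighted = sum((i + 1) * x for i, x in enumerate(ranked))
    return (2 * weighted) / (n * total) - (n + 1) / n

households = [9_000, 14_000, 21_000, 28_000, 35_000,
              44_000, 55_000, 70_000, 95_000, 160_000]
print(round(ratio_20_20(households), 1))  # 11.1 on this invented sample
print(round(gini(households), 2))         # 0.43, within the common 0.3-0.5 range
```

On this invented sample the two measures tell the same story, which is the authors’ wider point: the choice of inequality measure rarely changes the picture.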

To avoid being accused of picking and choosing our measures, our approach in this book has been to take measures provided by official agencies rather than calculating our own. We use the ratio of the income received by the top to the bottom 20 per cent whenever we are comparing inequality in different countries: it is easy to understand and it is one of the measures provided ready-made by the United Nations. When comparing inequality in US states, we use the Gini coefficient: it is the most common measure, it is favoured by economists and it is available from the US Census Bureau. In many academic research papers we and others have used two different inequality measures in order to show that the choice of measures rarely has a significant effect on results.

DOES THE AMOUNT OF INEQUALITY MAKE A DIFFERENCE?

Having got to the end of what economic growth can do for the quality of life and facing the problems of environmental damage, what difference do the inequalities shown in Figure 2.1 make?

It has been known for some years that poor health and violence are more common in more unequal societies. However, in the course of our research we became aware that almost all problems which are more common at the bottom of the social ladder are more common in more unequal societies. It is not just ill-health and violence, but also, as we will show in later chapters, a host of other social problems. Almost all of them contribute to the widespread concern that modern societies are, despite their affluence, social failures.

To see whether these problems were more common in more unequal countries, we collected internationally comparable data on health and as many social problems as we could find reliable figures for.

The list we ended up with included:

  • level of trust
  • mental illness (including drug and alcohol addiction)
  • life expectancy and infant mortality
  • obesity
  • children’s educational performance
  • teenage births
  • homicides
  • imprisonment rates
  • social mobility (not available for US states)

Occasionally, what appear to be relationships between different things may arise spuriously or by chance. In order to be confident that our findings were sound, we also collected data for the same health and social problems, or as near as we could get to the same, for each of the fifty states of the USA. This allowed us to check whether or not problems were consistently related to inequality in these two independent settings. As Lyndon Johnson said, ‘America is not merely a nation, but a nation of nations.’

To present the overall picture, we have combined all the health and social problem data for each country, and separately for each US state, to form an Index of Health and Social Problems for each country and US state. Each item in the indexes carries the same weight so, for example, the score for mental health has as much influence on a society’s overall score as the homicide rate or the teenage birth rate. The result is an index showing how common all these health and social problems are in each country and each US state. Things such as life expectancy are reverse scored, so that on every measure higher scores reflect worse outcomes. When looking at the Figures, the higher the score on the Index of Health and Social Problems, the worse things are. (For information on how we selected countries shown in the graphs we present in this book, please see the Appendix.)
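
As a rough sketch of how such an equal-weighted index can be assembled (our illustration under assumptions: the book describes its exact standardization in its appendix, and the numbers below are invented), each measure is converted to a z-score, reverse-scored where a higher raw value is good, and averaged:

```python
# Sketch of an equal-weighted Index of Health and Social Problems
# (our illustration, not the authors' code; all numbers are invented).
# Each measure is standardized to a z-score so that different units
# (years, rates, percentages) can be combined; measures where a higher
# raw value is good (life expectancy) are reverse-scored.

from statistics import mean, pstdev

countries = ["A", "B", "C", "D"]

# measure name -> (value per country, True if a higher raw value is better)
measures = {
    "life_expectancy":    ([81.5, 79.2, 78.0, 82.1], True),
    "homicides_per_100k": ([0.6, 5.4, 1.9, 0.9], False),
    "teen_births_per_1k": ([4.1, 35.0, 12.3, 5.5], False),
}

def z_scores(values):
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

totals = [0.0] * len(countries)
for values, higher_is_better in measures.values():
    zs = z_scores(values)
    if higher_is_better:          # reverse-score so higher always = worse
        zs = [-z for z in zs]
    totals = [t + z for t, z in zip(totals, zs)]

# Equal weights: each measure contributes one z-score to the average.
index = [t / len(measures) for t in totals]
for name, score in zip(countries, index):
    print(f"{name}: {score:+.2f}")   # higher score = worse outcomes
```

Reverse scoring keeps the sign convention consistent: on every measure, a higher index score reflects worse outcomes, as in the book’s figures.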

Figure 2.2 Health and social problems are closely related to inequality among rich countries.

We start by showing, in Figure 2.2, that there is a very strong tendency for ill-health and social problems to occur less frequently in the more equal countries. With increasing inequality (to the right on the horizontal axis), the score on our Index of Health and Social Problems rises. Health and social problems are indeed more common in countries with bigger income inequalities. The two are extraordinarily closely related: chance alone would almost never produce a scatter in which countries lined up like this.

Figure 2.3 Health and social problems are only weakly related to national average income among rich countries.

To emphasize that the prevalence of poor health and social problems in whole societies really is related to inequality rather than to average living standards, we show in Figure 2.3 the same index of health and social problems but this time in relation to average incomes (National Income per person). It shows that there is no similarly clear trend towards better outcomes in richer countries. This confirms what we saw in Figures 1.1 and 1.2 in the first chapter. However, as well as knowing that health and social problems are more common among the less well-off within each society (as shown in Figure 1.4), we now know that the overall burden of these problems is much higher in more unequal societies.

To check that these results are not just some odd fluke, let us see whether similar patterns also occur when we look at the fifty states of the USA. We were able to find data on almost exactly the same health and social problems for US states as we used in our international index.

Figure 2.4 Health and social problems are related to inequality in US states.

Figure 2.4 shows that the Index of Health and Social Problems is strongly related to the amount of inequality in each state, while Figure 2.5 shows that there is no clear relation between it and average income levels.

Figure 2.5 Health and social problems are only weakly related to average income in US states.

The evidence from the USA confirms the international picture. The position of the US in the international graph (Figure 2.2) shows that the high average income level in the US as a whole does nothing to reduce its health and social problems relative to other countries.

We should note that part of the reason why our index combining data for ten different health and social problems is so closely related to inequality is that combining them tends to emphasize what they have in common and downplays what they do not. In Chapters 4-12 we will examine whether each problem taken on its own is related to inequality and will discuss the various reasons why they might be caused by inequality.

This evidence cannot be dismissed as some statistical trick done with smoke and mirrors. What the close fit shown in Figure 2.2 suggests is that a common element related to the prevalence of all these health and social problems is indeed the amount of inequality in each country. All the data come from the most reputable sources: the World Bank, the World Health Organization, the United Nations, the Organization for Economic Cooperation and Development (OECD), and others.

Could these relationships be the result of some unrepresentative selection of problems? To answer this we also used the ‘Index of child wellbeing in rich countries’ compiled by the United Nations Children’s Fund (UNICEF). It combines forty different indicators covering many different aspects of child wellbeing. (We removed the measure of child relative poverty from it because it is, by definition, closely related to inequality.)

Figure 2.6 The UNICEF index of child wellbeing in rich countries is related to inequality.

Figure 2.6 shows that child wellbeing is strongly related to inequality, and Figure 2.7 shows that it is not at all related to average income in each country.

Figure 2.7 The UNICEF index of child wellbeing is not related to Gross National Income per head in rich countries.

SOCIAL GRADIENTS

As we mentioned at the end of the last chapter, there are perhaps two widespread assumptions as to why people nearer the bottom of society suffer more problems. Either the circumstances people live in cause their problems, or people end up nearer the bottom of society because they are prone to problems which drag them down. The evidence we have seen in this chapter puts these issues in a new light.

Let’s first consider the view that society is a great sorting system with people moving up or down the social ladder according to their personal characteristics and vulnerabilities. While things such as having poor health, doing badly at school or having a baby when still a teenager all load the dice against your chances of getting up the social ladder, sorting alone does nothing to explain why more unequal societies have more of all these problems than less unequal ones. Social mobility may partly explain whether problems congregate at the bottom, but not why more unequal societies have more problems overall.

The view that social problems are caused directly by poor material conditions such as bad housing, poor diets, lack of educational opportunities and so on implies that richer developed societies would do better than the others. But this is a long way from the truth: some of the richest countries do worst.

It is remarkable that these measures of health and social problems in the two different settings, and of child wellbeing among rich countries, all tell so much the same story.

The problems in rich countries are not caused by the society not being rich enough (or even by being too rich) but by the scale of material differences between people within each society being too big. What matters is where we stand in relation to others in our own society.

Of course a small proportion of the least well-off people, even in the richest countries, sometimes find themselves without enough money for food. However, surveys of the 12.6 per cent of Americans living below the federal poverty line (an absolute income level rather than a relative standard such as half the average income) show that 80 per cent of them have air-conditioning, almost 75 per cent own at least one car or truck and around 33 per cent have a computer, a dishwasher or a second car.

What this means is that when people lack money for essentials such as food, it is usually a reflection of the strength of their desire to live up to the prevailing standards. You may, for instance, feel it more important to maintain appearances by spending on clothes while stinting on food. We knew of a young man who was unemployed and had spent a month’s income on a new mobile phone because he said girls ignored people who hadn’t got the right stuff. As Adam Smith emphasized, it is important to be able to present oneself creditably in society without the shame and stigma of apparent poverty.

However, just as the gradient in health ran right across society from top to bottom, the pressures of inequality and of wanting to keep up are not confined to a small minority who are poor. Instead, the effects are, as we shall see, widespread in the population.

. . .

from

The Spirit Level. Why equality is better for everyone

by Richard Wilkinson and Kate Pickett

get it at Amazon.com

SINS OF OMISSION, EMOTIONAL NEGLECT. What Did Your Family Cook Up For Christmas? * Running on Empty: Overcome Your Childhood Emotional Neglect – Jonice Webb PhD.

Good enough parents, or chronic empathic failure?

When a parent effectively recognizes and meets her child’s emotional needs in infancy, a “secure attachment” is formed and maintained. This first attachment forms the basis of a positive self-image and a sense of general well-being throughout childhood and into adulthood.

Your parents’ failure to validate or respond enough to your emotional needs as a child has massive consequences, coming from the totality of important moments in which emotionally neglectful parents are deaf and blind to the emotional needs of their growing child.

There is a minimal amount of parental emotional connection, empathy and ongoing attention which is necessary to fuel a child’s growth and development so that he or she will grow into an emotionally healthy and emotionally connected adult. With less than that minimal amount, the child becomes an adult who struggles emotionally: outwardly successful, perhaps, but empty, missing something within which the world can’t see.

Childhood Emotional Neglect has a tremendous impact on your ability to achieve happiness and fulfillment in adulthood. You’re feeling empty, disconnected, and different; as if you don’t actually belong anywhere.

It also wreaks havoc with your relationships with your parents and family in adulthood. The CEN adult feels so uncomfortable and empty with family not because of what’s there, but because of what’s missing.

This book is written to help you become aware of what didn’t happen in your childhood, what you don’t remember.

Childhood Emotional Neglect is the result of your parents’ inability to validate and respond adequately to your emotional needs. Childhood emotional neglect can be hard to identify because it’s what didn’t happen in your childhood. It doesn’t leave any visible bruises or scars, but it’s hurtful and confusing for children.

Symptoms of Childhood Emotional Neglect include:

  • Emptiness
  • Loneliness
  • Feeling something’s fundamentally wrong with you
  • Feeling unfulfilled even when you’re successful
  • Difficulty connecting with most of your feelings, not feeling anything
  • Burying, avoiding, or numbing your feelings
  • Feeling out of place or like you don’t fit in
  • Difficulty asking for help and not wanting to depend on others
  • Depression and anxiety
  • High levels of guilt, shame, and/or anger
  • Lack of deep, intimate connection with your friends and spouse
  • Feeling different, unimportant or inadequate
  • Difficulty with self-control (this could be overeating or drinking)
  • People-pleasing and focusing on other people’s needs
  • Not having a good sense of who you are, your likes and dislikes, your strengths and weaknesses

Sharon Martin, LCSW

What’s Your Family Cooking Up For Christmas?

Jonice Webb PhD

Do you look forward to family holiday gatherings, but then often end up feeling disappointed?

Do you dread family holiday dinners, but feel confused about the reasons why?

Do you feel guilty for avoiding or snapping at your parents at holiday gatherings, but just can’t stop yourself?

Do you feel strangely uncomfortable when you’re with your family as if you don’t belong there?

In my experience as a psychologist, I have come to realize that for every irritable, out-of-place, or disappointed person at a family gathering, there is a valid explanation for how that person feels.

I have also found that the explanation is often something rooted in childhood. Something that as an adult you can’t see or remember but is likely still happening to this day: Childhood Emotional Neglect.

Childhood Emotional Neglect (CEN) happens when your parents fail to validate or respond enough to your emotional needs as they raise you. Adults whose parents failed them in this way in childhood typically have no awareness that this failure happened. A failure to validate or respond is not an action or an event. It’s a failure to act and a non-event. Therefore, your eyes don’t see it and your brain can’t record it. As an adult, you will likely have no memory of it.

Yet CEN has a tremendous impact on your ability to achieve happiness and fulfillment in adulthood. Growing up with your feelings unaddressed in your family plays out in your own adult life in some very important ways. But it also wreaks havoc with your relationships with your parents and family in adulthood.

Once you’re grown up, Emotional Neglect from childhood can make you resent your parents and feel uncomfortable with your family without you even realizing it. On top of all that, CEN can leave you feeling empty, disconnected, and different; as if you don’t actually belong anywhere.

There is no situation that immerses you in all of your CEN symptoms more than being at a family gathering. And this is especially true when it happens in the pressure-cooker of the holidays.

Chelsea

Chelsea fastened her necklace while simultaneously calling up the stairs for her three children to find their shoes and put them on. “We don’t want to be late to Grandma and Grandpa’s house for holiday brunch!” she yelled. As she gathered up the pie she’d made and the bottle of wine she was taking, she was confused by her own mood. She was definitely excited about the holiday and looking forward to the day, but there was also a feeling of darkness lurking in the pit of her stomach. “What is wrong with me? I’m 43 years old and I’m all over the place. This makes no sense,” she thought, angry at herself. She closed her eyes and commanded herself to just be happy and enjoy the day.

Jack

Twenty-eight-year-old Jack is in his parents’ family room, surrounded by his niece and nephew, siblings, and dad. It’s their annual New Year’s Day family dinner. As everyone watches the children play, Jack sits very uncomfortably in his comfortable chair. Knowing he should be feeling happy, warm, and loved, he’s never felt less so. He feels deeply uneasy and out of place, as if he is among strangers. He feels unknown, invisible, and deeply bored. “What is my problem?” he agonizes.

Chelsea and Jack don’t know it, but they are both struggling to identify something in themselves that’s very hard to see. Their confusion and contradictory feelings do all make sense, and they have them for a reason. But in looking for answers they are both doing what people with emotional neglect usually do: they are getting angry at themselves for having the feelings they have because they can’t see what’s wrong. They are blaming the pain and deprivation from their childhoods on themselves.

The CEN adult feels so uncomfortable and empty with family not because of what’s there, but because of what’s missing.

What’s missing could be best described as three things:

The feeling that people are genuinely interested in you.

Questions about yourself and your life.

Meaningful conversations about interpersonal issues and the feelings involved.

So when Chelsea and Jack see their families now, it’s a sad continuation of their childhoods. Their parents do not ask them genuine questions about themselves or their lives, no one shows real interest in their experiences, and no one talks about anything that really matters, like problems or conflicts or feelings.

What’s missing is what’s failing to happen, which is something Chelsea and Jack may never see because it’s been their reality from childhood. They can feel it but they cannot see it unless they stop blaming themselves for having negative feelings and acknowledge how their parents failed them.

What To Do Differently

Learn as much as you can about Childhood Emotional Neglect (CEN) before your holiday event. This will help you see that this problem is real as well as understand how it’s affected you. Instead of trying to ban your negative feelings (like Chelsea did), do the opposite. Pay attention to them as important messages from your body trying to alert you to a real problem in your experience of your family. Think about how to protect yourself this year. For example, you may limit the time you spend at the event or bring a support person who understands CEN and your situation. You might lower your expectations or stick close to someone you’re most comfortable with.

Now here’s the thing. The power of Childhood Emotional Neglect comes from your lack of awareness of it. Once you see it, you can beat it. You can treat yourself differently than your family ever treated you. By caring about your own feelings and validating your own experience you can start protecting yourself.

And when you do, you will experience your holidays in a very different way. And then you will see that it makes all the difference in the world.

Jonice Webb has a PhD in clinical psychology and is the author of the bestselling books Running on Empty: Overcome Your Childhood Emotional Neglect and Running On Empty No More: Transform Your Relationships. She currently has a private psychotherapy practice in the Boston area, where she specializes in the treatment of couples and families. To read more about Dr. Webb, her books and Childhood Emotional Neglect, you can visit her website, Emotionalneglect.com.

PsychCentral

Running on Empty: Overcome Your Childhood Emotional Neglect

Jonice Webb PhD

Writing this book has been one of the most fascinating experiences of my life. As the concept of Emotional Neglect gradually became clearer and more defined in my head, it changed not only the way I practiced psychology, but also the way I looked at the world. I started to see Emotional Neglect everywhere: in the way I sometimes parented my own children or treated my husband, at the mall, and even on reality TV shows. I found myself often thinking that it would help people enormously if they could become aware of this invisible force that affects us all: Emotional Neglect.

After watching the concept become a vital aspect of my work over several years, and becoming fully convinced of its value, I finally shared it with my colleague, Dr. Christine Musello. Christine responded with immediate understanding, and quickly began seeing Emotional Neglect in her own clinical practice, and all around her, as I had. Together we started to work on outlining and defining the phenomenon. Dr. Musello was helpful in the process of putting the initial words to the concept of Emotional Neglect. The fact that she was so readily able to embrace the concept, and found it so useful, encouraged me to take it forward.

Although Dr. Musello was not able to continue in the writing of this book with me, she was a helpful support at the beginning of the writing process. She composed some of the first sections of the book and several of the clinical vignettes. I am therefore pleased to recognize her contribution.

INTRODUCTION

What do you remember from your childhood? Almost everyone remembers some bits and pieces, if not more. Perhaps you have some positive memories, like family vacations, teachers, friends, summer camps or academic awards; and some negative memories, like family conflicts, sibling rivalries, problems at school, or even some sad or troubling events.

Running on Empty is not about any of those kinds of memories. In fact, it’s not about anything that you can remember or anything that happened in your childhood. This book is written to help you become aware of what didn’t happen in your childhood, what you don’t remember. Because what didn’t happen has as much power over who you have become as an adult as any of those events you do remember, and perhaps more. Running on Empty will introduce you to the consequences of what didn’t happen: an invisible force that may be at work in your life. I will help you determine whether you’ve been affected by this invisible force and, if so, how to overcome it.

Many fine, high-functioning, capable people secretly feel unfulfilled or disconnected. “Shouldn’t I be happier?” “Why haven’t I accomplished more?” “Why doesn’t my life feel more meaningful?” These are questions which are often prompted by the invisible force at work. They are often asked by people who believe that they had loving, well-meaning parents, and who remember their childhood as mostly happy and healthy. So they blame themselves for whatever doesn’t feel right as an adult. They don’t realize that they are under the influence of what they don’t remember, the invisible force.

By now, you’re probably wondering, what is this Invisible Force? Rest assured it’s nothing scary. It’s not supernatural, psychic or eerie. It’s actually a very common, human thing that fails to happen in homes and families all over the world every day. Yet we don’t realize it exists, matters or has any impact upon us at all. We don’t have a word for it. We don’t think about it and we don’t talk about it. We can’t see it; we can only feel it. And when we do feel it, we don’t know what we’re feeling.

In this book, I’m finally giving this force a name. I’m calling it Emotional Neglect. This is not to be confused with physical neglect. Let’s talk about what Emotional Neglect really is.

Everyone is familiar with the word “neglect.” It’s a common word. The definition of “neglect,” according to the Merriam-Webster Dictionary, is “to give little attention or respect or to disregard; to leave unattended to, especially through carelessness.”

“Neglect” is a word used especially frequently by mental health professionals in the Social Services. It’s commonly used to refer to a dependent person, such as a child or elder, whose physical needs are not being met. For example, a child who comes to school with no coat in the winter, or an elder shut-in whose adult daughter frequently “forgets” to bring her groceries.

Pure emotional neglect is invisible. It can be extremely subtle, and it rarely has any physical or visible signs. In fact, many emotionally neglected children have received excellent physical care. Many come from families that seem ideal. The people for whom I write this book are unlikely to have been identified as neglected by any outward signs, and are in fact unlikely to have been identified as neglected at all.

So why write a book? After all, if the topic of Emotional Neglect has gone unnoticed by researchers and professionals all this time, how debilitating can it really be? The truth is, people suffering from Emotional Neglect are in pain. But they can’t figure out why, and too often, neither can the therapists treating them. In writing this book, I identify, define and suggest solutions to a hidden struggle that often stymies its sufferers and even the professionals to whom they sometimes go for help. My goal is to help these people who are suffering in silence, wondering what is wrong with them.

There is a good explanation for why Emotional Neglect has been so overlooked. It hides. It dwells in the sins of omission, rather than commission; it’s the white space in the family picture rather than the picture itself. It’s often what was NOT said or observed or remembered from childhood, rather than what WAS said.

For example, parents may provide a lovely home and plenty of food and clothing, and never abuse or mistreat their child. But these same parents may fail to notice their teen child’s drug use or simply give him too much freedom rather than set the limits that would lead to conflict. When that teen is an adult, he may look back at an “ideal” childhood, never realizing that his parents failed him in the way that he needed them most. He may blame himself for whatever difficulties have ensued from his poor choices as a teen. “I was a real handful”; “I had such a great childhood, I have no excuse for not having achieved more in life.” As a therapist, I have heard these words uttered many times by high-functioning, wonderful people who are unaware that Emotional Neglect was an invisible, powerful force in their childhood. This example offers only one of the infinite number of ways that a parent can emotionally neglect a child, leaving him running on empty.

Here I would like to insert a very important caveat: We all have examples of how our parents have failed us here and there. No parent is perfect, and no childhood is perfect. We know that the huge majority of parents struggle to do what’s best for their child. Those of us who are parents know that when we make parenting mistakes, we can almost always correct them. This book is not meant to shame parents or make parents feel like failures. In fact, throughout the book you’ll read about many parents who are loving and well-meaning, but still emotionally neglected their child in some fundamental way. Many emotionally neglectful parents are fine people and good parents, but were emotionally neglected themselves as children. All parents commit occasional acts of Emotional Neglect in raising their children without causing any real harm. It only becomes a problem when it is of a great enough breadth or quantity to gradually emotionally “starve” the child.

Whatever the level of parental failure, emotionally neglected people see themselves as the problem, rather than seeing their parents as having failed them.

Throughout the book I include many examples, or vignettes, taken from the lives of my clients and others, those who have grappled with sadness or anxiety or emptiness in their lives, for which there were no words and for which they could find little explanation. These emotionally neglected people most often know how to give others what they want or need. They know what is expected from them in most of life’s social environments. Yet these sufferers are unable to label and describe what is wrong in their internal experience of life and how it harms them.

This is not to say that adults who were emotionally neglected as children are without observable symptoms. But these symptoms, the ones that may have brought them to a psychotherapist’s door, always masquerade as something else: depression, marital problems, anxiety, anger. Adults who have been emotionally neglected mislabel their unhappiness in such ways, and tend to feel embarrassed about asking for help. Since they have not learned to identify or to be in touch with their true emotional needs, it’s difficult for therapists to keep them in treatment long enough to help them understand themselves better.

So this book is written not only for the emotionally neglected, but also for mental health professionals, who need tools to combat the chronic lack of compassion-for-self which can sabotage the best of treatments.

Whether you picked up Running on Empty because you are looking for answers to your own feelings of emptiness and lack of fulfillment, or because you are a mental health professional trying to help “stuck” patients, this book will provide concrete solutions for invisible wounds.

In Running on Empty, I have used many vignettes to illustrate various aspects of Emotional Neglect in childhood and adulthood. All of the vignettes are based upon real stories from clinical practice, either my own or Dr. Musello’s. However, to protect the privacy of the clients, names, identifying facts, and details were altered, so that no vignette depicts any real person, living or dead. The exceptions are the vignettes involving Zeke which appear throughout Chapters 1 and 2. These vignettes were created to illustrate how different parenting styles might affect the same boy, and are purely fictitious.

Are you wondering if this book applies to you? Take this questionnaire to find out. Circle the questions to which your answer is YES.

Emotional Neglect Questionnaire

Do You:

  • Sometimes feel like you don’t belong when with your family or friends
  • Pride yourself on not relying upon others
  • Have difficulty asking for help
  • Have friends or family who complain that you are aloof or distant
  • Feel you have not met your potential in life
  • Often just want to be left alone
  • Secretly feel that you may be a fraud
  • Tend to feel uncomfortable in social situations
  • Often feel disappointed with, or angry at yourself
  • Judge yourself more harshly than you judge others
  • Compare yourself to others and often find yourself sadly lacking
  • Find it easier to love animals than people
  • Often feel irritable or unhappy for no apparent reason
  • Have trouble knowing what you’re feeling
  • Have trouble identifying your strengths and weaknesses
  • Sometimes feel like you’re on the outside looking in
  • Believe you’re one of those people who could easily live as a hermit
  • Have trouble calming yourself
  • Feel there’s something holding you back from being present in the moment
  • At times feel empty inside
  • Secretly feel there’s something wrong with you
  • Struggle with self-discipline

Look back over your circled (YES) answers. These answers give you a window into the areas in which you may have experienced Emotional Neglect as a child.

Part 1 Running on Empty

Chapter 1

WHY WASN’T THE TANK FILLED?

“…I am trying to draw attention to the immense contribution to the individual and to society which the ordinary good mother with her husband in support makes at the beginning, and which she does simply through being devoted to her infant.”

D.W. Winnicott (1964), The Child, the Family, and the Outside World

It doesn’t take a parenting guru, a saint, or, thank goodness, a Ph.D. in psychology to raise a child to be a healthy, happy adult. The child psychiatrist, researcher, writer and psychoanalyst Donald Winnicott emphasized this point often throughout writings that spanned 40 years.

While today we recognize that fathers are of equal importance in the development of a child, the meaning of Winnicott’s observations on mothering is still essentially the same:

There is a minimal amount of parental emotional connection, empathy and ongoing attention which is necessary to fuel a child’s growth and development so that he or she will grow into an emotionally healthy and emotionally connected adult. Less than that minimal amount and the child becomes an adult who struggles emotionally, outwardly successful, perhaps, but empty, missing something within, which the world can’t see.

In his writings, Winnicott coined the now well-known term “Good Enough Mother” to describe a mother who meets her child’s needs in this way. Parenting that is “good enough” takes many forms, but all of these recognize the child’s emotional or physical need in any given moment, in any given culture, and do a “good enough” job of meeting it. Most parents are good enough. Like all animals, we humans are biologically wired to raise our children to thrive. But what happens when life circumstances interfere with parenting? Or when parents themselves are unhealthy, or have significant character flaws?

Were you raised by “good enough” parents? By the end of this chapter, you will know what “good enough” means, and you will be able to answer this question for yourself.

But first…

If you are a parent as well as a reader, you may find yourself identifying with the parental failures presented in this book, as well as with the emotional experience of the child in the vignettes (because you are, no doubt, hard on yourself). Therefore, I ask that you pay close attention to the following warnings:

First – All good parents are guilty of emotionally failing their children at times. Nobody is perfect. We all get tired, cranky, stressed, distracted, bored, confused, disconnected, overwhelmed or otherwise compromised here and there. This does not qualify us as emotionally neglectful parents.

Emotionally neglectful parents distinguish themselves in one of two ways, and often both:

Either they emotionally fail their child in some critical way in a moment of crisis, causing the child a wound which may never be repaired (acute empathic failure)

OR they are chronically tone-deaf to some aspect of a child’s need throughout his or her childhood development (chronic empathic failure).

Every single parent on earth can recall a parenting failure that makes him cringe, where he knows that he has failed his child. But the harm comes from the totality of important moments in which emotionally neglectful parents are deaf and blind to the emotional needs of their growing child.

Second – If you were indeed emotionally neglected, and are a parent yourself as well, there is a good chance that as you read this book you will start to see some ways in which you have passed the torch of Emotional Neglect to your child. If so, it’s vital for you to realize that it is not your fault. Because Emotional Neglect is invisible and insidious, it passes easily from generation to generation, and it’s extremely difficult to stop unless you become explicitly aware of it.

Since you’re reading this book, you are light-years ahead of your parents. You have the opportunity to change the pattern, and you are taking it. The effects of Emotional Neglect can be reversed. And you’re about to learn how to reverse those parental patterns for yourself, and for your children.

Keep reading. No self-blame allowed.

The Ordinary Healthy Parent in Action

The importance of emotion in healthy parenting is best understood through attachment theory. Attachment theory describes how our emotional needs for safety and connection are met by our parents from infancy.

Many ways of looking at human behavior have grown out of attachment theory, but most owe their thinking to the original attachment theorist, psychiatrist John Bowlby. His understanding of parent-child bonding comes from thousands of hours of observation of parents and children, beginning with mothers and infants.

The theory suggests, quite simply, that when a parent effectively recognizes and meets her child’s emotional needs in infancy, a “secure attachment” is formed and maintained. This first attachment forms the basis of a positive self-image and a sense of general well-being throughout childhood and into adulthood.

Looking at emotional health through the lens of attachment theory, we can identify three essential emotional skills in parents:

1) The parent feels an emotional connection to the child.

2) The parent pays attention to the child and sees him as a unique and separate person, rather than, say, an extension of him or herself, a possession or a burden.

3) Using that emotional connection and paying attention, the parent responds competently to the child’s emotional need.

Although these skills sound simple, in combination they are a powerful tool: they help a child learn about and manage his or her own nature, and they create a secure emotional bond that carries the child into adulthood, so that he may face the world with the emotional health needed for a happy life.

In short, when parents are mindful of their children’s unique emotional nature, they raise emotionally strong adults. Some parents are able to do this intuitively, but others can learn the skills. Either way, the child will not be neglected.

. . .

from

Running on Empty: Overcome Your Childhood Emotional Neglect

by Jonice Webb PhD.

get it at Amazon.com


A FAILING PRESIDENCY – Michael R. Bloomberg.

There are many reasons to be optimistic about 2019. The increasingly isolated man in the Oval Office is not one of them.

One week sums up a failing presidency. This past week, we got a glimpse of what the beginning of the collapse may look like and what it may ultimately cost us.

With the first two years of Donald Trump’s presidency drawing to a close, the past week all too perfectly exemplified its destructive effect on competent government in Washington, and it should give all Americans, in all parties, cause for concern.

On Thursday, one of the last remaining seasoned and respected professionals at the top of the administration announced his resignation, for reasons he explained in a letter that was as courteous as it was devastating.

On Saturday, government services were (yet again) shut down because of the quarrel between Congress and the White House over the president’s obsession with a border wall that won’t work but will waste billions in taxpayer money.

And in between, the stock market dove to its worst week since 2011, as investors concerned about Trump’s taste for trade wars delivered a vote of no-confidence.

Each of these mistakes has a common denominator: Trump’s recklessly emotional and senselessly chaotic approach to the job.

At the halfway mark of this terrible presidency, one has to wonder how much more the country can take.

The president’s decision to withdraw U.S. forces from Syria, which jeopardized military success in a crucial battle and betrayed an ally as well, led James Mattis to quit in protest. He is the first defense secretary to do so since the position was created in 1947. His resignation letter is meticulously calm and respectful, and all the more brutal as a result. Every American should read it.

He wrote: “While the U.S. remains the indispensable nation in the free world, we cannot protect our interests or serve that role effectively without maintaining strong alliances and showing respect to those allies.” He added: “I believe we must be resolute and unambiguous in our approach to those countries whose strategic interests are increasingly in tension with ours.”

Mattis understands that the two principles which have served America well since World War II must not be separated. And that gives what comes next such force: “Because you have the right to have a Secretary of Defense whose views are better aligned with yours on these and other subjects, I believe it is right for me to step down from my position.”

In short: One of the few people protecting Trump from Trump is leaving. And unfortunately, few Republicans in Congress have shown any appetite for that job, preferring instead to appease his worst instincts as the debate over a wall along the U.S.-Mexican border continues to show.

Even if a wall were a good idea, and it is not, a government shutdown would be a dumb way to pursue it. The Democrats have just won control of the House of Representatives. The country has given them a full share of responsibility for making decisions about public spending. Does the president expect to override this reality by maneuvering to shut down the government? His penchant for ignoring reality, evident in so many other areas, including climate change, apparently extends to elections.

This weekend, he imposed needless costs on government workers and on the country at large, not to accomplish anything or to defend any principle, but to pander to the extreme wing of his party and rage at being thwarted. Republicans in Congress have gone along with this for too long. November should have been a wake-up call.

Some Republicans, at least, seem to be slowly realizing what a disaster Trump’s trade policies have been. His trade war with China has won few concessions but has cost American workers, consumers, farmers and businesses a great deal. With other countries pleading for sanity, and institutions such as the International Monetary Fund and World Trade Organization warning of severe consequences if trade sanctions get out of hand, talk of a looming recession is growing. Yet the president seems determined to make matters worse, and to hell with the economic consequences.

Unless something changes, and unless, in particular, Republicans in Congress start showing some spine, two more years might be enough to test whether we can sustain Trump’s model of bad government. This past week, we got a glimpse of what the beginning of the collapse may look like and what it may ultimately cost us.

DEPRESSION. Is It What’s Inside Your Head? – Johann Hari.

“Ask not what’s inside your head, ask what your head’s inside of” W. M. Mace.

“It is no measure of health to be well-adjusted to a sick society.” Jiddu Krishnamurti.

How does your brain change when you are deeply distressed? Do those changes make it harder to recover? The real role of genes and brain changes.

The distress caused by the outside world and the changes inside your brain come together. If the world keeps causing you deep pain, of course you’ll stay trapped there for a long time, with the snowball growing. Your genes are activated by the environment; they can be switched on, or off, by what happens to you.

Genes increase your sensitivity, sometimes significantly. But they aren’t, in themselves, the cause of depression. Your genes can certainly make you more vulnerable, but they don’t write your destiny.

Marc Lewis’s friends thought he was dead.

It was the summer of 1969, and this young student in California was desperate to block out his despair any way he could. He had swallowed, snorted, or injected any stimulant he could find for a week now.

After he had been awake for thirty-six hours straight, he got a friend to inject him with heroin, so he could finally crash.

When Marc regained consciousness, he realized his friends were trying to figure out where they could find a bag big enough to dump his body in.

When Marc suddenly began to talk, they were freaked out. His heart, they explained to him, had stopped beating for several minutes.

About ten years after that night, Marc left drugs behind, and started to study neuroscience. He became a leading figure in the field, and a professor in the Netherlands.

He wanted to know: How does your brain change when you are deeply distressed? Do those changes make it harder to recover?

If you look at a brain scan of a depressed or highly anxious person it will look different from the brain scan of somebody without these problems. The areas that relate to feeling unhappy, or to being aware of risk, will be lit up like Christmas tree lights. They will be bigger, and more active.

Fifteen years ago, if you had shown me a diagram of my brain and described what it was like, I, and most people, would have thought: that’s me, then. If the parts of the brain that relate to being unhappy, or being frightened, are more active, then I’m fixed as a person who is always going to be more unhappy, or more frightened. You might have short legs, or long arms; I have a brain with more active parts related to fear and anxiety; that’s how it is.

Wrong. To understand why, we have to grasp a crucial concept called neuroplasticity.

Your brain changes according to how you use it. Neuroplasticity is the tendency for the brain to continue to restructure itself based on experience. Your brain is constantly changing to meet your needs. It does this mainly in two ways: by pruning the synapses you don’t use, and by growing the synapses you do use.

For as long as you live, this neuroplasticity never stops, and the brain is always changing.

A brain scan is a snapshot of a moving picture. You can take a snapshot of any moment in a football game, but it won’t tell you what’s going to happen next. The brain changes as you become depressed and anxious, and it changes again when you stop being depressed and anxious. It’s always changing in response to signals from the world.

Social and psychological factors have the capacity to physically change your brain. Being lonely, or isolated, or grossly materialistic, these things change your brain, and, crucially, reconnection can change it back.

We have been thinking too simplistically. You couldn’t figure out the plot of Breaking Bad by dismantling your TV set. In the same way, you can’t figure out the root of your pain by dismantling your brain. You have to look at the signals the TV, or your brain, is receiving to do that.

They, the distress caused by the outside world and the changes inside the brain, come together.

Once this process begins, it, like everything else that happens to us, causes real changes in the brain, and they can then acquire a momentum of their own that deepens the effects from the outside world.

Imagine that your marriage just broke up, and you lost your job, and you know what? Your mother just had a stroke. It’s pretty overwhelming. Because you are feeling intense pain for a long period, your brain will assume this is the state in which you are going to have to survive from now on, so it might start to shed the synapses that relate to the things that give you joy and pleasure, and strengthen the synapses that relate to fear and despair. That’s one reason why you can often start to feel you have become somehow fixed in a state of depression or anxiety even if the original causes of the pain seem to have passed.

While it’s wrong to say the origin of these problems is solely within the brain, it would be equally wrong to say that the responses within the brain can’t make it worse. They can. The pain caused by life going wrong can trigger a response that is so powerful that the brain tends to stay there, in a pained response, for a while, until something pushes it out of that corner, into a more flexible place.

And if the world keeps causing you deep pain, of course you’ll stay trapped there for a long time, with the snowball growing.

How much of depression is carried in your genes?

I had assumed I inherited it in my genes. I sometimes thought of depression as a lost twin, born in the womb alongside me.

Scientists haven’t identified a specific gene or set of genes that can, on their own, cause depression and anxiety, but we do know there is a big genetic factor.

Scientists studying the genetic basis for depression and anxiety have concluded that it’s real, but it doesn’t account for most of what is going on. There is, however, a twist here.

A group of scientists led by a geneticist named Avshalom Caspi did one of the most detailed studies of the genetics of depression ever conducted. For twenty-five years, his team followed a thousand kids in New Zealand from being babies to adulthood. One of the things they were trying to figure out was which genes make you more vulnerable to depression.

Years into their work, they found something striking. They discovered that having a variant of a gene called 5-HTT does relate to becoming depressed.

Yet there was a catch. We are all born with a genetic inheritance, but your genes are activated by the environment. They can be switched on, or off, by what happens to you.

If you have a particular flavor of 5-HTT, you have a greatly increased risk of depression, but only in a certain environment. If you carried this gene, the study showed, you were more likely to become depressed, but only if you had experienced a terribly stressful event, or a great deal of childhood trauma.

If those bad things hadn’t happened to you, even if you had the gene that related to depression, you were no more likely to become depressed than anyone else.

So genes increase your sensitivity, sometimes significantly. But they aren’t, in themselves, the cause of depression.

This means that if other genes work like 5-HTT, and it looks as if they do, then nobody is condemned to be depressed or anxious by their genes.

Your genes can certainly make you more vulnerable, but they don’t write your destiny.

For example, we know that even if you are genetically more prone to put on weight, you still have to have lots of food in your environment for your genetic propensity to put on weight to kick in. Stranded in the rain forest or the desert with nothing to eat, you’ll lose weight whatever your genetic inheritance is.

Depression and anxiety, the current evidence suggests, are a little like that. The genetic factors that contribute to depression and anxiety are very real, but they also need a trigger in your environment or your psychology. Your genes can then supercharge those factors, but they can’t create them alone.

Endogenous Depression?

Is there some group of depressed people whose pain really is caused in just the way my doctor explained to me, by their brain wiring going wrong, or some other innate flaw? If it exists, how common is it?

It used to be thought that some depressions are caused by what happened to us in our lives, and then there is another, purer kind of depression that is caused by something going badly wrong in your brain. The first kind of depression was called “reactive,” and the second, purely internal kind was called “endogenous.”

Scientists have studied people who had been hospitalized for reactive depressions and compared them to people who had been classed as having endogenous depressions. It turned out that their circumstances were exactly the same: an equal number of things had happened to them to trigger their despair. The distinction seemed to them, at that time and on their evidence, to be meaningless.

There’s no agreement and scant evidence that endogenous depression actually exists, but researchers generally agree that if it exists at all, it affects only a tiny minority of depressed people. This means that telling all depressed people a story that focuses only on these physical causes is a bad idea.

There are, however, situations, in addition to manic depression and bipolar disorder, where we know that a biological change can make you more vulnerable. People with glandular fever, or underactive thyroids, are significantly more likely to become depressed.

It is foolish to deny there is a real biological component to depression and anxiety, and there may be other biological contributions we haven’t identified yet, but it is equally foolish to say they are the only causes.

Why, then, do we cling to the idea that these problems are caused only by our brains?

Junk Values. You can have everything a person could possibly need by the standards of our culture, but those standards can badly misjudge what a human actually needs in order to have a good or even a tolerable life. Our culture creates a picture of what you “need” to be happy, through all the junk values we have been taught, that doesn’t fit with what we actually need.

Get a Grip. For a long time, depressed and anxious people have been told their distress is not real, that it is just laziness, or weakness, or self-indulgence.

The right-wing British pundit Katie Hopkins said depression is “the ultimate passport to self-obsession. Get a grip, people,” and added that they should just go out for a run and get over their moaning.

The way we have resisted this form of nastiness is to say that depression is a disease. You wouldn’t hector a person with cancer to pull themselves together, so it’s equally cruel to do it to somebody with the disease of depression or severe anxiety. The path away from stigma has been to explain patiently that this is a physical illness like diabetes or cancer.

We have come to believe that the only route out of stigma is to explain to people that this is a biological disease with purely biological causes. So, based on this positive motive, we have scrambled to find the biological effects and held them up as evidence to rebut the sneerers, fearing that if we conceded any other cause they would pounce: “See! Even you admit it’s not a disease like cancer. So pull yourself together!”

But does saying something is a disease really reduce stigma?
Everybody knew, right from the start, that AIDS was a disease. It didn’t stop people with AIDS from being horribly stigmatized. People with AIDS are still stigmatized, greatly stigmatized. Nobody ever doubted leprosy was a disease, and lepers were persecuted for millennia.

Professor Sheila Mehta set up an experiment to figure out whether saying that something is a disease makes people kinder to the sufferer, or crueller.

Believing depression was a disease didn’t reduce hostility. In fact, it increased it.

“This way is better,” Marc said, “because if it’s an innate biological disease, the most you can hope for from other people is sympathy, a sense that you, with your difference, deserve their big-hearted kindness.
“But if it’s a response to how we live, you can get something richer: empathy, because it could happen to any of us. It’s not some alien thing. It’s a universal human source of vulnerability.”

The evidence suggests Marc is right: looking at it this way makes people less cruel, to themselves and to other people.

Pills Pay Big

For decades, psychiatrists have, in their training, been taught something called the bio-psycho-social model. They are shown that depression and anxiety have three kinds of causes: biological, psychological, and social. And yet almost nobody I know who has become depressed or severely anxious was told this story by their doctor, and most were not offered help for anything except their brain chemistry.

Why? CASH!

It is much more politically challenging to say that so many people are feeling terrible because of how our societies now work. It fits much more with our system of neoliberal capitalism to say, “Okay, we’ll get you functioning more efficiently, but please don’t start questioning … because that’s going to destabilize all sorts of things.”

The pharmaceutical companies are major forces shaping a lot of psychiatry, because it’s this big, big business, billions of dollars.

They pay the bills, so they largely set the agenda, and they obviously want our pain to be seen as a chemical problem with a chemical solution. The result is that we have ended up, as a culture, with a distorted sense of our own distress.

Just defective tissue!?

Telling people their distress is due mostly or entirely to a biological malfunction has several dangerous effects on them.

First: it leaves the person disempowered, feeling they’re not good enough, because their brain’s not good enough.

Second: it pitches us against parts of ourselves. It says there is a war taking place in your head. On one side there are your feelings of distress, caused by the malfunctions in your brain or genes. On the other side there’s the sane part of you. You can only hope to drug the enemy within into submission, forever.

But it does something even more profound than that. It tells you that your distress has no meaning, it’s just defective tissue.

This is the biggest division between the old story about depression and anxiety and the new story. The old story says our distress is fundamentally irrational, caused by faulty apparatus in our head. The new story says our distress is, however painful, in fact rational, and sane.

You’re not crazy to feel so distressed. You’re not broken.

“It is no measure of health to be well-adjusted to a sick society.” Jiddu Krishnamurti.

from

Lost Connections. Uncovering the Real Causes of Depression and the Unexpected Solutions

by Johann Hari

get it at Amazon.com

DARK NIGHTS OF THE SOUL. Kidnapped by Depression – Dale M. Kushner * The Emotional Life of Your Brain – Richard J. Davidson, Ph.D. and Sharon Begley.

“We do not see things as they are, we see them as we are. Emotions, far from being the neurological fluff that mainstream science once believed them to be, are central to the functions of the brain and to the life of the mind.”

Why and how do people differ so widely in their emotional responses to the ups and the downs of life? How myths and neuroscience can illuminate the darkness of depression.

Imagine a black sack thrown over your head. Imagine your arms and legs bound, your body injected with a drug that wipes out thoughts, flattens feelings, and numbs senses. This is depression.

Depression is called the dark night of the soul for good reason. Depression leads us into the night world, a world of shadows, emptiness, and blurry vision. You feel lost, lonely and alone, mired in the quicksand of sadness, vulnerable to thoughts of failure and unworthiness.

During depression, we yearn for a lost part of ourselves, for it seems that our spirited aliveness has deserted us, our appetite for living kidnapped and dragged down into the house of death.

Depression may feel as if parts of us have died, and yet is it possible depression opens us to another level of deep experience, one that matures us and brings new wisdom?

We are more than our genetic predisposition and our biochemistry; we are conscious creatures capable of discovering light in the darkness.

“We do not see things as they are, we see them as we are,” says a Talmudic expression. Through the lens of depression, the world is saturated with gloom.

One way to understand the lived experience of depression is to see it acted out symbolically in story form. Myths and fairytales show us the collective (and archetypal) universal patterns of the human psyche. I may have “my depression” and you, “yours,” but throughout the ages, worldwide, depression has plagued the human race.

The Rape of Proserpina (1621-22), white marble sculpture, by Gian Lorenzo Bernini (1598-1680).

One of the Greek Homeric hymns, the “Hymn to Demeter,” gives an early and vivid picture of depression. It tells the story of Persephone, Demeter and Zeus’s daughter, with whom Hades, god of the underworld and brother of Zeus, falls in love. When Hades asks Zeus’s leave to marry her, Zeus knows Demeter would never agree and says he will neither give nor withhold his consent. So, one day, while Persephone is gathering flowers in a meadow, the ground splits open and Hades springs forth and abducts her, dragging her down into his kingdom against her will. The unwilling bride screams to Zeus, her father, to save her, but he ignores her pleas. Demeter, a goddess herself, hears her daughter’s cries and also begs Zeus for aid, but he refuses to intervene.

Separated from her daughter, Demeter rages at the gods for allowing Persephone’s capture and rape. Her grief is “terrible and savage.” Disguised as an old woman, she roams the earth, neither eating, drinking, nor bathing while she searches for her child. During her time of mourning, the earth lies fallow.

“Then she caused a most dreadful and cruel year for mankind over the all-nourishing earth: the ground would not make the seed sprout, for rich-crowned Demeter kept it hid. In the fields the oxen drew many a curved plough in vain, and much white barley was cast upon the land without avail. So she would have destroyed the whole race of man with cruel famine.” “Hymn to Demeter,” translated from Greek by Hugh G. Evelyn-White.

Ceres Begging for Jupiter’s Help after the Kidnapping of Her Daughter Proserpine (1777) by Antoine-François Callet (1741-1823).

As Demeter pines for her daughter, so too, during depression, do we yearn for a lost part of ourselves, for it seems that our spirited aliveness has deserted us, our appetite for living kidnapped and dragged down into the house of death. With our instincts blunted, we sink into darkness, and experience the desolation of barren landscape. Like the grieving Demeter, our enthusiasm lost, our life-giving energy depleted, we fall into despair.

This feeling of isolation is a signature of depression and runs deep in those who try to articulate their condition and reach out for help.

As the story continues, Zeus’s mounting fear that if he does not reunite mother and daughter nothing will ever grow again on the land finally propels his intervention. He orders Hermes, messenger of the gods, into the underworld to bring Persephone back. Hades is surprisingly gracious in agreeing to her return. Inconsolable during her stay in the underworld, Persephone has yet to eat anything. Before she leaves, Hades urges her to eat at least three pomegranate seeds. Distracted by her joy at leaving, Persephone does so – and thereby consigns herself to return to Hades for three months every year. Had she not eaten the fruit of the underworld, she would have been able to stay with her mother forever.

When we enter the space of depression, it seems we will never “get out,” but as the myth reveals, nature is cyclic. The myth of Demeter and Persephone originates in ancient fertility cults and women’s mysteries, and is associated with harvest and the annual vegetation cycles. Symbolically, for a quarter of the year, while Persephone is in the underworld, lifeless winter prevails. When she returns to earth, spring advances, a time of rebirth.

But depressive cycles are not nearly as predictable as the seasons, and yet we might consider our time in the underworld as a period of incubation. While winter’s colorless landscape may suggest death, beneath the ground, roots, seeds, and bulbs are dormant, not dead. They are busy with the business of storing nutrients for the coming season.

The Return of Persephone (1891), oil on canvas, by Frederic Leighton (1830-1896) shows Hermes returning Persephone to Demeter.

For plants, winter’s stillness is necessary before spring’s renewal. Depression, too, can be viewed as a time of going inward and down into the depths, and can be a generative and creative interlude during which the psyche renews itself in the slower rhythms of dark days. Many artists attest to depressive episodes that prefigure a creative breakthrough. An astonishing number of famous artists, writers, and statesmen as diverse as Charles Darwin, Friedrich Nietzsche, Winston Churchill, Hans Christian Andersen, Abraham Lincoln, and Georgia O’Keeffe have described experiencing depression.

Little is written about Persephone’s life in the underworld, but one thing is clear, she does not die. Quite the opposite. She is given the honorific title Queen of the Underworld. This suggests her movement “to below” is one of transformation and the acquisition of special gifts and powers. Depression may feel as if parts of us have died, and yet is it possible depression opens us to another level of deep experience, one that matures us and brings new wisdom?

When depression drags us away from the lively day world, we might remember Persephone. The darkness of the underworld may provide a special quality of illumination not possible in the glaring, horn-honking, digitally-frenzied daylight. To consider depression as an expression of loss, grief, mourning, and inevitability of mortality is to bring it into the realm of the human heart.

We are more than our genetic predisposition and our biochemistry; we are conscious creatures capable of discovering light in the darkness.

If myths allow us to look into “the heart of the matter,” then neuroscience allows us to peer into the actual matter of our brains. Dr. Richard J. Davidson, founder of the Center for Healthy Minds at the University of Wisconsin, Madison, has made it his life’s work to investigate brain (neuro)plasticity, and how we can improve our wellbeing through the development of certain skills, including meditation.

In his groundbreaking book, The Emotional Life of Your Brain: How Its Unique Patterns Affect the Way You Think, Feel, and Live—and How You Can Change Them, Dr. Davidson and his co-author Sharon Begley offer an in-depth view of how our brains respond to different emotions and provide strategies to help balance or strengthen specific areas of brain circuitry.

Schematic of brain regions that showed significantly different association with amygdala in control versus depressed individuals.

The experience of depression differs from person to person. With the aid of fMRI imaging, Dr. Davidson has been able to pinpoint dysfunctional areas of the brain and correlate them with patients’ symptoms. Under the subheading “A Brain Taxonomy of Depression,” Dr. Davidson identifies three subcategories of depression. One group of depressed patients had difficulty recovering from adversity, while another group had difficulty regulating their emotions in a context-appropriate way. The third group was unable to sustain positive emotions. Different patterns of brain activity were noted for each group.

Dr. Davidson is optimistic. His book offers a questionnaire to help readers figure out their emotional “style” and gives exercises that build skills to improve brain functioning. Sufferers of depression need hope. Dr. Davidson’s excitement about what he is learning in the laboratory is palpable and his hope contagious.

Archetypal myths and brain science may seem disconnected, but each presents its own form of wisdom, one through images and story, the other through investigatory science. Demeter’s suffering, the barren land, Persephone’s descent into darkness lodge in our imagination and dreams and recommend that we look into our own lives to discover the source of our grief. Neuroscience advances our knowledge of brain anatomy and its relationship to our feelings and emotions. Each perspective provides a potentially valuable way to examine and understand our experience of depression.

Psychology Today

THE EMOTIONAL LIFE OF YOUR BRAIN. How Its Unique Patterns Affect the Way You Think, Feel and Live. And how You can Change Them.

Richard J. Davidson, Ph.D. with Sharon Begley

INTRODUCTION

A Scientific Quest

This book describes a personal and professional journey to understand why and how people differ in their emotional responses to what life throws at them, motivated by my desire to help people lead healthier, more fulfilling lives.

The “professional” thread in this tapestry describes the development of the hybrid discipline called affective neuroscience, the study of the brain mechanisms that underlie our emotions and the search for ways to enhance people’s sense of well-being and promote positive qualities of mind.

The “personal” thread is my own story. Spurred by the conviction that, as Hamlet said to Horatio, “there are more things in heaven and earth than are dreamt of” in the standard account of the mind provided by mainstream psychology and neuroscience, I have ventured outside the boundaries enclosing these disciplines, sometimes getting struck down, but in the end, I hope, achieving at least some of what I set out to do: to show through rigorous research that emotions, far from being the neurological fluff that mainstream science once believed them to be, are central to the functions of the brain and to the life of the mind.

My thirty years of research in affective neuroscience has produced hundreds of findings, from the brain mechanisms that underlie empathy and the differences between the autistic brain and the normally developing brain to how the brain’s seat of rationality can plunge us into the roiling emotional depths of depression.

I hope that these results have contributed to our understanding of what it means to be human, of what it means to have an emotional life. But as these findings accumulated, I found myself stepping back from the day-to-day life of my laboratory at the University of Wisconsin, Madison, which has grown over the years to something resembling a small company: As I write this in the spring of 2011, I have eleven graduate students, ten postdoctoral fellows, four computer programmers, twenty-one additional research and administrative staff members, and some twenty million dollars in research grants from the National Institutes of Health and other funders.

Since May 2010, I have also served as director of the university’s Center for Investigating Healthy Minds, a research complex dedicated to learning how the qualities of mind that humankind has valued since before the dawn of civilization, compassion, wellbeing, charity, altruism, kindness, love, and other noble aspects of the human condition, arise in the brain and how they can be nurtured.

One of the great virtues of the center is that we do not confine our work to research alone. We very much want to get the results of that research out into the world, where it can make a real difference in the lives of real people. To that end, we have developed a preschool and elementary school curriculum designed to cultivate kindness and mindfulness, and we are evaluating the impact of this training on academic achievement as well as on attention, empathy, and cooperation. Another project investigates whether training in breathing and meditation can help veterans returning from Afghanistan and Iraq cope with stress and anxiety.

I love all of this, both the basic science and the extension of our findings into the real world. But it is way too easy to get consumed by it. (I often joke that I have several full-time jobs, from overseeing grant applications to negotiating with the university bioethics committees for permission to do research on human volunteers.) I did not want that to happen.

About ten years ago, I therefore began to take stock of my research and that of other labs pursuing affective neuroscience, not the interesting individual findings but the larger picture. And I saw that our decades of work had revealed something fundamental about the emotional life of the brain: that each of us is characterized by what I have come to call Emotional Style.

Before I briefly describe the components of Emotional Style, let me quickly explain how it relates to other classification systems that try to illuminate the vast diversity of ways to be human: emotional states, emotional traits, personality, and temperament.

The smallest, most fleeting unit of emotion is an emotional state. Typically lasting only a few seconds, it tends to be triggered by an experience: the spike of joy you feel at the macaroni collage your child made you for Mother’s Day, the sense of accomplishment you feel upon finishing a big project at work, the anger you feel over having to work all three days of a holiday weekend, the sadness you feel when your child is the only one in her class not invited to a party. Emotional states can also arise from purely mental activity, such as daydreaming, or introspection, or anticipating the future. But whether they are triggered by real-world experiences or mental ones, emotional states tend to dissipate, each giving way to the next.

A feeling that does persist, and that remains consistent over minutes or hours or even days, is a mood, of the “he’s in a bad mood” variety. And a feeling that characterizes you not for days but for years is an emotional trait. We think of someone who seems perpetually annoyed as grumpy, and someone who always seems to be mad at the world as angry. An emotional trait (chronic, just-about-to-boil-over anger) increases the likelihood that you will experience a particular emotional state (fury) because it lowers the threshold needed to feel such an emotional state.

Emotional Style is a consistent way of responding to the experiences of our lives. It is governed by specific, identifiable brain circuits and can be measured using objective laboratory methods. Emotional Style influences the likelihood of feeling particular emotional states, traits, and moods.

Because Emotional Styles are much closer to underlying brain systems than emotional states or traits, they can be considered the atoms of our emotional lives, their fundamental building blocks.

In contrast, personality, a more familiar way of describing people, is neither fundamental in this sense nor grounded in identifiable neurological mechanisms. Personality consists of a set of high-level qualities that comprise particular emotional traits and Emotional Styles. Take, for instance, the well-studied personality trait of agreeableness.

People who are extremely agreeable, as measured by standard psychological assessments (as well as by their own assessments and those of people who know them well), are empathic, considerate, friendly, generous, and helpful. But each of these emotional traits is itself the product of different aspects of Emotional Style. Unlike personality, Emotional Style can be traced to a specific, characteristic brain signature. To understand the brain basis of agreeableness, then, we need to probe more deeply into the underlying Emotional Styles that comprise it.

Psychology has been churning out classification schemes with gusto lately, asserting that there are four kinds of temperament or five components of personality or Lord-knows-how-many character types. While perfectly interesting and even fun (the popular media have had a field day describing which character types make good romantic matches, business leaders, or psychopaths), these schemes are light on scientific validity because they are not based on any rigorous analysis of underlying brain mechanisms. Anything having to do with human behavior, feelings, and ways of thinking arises from the brain, so any valid classification scheme must also be based on the brain. Which brings me back to Emotional Style.

Emotional Style comprises six dimensions. Neither conventional aspects of personality nor simple emotional traits or moods, let alone diagnostic criteria for mental illness, these six dimensions reflect the discoveries of modern neuroscientific research:

Resilience: how slowly or quickly you recover from adversity.

Outlook: how long you are able to sustain positive emotion.

Social Intuition: how adept you are at picking up social signals from the people around you.

Self-Awareness: how well you perceive bodily feelings that reflect emotions.

Sensitivity to Context: how good you are at regulating your emotional responses to take into account the context you find yourself in.

Attention: how sharp and clear your focus is.

These are probably not the six dimensions you would come up with if you sat down and thought about your emotions and how they might differ from those of others. By the same measure, the Bohr model of the atom is probably not the model you would come up with if you sat down and thought about the structure of matter. I don’t mean to equate my work with that of the founders of modern physics, only to make a general point: It is rare that the human mind can determine the truths of nature, or even of ourselves, by intuition or casual observation. That’s why we have science. Only by methodical, rigorous experiments, and lots of them, can we figure out how the world works, and how we ourselves work.

These six dimensions arose from my research in affective neuroscience, complemented and strengthened by the discoveries of colleagues around the world. They reflect properties of and patterns in the brain, the sine qua non of any model of human behavior and emotion.

If the six dimensions don’t resonate with your understanding of yourself or of those close to you, that is likely because several of them operate on levels that are not always immediately apparent. For example, we tend not to be consciously aware of where we fall on the Resilience dimension. With few exceptions, we do not pay attention to how quickly we recover from a stressful event. (An exception would be something extremely traumatic, such as the death of a child; in that case, you are all too aware that you have remained a basket case for months and months.) But we experience its consequences. For instance, if you have an argument with your significant other in the morning, you might feel irritable for the entire day, yet not realize that the reason you are snappish and grouchy and churlish is that you have not regained your emotional equilibrium, which is the mark of the Slow to Recover style. I will show you in chapter 3 how you can become more aware of your Emotional Styles, which is the first and most important step in any attempt to either gracefully accept who you are or transform it.

A rule of thumb in science is that any new theory that hopes to supplant what came before must explain the same phenomena that the old theory did, as well as new ones. In order to be accepted as a more accurate and all-encompassing theory of gravity than what Isaac Newton had proposed after he saw the apple fall from the tree (or not), Einstein’s general theory of relativity had to explain all of the gravitational phenomena that Newton’s did, such as the orbits of the planets around the sun and the rate at which objects fell to earth, and new ones, too, such as the bending of celestial light around a large star. Let me show, then, that Emotional Style has sufficient explanatory power to account for well-established personality traits and temperament types; later, particularly in chapter 4, we will see that it has a solid foundation in the brain, something other classification schemes do not.

I believe that every individual personality and temperament reflects a different combination of the six dimensions of Emotional Style.

Take the “big five” personality traits, one of the standard classification systems in psychology: openness to new experience, conscientiousness, extraversion, agreeableness, and neuroticism:

– Someone high in openness to new experience has strong Social Intuition. She is also very self-aware and tends to be focused in her Attention style.

– A conscientious person has well-developed Social Intuition, a focused style of Attention, and acute Sensitivity to Context.

– An extraverted person bounces back rapidly from adversity and thus is at the Fast to Recover end of the Resilience spectrum. She maintains a positive Outlook.

– An agreeable person has a highly attuned Sensitivity to Context and strong Resilience; he also tends to maintain a positive Outlook.

– Someone high in neuroticism is slow to recover from adversity. He has a gloomy, negative Outlook, is relatively insensitive to context, and tends to be unfocused in his Attention style.

While the combinations of Emotional Styles that add up to each of the big five personality traits generally hold true, there will always be exceptions. Not everyone with a given personality will have all the dimensions of Emotional Style that I describe, but they will invariably have at least one of them.

Moving beyond the Big Five, we can look at traits that all of us think of when we describe ourselves or someone we know well. Each of these, too, can be understood as a combination of different dimensions of Emotional Style, though, again, not everyone with the trait will possess each dimension. However, most people will have most of them:

– Impulsive: a combination of unfocused Attention and low Self-Awareness.

– Patient: a combination of high Self-Awareness and high Sensitivity to Context. Knowing that when context changes, other things will change, too, helps to facilitate patience.

– Shy: a combination of being Slow to Recover on the Resilience dimension and having low Sensitivity to Context. As a result of the insensitivity to context, shyness and wariness extend beyond contexts in which they might be normal.

– Anxious: a combination of being Slow to Recover, having a negative Outlook, having high levels of Self-Awareness, and being unfocused (Attention).

– Optimistic: a combination of being Fast to Recover and having a positive Outlook.

– Chronically unhappy: a combination of being Slow to Recover and having a negative Outlook, with the result that a person cannot sustain positive emotions and becomes mired in negative ones after setbacks.

As you can see, these common trait descriptors comprise different permutations of Emotional Styles. This formulation provides a way of describing what the brain bases for these common traits are likely to be.
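Purely as an illustration of this combinatorial idea, and not anything from Davidson and Begley’s book, the sketch below shows how a profile along the six continua might be represented and matched against the trait combinations just listed. Every name, score, and threshold here is hypothetical.

```python
# Hypothetical sketch: an Emotional Style profile as six continuum scores.
# Nothing here comes from the book; names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class EmotionalStyle:
    resilience: float           # -1.0 = Slow to Recover, +1.0 = Fast to Recover
    outlook: float              # -1.0 = Negative, +1.0 = Positive
    social_intuition: float     # -1.0 = Puzzled, +1.0 = Socially Intuitive
    self_awareness: float       # -1.0 = Self-Opaque, +1.0 = Self-Aware
    context_sensitivity: float  # -1.0 = Tuned Out, +1.0 = Tuned In
    attention: float            # -1.0 = Unfocused, +1.0 = Focused

def looks_optimistic(s: EmotionalStyle) -> bool:
    # "Optimistic: a combination of being Fast to Recover and having a positive Outlook."
    return s.resilience > 0.5 and s.outlook > 0.5

def looks_anxious(s: EmotionalStyle) -> bool:
    # "Anxious: Slow to Recover, negative Outlook, high Self-Awareness, unfocused Attention."
    return (s.resilience < -0.5 and s.outlook < -0.5
            and s.self_awareness > 0.5 and s.attention < -0.5)

profile = EmotionalStyle(0.8, 0.7, 0.2, 0.6, 0.1, -0.3)
print(looks_optimistic(profile))  # True: fast to recover, positive outlook
print(looks_anxious(profile))     # False
```

The only point of the sketch is that each common trait descriptor picks out a region of the six-dimensional space rather than a single point, which is why two people with the “same” trait can still differ in style.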

If you read original scientific papers, it is easy to get the impression that the researchers thought of a question, designed a clever experiment to answer it, and carried out the study with nary a dead end or setback between them and the answer. It’s not like that. I suspect you realized as much, but what is not as widely known, even among people who gobble up popular accounts of scientific research, is how difficult it is to challenge a prevailing paradigm.

That was the position I found myself in during the early 1980s. At that time, academic psychology relegated the study of emotions mostly to social and personality psychology rather than to neurobiology; few psychology researchers were interested in studying the brain basis of emotion. What little interest there was supported research on the so-called emotion centers of the brain, which were then thought to be exclusively in the limbic system.

I had a very different idea: that higher cortical functions, particularly those located in the evolutionarily advanced prefrontal cortex, are critical to emotion. When I first suggested that the prefrontal cortex is involved in emotion, I was met with an endless stream of skeptics. The prefrontal cortex, they insisted, is the site of reason, the antithesis of emotion. It certainly could not play a role in emotion, too. It was very lonely trying to carve out a scientific career when the prevailing winds blew strongly in the other direction. My search for bases of emotion in the brain’s seat of reason was viewed as quixotic, to say the least, the neuroscientific equivalent of hunting elephants in Alaska. There were more than a few times, especially when I struggled to get funding early on, when my skepticism about the classic division between thought (in the highly evolved neocortex) and feeling (in the subcortical limbic system) seemed like a good way to end a scientific career, not begin one.

If my scientific leanings were a less-than-savvy career move, so were some of my personal interests. Soon after I entered graduate school at Harvard in the 1970s, I met a remarkable group of kind and compassionate people who, I soon learned, had something in common: They all practiced meditation. This discovery catalyzed my then rudimentary interest in meditation to such an extent that, after my second year of grad school, I went off to India and Sri Lanka for three months to learn more about this ancient tradition and experience what intensive meditation might bring. I had a second motive as well: I wanted to see whether meditation might be a suitable subject for scientific research.

Studying emotions was controversial enough. Practicing meditation was practically heretical, and studying it was a scientific nonstarter. Just as academic psychologists and neuroscientists believed that there are brain regions for reason and brain regions for emotions, and never the twain shall meet, so they believed that there is rigorous, empirical science and there is woo-woo meditation, and if you practiced the latter, your bona fides for the former were highly suspect.

This was the period of The Tao of Physics (1975), The Dancing Wu Li Masters (1979), and other books arguing that there are strong complementarities between the findings of modern Western science and the insights of ancient Eastern philosophies. Most academic scientists dismissed this as trash; being a meditator in their midst was not, shall we say, the most direct path to academic success. It was made very clear to me by my Harvard mentors that if I wanted a successful scientific career, studying meditation was not a very good place to start. While I dabbled in research on meditation in the early part of my career, once I saw how deep the resistance was, I set it aside. I remained a closet meditator, though, and eventually, once I had been granted tenure at the University of Wisconsin, and had a long list of scientific publications and honors to my credit, returned to meditation as a subject of scientific study.

A big reason I did so was a transformative meeting I had with the Dalai Lama in 1992, which completely changed the course of both my career and my personal life. As I describe in chapter 9, the encounter was the spark that made me decide to bring my interests in meditation and other forms of mental training out of the closet.

It is breathtaking to see how much has changed in the short period of time that I’ve been at this. In less than twenty years, the scientific and medical communities have become much more receptive to research on mental training. Thousands of new articles are now published on the subject in top scientific journals each year (I was tickled that the first such paper ever to appear in the august Proceedings of the National Academy of Sciences was by my colleagues and me, in 2004), and the National Institutes of Health now provides substantial funding for research on meditation. A decade ago that would have been unthinkable.

I believe this change is a very good thing, and not because of any sense of personal vindication (though I admit it’s been gratifying to see a scientific outcast of a topic receive the respect it deserves). I made two promises to the Dalai Lama in 1992: I would personally study meditation, and I would try to make research on positive emotions, such as compassion and well-being, as central a focus of psychology as research on negative emotions had long been.

Now those two promises have converged, and with them my tilting-at-windmills conviction that the seat of reason and higher-order cognitive function in the brain plays as important a role in emotion as the limbic system does. My research on meditators has shown that mental training can alter patterns of activity in the brain to strengthen empathy, compassion, optimism, and a sense of well-being, the culmination of my promise to study meditation as well as positive emotions. And my research in the mainstream of affective neuroscience has shown that it is these sites of higher-order reasoning that hold the key to altering these patterns of brain activity.

So while this book is a story of my personal and scientific transformation, I hope it offers you a guide for your own transformation. In Sanskrit, the word for meditation also means “familiarization.” Becoming more familiar with your Emotional Style is the first and most important step in transforming it. If this book does nothing more than increase your awareness of your own Emotional Style and that of others around you, I would consider it a success.

CHAPTER 1

One Brain Does Not Fit All

If you believe most self-help books, pop-psychology articles, and television therapists, then you probably assume that how people respond to significant life events is pretty predictable. Most of us, according to the “experts,” are affected in just about the same way by a given experience: there is a grieving process that everyone goes through, there is a sequence of events that happens when we fall in love, there is a standard response to being jilted, and there are fairly standard ways almost every normal person reacts to the birth of a child, to being unappreciated at one’s job, to having an unbearable workload, to the challenges of raising teenagers, and to the inevitable changes that occur with aging. These same experts confidently recommend steps we can all take to regain our emotional footing, weather a setback in life or in love, become more (or less) sensitive, handle anxiety with aplomb . . . and otherwise become the kind of people we would like to be.

But my thirty-plus years of research have shown that these one-size-fits-all assumptions are even less valid in the realm of emotion than they are in medicine. There, scientists are discovering that people’s DNA shapes how they will respond to prescription drugs (among other things), ushering in an age of personalized medicine in which the treatments one patient receives for a certain illness will be different from what another patient receives for that same illness, for the fundamental reason that no two patients’ genes are identical. (One important example of this: The amount of the blood thinner warfarin a patient can safely take to prevent blood clots depends on how quickly the patient’s genes metabolize the drug.)

When it comes to how people respond to what life throws at them, and how they can develop and nurture their capacity to feel joy, to form loving relationships, to withstand setbacks, and in general to lead a meaningful life, the prescription must be just as personalized. In this case, the reason is not just that our DNA differs, though of course it does, and DNA definitely influences our emotional traits, but that our patterns of brain activity do. Just as the medicine of tomorrow will be shaped by deciphering patients’ DNA, so the psychology of today can be shaped by understanding the characteristic patterns of brain activity underlying the emotional traits and states that define each of us.

Over the course of my career as a neuroscientist, I’ve seen thousands of people who share similar backgrounds respond in dramatically different ways to the same life event. Some are resilient in the face of stress, for instance, while others fall apart. The latter become anxious, depressed, or unable to function when they encounter adversity. Resilient people are somehow able not only to withstand but to benefit from certain kinds of stressful events and to turn adversity into advantage.

This, in a nutshell, is the puzzle that has driven my research. I’ve wanted to know what determines how someone reacts to a divorce, to the death of a loved one, to the loss of a job, or to any other setback, and, equally, what determines how people react to a career triumph, to winning the heart of their true love, to realizing that a friend will walk over hot coals for them, or to other sources of happiness. Why and how do people differ so widely in their emotional responses to the ups and the downs of life?

The answer that has emerged from my own work is that different people have different Emotional Styles. These are constellations of emotional reactions and coping responses that differ in kind, intensity, and duration.

Just as each person has a unique fingerprint and a unique face, each of us has a unique emotional profile, one that is so much a part of who we are that those who know us well can often predict how we will respond to an emotional challenge.

My own Emotional Style, for instance, is fairly optimistic and upbeat, eager to take on challenges, quick to recover from adversity, but sometimes prone to worry about things that are beyond my control. (My mother, struck by my sunny disposition, used to call me her “joy boy.”)

Emotional Style is why one person recovers fairly quickly from a painful divorce while another remains mired in self-recrimination and despair. It is why one sibling bounces back from a job loss while another feels worthless for years afterward. It is why one father shrugs off the botched call of a Little League umpire who called out his (clearly safe!) daughter at second base while another leaps out of his seat and screams at the ump until his face turns purple.

Emotional Style is why one friend serves as a wellspring of solace to everyone in her circle while another makes herself scarce, emotionally and literally, whenever her friends or family need sympathy and support. It is why some people can read body language and tone of voice as clearly as a billboard while to others these nonverbal cues are a foreign language.

And it is why some people have insight into their own states of mind, heart, and body that others do not even realize is possible.

Every day presents countless opportunities to observe Emotional Styles in action. I spend a lot of time at airports, and it is a rare trip that doesn’t offer the chance for a little field research. As we all know, there seem to be more ways for a flight schedule to go awry than there are flights departing O’Hare on a Friday evening: bad weather, waiting for a flight crew whose connection is late, mechanical problems, cockpit warning lights that no one can decipher . . . the list goes on. So I’ve had countless chances to watch the reaction of passengers (as well as myself!) who, waiting to take off, hear the dreaded announcement that the flight has been delayed for one hour, or for two hours, or indefinitely, or canceled.

The collective groan is audible. But if you look carefully at individual passengers, you’ll see a wide range of emotional reactions. There’s the college student in his hoodie, bobbing his head to the music coming in through his earbuds, who barely glances up before getting lost again in his iPad. There’s the young mother traveling alone with a squirmy toddler who mutters, “Oh great,” before grabbing her child and stalking off toward the food court. There’s the corporate-looking woman in the tailored suit who briskly walks up to the gate agent and calmly but firmly demands to be rerouted immediately through anywhere this side of Kathmandu, just get her to her meeting! There’s the silver-haired, bespoke-suited man who storms up to the agent and, loud enough for everyone to hear, demands to know if she realizes how important it is for him to get to his destination, insists on seeing her superior, and, red-faced by now, screams that the situation is completely intolerable.

Okay, I’m prepared to believe that delays are worse for some people than for others. Failing to make it to the bedside of your dying mother is definitely up there, and missing a business meeting that means life or death to the company your grandfather founded is a lot worse than a student arriving home for winter break half a day later than planned. But I strongly suspect that the differences in how people react to an exasperating flight delay have less to do with the external circumstances and more to do with their Emotional Style.

The existence of Emotional Style raises a number of related questions. The most obvious is: when does Emotional Style first appear? In early adulthood, when we settle into the patterns that describe the people we will be, or, as genetic determinists would have it, before birth? Do these patterns of emotional response remain constant and stable throughout our lives? A less obvious question, but one that arose in the course of my research, is whether Emotional Style influences physical health. (One reason to suspect it does is that people who suffer from clinical depression are much more prone to certain physical disorders such as heart attack and asthma than are people with no history of depression.)

Perhaps most fundamentally, how does the brain produce the different Emotional Styles, and are they hardwired into our neural circuitry, or is there anything we can do to change them and thus alter how we deal with and respond to the pleasures and vicissitudes of life? And if we are able to somehow change our Emotional Style (in chapter 11 I will suggest some methods for doing so), does it also produce measurable changes in the brain?

The Six Dimensions of Emotional Style

So as not to leave you in suspense, and to make specific what I mean by “Emotional Style,” let me lay out its bare bones. There are six dimensions of Emotional Style. The existence of the six did not just suddenly occur to me, nor did they emerge early on in my research, let alone result from a command decision that six would be a nice number. Instead, they arose from systematic studies of the neural bases of emotion. Each of the six dimensions has a specific, identifiable neural signature, a good indication that they are real and not merely a theoretical construct. It is conceivable that there are more than six dimensions, but it’s unlikely: The major emotion circuits in the brain are now well understood, and if we believe that the only aspects of emotion that have scientific validity are those that can be traced to events in the brain, then six dimensions completely describe Emotional Style.

Each dimension describes a continuum. Some people fall at one or the other extreme of that continuum, while others fall somewhere in the middle. The combination of where you fall on each dimension adds up to your overall Emotional Style.

Your Resilience style: Can you usually shake off setbacks, or do you suffer a meltdown? When faced with an emotional or other challenge, can you muster the tenacity and determination to soldier on, or do you feel so helpless that you simply surrender? If you have an argument with your significant other, does it cast a pall on the remainder of your day, or are you able to recover quickly and put it behind you? When you’re knocked back on your heels, do you bounce back and throw yourself into the ring of life again, or do you melt into a puddle of depression and resignation? Do you respond to setbacks with energy and determination, or do you give up?

People at one extreme of this dimension are Fast to Recover from adversity; those at the other extreme are Slow to Recover, crippled by adversity.

Your Outlook style: Do you seldom let emotional clouds darken your sunny outlook on life? Do you maintain a high level of energy and engagement even when things don’t go your way? Or do you tend toward cynicism and pessimism, struggling to see anything positive? People at one extreme of the Outlook spectrum can be described as Positive types; those at the other, as Negative.

Your Social Intuition style: Can you read people’s body language and tone of voice like a book, inferring whether they want to talk or be alone, whether they are stressed to the breaking point or feeling mellow? Or are you puzzled by, even blind to, the outward indications of people’s mental and emotional states? Those at one extreme on this spectrum are Socially Intuitive types; those at the other, Puzzled.

Your Self-Awareness style: Are you aware of your own thoughts and feelings and attuned to the messages your body sends you? Or do you act and react without knowing why you do what you do, because your inner self is opaque to your conscious mind? Do those closest to you ask why you never engage in introspection and wonder why you seem oblivious to the fact that you are anxious, jealous, impatient, or threatened? At one extreme of this spectrum are people who are Self-Aware; at the other, those who are Self-Opaque.

Your Sensitivity to Context style: Are you able to pick up the conventional rules of social interaction so that you do not tell your boss the same dirty joke you told your husband or try to pick up a date at a funeral? Or are you baffled when people tell you that your behavior is inappropriate? If you are at one extreme of the Sensitivity to Context style, you are Tuned In; at the other end, Tuned Out.

. . .

from

The Emotional Life of Your Brain. How Its Unique Patterns Affect the Way You Think, Feel, and Live. And How You Can Change Them.

by Richard J. Davidson, Ph.D. and Sharon Begley

get it at Amazon.com

“BRINGING IN THE BODIES”, OUR HARSH LOGIC. Israeli soldiers’ testimonies from the Occupied Territories, 2000-2010 – Breaking the Silence.

“The deputy brigade commander then decided that instead of just being aggressive, he’d also remind him who’s the boss, who’s the Jew and who’s the Arab . . . Of course I reported it as well, it didn’t lead to anything.”

The widespread notion in Israeli society that control of the Palestinian Territories is exclusively aimed at protecting citizens is incompatible with the information conveyed by hundreds of Israel Defense Forces soldiers.

Settler violence against Palestinians is not treated as an infraction of the law. It is instead one more way in which Israel exercises its control in the Territories.

Contrary to the impression the Israeli government prefers to give, in which Israel is slowly withdrawing from the Territories securely and with caution, the soldiers portray a tireless effort to tighten the country’s hold on both the land and on the Palestinian population.

ISRAEL HAS NO HISTORY, ONLY A CRIMINAL RECORD.

BREAKING THE SILENCE, one of Israel’s most internationally lauded non-government organisations, was established in Jerusalem in 2004 by Israel Defense Forces veterans to document the testimonies of Israeli soldiers who have served in the Occupied West Bank and Gaza Strip.

In June 2004, some sixty veteran soldiers of the Israel Defense Forces presented an exhibition of written testimonies and photographs from their military service in Hebron in the Occupied West Bank. The exhibition led to the founding of Breaking the Silence, an organization that is dedicated to exposing the day-to-day reality of military service in the Occupied Territories through testimonies by the soldiers entrusted with carrying it out. The organization interviews men and women who have served in Israel’s security forces since the outbreak of the Second Intifada in September 2000 and distributes their testimonies online, in print, and through the media. Breaking the Silence also holds events and lectures and conducts tours in the West Bank, with the aim of shedding light on Israel’s operational methods in the Territories and encouraging debate about the true nature of the Occupation.

This volume contains 145 testimonies and is representative of the material collected by the organization (through more than 700 interviews) since its inception. The witnesses represent all strata of Israeli society and nearly all IDF units engaged in the Occupied Territories. They include commanders and officers as well as the rank and file, and both men and women.

All the testimonies published by Breaking the Silence, including those in this book, have been collected by military veterans and verified prior to publication. Unless noted otherwise, they were reported by eyewitnesses and are published verbatim, with only minor alterations to the language to remove identifying details and clarify military terms. The organization keeps the identities of witnesses confidential; without anonymity, it would be impossible to make the information published here public.

Although the soldiers’ descriptions are limited to their personal experiences, the cumulative body of their testimony allows a broad view, not only of the IDF’s primary methods of operation but also of the principles shaping Israeli policies in the Occupied Territories.

Breaking the Silence considers exposing the truth of those policies a moral obligation and a necessary condition for a more just society. For Israelis to ignore clear and unambiguous firsthand accounts of the Occupation means surrendering a fundamental right of citizens: the right to know the truth about their actions and the actions of those who operate in their name. Breaking the Silence demands accountability regarding Israel’s military actions in the Occupied Territories, which are perpetrated by its citizens and in their names.

In this book, readers will find themselves immersed in the ordinary speech of Israeli soldiers, which is dense with jargon, idiom, and a frame of reference specific to their particular experience. The testimonies in the original Hebrew are transcribed verbatim, preserving the words of the testifying soldier as he or she spoke them. The English translation has stayed as faithful as possible to the original, only adding clarification where it is critically necessary for understanding.

Our Harsh Logic was edited in Hebrew by Mikhael Manekin, Avichai Sharon, Yanay Israeli, Oded Naaman, and Levi Spectre.

Introduction

The publication of Occupation of the Territories: Israeli Soldiers’ Testimonies, 2000-2010, the report on which this book is based, marked a decade since the outbreak of the Second Palestinian Intifada. Drawing on the firsthand accounts of hundreds of men and women soldiers interviewed by Breaking the Silence, the report exposed the operational methods of the Israeli military in the West Bank and the Gaza Strip and the impact of those methods on the people who must live with them: the Palestinians, the settlers, and the soldiers themselves. Moreover, the IDF troops, who are charged with carrying out the country’s mission in the Territories, revealed in unprecedented detail the principles and consequences of Israel’s policies, and their descriptions gave clarity to the underlying logic of Israeli operations overall.

The testimonies left no room for doubt: while the security apparatus has indeed had to respond to concrete threats during the past decade, including terrorist attacks on citizens, Israel’s actions are not solely defensive. Rather, they have systematically led to the de facto annexation of large sections of the West Bank through the dispossession of Palestinian residents and by tightening control over the civilian population and instilling fear. The widespread notion in Israeli society that control of the Territories is exclusively aimed at protecting citizens is incompatible with the information conveyed by hundreds of IDF soldiers.

In the media, in internal discussions, and in military briefings, the security forces and government bodies consistently refer to four components of Israeli policy: “prevention of terrorism,” or “prevention of hostile terrorist activity” (sikkul); “separation,” that is, Israel remaining separate from the Palestinian population (hafradah); preserving the Palestinian “fabric of life” (mirkam hayyim); and “enforcing the law” in the Territories (akifat hok). But these terms convey a partial, even distorted, portrayal of the policies that they represent. Although they were originally descriptive, these four terms quickly became code words for activities unrelated to their original meaning. This book lays bare the aspects of those policies that the state’s institutions do not make public. The soldiers who have testified are an especially reliable source of information: they are not merely witnesses; they have been entrusted with the task of carrying out those policies, and are, explicitly or implicitly, asked to conceal them as well.

The testimonies in this book are organized in four parts, each corresponding to one of the policy terms: “prevention,” “separation,” “fabric of life,” and “law enforcement.”

In the first part, “Prevention,” the testimonies show that almost every use of military force in the Territories is considered preventive. Behind this sweeping interpretation of the term lies the assumption that every Palestinian, man and woman, is suspect, constituting a threat to Israeli citizens and soldiers; consequently, deterring the Palestinian population as a whole, through intimidation, will reduce the possibility of opposition and thereby prevent terrorist activity. In this light, abusing Palestinians at checkpoints, confiscating property, imposing collective punishment, changing and obstructing access to free movement (by setting up transient checkpoints, for example), even making arbitrary changes to the rules (according to the whim of a commander at a checkpoint, for instance): these can all be justified as preventive activities. And if the term “preventive” applies to almost every military operation, the difference between offensive and defensive actions gradually disappears. Thus most military acts directed at Palestinians can be viewed as justifiably defensive.

Part Two covers the second policy term, “Separation.” On its face, the principle of “separation” seems to involve the defense of Israelis in Israel proper by driving a wedge between them and the Palestinian population in the Territories. However, the testimonies in this part show that the policy does not only mean separating the two populations, but also separating Palestinian communities from each other. The policy allows Israel to control the Palestinian population: Palestinian movement is channeled to Israel’s monitoring mechanisms, which establish new borders on the ground. The many permits and permissions Palestinians need to move around the West Bank also serve to limit their freedom of movement and internally divide their communities. The often arbitrary regulations and endless bureaucratic mazes are no less effective than physical barriers. The policy of separation is exposed as a means to divide and conquer.

The soldiers’ testimonies also reveal a third effect, which is the separation of Palestinians from their land. The Israeli settlements and surrounding areas are themselves a barrier. Palestinians are forbidden to enter these territories, which often include their own agricultural land. The location of these multiple barriers does not appear to be determined solely by defensive considerations based on where Palestinians live, but rather on offensive calculations governed by Israel’s desire to incorporate certain areas into its jurisdiction. In the West Bank, checkpoints, roads closed to Palestinian traffic, and prohibitions against Palestinian movement from one place to another are measures that effectively push Palestinians off their land and allow the expansion of Israeli sovereignty. The soldiers’ testimonies in this part make clear that “separation” is not aimed at withdrawal from the Occupied Territories, but is rather a means of control, dispossession, and annexation.

The reality of Palestinian life under Israeli occupation is the subject of Part Three, “The Fabric of Life.” Israeli spokespeople emphasize that Palestinians in the Territories receive all basic necessities and are not subjected to a humanitarian crisis, and that Israel even ensures the maintenance of a proper “fabric of life.” Such claims, along with assertions of economic prosperity in the West Bank, suggest that life under foreign occupation can be tolerable, and even good. On the basis of these claims, those who support Israeli policy argue that the occupation is a justifiable means of defense, and if harm is regrettably suffered by the population, the damage is “proportionate” to the security of Israeli civilians. But, as the testimonies in Part Three confirm, the fact that Palestinians require Israel’s good grace to lead their lives shows the extent to which they are dependent on Israel. If Israel is able to prevent a humanitarian crisis in the Gaza Strip, when considered necessary, then Israel also has the power to create one.

Israel’s claim to allow the maintenance of the “fabric of life” in the West Bank reveals the absolute control that it has over the Palestinian people. On a daily basis, the Israeli authorities decide which goods may be transferred from city to city, which businesses may open, who can pass through checkpoints and through security barrier crossings, who may send their children to school, who will be able to reach the universities, and who will receive the medical treatment they need. Israel also continues to hold the private property of tens of thousands of Palestinians. Sometimes property is held for supposed security considerations, other times for the purpose of expropriating land. In a significant number of cases, the decision to confiscate property appears completely arbitrary. Houses, agricultural land, motor vehicles, electronic goods, farm animals, any and all of these can be taken at the discretion of a regional commander or a soldier in the field. Sometimes IDF soldiers even “confiscate” people for use during a training exercise: to practice arrest procedures, troops might burst into a house in the dead of night, arrest one of the residents, and release him later. Thus, as this part shows, the Palestinian fabric of life itself is arbitrary and changing.

In Part Four, which covers the “dual regime,” the soldiers’ testimonies show how, in the name of enforcing the law, Israel maintains two legal systems: in one, Palestinians are governed by military rule that is enforced by soldiers and subject to frequent change; in the other, Israeli settlers are subject to predominantly civil law that is passed by a democratically elected legislature and enforced by police. The Israeli legal authority in the Territories does not represent Palestinians and their interests. Rather, Palestinians are made subordinate to a system that compels compliance through threats backed by Israel’s overall military superiority.

The testimonies in this part also reveal the active role played by settlers in imposing Israel’s military rule. Settlers serve in public positions and are partners in military deliberations and decisions that control the lives of the Palestinians who live in their area of settlement. Settlers often work for the Ministry of Defense as security coordinators for their settlements, in which case they influence all kinds of details affecting the area, such as transportation, road access, and security patrols, and even participate in soldiers’ briefings.

The security forces do not see the settlers as civilians subject to law enforcement but as a powerful body that shares common goals. Even when the wishes of the settlers and the military are at odds, they still ultimately consider each other as partners in a shared struggle and settle their conflict through compromise. As a consequence, the security forces usually acquiesce in the settlers’ goals, if only partially. Thus settler violence against Palestinians is not treated as an infraction of the law. It is instead one more way in which Israel exercises its control in the Territories.

It is sometimes claimed that the failure to enforce the law among the settlers is due to the weakness of the Israeli police force. The testimonies in this section strongly suggest otherwise: that the law is not enforced because security forces do not treat settlers as regular citizens but as partners. In the process, the security forces also serve the settlers’ political aspirations: annexation of large portions of the Occupied Territories for their use.

“Prevention,” “separation,” “fabric of life,” and “law enforcement” are some of the terms the Israeli authorities use to signify elements of their policy in the Territories. But rather than explaining the policy, these terms conceal it under the cover of defensive terminology whose connection to reality is weak at best. The accounts of the IDF soldiers cited here show that the effect of Israel’s activities in the Territories is not to preserve the political status quo but to change it. While Israel expropriates more and more territory, its military superiority allows it to control all strata of Palestinian life. Contrary to the impression the government prefers to give, in which Israel is slowly withdrawing from the Territories securely and with caution, the soldiers portray a tireless effort to tighten the country’s hold on both the land and on the Palestinian population.

Despite its scope, this book is limited to the information brought to light in the soldiers’ testimonies. It does not describe all the means by which the State of Israel controls the Territories and should not be read as an attempt to address every aspect of the Occupation. The full picture is missing the activities carried out by the General Security Services (Shabak) and other intelligence agencies, as well as the military courts, which constitute an important component of military rule, and additional facets of the military administration.

Rather, the purpose of this book is to replace the code words that sterilize public discussion with a more accurate description of Israel’s policies in the West Bank and the Gaza Strip. The facts are clear and accessible; the testimonies oblige us to look directly at Israel’s actions and ask whether they reflect the values of a humane, democratic society.

PART ONE

Prevention: Intimidating the Palestinian Population

An Overview

Since the outbreak of the Second Intifada in September 2000, more than one thousand Israelis and six thousand Palestinians have been killed. The considerable escalation in violence between Palestinians and Israelis both in the Occupied Territories and within Israel prompted the security system to develop new, more aggressive methods of action, which were intended to quash Palestinian opposition and prevent attempted attacks on Israeli civilians and soldiers on both sides of the Green Line.

The testimonies in Part One address the IDF’s offensive and proactive military action in the Occupied Territories during the past decade. Although the security forces claim they are “preventing terror,” the soldiers’ testimonies reveal how broadly the term “prevention” is applied: it has become a code word that signifies all offensive action in the Territories. As the testimonies here attest, a significant portion of offensive actions are intended not to prevent a specific act of terrorism, but rather to punish, deter, or tighten control over the Palestinian population. But the term “prevention of terror” gives the stamp of approval to any action in the Territories, obscuring the distinction between using force against terrorists and using force against civilians. In this way, the IDF is able to justify methods that serve to intimidate and oppress the population overall. These testimonies also show the serious implications of blurring this distinction for the lives, dignity, and property of Palestinians.

The actions described include arrest, assassination, and occupation of homes, among others. Also revealed here are the principles and considerations that guide decision makers to take those actions, both in the field and at high levels of command. Early in the Second Intifada, the IDF established the principle behind its methods, calling it a “searing of consciousness.” The assumption is that resistance will fade once Palestinians as a whole see that opposition is useless. In practice, as the testimonies show, “searing of consciousness” translates into intimidation and indiscriminate punishment. In other words, violence against a civilian population and collective punishment are justified by the “searing of consciousness” policy, and they have become cornerstones of IDF strategy.

One particular action identified with the IDF’s efforts at prevention is targeted assassinations. The IDF has claimed repeatedly that assassinations are used as a last resort, as a defensive measure against people who plan and carry out terrorist attacks. However, the soldiers’ testimonies reveal that the military’s undertakings in the last decade are not consistent with statements made in the media and in the courts. More than once, a unit was sent to carry out an assassination when other options, such as arrest, were at its disposal. Also, it becomes clear in this part that at least some of the assassinations are aimed at revenge or punishment, not necessarily to prevent a terrorist attack. One testimony describes the assassination of unarmed Palestinian police officers who were under no suspicion of terror. According to the soldier testifying, the killing was done as revenge for the murder of soldiers the day before by Palestinian militants from the same area. Other testimonies describe a policy of making Palestinians “pay the price” of opposition: missions whose goals are, to quote one of the commanders, to “bring in the bodies.”

Arrests are another instrument of the effort to “prevent terror.” During the last decade, tens of thousands of Palestinians were arrested in almost nightly operations conducted deep in Palestinian territory. According to testimonies, arrests are frequently accompanied by the abuse of bound detainees, who are beaten or humiliated by soldiers and commanders. Arrests are used to accomplish a variety of aims, and in many cases, the reason is unclear to those being arrested. For example, during IDF invasions of some Palestinian cities and villages, all the men were detained in a specific place, although the army knew of no connection between them and any misdeeds and had no intelligence about their intentions; they were held, bound and blindfolded, sometimes for hours. Thus, under the guise of “prevention of terror,” mass arrests are used to instill fear in the population and tighten Israeli military control.

Arrests are often accompanied by destruction or confiscation of Palestinian property and infrastructure. The testimonies demonstrate that destruction is often the result of a mistake or occurs in the course of operational need, but it may also be inflicted intentionally by soldiers and commanders in the field, or by orders coming from higher up. In every case, destruction is an additional avenue for control of the population.

Invading and taking control of Palestinians’ private domains has also become common in the last ten years. Nearly every night, IDF forces invade families’ homes, often taking up posts there for days or even weeks. This action, known as creating a “straw widow,” is aimed at better controlling the territory by capturing positions and creating hidden lookouts. As revealed in the testimonies, though, the aim of taking control of a house is often not to prevent conflict but to cause it. Testimonies in this chapter describe “decoy” missions, whose goal is to force armed Palestinians out of hiding and into the streets in order to strike at them.

In addition to assassination, arrest, and destruction, the testimonies describe a method of intimidation and punishment called “demonstrating a presence,” one of the IDF’s primary means of instilling fear. A conspicuous expression of “demonstrating a presence” is the army’s night patrol in Palestinian cities and villages. Soldiers are sent to patrol the alleys and streets of a town, and they “demonstrate their presence” in a variety of ways: shooting into the air, throwing sound bombs, shooting flares or tear gas, conducting random house invasions and takeovers, and interrogating passersby. Field-level commanders call these “violent patrols,” “harassment activity,” or “disruption of normalcy.” According to the soldiers’ testimonies, “demonstrating a presence” is done on a frequent and ongoing basis, and it is not dependent on intelligence about a specific terrorist activity. Missions to “demonstrate a presence” prove that the IDF sees all Palestinians, whether or not they are engaged in opposition, as targets for intimidation and harassment.

“Mock operations” are another example of a “disruption of normalcy.” In the course of drilling and training, military forces invade homes and arrest Palestinians: they take over villages as a drill in preparation for war or to train for combat in an urban setting. Although the Palestinians affected might experience these incursions as real, the testimonies show that they are not carried out in order to make an arrest or prevent an attack but are explicitly defined as drilling and training activities.

Finally, the term “prevention” is also used to suppress nonviolent opposition to the Occupation. During the past few years, a number of grassroots Palestinian protest movements have developed in the Territories, often with the cooperation of Israeli and international activists. These movements rely on demonstrations, publications, and legal action to make their protest, all forms of nonviolence. Yet IDF “prevention” extends to using violence against protesters, arresting political activists, and imposing curfews on villages in which political activity takes place.

The different objectives and methods revealed here form part of the logic of IDF activity in the Territories over the last decade. Underlying the reasoning governing this activity is the assumption that distinguishing between enemy civilians and enemy combatants is not necessary. “Demonstrating a presence” and the “searing of consciousness” express this logic best: systematic harm to Palestinians as a whole makes the population more obedient and easier to control.

1. Stun grenades at three in the morning

UNIT: PARATROOPERS, LOCATION: NABLUS DISTRICT, YEAR: 2003

We did all kinds of very sketchy work in Area A*. That could mean, for example, going into Tubas on a Friday, when the market is packed, to set up a surprise checkpoint in the middle of the village. One time, we arrived to set up a checkpoint like that on Friday morning, and we started to spread out: inspecting vehicles and every car that passed. Three hundred meters from us some kids start a small demonstration. They throw rocks at us, but they come maybe ten meters and don’t hit us. They start cursing us and everything. At the same time, a crowd of people gathers. Of course, this was followed by aiming our weapons at the kids; you can call it self-defense.

* Territory in which, according to the terms of the 1995 Washington Agreement, security-related and civilian issues are under the control of the Palestinian Authority; in Area B, the Palestinian Authority controls only civilian affairs; in Area C, which includes the Israeli settlements, the Jordan Valley, buffer zones, and bypass and access roads, the majority of the land remains exclusively under Israeli control.

What was the purpose of the checkpoint?

Just to show our presence, to get into a firefight, we didn’t know whether that would happen or not. In the end we got out without a scratch, without anything happening, but the company commander lost it. He asked one of the grenade launchers to fire a riot control grenade toward the demonstrators, the children. The grenade launcher refused, and afterward he was treated terribly by the company commander. He wasn’t punished, because the company commander knew he’d given an illegal order, but he was treated really disgustingly by the staff. That’s what happened. Another time we went into Tubas at three in the morning in a Safari and threw stun grenades in the street. For no reason, just to wake people up.

What was the point?

To say, “We’re here. The IDF is here.” In general, they told us that if some terrorist heard the IDF in the village, then maybe he’d come outside to fight. No one ever came out. It seems that the goal was just to show the local population that the IDF is here, and it’s a common policy: “The IDF is here, in the Territories, and we’ll make your life bitter until you decide to stop the terror.” The IDF has no problem doing it. But we didn’t understand why we were throwing grenades. We threw a grenade. We heard a “boom,” and we saw people waking up. When we got back they’d say, “Great operation,” but we didn’t understand why. This happened every day, a different force from the company did it each time, it was just part of the routine, part of our lives.

2. To stop the village from sleeping

UNIT: ARTILLERY, LOCATION: GUSH ETZION, YEAR: 2004

Normally, the point of “Happy Purim” is to stop people from sleeping [On the Purim holiday, Israeli children celebrate by making tremendous noise and creating chaos in the streets]. It means going into a village in the middle of the night, going around throwing stun grenades and making noise. Not all night long, but at some specific time. It doesn’t matter how long you do it, they don’t set an end time. They say, “Okay, they threw stones at you today in Husan, so do a Happy Purim there.” There weren’t that many of those.

Is that what’s called “demonstrating a presence”?

I’m sure you’ve heard the term Happy Purim before. If not, you’ll hear it. Yes, demonstrating a presence. Sometimes we got instructions from the battalion to do something like that. . . . It’s part of the activities that happen before. . .

What’s the rationale behind that kind of operation?

If the village initiates an operation, then you’re going to initiate a lack of sleep. I never checked how much this kind of operation actually stops people from sleeping, because you aren’t in the village for four hours throwing stun grenades every ten minutes; if we did that three times, the IDF would run out of stun grenades. These are operations that happen at a specific time, and if you throw a single stun grenade at point X in Nahalin, it probably won’t make much noise a hundred or two hundred meters away. In general, maybe this creates the impression that the IDF is in the village at night, without having to do too much, but I don’t think it’s more than that.

3. They came to a house and just demolished it

UNIT: KFIR BRIGADE, LOCATION: NABLUS DISTRICT, YEAR: 2009

During your service in the Territories, what shook you up the most?

The searches we did in Hares, that was the straw that broke the camel’s back. They said there are sixty houses that have to be searched. I said that there had to have been some warning from intelligence. I tried to justify it to myself.

Was this during the day or at night?

At night.

You went out as a patrol?

No, the whole division. It was a battalion operation, they spread out over the whole village, took control of the school, smashed the locks, the classrooms. One room was used as the investigation room for the Shin Bet, one room for detainees, one room for the soldiers to rest [“Shin Bet” and “Shabak” are used interchangeably. Both names are acronyms for Sherut Bitachon Clali, or General Security Services, which is responsible for internal intelligence]. I remember it particularly annoyed me that they chose a school. We went house by house, knocking at two in the morning on the family’s door. They’re scared to death, girls peeing in their pants with fear. We bang on the doors, there’s a feeling of “We’ll show them,” it’s fanatical. We go into the house and turn everything upside down.

What’s the procedure?

Gather the family in one room, put a guard there, tell the guard to keep his gun on them, and then search the whole house.

We received another order that everyone born after 1980 until . . . everyone between sixteen and twenty-nine, doesn’t matter who, bring him in cuffed and blindfolded. They yelled at old people, one of them had an epileptic seizure. They carried on yelling at him. He doesn’t speak Hebrew and they continue yelling at him. The medic treated him. We did the rounds. Every house we went into, they took everyone between sixteen and twenty-nine and brought them to the school. They sat tied up in the schoolyard.

Did they tell you the purpose of all this?

To locate weapons. But we didn’t find any weapons in the end. They confiscated kitchen knives. What shocked me the most was that there was also stealing. One person took twenty shekels. People went into the houses and looked for things to steal. This was a very poor village. At one point, guys were saying, “What a bummer, there’s nothing to steal.” “I took some markers just so I could say that I stole something.”

That was said in a conversation among the soldiers?

Among the soldiers, after the action. There was a lot of joy at people’s misery; guys were happy talking about it. There was a moment where someone they knew was mentally ill yelled at the soldiers, but one soldier decided that he was going to beat him up anyway, so they smashed him. They hit him in the head with the butt of a gun, he was bleeding, and they brought him to the school along with everyone else. There was a pile of arrest orders signed by the battalion commander, ready, with one area left blank. They’d fill in that the person was detained on suspicion of disturbing the peace. They just filled in the name and the reason for arrest. I remember there were people with plastic handcuffs that had been put on really tight, and I’d cut them off and put on looser ones. I got to speak with people there. There was one who worked thirteen hours a day, and another one a settler had brought into Israel to work for him; after two months he didn’t pay him and handed him over to the police [It is illegal for Palestinians from the Territories to work within Israel without a permit].

All the people came from that one village?

Yes.

Anything else you remember from that evening?

That bothered me? A small thing, but it bothered me. There was one house that they just demolished. There’s a dog that can find weapons but they didn’t bring him, they just destroyed the house. The mother watched from the side and cried, the kids sat with her and stroked her. I see how my mom puts so much effort into every corner of our house, and suddenly they come and destroy it.

What do you mean, that they just destroyed a house?

They smash the floors, turn over sofas, throw plants and pictures, turn over beds, smash the closets, the tiles. There were other, smaller things, but this really bothered me. The look on the people whose house you’ve gone into. It really hurt me to see this. And after all that, they left them for hours tied up and blindfolded in the school. The order came to free them at four in the afternoon. So that was more than twelve hours. There were investigators from the security services who sat there and interrogated them one by one.

During an army operation near Jenin, this teenaged boy wandered into the area. When the soldiers discovered that he was related to a man they were looking for, they took him back to their post (seen here) to deliver him to the Shin Bet for interrogation.

Had there been an earlier terrorist attack in the area?

No. We didn’t even find any weapons. The brigade commander claimed that the Shin Bet did find some intelligence, and that there are a lot of guys there who throw stones, and that now we’d be able to catch them . . . Things from the operation in Hares are always surfacing in my mind.

Like what?

The way they looked at us, what was going through their minds, their children’s minds. How you can take a woman’s son in the middle of the night and put him in handcuffs and a blindfold.

4. The deputy brigade commander beat up a restrained detainee

UNIT: GENERAL STAFF RECONNAISSANCE UNIT, LOCATION: NABLUS AREA, YEAR: 2000

It was in Kfar Tal, we went to look for a few suspects, a Nassar Asaida and his brother Osama Asaida. And we were at a house where Osama was supposed to be, we surrounded the house and closed in. The procedure is that you yell and make noise . . . and if that doesn’t do it, you throw a stone at the door so they’ll wake up, and if that doesn’t work, then you shoot in the air or at the walls . . . In the end you throw bombs on the roof, but the procedure is clear that you start the . . . the action with . . .

Not with shooting?

Not with shooting but . . . really at the end . . . there was fire from a machine gun, maybe a Negev, I don’t remember what there was, at the wall. A burst of fire, you know five, six times: rat-tat-tat-tat, like that . . . like they fired a lot and again, it was against procedure and against . . .

So what then happened to the suspect?

The suspect came out . . . and we interrogated him, and it really was him, and they restrained him with his hands behind his back and blindfolded him, I don’t remember what we put on him, um . . . and I took him to the northeastern corner of the courtyard and some kind of, they sent some kind of armored jeep from the deputy brigade commander, the brigade commander at that time was –, but he wasn’t there. I don’t know who the deputy brigade commander was. He arrived with a driver and a radio man or some other guy, and when I took [the detainee] I certainly wasn’t rough but I also wasn’t gentle, I was very assertive. I took him and made things clear, so it was totally clear who’s the boss in this situation, but when I got to the deputy brigade commander, then he decided that instead of just being aggressive, he’d also remind him who’s the boss, who’s the Jew and who’s the Arab, who’s the prisoner, and he gave him some two, three, four blows, elbow to the ribs, a kick to the ass, all kinds of . . .

The deputy brigade commander himself?

The deputy brigade commander himself. It wasn’t just “See who’s the boss,” which, say, would mean hitting him once to show him. I don’t understand it, it could just be the guy, just a way of releasing tension. The deputy brigade commander letting off tension with the . . . this son of a bitch who probably sent suicide bombers . . . In that situation it was me standing between these two people, the terrorist or the suspected terrorist, and the deputy brigade commander, so that he wouldn’t . . . to prevent abuse of the detainee. I also found myself threatening the . . . two or three times I threatened the driver and the radioman, when they put him into the back of the jeep or whatever it was, that if I heard that something happened to him, I would personally take care of them. I don’t know where he is, and how this ended. But I remember that afterward I thought that as a soldier who’s there to protect the State of Israel, in the end I found myself wondering, what’s the difference between the deputy brigade commander abusing a . . . a Palestinian detainee who, it doesn’t matter what he did, who is now restrained and blindfolded? Of course I reported it as well, it didn’t lead to anything.

5. They kicked a cuffed man in his stomach and head

UNIT: ARMORED CORPS, LOCATION: GENERAL, YEAR: 2000

There’s some law that it’s forbidden to hit a Palestinian when he’s handcuffed, when his hands are tied. When the Shabak guys take people from their homes in the middle of the night, they’d blindfold them and kick them in the stomach while they’re handcuffed. Three in the morning, they open the door, burst into the house. The mother’s hysterical, the whole family’s hysterical . . . the Shabak sends someone in to check, it’s not always a terrorist, but they grab him, they bring him out, you can’t imagine what’s going through the guy’s head, he’s blindfolded, there’s two soldiers holding him from behind, and other soldiers follow. These are standing army, fifteen people in the company who’re a problem, a minority. And they just, here’s this man handcuffed, and they kick him in the stomach and the head . . . those guys really liked doing it.

Was it reported to the staff?

This was an officer! A serious officer, part of the staff! During your regular service, you don’t understand what’s going . . . If this guy wasn’t allowed to do it, he wouldn’t do it! It’s just because that’s how it is. It’s the Wild West and everyone . . . does whatever they want.

And most of the soldiers, they just take it as given?

. . . The truth is, when I think about it, I should have done something. I really should have stopped it . . . but you don’t think like that . . . You say that’s the reality, it doesn’t have to be that way, they’re shits for doing it . . . but you don’t really know what to do. You don’t feel like there’s anyone to turn to.

You go back home. Did you tell your mother and father?

Are you kidding? You suppress it.

Your parents knew nothing at all?

What are you . . . ? You’re part of it. Really, there isn’t much you can do. Especially when they’re officers and you’re in the Tank Corps who they wouldn’t even piss on, so what? You’re going to fight? You’re going to stop it? You can’t start messing with company loyalty or the group like that, you can’t start fighting with people in the middle of it all. It wouldn’t happen now. I wouldn’t let it happen, but that’s not saying much because I’m in the reserves.

6. He’s hitting an Arab, and I’m doing nothing

UNIT: NAHAL, LOCATION: HEBRON, YEAR: 2009

The forward command team . . . kept telling us they hit Arabs for laughs all the time. On patrols and . . . they always hit them, but there was one time that was my main event . . . One day we got an alert. We jumped up, began to gear up, me and the medic were getting the gear for the jeep, and the company commander opened his office door, came out, and said: “Scram everybody, only me and . . are going.” He told me to leave my gear and come as I am. He wasn’t wearing his bulletproof vest or anything, just his uniform and weapon. We drove to the Pharmacy checkpoint. There were two or three kids there who wouldn’t go through the metal detector. We stopped the jeep, he got off, took a boy to the alley.

One of the kids who wouldn’t go through the machine?

Yeah. And then he did what he did.

What?

He . . . I can see it, like a film. First he faced the kid, the kid was close to the wall, he faced him, looked at him for a second, and then choked him with the . . . held him like this with his elbow.

Against the wall?

Choked him up against the wall. The kid went wild, and the company commander was screaming at him, in Hebrew, not in Arabic. Then he let him go. The kid raised his hands to wipe his eyes, and the commander gave him a blow. The kid dropped his hands and stopped wiping his eyes, he left his hands at his side, and then the slapping started. More and more slaps. Blows. And yelling the whole time. The kid began to scream, it was scary, and people started coming around the checkpoint to look in the alley. Then I remember the commander coming out and telling them “It’s okay, everything’s okay.” He yelled at the kid: “Stay here, don’t go anywhere.” He came out, said everything was okay, called over the squad commander from the checkpoint, stood facing the kid and told the squad commander, “That’s how you deal with them.” Then he gave the kid another two slaps and let him go.

It’s a crazy story, I remember sitting in the vehicle, looking on, and telling myself: I’ve been waiting for a situation like this for three years. From the minute I enlisted, I wanted to stop things like this, and here I am doing nothing, choosing to do nothing, is that okay? I remember answering myself: Yes, it’s okay. He’s hitting an Arab, and I’m doing nothing. I was really aware of doing nothing because I was scared of the company commander, and what could I do? Jump off the jeep and tell him to stop, because it’s stupid, what he’s doing?

How old was the boy?

A teenager. Not eighteen. More like thirteen, fourteen, fifteen years old.

How long did it go on? The beating?

I don’t remember.

Ten minutes? An hour?

It wasn’t . . . Something like ten minutes of hitting. Then he called over the squad commander.

The squad commander at the checkpoint?

Ten, fifteen minutes, then he got into his vehicle and drove off.

The kid stayed in the alley?

Yeah. In the alley. You know the one I’m talking about? The alley in front of Pharmacy checkpoint.

When you’re coming from Gross?

Yes.

On the left?

On the right. There’s the checkpoint, the entrance to old Abu Sneina.

Tell me, did you talk about this with anyone, another officer, someone else, friends?

I remember returning to the post, getting off the vehicle, I was like . . . I got off and went into the room where the rest of the platoon was, and said: “Listen, you can’t imagine the insane thing that just happened, he came along and beat him up.” That’s it.

. . .

from

OUR HARSH LOGIC. Israeli soldiers’ testimonies from the Occupied Territories, 2000-2010 – Breaking the Silence.

get it at Amazon.com

Also on TPPA = CRISIS

AS ISRAELIS, WE CALL ON THE WORLD TO INTERVENE ON BEHALF OF THE PALESTINIANS – ILANA HAMMERMAN

THE BIGGEST PRISON ON EARTH. THE HISTORY OF THE OCCUPIED TERRITORIES – ILAN PAPPE

A STATE BUILT ON MURDER. RISE AND KILL FIRST: THE SECRET HISTORY OF ISRAEL’S TARGETED ASSASSINATIONS – RONEN BERGMAN

‘DON’T BE AFRAID TO SHOOT’: A FORMER ISRAELI SOLDIER’S ACCOUNT OF GAZA – MATTHEW HALL

YOU’RE WELCOME, LOVE, INTROVERTS. Things You Need to Know If Your Child is an Introvert * The Secret Lives of Introverts – Jenn Granneman.

Why was my idea of a good time so different from what other people wanted to do? I was broken. I had to be.

“Don’t just accept your child for who she is; treasure her for who she is. The more you embrace your child’s introverted nature, the happier they will be.”

Introversion is your temperament. It takes years to build a personality, but your temperament is something you’re born with.

Introverts tend to avoid small talk. We’d rather talk about something meaningful than fill the air with chatter just to hear ourselves make noise. We find small talk inauthentic, and, frankly, many of us feel awkward doing it.

This is a book about secrets. It’s about seeing what’s really going on with introverts. It’s about finally feeling understood.

If it weren’t for introverts and our amazing ability to focus, we wouldn’t have the theory of relativity, Google, or Harry Potter (yes, Einstein, Larry Page, and J. K. Rowling are all likely introverts). Dear society, where would you be without us? You’re welcome. Love, introverts.

You’re confused by your kid. She doesn’t act the way you did growing up. She’s hesitant and reserved. Instead of diving in to play, she’d rather stand back and watch the other kids. She talks to you in fits and starts: sometimes she rambles on, telling you stories, but other times she’s silent, and you can’t figure out what’s going on in her head. She spends a lot of time alone in her bedroom. Her teachers say they wish she’d participate more in class. Her social life is limited to two people.

Even weirder, she seems totally okay with that.

Congratulations: You’ve got an introvert.

It’s not unusual for extroverted parents to worry about their introverted children and even wonder if their behavior is healthy. (Disclaimer: children can suffer from anxiety and depression, just as adults can. It’s important to be aware of the symptoms of childhood depression; sometimes withdrawal from others and low energy signal something quite different from introversion.)

However, many introverted children are not depressed or anxious at all. They behave the way they do because of their innate temperament; being an introvert is genetic, and it’s not going to change. The more you embrace your child’s natural introverted personality, the happier they will be.

Here are 15 things you must understand if you’re the parent of an introvert.

1. There’s nothing unusual or shameful about being an introvert.

Introverts are hardly a minority, making up 30-50 percent of the U.S. population. Some of our most successful leaders, entertainers, and entrepreneurs have been introverts, such as Bill Gates, Emma Watson, Warren Buffett, Courteney Cox, Christina Aguilera, and J.K. Rowling. It’s often suggested that even Abraham Lincoln, Mother Teresa, and Mahatma Gandhi were introverts.

2. Your child won’t stop being an introvert.

Can your child just “get over” hating raucous birthday parties? Nope. According to Dr. Marti Olsen Laney, author of The Hidden Gifts of the Introverted Child, introversion and extroversion are genetic (although parents play an important role in nurturing that temperament). Introverts’ and extroverts’ brains are also wired somewhat differently.

According to Laney, introverts’ and extroverts’ brains may use different neurotransmitter pathways, and they may favor different “sides” of their nervous system (introverts prefer the parasympathetic side, the “rest and digest” system as opposed to the sympathetic, which triggers the “fight, flight, or freeze” response). Furthermore, a study published in the Journal of Neuroscience found that introverts have larger, thicker gray matter in their prefrontal cortices, which is the area of the brain associated with abstract thought and decision-making.

So if your child tends to be more cautious and reserved than his extroverted peers, rest assured that there’s a biological reason for it.

3. They’ll warm up to new people and situations slowly, and that’s okay.

Introverts often feel overwhelmed or anxious in new environments and around new people. If you’re attending a social event, don’t expect your child to jump into the action and chat with other children right away. If possible, arrive early so your child can get comfortable in that space and feel like other people are entering a space he already “owns.”

Another option is to have your child stand back from the action at a comfortable distance, perhaps near you, where he feels safe, and simply watch for a few minutes. Quiet observation will help him process things.

If neither of those options is possible, discuss the event ahead of time with him, talking about who will be there, what will likely happen, how he might feel, and what he can do when he’s losing energy.

No matter what new experience you’re getting him accustomed to, remember: go slowly, but don’t not go. “Don’t let him opt out, but do respect his limits, even when they seem extreme,” writes Susan Cain about introverted children. “Inch together toward the thing he’s wary of.”

4. Socializing saps your introverted kid’s energy.

Both introverts and extroverts can feel drained by socializing, but for introverts, it’s even worse. If your child is older, teach her to excuse herself to a quieter part of the room or a different location such as the bathroom or outside. If she’s younger, she might not notice when she’s tapped out, so you’ll have to watch her for signs of fatigue: the dreaded “introvert hangover.”

5. Making friends can be nerve-wracking for introverts.

Which means, give your child positive reinforcement when he takes a social risk. Say something like, “Yesterday, I saw you talking to that new boy. I know that was hard for you, and I’m proud of what you did.”

6. But you can teach them to self-regulate their negative feelings.

Say, “You thought you were going to have a miserable time at the birthday party, but you ended up making some new friends.” With positive reinforcement like this, over time, he’ll be more likely to self-regulate the negative feelings he associates with stepping out of his comfort zone.

7. Your kid may have intense and unique interests.

Give him opportunities to pursue those interests, says Christine Fonseca, author of Quiet Kids: Help Your Introverted Child Succeed in an Extroverted World. Softball and Boy Scouts may work well for some children, but don’t forget to look off the beaten path and consider writing classes or science camps. Intense engagement in an activity can bring happiness, well-being, and confidence, but it also gives your kid opportunities to socialize with other children who have similar passions (and perhaps similar temperaments).

8. Talk to their teachers about introversion.

Some teachers mistakenly assume that introverted children don’t speak up much in class because they’re disinterested or not paying attention. On the contrary, introverted students can be quite attentive in class, but they often prefer to listen and observe rather than actively participate. (In many cases, an introverted child is “saying” all the things other kids would say, but simply doing it silently in his head, which, for an introvert, is just as engaging.)

Also, if the teacher knows about your child’s introversion, the teacher may be able to gently help him navigate things like interactions with friends, participation in group work, or presenting in class.

9. Your child may struggle to stand up for herself.

So teach her to say stop or no in a loud voice when another child tries to take her toy from her. If she’s being bullied or treated unfairly at school, encourage her to speak up to an adult or the perpetrator. “It starts with teaching introverted children that their voice is important,” Fonseca says.

10. Help your child feel heard.

Listen to your child, and ask questions to draw her out. Many introverted children and adults struggle to get the thoughts and emotions swirling inside them out to others.

Introverts “live internally, and they need someone to draw them out,” writes Dr. Laney in her book. “Without a parent who listens and reflects back to them, like an echo, what they are thinking, they can get lost in their own minds.”

11. Your child might not ask for help.

Introverts tend to internalize problems. Your child might not talk to you about her problems even when she wishes for and/or could benefit from some adult guidance. Again, ask questions and truly listen, but don’t interrogate.

12. Your child is not necessarily shy.

Shy is a word that carries a negative connotation. If your introverted child hears the word “shy” enough times, she may start to believe that her discomfort around people is a fixed trait, not a feeling she can learn to control.

Furthermore, “shy” focuses on the inhibition she experiences, and it doesn’t help her understand the true source of her quietness: her introversion.

Don’t refer to your child as “shy,” and if others do, correct them gently by saying, “Actually, she’s an introvert.”

13. Your child may only have one or two close friends, and there’s nothing wrong with that.

Introverts seek depth in relationships, not breadth. They prefer a small circle of friends and aren’t usually interested in being “popular.”

14. Your kid will need plenty of alone time; don’t take it personally.

Anything that pulls your child out of her inner world, like school, friends, or even navigating a new routine, will drain her. Don’t be hurt or think your child doesn’t enjoy being with the family when she spends time alone in her room. Most likely, once she’s recharged, she’ll want to spend time with you again.

15. Your introverted child is a treasure.

“Don’t just accept your child for who she is; treasure her for who she is,” writes Cain. “Introverted children are often kind, thoughtful, focused, and very interesting company, as long as they’re in settings that work for them.”

Psychology Today

The Secret Lives of Introverts. Inside our hidden world.

Jenn Granneman

Dear introvert,

One of my earliest memories as a little girl is my dad putting a microphone to my lips and asking me to tell a story. Okay, I thought, this should be easy. I had been telling stories to myself already, in my mind, each night before I fell asleep, even though I was too young to read or write.

I closed my eyes and imagined a horse who played with her friends in a sunny meadow. Like many introverted children, I had an inner world that was vivid and alive. The made-up story seemed almost as real as the actual world around me of toys and parents and pets. The horse and her friends were having a race to see who was the fastest. They dashed through fields of flowers and jumped over a glistening creek, when, all of a sudden, one of them started to flap her tiny, hidden wings and fly…

Suddenly, my dad interrupted my thoughts. “You have to say your story out loud,” he said, nodding to the microphone. “So I can record it.”

I looked at the microphone, then back at my dad, but I didn’t know how to respond. The things inside me had to be spoken? How could mere words describe the striking images I saw in my mind, and how they made me feel?

Sensing my hesitancy, my dad prompted again. “Just say what you’re thinking,” he said, as if that were the easiest thing in the world.

But I couldn’t. I continued to stare at my dad in silence. The secret world inside me would not come out. My dad grew impatient, probably thinking his only daughter was being stubborn, uncreative. The truth was I had no idea how to translate my inner experience into words. Somehow, I thought that with my father’s supreme intelligence, he would just know what I meant to say. But he couldn’t read my thoughts. And the microphone attached to the primitive eighties tape recorder couldn’t hear them. Eventually, he gave up and put everything away.

This would not be the last time in my life that my silence confused and frustrated someone. I would carry that feeling of disconnect between my inner world and the outer one with me for much of my life.

If you’re an introvert like me, you may have secrets inside you, too. You have thoughts that you don’t have the words to express and big ideas that no one else sees. Maybe your secret is you feel lonely even when you’re surrounded by other people. Perhaps you’re doing certain things and acting a certain way only because you think you’re supposed to. Maybe your heart longs for just one person to see the real you, and to know what’s really going on inside your head.

This is a book about secrets. It’s about seeing what’s really going on with introverts. It’s about finally feeling understood.

Thank you for joining me in this journey. If you have a secret like the one I just described, I hope you will feel less alone about it after reading this book.

Quietly yours,

Jenn

Chapter 1

THIS IS FOR ALL THE QUIET ONES

When I was in sixth grade, I was lucky enough to be scooped up by a great group of girls who would become my lifelong friends. We slept over at each other’s houses and whispered secrets in the dark. We spied on the boy who lived in the neighborhood and his friends, and giggled over who we had crushes on. We filled notebook after notebook with our dreams for the future. We even promised to reunite every Fourth of July as adults on a hill by our high school, so we would always have a place in each other’s lives.

Anyone looking at us would have thought I was just one of the girls. We did almost everything together. People even said we looked like sisters. But deep down, I felt different. I wasn’t one of them. I was other.

While they read Seventeen magazine and chatted about celebrities, I sat silently on the edges, wondering if there was life on other planets. When they were relieved that another school year was over and that summer vacation had begun, I was catapulted into a deep existential crisis about growing older. When they wanted to hang out all night, and then the next day, and then the next, I was desperately searching for an excuse to be alone. (“Mom, tell them I’m sick! Or that I have to go to church!”) In so many little ways, I was the weird one.

My friend group was the center of my teenage world. I loved them. So I did what anyone does when they feel like they are an alien dropped into this world from another planet: at times, I pretended. I kept my secret thoughts to myself. I didn’t let on when I wished I could be alone in my bedroom instead of at the mall, surrounded by people. I tried to be the person I thought I should be, fun-loving and always ready to hang out.

All that pretending got exhausting. But I did it because I thought that’s what everyone else was doing, pretending. I figured they were just a lot better at hiding their true feelings than I was.

There Must Be Something Wrong with Me

As an adult, I still couldn’t shake the feeling of being “different.” I worked as a journalist for a few years, then went back to school to become a teacher, thinking this would be more meaningful work. My graduate program was full of outgoing would-be teachers who always had something to say. They sat in little groups on breaks, bursting with energetic chatter, even after we’d just spent hours doing collaborative learning or having a group discussion.

I, on the other hand, bolted for the door on breaks as quickly as possible; my head was spinning from all the noise and activity, and my energy level was at zero. Talking in front of our class or answering a question on the spot was also no problem for them. I, however, avoided the spotlight as much as possible. Whenever I had to present a lesson plan, I felt compelled to practice exactly what I was going to say, until I got it “perfect.” Even then, I usually couldn’t keep my hands from shaking.

I had also gotten married. My husband (now ex-husband) was a confident, life-of-the-party guy who could talk to anyone. His large family was the same way. They loved spending time together in a loud gaggle of kids, siblings, and friends of the family. Often, they’d drop by our small apartment, letting me know they were coming only when they were already on their way. They’d pass hours crammed into the living room, telling stories, cracking jokes, and volleying sarcastic remarks back and forth with the professional finesse of Venus and Serena Williams.

I, once again, sat quietly on the edges, never knowing how to wedge myself into these fast-moving conversations or what to say. As the night wore on, I often found myself slipping into an exhausted brain fog, which made it even harder to participate. Most nights, what I really wanted was to read a book alone, play a video game, or just be with my husband.

When comparing myself to my extroverted in-laws and classmates, I never seemed to measure up. My disparaging thoughts returned. Why couldn’t I just loosen up and go with the flow? Why did I never have much to say when I was in a big group but had plenty to talk about during a one-on-one? Why was my idea of a good time so different from what other people wanted to do?

I was broken. I had to be.

Things didn’t look like they would ever get better. At one point, I had a complete breakdown. I found myself awake in the middle of the night, frantically crying, typing everything that was wrong with me and my life into a Word document. I just couldn’t take it anymore. I was too different, too messed up. The world was too much, too loud, too harsh. I think finally expressing all the secret feelings that had built up inside me, in a raw, unfiltered way, saved me. When I reread what I had written, I realized I couldn’t keep living this way.

Somehow, I made it through that terrible night. Soon after, I discovered something about myself that changed my life.

One Magic Word: Introvert

One afternoon, in the psychology/self-help section of a used bookstore, I came across a book called The Introvert Advantage by Marti Olsen Laney. I bought it and read it cover to cover. When I finished, I cried. I had never felt so understood in my life.

That beautiful book told me there was a word for what I was: introvert. It was a magic word, because it explained many of the things I had struggled with my entire life, things that had made me feel bad about myself. Best of all, the word meant I wasn’t alone. There were other people out there like me. Other introverts.

Say what you will about labeling. That little label changed my life.

I went on to read everything about introversion I could get my hands on. I read Quiet by Susan Cain, Introvert Power by Laurie Helgoe, The Introvert’s Way by Sophia Dembling, and others.

I became interested in personality type and high sensitivity, too. Turns out I’m not just an introvert but also a highly sensitive person (but I’ll leave that topic for another time). After reading dozens of books about introversion, I turned to the Internet. I joined Facebook groups for introverts and pored over blogs. My friends got sick of me constantly talking about introversion: “Did you know it’s an introvert thing to need time to think before responding?” I’d say, or, “I can’t go out tonight, it’s introvert time.”

I couldn’t shut up about being an introvert. It was like I had been reading the wrong script my entire life, trying to play the role of the person I thought I should be, not the person I truly was.

Don’t get me wrong. Learning about my introversion didn’t fix all my problems. It would take several years of hard, inner work, along with consciously deciding to make real changes in my life, before things got better. But for me, embracing my introversion, and stopping myself from trying to pretend to be an extrovert, was the first step.

As I learned more about introversion, I became more confident in who I was. I started accepting my need for alone time. I saw my quiet, reflective nature as a strength, not a liability. I also started working on my social skills, seeing them as simply that, skills I could improve and use to my advantage. But most important, for the first time in my life, I started to actually like myself.

I was no longer an other. I was something else: an introvert.

Now I’m on a Mission

Today, I’m the voice behind Introvert, Dear, the popular online community for introverts. I never set out to be an advocate for introverts, but, when something changes your life, you want to tell other people about it. I started Introvert, Dear as my personal blog in 2013. At the time, I was working as a teacher, living with roommates, and truly dating for the first time in my adult life. I decided I would chronicle my life as an introvert living in a society that seems geared toward extroverts. I kept my blog anonymous so I could write whatever I wanted without fearing what other people would think (so very introverted of me). For my bio, I used a picture of just my shoulder that showed off a tattoo of five birds I had just gotten. My face was mostly hidden.

Staring at my computer screen, alone in my bedroom one night, I named my little blog Introvert, Dear. I imagined a wise, older introverted woman counseling a younger introverted woman. The young woman was lying on a chaise lounge, and the older woman was sitting in a chair nearby, the kind of setup you see in movies when someone goes to a therapist. The older one began her advice to the younger one by saying, “Now, introvert, dear…”

The first blog post I wrote got more comments about my tattoo than anything actually related to what I’d written. But I kept writing, mostly just for myself. And people kept reading. I didn’t know it then, but Introvert, Dear was another step in my journey toward healing. Once again, expressing myself honestly relieved some of the pain I was feeling. And connecting with other introverts made me feel less self-conscious about my “weird” ways.

Today, Introvert, Dear is less of a blog and more of an online publishing platform. It features not just my voice, but hundreds of introvert voices, and it brings together introverts from all over the world. My writing about introverts has been featured in publications like the Huffington Post, Thought Catalog, Susan Cain’s Quiet Revolution, the Mighty, and others. Now I’m on a mission: to let introverts everywhere know it’s okay to be who they are. I don’t ever want another introvert to feel the way I did when I was younger.

Are You an Introvert?

What about you? Have you always felt different? Were you the quiet one in school? Did people ask you, “Why don’t you talk more?” Do they still ask you that today?

If so, you might be an introvert like me. Introverts make up 30 to 50 percent of the population, and we help shape the world we live in. We might be your parent, friend, spouse, significant other, child, or coworker. We lead, create, educate, innovate, do business, solve problems, charm, heal, and love.

Introversion is a temperament, which is different from your personality; temperament refers to your inborn traits that organize how you approach the world, while personality can be defined as the pattern of behavior, thoughts, and emotions that make you an individual. It can take years to build a personality, but your temperament is something you’re born with.

But the most important thing to know about being an introvert is that there’s nothing wrong with you. You’re not broken because you’re quiet. It’s okay to stay home on a Friday night instead of going to a party. Being an introvert is a perfectly normal “thing” to be.

Are you an introvert? Here are twenty-two signs that you might veer toward introversion on the spectrum. How many do you relate to? These signs may not apply to every introvert, but I believe they are generally true:

You enjoy spending time alone. You have no problem staying home on a Saturday night. In fact, you look forward to it. To you, Netflix and chill really means watching Netflix and relaxing. Or maybe your thing is reading, playing video games, drawing, cooking, writing, knitting tiny hats for cats, or just lounging around the house. Whatever your preferred solo activity is, you do it as much as your schedule allows. You feel good when you’re alone. In your alone time, you’re free.

You do your best thinking when you’re alone. Your alone time isn’t just about indulging in your favorite hobbies. It’s about giving your mind time to decompress. When you’re with other people, it may feel like your brain is too overloaded to really work the way it should. In solitude, you’re free to tune into your own inner monologue, rather than paying attention to what’s going on around you. You might be more creative and/or have deeper insights when you’re alone.

Your inner monologue never stops. You have a distinct inner voice that’s always running in the back of your mind. If people could hear the thoughts that run through your head, they might be surprised, amazed, and perhaps horrified. Whatever their reaction might be, your inner narrator is something that’s hard to shut off. Sometimes you can’t sleep at night because your mind is still going. Thoughts from your past haunt you. “I can’t believe I said that stupid thing five years ago!”

You often feel lonelier in a crowd than when you’re alone. There’s something about being with a group that makes you feel disconnected from yourself. Maybe it’s because it’s hard to hear your inner voice when there’s so much noise around you. Or maybe you feel like an other, like I did. Whatever the reason, as an introvert, you crave intimate moments and deep connections, and those usually aren’t found in a crowd.

You feel like you’re faking it when you have to network. Walking up to strangers and introducing yourself? You’d rather stick tiny needles under your fingernails. But you know there’s value in it, so you might do it anyway, except you feel like a phony the entire time. If you’re anything like me, you had to teach yourself how to do it. You might have read self-help books about how to be a better conversationalist or exude more charisma. In the moment, you have to activate your “public persona.” You might say things to yourself like, “Smile, make eye contact, and use your loud confident voice!” Then, when you’re finished, you feel beat, and you need downtime to recover. You wonder, Does everyone else have to try this hard when meeting new people?

You’re not the student shooting your hand up every time the teacher asks a question. You don’t need all that attention. You’re content just knowing that you know the answer; you don’t have to prove it to anyone else. At work, this may translate to not saying much during meetings. You’d rather pull your boss aside afterward and have a one-on-one conversation, or email your ideas, rather than explain them to a room full of people.

The exception to this is when you feel truly passionate about something. On rare occasions, even shy introverts have been known to transform themselves into a force to be reckoned with when it really counts. It’s all about how much something matters to you; you’ll risk overstimulation when you think speaking up will truly make a difference.

You’re better at writing your thoughts than speaking them. You prefer texting to calling and emailing to face-to-face meetings. Writing gives you time to reflect on what to say and how to say it. It allows you to edit your thoughts and craft your message just so. Plus, there’s less pressure when you’re typing your words into your phone alone than when you’re saying them to someone in real time.

But it isn’t just about texting and emailing. Many introverts enjoy journaling for self-expression and self-discovery. Others make a career out of writing, such as John Green, author of the bestselling young adult novel, The Fault in Our Stars. In his YouTube video, “Thoughts from Places: The Tour,” Green says, “Writing is something you do alone. It’s a profession for introverts who want to tell you a story but don’t want to make eye contact while doing it.”

Likewise, talking on the phone does not sound like a fun way to pass the time. One of my extroverted friends is always calling me when she’s alone in her car. She figures that although her eyes, hands, and feet are currently occupied, her mouth is not. Plus, there are no people around. How boring! So she reaches for her phone. (Remember to practice safe driving, kids.) However, this is not the case for me. When I have a few spare minutes of silence and solitude, I have no desire to fill that time with idle chitchat.

You’d rather not engage with people who are angry. Psychologist Marta Ponari and collaborators found that people high in introversion don’t show what’s called the “gaze-cueing effect.” Normally, if you were to view the image of a person’s face on a computer screen looking in a certain direction, you would follow that person’s gaze; therefore, you’d respond more quickly to a visual target on that side of the screen than when the person’s gaze and the target point in opposite directions. Introverts and extroverts both do this, with one exception: if the person seems mad, introverts don’t show the gaze-cueing effect. This suggests that people who are very introverted don’t want to look at someone who seems angry. Ponari and her team think that this is because they are more sensitive to potentially negative evaluations. Meaning, if you think a person is mad because of something related to you, even their gaze becomes a threat.

You avoid small talk whenever possible. When a coworker is walking down the hall toward you, have you ever turned into another room in order to avoid having a “Hey, what’s up?” conversation with them? Or have you ever waited a few minutes in your apartment when you heard your neighbors in the hallway so you didn’t have to chat? If so, you might be an introvert, because introverts tend to avoid small talk. We’d rather talk about something meaningful than fill the air with chatter just to hear ourselves make noise. We find small talk inauthentic, and, frankly, many of us feel awkward doing it.

You’ve been told you’re “too intense.” This stems from your dislike of small talk. If it were up to you, mindless chitchat would be banished. You’d much rather sit down with someone and discuss the meaning of life, or, at the very least, exchange some real, honest thoughts. Have you ever had a deep conversation and walked away feeling energized, not drained? That’s what I’m talking about. Meaningful interactions are the introvert’s antidote to social burnout.

You don’t go to parties to meet new people. Birthday parties, wedding receptions, staff holiday parties, or whatever, you party every once in a while. But when you go to an event, you probably don’t go with the goal of making new friends; you’d rather hang out with the people you already know. That’s because, like a pair of well-worn sneakers, your current friends feel good on you. They know your quirks, and you feel comfortable around them. Plus, making new friends would mean making small talk.

You shut down after too much socializing. A study from Finnish researchers Sointu Leikas and Ville-Juhani Ilmarinen shows that socializing eventually becomes tiring to both introverts and extroverts. That’s likely because socializing expends energy. Not only do you have to talk, but you also have to listen and process what’s being said. Plus, you’re taking in all kinds of sensory information, such as someone’s tone of voice and body language, along with filtering out any background noises or visual distractions. It’s no wonder people get drained.

But there are some very real differences between introverts and extroverts; on average, introverts really do prefer solitude and quiet more than their extroverted counterparts. In fact, if you’re an introvert, you might experience something that’s been dubbed the “introvert hangover.” Like a hangover induced by one too many giant fishbowl margaritas, you feel sluggish and icky after too much socializing. Your brain seems to stop working, and, in your exhaustion, you cease to be able to hold a conversation or say words that make sense. You just want to lie down in a quiet, dark room and not move or talk for a while. That’s because introverts can become overstimulated by socializing and shut down (more about the introvert hangover later).

You notice details that others miss. It’s true that introverts (especially highly sensitive introverts) can get overwhelmed by too much stimulation. But there’s an upside to our sensitivity: we notice details that others might miss. For example, you might notice a subtle change in your friend’s demeanor signaling that she’s upset (but oddly, no one else in the room sees it). Or, you might be highly tuned in to color, space, and texture, making you an incredible visual artist.

You can concentrate for long periods of time on things that matter to you. I can write for hours. I get in the zone, and I just keep going. I don’t need anyone or anything else to entertain me; as I write, I enter a state of flow. I block out distractions and home in on what I need to accomplish. If you’re an introvert, you likely have activities or pet projects that you could work on for practically forever. That’s because introverts are great at focusing alone for long periods of time. If it weren’t for introverts and our amazing ability to focus, we wouldn’t have the theory of relativity, Google, or Harry Potter (yes, Einstein, Larry Page, and J. K. Rowling are all likely introverts). Dear society, where would you be without us? You’re welcome. Love, introverts.

You live in your head. In fact, you may daydream so much that people have told you to “get out of your head” or “come down to earth.” That’s because your inner world is rich and vivid. Not all introverts have strong imaginations (that trait is correlated with “openness to experience” on the Big Five personality scale, not “extroversion-introversion”), but many of us do.

. . .

from

The Secret Lives of Introverts. Inside our hidden world

by Jenn Granneman

get it at Amazon.com

Also on TPPA = CRISIS

QUIET. THE POWER OF INTROVERTS IN A WORLD THAT CAN’T STOP TALKING – SUSAN CAIN

ALONE. The Badass Psychology of People Who Like Being Alone – Bella DePaulo, Ph.D.

THE HANDBOOK OF SOLITUDE. PSYCHOLOGICAL PERSPECTIVES ON SOCIAL ISOLATION, SOCIAL WITHDRAWAL, AND BEING ALONE

IF YOU LIKE BEING ALONE YOU HAVE THESE 5 AMAZING TRAITS

HOW TO BE ALONE – SARA MAITLAND

SUPER-AGERS AND THE MYSTERY OF THEIR SUCCESS – Adam Piore – Four Strategies for Aging Well.

“What’s emerged is how much our mental filter, how we see the world, determines our reality and how much we will suffer when we find ourselves in difficult situations in life.”

While extended health span is feasible and already unfolding for many of those with higher education, so far there are very slim gains in health span for minorities and those with strained socioeconomic resources.

Many people can think of an older person who has had a profound influence on them. It’s because of the brains of elders. They are more pro-social, more likely to give to people in need than younger people. This is not a huge surprise but we’re now able to think of the biology of this. We really need our elders.

It was the kind of case no traditional medical textbook could explain. The subject, let’s call him Peter Green, was a white male in his late 80s, enrolled in longitudinal studies of the elderly at the UCSF Memory and Aging Center. Green’s brain scans “were not pretty,” recalls Joel Kramer, PsyD, who directs the center’s neuropsychology program. His brain had begun to atrophy, and its white matter, composed of long bundles of nerve cells that carry signals from one area to another, was shot through with dead patches, suggesting that Green had suffered the kind of ministrokes often associated with cognitive decline.

Yet by all behavioral measures, Green was thriving. His cognitive test scores were impeccable and his ability to function in the world remained high.

“If you look at his cognition and level of functioning, it not only remains high, it hasn’t changed at all in years,” Kramer says. What was it about Green, Kramer wondered, that set him apart from his peers with similar brain scans, who seemed to have been waylaid by the ravages of time?

When Kramer finally met the study subject in person, the neurologist was struck by Green’s dynamism and sunny outlook on life. He told Kramer he volunteered in the community, was constantly busy with projects and organizations, and remained close to his family. He shared how grateful he was for what he had and really seemed to be enjoying his golden years.

“He talked about how his attitude toward life is one of embracing it, not getting stressed out by the little things, and valuing the importance of relationships,” Kramer says. “I was so impressed. It was inspiring.”

Kramer has a name for people like this vigorous, dynamic octogenarian: “super-agers.” In recent years, he’s become increasingly fascinated by their qualities and has set out to solve the mystery of their success.

“There are some suggestions that people who are more optimistic age better than people who aren’t,” Kramer says, pointing to Peter Green as Exhibit A. “We’re just starting to look at these personality traits and how they influence aging.”

For decades, those studying the science of aging have devoted most of their time to trying to understand what goes wrong as we get older, what risk factors predispose us to disease, and how we might better diagnose and treat it. But in recent years, a growing number of researchers at UCSF and elsewhere have turned their attention to a separate but related series of questions: What is it that allows some older people to thrive? What is there to learn from the most resilient and functional senior citizens among us? And how might we apply that knowledge to everyone else to promote healthy aging?

Though the approaches UCSF researchers are taking to answer these questions vary, from studying large cohorts of elderly patients, to measuring telomeres, to analyzing components in the blood of variously aged mice, many of them have begun to converge on an optimistic conclusion.

“As we get older, when we see declines in memory and other skills, people tend to think that’s part of normal aging,” Kramer says. “It’s not. It doesn’t have to be that way.”

Stress Can Make Us Older

Elissa Epel, PhD, a professor of psychology who co-directs the UCSF Aging, Metabolism, and Emotions Center, believes one’s chronological age and biological age do not always align. She is trying to understand what makes some of us more resilient than others, and one of the answers seems to be stress.

“The biology of aging and the biology of stress are intimate friends, and they talk to each other and influence each other,” she says. “The greater the feelings of chronic stress, the greater the signs of aging in cells.”

Epel is studying participants under almost constant stress: family members who are caring for a child with a chronic condition or a spouse with dementia. As one proxy for biological age, Epel monitors the length of individuals’ telomeres, or caps on the ends of chromosomes, which shorten as we get older.

When our telomeres get too short, our cells are no longer able to divide. It becomes harder for our bodies to replenish tissues, and our chances of developing chronic diseases increase, Epel explains. Short telomeres in midlife predict an early onset of cardiovascular disease, diabetes, dementia, some cancers, and many other diseases often associated with aging.

Chronic stress, she and others have found, can lead to a buildup of proinflammatory factors called cytokines, which mobilize our immune system to release a series of chemicals that, though important in fighting infection, can over time harm the body’s own cells. Chronic stress can impair mitochondria, the energy centers of our cells, accelerate the epigenetic clock (a measure of cellular age based on the methylation patterns of genes), and prematurely shorten our chromosomes’ telomeres.

But Epel has found that there are things we can do to counteract the toxic effects of stress and slow down the aging process.

“The big story is that there are so many differences among caregivers in the way that they’re responding to their life situation,” Epel says. “What’s emerged is how much our mental filter, how we see the world, determines our reality and how much we will suffer when we find ourselves in difficult situations in life.”

It’s possible to modify that filter through consciously cultivating gratitude and a mindful response to stress, Epel says. This sounds much like the mindset of the “super-ager” that Kramer has observed. Social support is one of the largest factors protecting us from stress. Caregivers who have a greater number of positive emotional connections appear to be protected from much of the damage caused by stress. In addition, meditation, exercise, and an anti-inflammatory diet can reduce and possibly reverse some effects of aging.

“While extreme biohacks are super interesting, most of them are probably not feasible and not healthy in the long run,” she says. “But lifestyle interventions are a form of biohacking that is feasible, safe, and reliable. Our biological aging is more under our control than we think. If we can make small changes and maintain them over years and years, our cells will be listening and maintaining their resiliency and health.”

She adds that context also plays a role. Culture and environment at home, work, and in neighborhoods are important components in the ability of individuals to maintain lifestyle interventions over the long run. She notes that while extended health span is feasible and already unfolding for many of those with higher education, so far there are very slim gains in health span for minorities and those with strained socioeconomic resources.

UCSF is working to modify the culture in ways that support such healthy changes on campus, she notes, pointing to the Stress Free UC program, a daily meditation app that is free to any UC staff member.

Aging and Youth Are Literally in Our Blood

While Epel is zooming out to explore how the mind-body connection might promote healthy aging, UC San Francisco’s Saul Villeda, PhD, is zooming in, examining how microscopic, cellular messages that travel through our bloodstream might impact geriatric health.

Villeda, an assistant professor of anatomy, oversees a group of 12 researchers looking into mechanisms of brain aging and rejuvenation. His experiments sound a little like science fiction. In 2014, Villeda published a study in Nature Medicine showing that infusing the blood of young mice into older mice could significantly reverse signs of age-related cognitive decline; that is, geriatric mice infused with young mouse plasma were better able both to recall the way through a maze and to find a specific location. Conversely, younger mice injected with older blood experienced accelerated symptoms of aging.

What is it about young blood that can have such a profound effect? Using a method known as parabiosis, in which the circulatory systems of an older and a younger mouse are joined, Villeda found that the young blood caused the number of stem cells in the brains of older mice to increase and the number of neural connections to spike by 20 percent.

Earlier this year, he published a study demonstrating that infusing the young blood also caused a spike in an enzyme called TET2 in areas of the brain associated with learning and memory. The research team, led by one of Villeda’s postdocs, Geraldine Gontier, PhD, demonstrated not only that TET2 levels decline with age but that restoring the enzyme to youthful levels improved memory in healthy adult mice.

The stimulatory effect of young blood, Villeda says, likely results from a handful of factors acting together. He also points to another factor that seems to play a role in the magical properties of young blood: a protein called metalloproteinase, which is involved in remodeling the structural components that hold our cells together and give them their shape.

Meanwhile, Villeda has also isolated factors in old blood that accelerate aging. Blood from mice at the equivalent of 65 human years old contains cellular signaling agents that he says promote inflammation. These agents play what he calls a “huge role” not just in cognitive declines but also in muscle and immune-related deterioration, results that are consistent with those found by Epel.

By continuing to decode these cellular components, Villeda believes we may someday be able to harness what he and others are learning in order to create new medicines that, rather than targeting single diseases, target some of the underlying factors that cause diseases of aging in general.

This idea, of making therapies that treat aging in the same way we treat other diseases, says Villeda, is becoming “more mainstream.”

“We don’t think of aging as final anymore. We’re basically maintaining a youthful state for longer.” Even 15 years ago, Villeda continues, “if you told someone, ‘I can keep you healthy until you’re 85 and you won’t get cardiovascular disease or Alzheimer’s, and all you have to do is take this pill,’ people would probably have been looking at you a little strange.”

But attitudes have begun to change. “If you tell them, ‘We understand the molecular mechanisms that are driving certain aspects of aging, and we can target them,’” he says, “it becomes much more understandable to people.”

There Is Still More to Learn

Joel Kramer has been following some of his “super-agers” for more than a decade. They now number in the dozens and are part of a far larger cohort of subjects ranging in age from 60 to 95.

At least every two years, each subject comes in to answer questions about their lifestyle and to undergo a battery of tests of their cognitive function, blood composition, brain volume, and a wide array of other factors associated with aging and their ability to function in the world.

The study continues to produce reams of data, much of which Kramer and his colleagues have barely begun to analyze.

But a complicated picture has started to emerge, one highlighting multiple factors that interact to affect our ability to function.

In March 2017, Kramer and his colleagues published the first of many planned studies exploring some of the characteristics that seem to be associated with cognitive and functional performance. They compared 17 “resilient agers,” who exhibited fast cognitive processing speeds, to 56 “average agers” and 47 “sub-agers,” whose cognitive processing speeds appeared to be slowing down.

Just as Epel and Villeda predicted, the resilient agers had lower levels of proinflammatory cytokines than the sub-agers. Anatomical differences may have also played a role in the differences among the cohorts. For example, the starting size of the brain’s corpus callosum, a thick band of nerve fibers connecting the two sides of the brain, was larger in resilient agers than in sub-agers.

The lower levels of inflammation might be attributable in part to lifestyle choices, especially since this group self-reported higher levels of exercise.

In a study currently under review for publication, Kramer and his team found that the brains of those who ate a healthy Mediterranean-style diet were less likely to contain large amounts of a protein associated with Alzheimer’s. One of his colleagues has found evidence that higher levels of mental activity are correlated with a growth in the connections between brain cells and with better cognitive processing speeds. Others suggest that sleep plays a crucial role in healthy aging.

“There’s definitely a genetic component, which is very big,” notes Kramer. “But these are all little hints that there are things we can do to improve our chances of better brain aging.”

The paradigm shifts emerging from the new science are already beginning to have an impact in the clinic.

Bruce Miller, MD, the Clausen Distinguished Professor of Neurology and director of UCSF’s Memory and Aging Center, is collaborating with Kramer on the healthy aging study. Miller, Kramer, Epel, and Villeda are all members of the UCSF Weill Institute for Neurosciences as well. Miller notes that when he first arrived at UCSF in 1998, the field in general was “very nihilistic.” Age-associated decline was seen as inevitable. Since then, however, that assumption has changed.

“I think imaging in particular has advanced in a way to allow us to do these sorts of studies that we never could have done before and say, ‘Wow, we now have these really clear biological markers in elderly populations, so we can now think about whether they’re changing when we intervene.’”

The evidence is convincing that cardiovascular health, exercise, and low-fat diets can all make a positive difference, he says.

Kramer notes there’s still more work to be done, however. “We clearly just started doing this,” he says, but then adds that the study is already having an impact on at least one person: himself. “Having contact with so many of our older subjects who have maintained good brain health has really inspired me,” Kramer says. “Even just the simple fact that they exist is inspiring. It’s an exciting time.”

Four Strategies for Aging Well

1. Embrace Aging

Many of us experience a better balance between positive and negative emotions as we age.

When we’re older, we seek positive situations in our life much more and cut out things we don’t like. We take more control of our environment.

What’s more, the wisdom that often comes with age may be related to structural changes in older brains. Bruce Miller points to recent work showing that brain circuits involved in altruism, wisdom, and thinking about other people are shaped based on the cumulative experiences of our lives. One’s ability to consciously control emotions improves as this circuitry increases.

This is why so many people can think of an older person who has had a profound influence on them. It’s because of the brains of elders: as we age, we become more pro-social, and older adults are more likely than younger people to give to those in need. This is not a huge surprise, but we are now able to think about the biology behind it. We really need our elders.

2. Quit the Negativity

Negativity and fear associated with aging often overshadow the positive aspects of growing older. Ironically, this fact can have its own damaging consequences.

We hold these tremendously negative stereotypes about aging, and these start from when we’re really young. By the time we’re older, these are actually having a negative effect on our health.

When we believe that aging means we’re going to be suffering and frail and dependent, we don’t heal as quickly when we break a hip. We’re more likely to get dementia, regardless of whether we have the gene associated with Alzheimer’s. And we don’t live as long.

The most obvious explanation is that it’s a self-fulfilling prophecy: When we harbor the belief that we can’t control our rate of aging, we develop a fatalistic attitude and engage in fewer healthy behaviors.

But there may be something even more insidious at work. Studies show that negative attitudes about aging can actually cause us to become more stress reactive and less stress resistant, triggering biochemical cascades that may actually accelerate aging.

3. Move More

The positive effects of physical activity on cognitive functioning in older adults are well documented. Exercise leads to the production of more brain cells, increases cardiovascular health, and promotes a sense of well-being. It also appears to be highly correlated with cognitive processing speed.

In a 2017 study, Joel Kramer and his team showed that exercise may even exert a protective effect against cognitive decline in those carrying genes that place them at a greater risk for Alzheimer’s.

Meanwhile, in a 2018 study, a team led by Eli Puterman examined a cohort of 68 elderly individuals who were caring for family members with dementia. These caregivers were under high stress, had high levels of depressive symptoms, and had sedentary lifestyles. The study encouraged participants to exercise for 40 minutes, three to five times per week, for six months. At the end of that period, participants had lengthened their telomeres, a biomarker associated with longevity.

4. Meditate

Epel and several collaborators recruited 28 participants enrolled in a California meditation retreat to undergo extensive testing. The researchers monitored markers associated with biological age (including telomere length, gene expression, and more) and also tracked participants’ anxiety, depression, and personality traits over the course of the intensive, one-month meditation retreat.

The participants meditated for extended periods under the guidance of experienced practitioners, refrained from speaking, and were encouraged to treat all daily activities as “opportunities to attend to their ongoing mental experience with open and reflexive awareness.”

At the end of the retreat, the participants’ telomere length had increased significantly, and participants with the highest initial levels of anxiety and depression showed the most dramatic changes over the course of the study.

What’s Next?

Epel’s team, with a $1.2 million gift from the John W. Brick Foundation for Mental Health, will study how natural treatments, including mindfulness meditation, high-intensity interval training, and different breathing techniques, impact mood, health, and biological aging. At the time of publication, they are seeking women participants who could benefit from these interventions. More information and enrollment requirements are at StressResilience.net.

University of California San Francisco

Like Voting Rights? Thank a Socialist – Adam J Sacks.

As voting rights increasingly come under attack, we shouldn’t forget the crucial role that early socialists played in fighting for universal suffrage.

Stolen elections, decrepit voting infrastructure, draconian ID laws. The recent attacks on voting rights in the US might seem like an outgrowth of pure partisanship, the desperation of a minoritarian party using any means necessary to hold onto political power. But the GOP’s brazen attempts to restrict voting access (particularly for African Americans) should also be viewed as symptoms of a disease that has long afflicted elites: recalcitrant opposition to democracy, including the right to vote.

Since the advent of the modern state, ruling classes have tried to restrain the voting power of workers and those not “well born.” Contrary to the mainstream story that capitalism naturally gave rise to democracy, establishment powers in nineteenth-century Europe restricted the vote for as long as they possibly could. Only when faced with mass mobilization, or when continent-wide war wiped out working-class males en masse, was it clear that the franchise could no longer be withheld.

The particulars of individual European countries varied. In some nations, following intense struggles, workers won limited forms of universal male suffrage before World War I. More commonly, broad suffrage rights appeared only after the war.

But what was consistent were the actors pushing for universal suffrage: trade unions and, crucially, socialist parties. In fact, what has been called the “democratic breakthrough” of the nineteenth century could easily be called the “socialist breakthrough.”

Belgium

On August 10, 1890, seventy-five thousand men and women took to the streets of Brussels to demonstrate for universal suffrage. Like all other putatively democratic nations of the time, Belgium limited the right to vote to male property owners. Workers were entirely shut out of the country’s political life. Over the next twenty-five years, that would change, but not until a series of general strikes convulsed the country and World War I ripped it to shreds.

In 1890, the year of the first general strike, ruling elites worried that conferring the vote on the working class would give the ascendant socialist movement a battering ram to bludgeon their autocratic citadel. Though founded just five years earlier, the Parti Ouvrier, like its sister parties in the Second International, was steadily growing, fusing workers together into a powerful, coherent political bloc. Party leaders hoped they could pursue a patient reformist course, winning trade union and suffrage rights without resorting to a revolutionary strategy of mass strikes.

But the stubbornness of reality (the powers that be resolutely blocked pro-worker measures in parliament) and the militancy of workers forced the party’s leaders to concede that more radical action was necessary.

In 1893, following up on the mass action three years earlier, the Council of Workers declared a general strike. Mass demonstrations broke out in multiple cities, miners cut telegraph and telephone lines, and soldiers chased party leaders through the streets with bayonets drawn. From behind barricades built by miners, women chucked rocks and broken pottery at the police.

Leopold II, who reigned as the king of Belgium from 1865 to 1909.

The militant action worked. Property restrictions were abolished. The leaders of the Parti Ouvrier, including a marble worker named Louis Bertrand who helped found the party, were invited into parliament.

But progress would not occur in a straight line. The elections the next year sent shock waves through Europe when dozens of socialist deputies were elected to parliament rather than the expected handful. The party immediately went to work, drafting laws to support unions and set up disability insurance and pensions. Ruling elites, realizing their mistake, pushed through a system of “plural voting” that gave additional weight to citizens living in strongholds of the conservative Catholic Party.

So workers, often over the objections of party leaders, kept up the pressure. When the government tried to deepen inequalities in voting rights, the socialist movement again declared a strike, in 1902. This time over three hundred thousand flooded the streets.

The thrust and parry continued in the subsequent years. Catholic parties, still aided by plural voting, strengthened their majority in 1912 and attacked full universal suffrage in the legislature the following year. Socialist leaders, trying to balance the competing politics of rural miners and urban social-democratic politicians, still held out hope parliament would enact universal suffrage.

Instead, 1913 brought another general strike, the largest in Western European history. Strike funds were set up via a system of coupons, and co-ops and childcare were organized. Le Peuple, a socialist daily, published recipes for soupes communistes to cook in the communal kitchens. Art exhibitions, museum visits, and country hikes drew working-class families together, offering not just respite but cultural nourishment.

The strike didn’t achieve its aim of full and equal universal suffrage. It was only after World War I, in 1919, that plural voting finally fell, and women wouldn’t receive the right to vote until 1948.

Yet those early battles for the franchise had an enormous impact on the consciousness of other socialists around the continent: the Parti Ouvrier, Rosa Luxemburg said, had inspired the entire Second International to “speak Belgian.”

The Russian Empire

During Belgium’s 1902 general strike, the city of Louvain was the site of a frightful massacre: twelve workers eventually died after state officers opened fire. Further east, another government-led mass murder triggered a seminal general strike, the 1905 Russian Revolution.

While in late 1904 liberals and progressives had successfully pressed for workers’ insurance, the abolition of censorship, and expanded local representative government, the Russian Empire still lacked a federal parliament. In January 1905, strikes erupted in multiple cities, culminating in a peaceful march in St Petersburg of men, women, and children, singing hymns and brandishing a petition demanding an elected parliament. Troops fired on the marchers before they could reach the Winter Palace, killing upwards of one thousand.

An artistic impression of “Bloody Sunday” in St Petersburg, Russia, when unarmed demonstrators marching to present a petition to Tsar Nicholas II were shot at by the Imperial Guard in front of the Winter Palace on January 22, 1905.

Theatrical performances were spontaneously interrupted, and thousands of students and professionals struck in solidarity with the workers. The merchants’ club, hardly a redoubt of radicalism, barred its doors to guards for their involvement in the massacre.

Within a couple of weeks, half of the workers in European Russia and 93 percent of all workers in Russian-occupied Poland were out on strike. In Lodz, strikers held the provincial governor hostage in a hotel. Throughout the entire empire, the rail network ground to a halt.

Revolution was in the air. The next few months would witness the country’s first open celebration of May Day and the legendary Potemkin Mutiny off the shores of Odessa, later immortalized by filmmaker Sergei Eisenstein. And by the end of October, the tsar had reluctantly signed the manifesto that established the Duma, and extended the franchise toward universal male suffrage.

Elsewhere in the Russian Empire, radical actions for the vote had even more far-reaching consequences. A general strike in Finland in 1905 led not only to the adoption of universal male suffrage and a unicameral parliamentary system but also to women gaining the right to vote and to stand for election, making Finland the first country in Europe to do so. Over the coming decade, the country’s workers would use these expanded rights (before the strike, only 8 percent of the population could vote) to press for increasingly revolutionary reforms.

Sweden

Among American liberals, it’s popular to imagine Sweden as a social-democratic utopia, a nation where enlightened values have won out over rank selfishness. But the history of the Swedish workers movement is a testament to the tenaciousness of the country’s ruling class, including its dogged resistance to voting rights.

The political expression of the labor movement, the Swedish Social Democratic Party (SAP), formed in 1889 amid a broader surge in worker organizing. As elsewhere, those without property lacked basic political rights. The Swedish socialist movement’s goal was to first win political democracy.

In 1902, a two-day general strike for universal suffrage served as a warning shot at the stridently right-wing government. Called by the political parties and never intended to last longer than a couple of days, the strike made a strong impression on the government thanks to its level of mass support. Still, it lacked the crucial participation of the trade unions.

This would come in part with the 1909 general strike, which lasted a month and drew in almost half a million workers. The initial aim was to combat worker lockouts and wage freezes. But as the chairman of the transport workers, Charles Lindley, recalled, “In that time there was an almost unlimited faith in the general strike as the decisive means to get universal suffrage.” The economically inspired strike increasingly reflected workers’ democratic political aspirations.

Swedish policemen guard empty trams during the 1909 general strike. Wikimedia Commons.

The strike shut down all core export industries in the country, and workers attempted to spread it further. Employers responded with a standard tactic: importing strikebreakers. In one case, three unemployed Swedish workers independently organized to bomb a ship that housed strikebreakers coming from Great Britain.

As days turned into weeks, however, strike leaders were forced to retreat, faced with meager strike funds and the prospect of having to divert relief from other workers in an economic recession. Liberals began to turn on the strikers when typographers joined, seeing their participation as an attack on “freedom of speech.” Workers’ families struggled mightily with the mounting deprivation. The Swedish Employers’ Association was therefore in a position by the end to dictate terms, and it did.

But while the strike was in many ways a setback, it is universally recognized today as laying the groundwork for the democratization of Swedish society. Later that year, all men in the country, regardless of their property holdings, gained the right to vote for at least one chamber of the national parliament. Full political democracy, while distant, was now on the horizon.

The Riksdag, the Swedish parliament.

Germany

Almost two-thirds of late-nineteenth-century Germany lay within the Kingdom of Prussia, which had driven the unification of the German states in 1871. Despite the passage that year of a general, equal, and secret right to vote for all males over age twenty-five, Prussia maintained a system, dating from 1849, that divided voters into three classes based on their tax bracket.

The obviously unequal arrangement (early socialist leader Wilhelm Liebknecht referred to the Reichstag as the “fig leaf of absolutism”) created a situation in which the first class, just 4 percent of voters, carried as much electoral weight as the third class, which made up 82 percent of the eligible voting population. And there was another anti-democratic check on workers’ power: the upper chamber, the Bundesrat, could block any constitutional changes passed by the directly elected representatives of the Reichstag. The Second Reich, Marx declared, was a “police-guarded military despotism, embellished with parliamentary forms.”

Somehow, the German Social Democratic Party (SPD) flourished in spite of these adverse conditions. It was the largest socialist party on the continent, the Second International party par excellence. The SPD’s Erfurt Program, ratified in 1891, declared: “The struggle of the working class against capitalistic exploitation is of necessity a political struggle. The working class cannot carry out its economic struggle and cannot develop its economic organization without political rights.” At the top of the party’s demands: “universal, equal, and direct voting rights via secret vote for all citizens over twenty years of age, regardless of sex.”

A printing of the SPD’s seminal 1891 manifesto “The Erfurt Program.”

The country’s elites were not amused. Following the development of a country-wide strike movement, employers insisted that the kaiser both rescind the vote from all those affiliated with Social Democracy and legally limit strikes. The kaiser, showing no aversion to despotic rhetoric himself, told a group of new military recruits in Potsdam in November 1891:

“the current socialist machinations could mean that I order you to shoot down your own relatives, brothers, even parents . . . but even then you must follow my orders without any grumbling.”

The SPD patiently agitated and organized to become the largest party in the Prussian parliament by 1908. They led repeated mass demonstrations for full suffrage, which were invariably met with brutal repression.

On the eve of World War I, suffrage rights were still the province of the elite. But for their efforts, the SPD was rightfully recognized as the most consistently democratic force in prewar Germany.

Great Britain

Of all the European countries of the Second International, Great Britain had the least democratic voting system: the proportion of men over the age of twenty-one who could cast a ballot at the start of World War I was smaller than in eight of the nine countries for which full data are available.

Mass disenfranchisement was deeply rooted in the country’s political system. At the start of the nineteenth century, in an electoral system marred by extreme gerrymandering, only 4 percent of the population could vote. In the middle of the century, the pro-suffrage demonstrations of the Chartists, the first mass working-class movement in European history, were met with elite antipathy. As late as 1884, access to voting remained unequal between the towns and the countryside, and after reforms altered that undemocratic hindrance, prospective voters still had to prove a minimum payment in rent to qualify.

The ruling class simply couldn’t countenance approving a measure they thought would give “the rabble” political power: universal suffrage, in the estimation of British statesman Thomas Babington Macaulay, was “incompatible with property . . . and consequently incompatible with civilization” itself.

Arrayed against Macaulay were the working class and their burgeoning movement. The Labour Party, firmly committed to universal suffrage, agitated for political democracy and was able to wrest some concessions before World War I. In 1911, they pushed for an end to the House of Lords’ veto over legislation.

Finally, on the heels of continent-wide war, universal male suffrage was established in 1918, and women won the vote on equal terms in 1928.

The political order that, in Lenin’s words, had entrapped the working masses in a “well-equipped system of flattery, lies, and fraud” was cracking open.

Fighters for Democracy

The early socialist parties showed an unflagging commitment to universal suffrage, a commitment unmatched by any other party.

Their dedication was at once ethical and practical. On the one hand, they were determined to overturn structures of domination and inequality wherever they existed. And in the political sphere, workers were vassals, subject to the decisions of officials they had no hand in choosing.

On a more practical level, the early socialists recognized the potency of the ballot. Their fight for universal suffrage joined the political and economic struggles, transforming the vote into an object of radical tactics and revolutionary élan. It tied together the different factions of the movement in the pursuit of a tool (the vote) that workers could use as part of the broader class struggle. Their aim was to create a “true democracy,” from the bottom up, in the tradition of Marx.

Today, amid fights in the US to maintain the basic functionality of a democratic voting system, socialists mustn’t forget their historic role in struggling for political democracy. So many of even the liberal democratic parts of liberal democracy came about thanks to the battles socialists waged against the feudalistic leftovers of the Old Regime and the new capitalistic oligarchy.

Barely a century old, and at first only for males of European descent, the universal right to vote is still an infant in need of close guard. Current shadows of Jim Crow, whether in Georgia or the Dakotas, reveal the persistent threats to its existence, as well as the oligarchic and undemocratic strain that runs strong in the American republic and still hasn’t accepted universal suffrage.

We should reject faux-radical pronouncements that dismiss voting as inconsequential and, instead, meld the fight for universal suffrage with the fight for socialism and radical democracy. The vote was a historic conquest for the working class. It remains a “paper stone” in the hands of the disenfranchised.

Adam J Sacks holds an MA and PhD in history from Brown University and an MS in education from the City College of the City University of New York.

Jacobin Magazine

DEPRESSION, IT’S OUR HABITAT! Biophilia – Edward O. Wilson.

“I imagined that this place and all its treasures were mine alone and might be so forever in memory, if the bulldozer came.”

To explore and affiliate with life is a deep and complicated process in our mental development. Our existence depends on this propensity, our spirit is woven from it, hope rises on its currents.

To the degree that we come to understand other organisms, we will place a greater value on them, and on ourselves.

Everywhere I have gone, South America, Australia, New Guinea, Asia, I have thought that jungles and grasslands are the logical destinations, and towns and farmland the labyrinths that people have imposed between them sometime in the past. I cherish the green enclaves accidentally left behind.

What if humans, like animals in a zoo, become depressed when we are deprived of access to the kind of landscape we evolved in?

It’s been known for a long time that all sorts of mental health problems, including ones as severe as psychosis and schizophrenia, are considerably worse in cities than in the countryside.

Studies have clearly shown that people who move to green areas experience a big reduction in depression, and people who move away from green areas see a big increase in depression.

One of the most striking studies is perhaps the simplest. Researchers got people who lived in cities to take a walk in nature, and then tested their mood and concentration. Everyone, predictably, felt better and was able to concentrate more, but the effect was dramatically bigger for people who had been depressed: their improvement was five times greater than the improvement for other people.

Why would this be? What was going on?

We have been animals that move for a lot longer than we have been animals that talk and convey concepts, but we still think that depression can be cured by this conceptual layer. I think the first answer is more simple. Let’s fix the physiology first. Get out. Move!

The scientific evidence is clear that exercise significantly reduces depression and anxiety, because it returns us to our more natural state, one where we are embodied, we are animal, we are moving, our endorphins are rushing. Kids or adults who are not moving, and are not in nature for a certain amount of time, cannot be considered fully healthy animals.

When scientists compared people who run on treadmills in the gym with people who run in nature, they found that both groups see a reduction in depression, but it is larger for the people who run in nature. So what are the other factors?

Biologist Edward O. Wilson, one of the most important people in his field in the twentieth century, argued that all humans have a natural sense of something called biophilia: an innate love for the landscapes in which humans have lived for most of our existence, and for the natural web of life that surrounds us and makes our existence possible. Almost all animals get distressed if they are deprived of the kinds of landscape they evolved to live in. A frog can live on land, but it’ll be miserable as hell and give up.

Why would humans be the one exception to this rule? Look around us: it’s our habitat that’s making us depressed.

This is a hard concept to test scientifically, but there has been one attempt to do it. The social scientists Gordon Orians and Judith Heerwagen worked with teams all over the world, in radically different cultures, and showed them a range of pictures of very different landscapes, from the desert to the city to the savanna. What they found is that everywhere, no matter how different their culture, people had a preference for landscapes that look like the savannas of Africa. There’s something about it, they conclude, that seems to be innate.

Johann Hari

Lost Connections. Uncovering the Real Causes of Depression and the Unexpected Solutions – Johann Hari

BIOPHILIA

by Edward O. Wilson

ON MARCH 12, 1961, I stood in the Arawak village of Bernhardsdorp and looked south across the white-sand coastal forest of Surinam. For reasons that were to take me twenty years to understand, that moment was fixed with uncommon urgency in my memory. The emotions I felt were to grow more poignant at each remembrance, and in the end they changed into rational conjectures about matters that had only a distant bearing on the original event.

The object of the reflection can be summarized by a single word, biophilia, which I will be so bold as to define as the innate tendency to focus on life and lifelike processes. Let me explain it very briefly here and then develop the larger theme as I go along.

From infancy we concentrate happily on ourselves and other organisms. We learn to distinguish life from the inanimate and move toward it like moths to a porch light. Novelty and diversity are particularly esteemed; the mere mention of the word extraterrestrial evokes reveries about still unexplored life, displacing the old and once potent exotic that drew earlier generations to remote islands and jungled interiors. That much is immediately clear, but a great deal more needs to be added. I will make the case that to explore and affiliate with life is a deep and complicated process in mental development. To an extent still undervalued in philosophy and religion, our existence depends on this propensity, our spirit is woven from it, hope rises on its currents.

There is more. Modern biology has produced a genuinely new way of looking at the world that is incidentally congenial to the inner direction of biophilia. In other words, instinct is in this rare instance aligned with reason. The conclusion I draw is optimistic: to the degree that we come to understand other organisms, we will place a greater value on them, and on ourselves.

Bernhardsdorp

AT BERNHARDSDORP on an otherwise ordinary tropical morning, the sunlight bore down harshly, the air was still and humid, and life appeared withdrawn and waiting. A single thunderhead lay on the horizon, its immense anvil shape diminished by distance, an intimation of the rainy season still two or three weeks away. A footpath tunneled through the trees and lianas, pointing toward the Saramacca River and far beyond, to the Orinoco and Amazon basins. The woodland around the village struggled up from the crystalline sands of the Zanderij formation. It was a miniature archipelago of glades and creekside forest enclosed by savanna: grassland with scattered trees and high bushes. To the south it expanded to become a continuous lacework fragmenting the savanna and transforming it in turn into an archipelago. Then, as if conjured upward by some unseen force, the woodland rose by stages into the triple-canopied rain forest, the principal habitat of South America’s awesome ecological heartland.

In the village a woman walked slowly around an iron cooking pot, stirring the fire beneath with a soot-blackened machete. Plump and barefoot, about thirty years old, she wore two long pigtails and a new cotton dress in a rose floral print. From politeness, or perhaps just shyness, she gave no outward sign of recognition. I was an apparition, out of place and irrelevant, about to pass on down the footpath and out of her circle of required attention. At her feet a small child traced meanders in the dirt with a stick. The village around them was a cluster of no more than ten one-room dwellings. The walls were made of palm leaves woven into a herringbone pattern in which dark bolts zigzagged upward and to the onlooker’s right across flesh-colored squares. The design was the sole indigenous artifact on display. Bernhardsdorp was too close to Paramaribo, Surinam’s capital, with its flood of cheap manufactured products to keep the look of a real Arawak village. In culture as in name, it had yielded to the colonial Dutch.

A tame peccary watched me with beady concentration from beneath the shadowed eaves of a house. With my own taxonomist’s eye I registered the defining traits of the collared species, Dicotyles tajacu: head too large for the piglike body, fur coarse and brindled, neck circled by a pale thin stripe, snout tapered, ears erect, tail reduced to a nub. Poised on stiff little dancer’s legs, the young male seemed perpetually fierce and ready to charge yet frozen in place, like the metal boar on an ancient Gallic standard.

A note: Pigs, and presumably their close relatives the peccaries, are among the most intelligent of animals. Some biologists believe them to be brighter than dogs, roughly the rivals of elephants and porpoises. They form herds of ten to twenty members, restlessly patrolling territories of about a square mile. In certain ways they behave more like wolves and dogs than social ungulates. They recognize one another as individuals, sleep with their fur touching, and bark back and forth when on the move. The adults are organized into dominance orders in which the females are ascendant over males, the reverse of the usual mammalian arrangement. They attack in groups if cornered, their scapular fur bristling outward like porcupine quills, and can slash to the bone with sharp canine teeth. Yet individuals are easily tamed if captured as infants and their repertory stunted by the impoverishing constraints of human care.

So I felt uneasy, perhaps the word is embarrassed, in the presence of a captive individual. This young adult was a perfect anatomical specimen with only the rudiments of social behavior. But he was much more: a powerful presence, programed at birth to respond through learning steps in exactly the collared-peccary way and no other to the immemorial environment from which he had been stolen, now a mute speaker trapped inside the unnatural clearing, like a messenger to me from an unexplored world.

I stayed in the village only a few minutes. I had come to study ants and other social insects living in Surinam. No trivial task: over a hundred species of ants and termites are found within a square mile of average South American tropical forest. When all the animals in a randomly selected patch of woodland are collected together and weighed, from tapirs and parrots down to the smallest insects and roundworms, one third of the weight is found to consist of ants and termites. If you close your eyes and lay your hand on a tree trunk almost anywhere in the tropics until you feel something touch it, more times than not the crawler will be an ant. Kick open a rotting log and termites pour out. Drop a crumb of bread on the ground and within minutes ants of one kind or another drag it down a nest hole. Foraging ants are the chief predators of insects and other small animals in the tropical forest, and termites are the key animal decomposers of wood. Between them they form the conduit for a large part of the energy flowing through the forest. Sunlight to leaf to caterpillar to ant to anteater to jaguar to maggot to humus to termite to dissipated heat: such are the links that compose the great energy network around Surinam’s villages.

I carried the standard equipment of a field biologist: camera; canvas satchel containing forceps, trowel, ax, mosquito repellent, jars, vials of alcohol, and notebook; a twenty-power hand lens swinging with a reassuring tug around the neck; partly fogged eyeglasses sliding down the nose and khaki shirt plastered to the back with sweat. My attention was on the forest; it has been there all my life. I can work up some appreciation for the travel stories of Paul Theroux and other urbanophile authors who treat human settlements as virtually the whole world and the intervening natural habitats as troublesome barriers. But everywhere I have gone, South America, Australia, New Guinea, Asia, I have thought exactly the opposite. Jungles and grasslands are the logical destinations, and towns and farmland the labyrinths that people have imposed between them sometime in the past. I cherish the green enclaves accidentally left behind.

Once on a tour of Old Jerusalem, standing near the elevated site of Solomon’s Throne, I looked down across the Jericho Road to the dark olive trees of Gethsemane and wondered which native Palestinian plants and animals might still be found in the shade underneath. Thinking of “Go to the ant, thou sluggard; consider her ways,” I knelt on the cobblestones to watch harvester ants carry seeds down holes to their subterranean granaries, the same food-gathering activity that had impressed the Old Testament writer, and possibly the same species at the very same place. As I walked with my host back past the Temple Mount toward the Muslim Quarter, I made inner calculations of the number of ant species found within the city walls. There was a perfect logic to such eccentricity: the million-year history of Jerusalem is at least as compelling as its past three thousand years.

AT BERNHARDSDORP I imagined richness and order as an intensity of light. The woman, child, and peccary turned into incandescent points. Around them the village became a black disk, relatively devoid of life, its artifacts adding next to nothing. The woodland beyond was a luminous bank, sparked here and there by the moving lights of birds, mammals, and larger insects.

I walked into the forest, struck as always by the coolness of the shade beneath tropical vegetation, and continued until I came to a small glade that opened onto the sandy path. I narrowed the world down to the span of a few meters. Again I tried to compose the mental set, call it the naturalist’s trance, the hunter’s trance, by which biologists locate more elusive organisms. I imagined that this place and all its treasures were mine alone and might be so forever in memory, if the bulldozer came.

In a twist my mind came free and I was aware of the hard workings of the natural world beyond the periphery of ordinary attention, where passions lose their meaning and history is in another dimension, without people, and great events pass without record or judgment. I was a transient of no consequence in this familiar yet deeply alien world that I had come to love. The uncounted products of evolution were gathered there for purposes having nothing to do with me; their long Cenozoic history was enciphered into a genetic code I could not understand. The effect was strangely calming. Breathing and heartbeat diminished, concentration intensified. It seemed to me that something extraordinary in the forest was very close to where I stood, moving to the surface and discovery.

I focused on a few centimeters of ground and vegetation. I willed animals to materialize, and they came erratically into view. Metallic-blue mosquitoes floated down from the canopy in search of a bare patch of skin, cockroaches with variegated wings perched butterfly-like on sunlit leaves, black carpenter ants sheathed in recumbent golden hairs filed in haste through moss on a rotting log. I turned my head slightly and all of them vanished. Together they composed only an infinitesimal fraction of the life actually present. The woods were a biological maelstrom of which only the surface could be scanned by the naked eye. Within my circle of vision, millions of unseen organisms died each second. Their destruction was swift and silent; no bodies thrashed about, no blood leaked into the ground. The microscopic bodies were broken apart in clean biochemical chops by predators and scavengers, then assimilated to create millions of new organisms, each second.

Ecologists speak of “chaotic regimes” that rise from orderly processes and give rise to others in turn during the passage of life from lower to higher levels of organization. The forest was a tangled bank tumbling down to the grassland’s border. Inside it was a living sea through which I moved like a diver groping across a littered floor. But I knew that all around me bits and pieces, the individual organisms and their populations, were working with extreme precision. A few of the species were locked together in forms of symbiosis so intricate that to pull out one would bring others spiraling to extinction. Such is the consequence of adaptation by coevolution, the reciprocal genetic change of species that interact with each other through many life cycles.

Eliminate just one kind of tree out of hundreds in such a forest, and some of its pollinators, leafeaters, and woodborers will disappear with it, then various of their parasites and key predators, and perhaps a species of bat or bird that depends on its fruit, and when will the reverberations end? Perhaps not until a large part of the diversity of the forest collapses like an arch crumbling as the keystone is pulled away. More likely the effects will remain local, ending with a minor shift in the overall pattern of abundance among the numerous surviving species. In either case the effects are beyond the power of present-day ecologists to predict. It is enough to work on the assumption that all of the details matter in the end, in some unknown but vital way.

After the sun’s energy is captured by the green plants, it flows through chains of organisms dendritically, like blood spreading from the arteries into networks of microscopic capillaries. It is in such capillaries, in the life cycles of thousands of individual species, that life’s important work is done. Thus nothing in the whole system makes sense until the natural history of the constituent species becomes known. The study of every kind of organism matters, everywhere in the world. That conviction leads the field biologist to places like Surinam and the outer limits of evolution, of which this case is exemplary:

The three-toed sloth feeds on leaves high in the canopy of the lowland forests through large portions of South and Central America. Within its fur live tiny moths, the species Cryptoses choloepi, found nowhere else on Earth. When a sloth descends to the forest floor to defecate (once a week), female moths leave the fur briefly to deposit their eggs on the fresh dung. The emerging caterpillars build nests of silk and start to feed. Three weeks later they complete their development by turning into adult moths, and then fly up into the canopy in search of sloths. By living directly on the bodies of the sloths, the adult Cryptoses assure their offspring first crack at the nutrient-rich excrement and a competitive advantage over the myriad of other coprophages.

At Bernhardsdorp the sun passed behind a small cloud and the woodland darkened. For a moment all that marvelous environment was leveled and subdued. The sun came out again and shattered the vegetative surfaces into light-based niches. They included intensely lighted leaf tops and the tops of miniature canyons cutting vertically through tree bark to create shadowed depths two or three centimeters below. The light filtered down from above as it does in the sea, giving out permanently in the lowermost recesses of buttressed tree trunks and penetralia of the soil and rotting leaves. As the light’s intensity rose and fell with the transit of the sun, silverfish, beetles, spiders, bark lice, and other creatures were summoned from their sanctuaries and retreated back in alternation. They responded according to receptor thresholds built into their eyes and brains, filtering devices that differ from one kind of animal to another. By such inborn controls the species imposed a kind of prudent self-discipline. They unconsciously halted their population growth before squeezing out competitors, and others did the same. No altruism was needed to achieve this balance, only specialization. Coexistence was an incidental by-product of the Darwinian advantage that accrued from the avoidance of competition. During the long span of evolution the species divided the environment among themselves, so that now each tenuously preempted certain of the capillaries of energy flow. Through repeated genetic changes they sidestepped competitors and built elaborate defenses against the host of predator species that relentlessly tracked them through matching genetic countermoves. The result was a splendid array of specialists, including moths that live in the fur of three-toed sloths.

Now to the very heart of wonder.

Because species diversity was created prior to humanity, and because we evolved within it, we have never fathomed its limits. As a consequence, the living world is the natural domain of the most restless and paradoxical part of the human spirit. Our sense of wonder grows exponentially: the greater the knowledge, the deeper the mystery and the more we seek knowledge to create new mystery. This catalytic reaction, seemingly an inborn human trait, draws us perpetually forward in a search for new places and new life. Nature is to be mastered, but (we hope) never completely. A quiet passion burns, not for total control but for the sensation of constant advance.

At Bernhardsdorp I tried to convert this notion into a form that would satisfy a private need. My mind maneuvered through an unending world suited to the naturalist. I looked in reverie down the path through the savanna woodland and imagined walking to the Saramacca River and beyond, over the horizon, into a timeless reconnaissance through virgin forests to the land of magical names, Yékwana, Jivaro, Sirioné, Tapirapé, Siona-Secoya, Yumana, back and forth, never to run out of fresh jungle paths and glades.

The same archetypal image has been shared in variations by others, and most vividly during the colonization of the New World. It comes through clearly as the receding valleys and frontier trails of nineteenth-century landscape art in the paintings of Albert Bierstadt, Frederic Edwin Church, Thomas Cole, and their contemporaries during the crossing of the American West and the innermost reaches of South America.

In Bierstadt’s Sunset in Yosemite Valley (1868), you look down a slope that eases onto the level valley floor, where a river flows quietly away through waist-high grass, thickets, and scattered trees. The sun is near the horizon. Its dying light, washing the surface in reddish gold, has begun to yield to blackish green shadows along the near side of the valley. A cloud bank has lowered to just beneath the tops of the sheer rock walls. More protective than threatening, it has transformed the valley into a tunnel opening out through the far end into a sweep of land.

. . .

from

Biophilia. The human bond with other species

by Edward O. Wilson

get it at Amazon.com

BEING THE BLACK SHEEP. Coping with a Marginalizing Family – Vinita Mehta Ph.D., Ed.M. * The communicative process of resilience for marginalized family members – Elizabeth Dorrance Hall.

“Basically I don’t have a family now. I only see them once a year and that’s mostly so they don’t bother me for the rest of the year. I don’t talk to them . . . My mother wants more of a relationship but I don’t.”

Rejection engenders profound consequences.

Many families are a wellspring of belongingness. But this isn’t the case for the Black Sheep, who are all too often cast away or disapproved of by their family members. Family members who perceive they are marginalized experience chronic stress associated with their position in the family.

New research investigates how marginalized family members remain resilient.

The holidays are a tough time of the year for many, potentially triggering both old and new family dramas. But when you’re the Black Sheep, it can be particularly difficult to engage with family members. For those who must contend with this station in life, feeling left out and put down can intensify during this time.

How does the Black Sheep of the family cope with their predicament? This was the focus of a study conducted by Elizabeth Dorrance Hall of Utah State University.

Human beings are wired to connect and bond and to belong. This means having positive experiences, over time, with others whom we feel are caring and close. When the fundamental need to belong is not filled, it can lead to a range of conditions, including depression, anxiety, loneliness, and jealousy. For many, families are a wellspring of belongingness. But this isn’t the case for the Black Sheep, who are all too often cast away or disapproved of by their family members.

Hall describes being the Black Sheep of the family as a form of marginalization. People who are “on the margins” live on the edge of a group or society. They suffer from rejection, and have virtually no voice or influence on the group. Branded as deviant, they feel a strong need to make both a psychological and physical break from the group. This is difficult enough to contend with in the larger society, but when a person is deemed an outcast by one’s own family, Hall writes, it can lead to a disintegration of identity. What’s more, rejection engenders profound consequences, ranging from aggressiveness and diminished intellectual functioning to detachment and emotional numbness.

Marginalized family members have a unique set of circumstances with which to cope, Hall writes. Though the process of marginalization happens over time, there are often “turning point” events, like coming out, that mark faltering shifts in a member’s standing. Black Sheep may also be experiencing a form of ambiguous loss, involving a physical presence but psychological absence at family events. Moreover, marginalized family members have low status in their families, which creates a need for coping strategies. Taken together, and unsurprisingly, being the Black Sheep is a deeply painful experience.

In order to better understand how the Black Sheep of families remain resilient in spite of it all, here’s what Hall did. She recruited 30 marginalized family members who identified themselves as different, excluded, not accepted, or not as well liked as other members in their family. Participants were limited to those between the ages of 25 and 35 so that their experiences with their families were recent and relevant. They also had to report having “chronic feelings of marginalization,” in which they felt “different, not included, or not approved of . . . by multiple family members.” Participants were then interviewed, and their narratives were coded and examined.

What did Hall find? Participants’ interviews yielded five coping strategies:

1. Seeking support from “communication networks”.

Black Sheep found social support from others via two major routes. First, they elected to invest in relationships with family members whom they felt were genuine, loving, and inclusive. For some participants, siblings were the antagonizing source of their distress, but many found that siblings, as well as extended family members, provided much-needed support. One participant, for example, recalled that her brother was “very accepting, very open, very encouraging” when she came out, which was not the case with her other family members. This acceptance helped her feel less marginalized and more comfortable with herself.

Participants also turned to “adopted or fictive kin,” that is, people in their social networks who were not family members. One participant felt she had formed a new family: “I have an adopted family now and I have since I was 25. I have holidays with them and we sort of share the things that families are supposed to do.”

2. Creating and negotiating boundaries.

Boundaries proved to be a protective measure for participants. Reducing exposure to their families gave them the opportunity for a fresh start and a way to move forward. This took two forms. The first was creating physical distance from their families. One participant said of his move to New York City, “I want to really live like I don’t have to work to get somebody’s acceptance.”

A second way participants created and negotiated boundaries was by limiting family members’ access to personal information. A participant remarked, “I don’t really call my family and talk very often. When I do I keep things very surface level, how’s school, oh school’s great. How’s everything going at home, oh it’s good.” Again, this was a strategy in the service of self-protection.

3. (Re)building while recognizing negative experiences.

Participants described “reframing” their personal circumstances by focusing on (re)building their lives, such as by pursuing higher education or independence. At the same time, they recognized that being the Black Sheep was profoundly painful.

Some participants were able to reframe their marginalization and find positive meaning in their experience as the Black Sheep. They spoke of how being the Black Sheep ultimately made them stronger and proud of being different. One participant reflected, “What motivated me really was that I was gay. And that I knew that if I came out, like, I might have ended up in the streets . . . the best choice for me was to get an education.”

4. Downplaying the lived experience of marginalization.

Participants downplayed the impact that marginalization had on them while at the same time trying to make sense of their experience as the Black Sheep. By doing so, they were attempting to change the meaning of their marginalization through their “talk.” This resilience strategy is distinct from (re)building while recognizing negative experiences in that participants essentially minimized their pain rather than confronting it.

By diminishing the influence of their family relationships, participants could change the meaning of their marginalized experience. One participant remarked, “Basically I don’t have a family now. I only see them once a year and that’s mostly so they don’t bother me for the rest of the year. I don’t talk to them . . . My mother wants more of a relationship but I don’t.”

5. Living authentically despite disapproval.

Participants also spoke about living authentic lives and being true to themselves in the face of disapproval from their families. Hall observed an undertow of anger in participants’ responses, anger that was then redirected toward productive goals as a way of defending against their Black Sheep status. Participants also coped with their marginalization by taking pride in their stigma.

Relatedly, participants were w