Tag Archives: Inequality

THE UPPER TEN THOUSANDTH. The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century – Walter Scheidel.

The largest American private fortune currently equals about 1 million times the average annual household income.

The “Four Horsemen” of leveling (mass-mobilization warfare, transformative revolutions, state collapse, and catastrophic plagues) have repeatedly destroyed the fortunes of the rich.

Across the full sweep of history, every single one of the major compressions of material inequality we can observe in the record was driven by one or more of these four levelers.

There is no repertoire of benign means of inequality compression that has ever achieved results that are even remotely comparable to those produced by the Four Horsemen.

Environments that were free from major violent shocks and their broader repercussions hardly ever witnessed major compressions of inequality.

Will the future be different?

I wrote this book to show that the forces that used to shape inequality have not in fact changed beyond recognition. If we seek to rebalance the current distribution of income and wealth in favor of greater equality, we cannot simply close our eyes to what it took to accomplish this goal in the past. We need to ask whether great inequality has ever been alleviated without great violence, how more benign influences compare to the power of this Great Leveler, and whether the future is likely to be very different, even if we may not like the answers.

Are mass violence and catastrophes the only forces that can seriously decrease economic inequality? To judge by thousands of years of history, the answer is yes. Tracing the global history of inequality from the Stone Age to today, Walter Scheidel shows that it never dies peacefully. The Great Leveler is the first book to chart the crucial role of violent shocks in reducing inequality over the full sweep of human history around the world. The “Four Horsemen” of leveling (mass-mobilization warfare, transformative revolutions, state collapse, and catastrophic plagues) have repeatedly destroyed the fortunes of the rich.

Today, the violence that reduced inequality in the past seems to have diminished, and that is a good thing. But it casts serious doubt on the prospects for a more equal future. An essential contribution to the debate about inequality, The Great Leveler provides important new insights about why inequality is so persistent, and why it is unlikely to decline anytime soon.

THE CHALLENGE OF INEQUALITY

“A DANGEROUS AND GROWING INEQUALITY”

How many billionaires does it take to match the net worth of half of the world’s population? In 2015, the richest sixty-two persons on the planet owned as much private net wealth as the poorer half of humanity, more than 3.5 billion people. If they decided to go on a field trip together, they would comfortably fit into a large coach. The previous year, eighty-five billionaires were needed to clear that threshold, calling perhaps for a more commodious double-decker bus. And not so long ago, in 2010, no fewer than 388 of them had to pool their resources to offset the assets of the global other half, a turnout that would have required a small convoy of vehicles or filled up a typical Boeing 777 or Airbus A340.

But inequality is not created just by multibillionaires. The richest 1 percent of the world’s households now hold a little more than half of global private net wealth. Inclusion of the assets that some of them conceal in offshore accounts would skew the distribution even further. These disparities are not simply caused by the huge differences in average income between advanced and developing economies. Similar imbalances exist within societies. The wealthiest twenty Americans currently own as much as the bottom half of their country’s households taken together, and the top 1 percent of incomes account for about a fifth of the national total.

Inequality has been growing in much of the world. In recent decades, income and wealth have become more unevenly distributed in Europe and North America, in the former Soviet bloc, and in China, India, and elsewhere. And to the one who has, more will be given: in the United States, the best-earning 1 percent of the top 1 percent (those in the highest 0.01 percent income bracket) raised their share to almost six times what it had been in the 1970s even as the top tenth of the top 1 percent (the top 0.1 percent) quadrupled it. The remainder averaged gains of about three-quarters, nothing to frown at, but a far cry from the advances in higher tiers.

The “1 percent” may be a convenient moniker that smoothly rolls off the tongue, and one that I repeatedly use in this book, but it also serves to obscure the degree of wealth concentration in even fewer hands. In the 1850s, Nathaniel Parker Willis coined the term “Upper Ten Thousand” to describe New York high society. We may now be in need of a variant, the “Upper Ten Thousandth,” to do justice to those who contribute the most to widening inequality. And even within this rarefied group, those at the very top continue to outdistance all others. The largest American fortune currently equals about 1 million times the average annual household income, a multiple twenty times larger than it was in 1982. Even so, the United States may be losing out to China, now said to be home to an even larger number of dollar billionaires despite its considerably smaller nominal GDP.

All this has been greeted with growing anxiety. In 2013, President Barack Obama elevated rising inequality to a “defining challenge”:

“And that is a dangerous and growing inequality and lack of upward mobility that has jeopardized middle-class America’s basic bargain: that if you work hard, you have a chance to get ahead. I believe this is the defining challenge of our time: Making sure our economy works for every working American.”

Two years earlier, multibillionaire investor Warren Buffett had complained that he and his “mega-rich friends” did not pay enough taxes. These sentiments are widely shared. Within eighteen months of its publication in 2013, a 700-page academic tome on capitalist inequality had sold 1.5 million copies and risen to the top of the New York Times nonfiction hardcover bestseller list.

In the Democratic Party primaries for the 2016 presidential election, Senator Bernie Sanders’s relentless denunciation of the “billionaire class” roused large crowds and elicited millions of small donations from grassroots supporters. Even the leadership of the People’s Republic of China has publicly acknowledged the issue by endorsing a report on how to “reform the system of income distribution.” Any lingering doubts are dispelled by Google, one of the great money-spinning disequalizers in the San Francisco Bay Area, where I live, which allows us to track the growing prominence of income inequality in the public consciousness (Fig. 1.1).

Figure 1.1 Top 1 percent income share in the United States (per year) and references to “income inequality” (three-year moving averages), 1970-2008.

So have the rich simply kept getting richer? Not quite. For all the much-maligned rapacity of the “billionaire class” or, more broadly, the “1 percent,” American top income shares only very recently caught up with those reached back in 1929, and assets are less heavily concentrated now than they were then. In England on the eve of the First World War, the richest tenth of households held a staggering 92 percent of all private wealth, crowding out pretty much everybody else; today their share is a little more than half.

High inequality has an extremely long pedigree. Two thousand years ago, the largest Roman private fortunes equaled about 1.5 million times the average annual per capita income in the empire, roughly the same ratio as for Bill Gates and the average American today. For all we can tell, even the overall degree of Roman income inequality was not very different from that in the United States. Yet by the time of Pope Gregory the Great, around 600 CE, great estates had disappeared, and what little was left of the Roman aristocracy relied on papal handouts to keep them afloat. Sometimes, as on that occasion, inequality declined because although many became poorer, the rich simply had more to lose. In other cases, workers became better off while returns on capital fell: western Europe after the Black Death, where real wages doubled or tripled and laborers dined on meat and beer while landlords struggled to keep up appearances, is a famous example.

How has the distribution of income and wealth developed over time, and why has it sometimes changed so much? Considering the enormous amount of attention that inequality has received in recent years, we still know much less about this than might be expected. A large and steadily growing body of often highly technical scholarship attends to the most pressing question: why income has frequently become more concentrated over the course of the last generation. Less has been written about the forces that caused inequality to fall across much of the world earlier in the twentieth century, and far less still about the distribution of material resources in the more distant past.

To be sure, concerns about growing income gaps in the world today have given momentum to the study of inequality in the longer run, just as contemporary climate change has encouraged analysis of pertinent historical data. But we still lack a proper sense of the big picture, a global survey that covers the broad sweep of observable history. A cross-cultural, comparative, and long-term perspective is essential for our understanding of the mechanisms that have shaped the distribution of income and wealth.

THE FOUR HORSEMEN

Material inequality requires access to resources beyond the minimum that is needed to keep us all alive. Surpluses already existed tens of thousands of years ago, and so did humans who were prepared to share them unevenly. Back in the last Ice Age, hunter-gatherers found the time and means to bury some individuals much more lavishly than others.

But it was food production, farming and herding, that created wealth on an entirely novel scale. Growing and persistent inequality became a defining feature of the Holocene. The domestication of plants and animals made it possible to accumulate and preserve productive resources. Social norms evolved to define rights to these assets, including the ability to pass them on to future generations. Under these conditions, the distribution of income and wealth came to be shaped by a variety of experiences: health, marital strategies and reproductive success, consumption and investment choices, bumper harvests, and plagues of locusts and rinderpest determined fortunes from one generation to the next. Adding up over time, the consequences of luck and effort favored unequal outcomes in the long term.

In principle, institutions could have flattened emerging disparities through interventions designed to rebalance the distribution of material resources and the fruits from labor, as some premodern societies are indeed reputed to have done. In practice, however, social evolution commonly had the opposite effect. Domestication of food sources also domesticated people. The formation of states as a highly competitive form of organization established steep hierarchies of power and coercive force that skewed access to income and wealth. Political inequality reinforced and amplified economic inequality. For most of the agrarian period, the state enriched the few at the expense of the many: gains from pay and benefactions for public service often paled next to those from corruption, extortion, and plunder. As a result, many premodern societies grew to be as unequal as they could possibly be, probing the limits of surplus appropriation by small elites under conditions of low per capita output and minimal growth. And when more benign institutions promoted more vigorous economic development, most notably in the emergent West, they continued to sustain high inequality. Urbanization, commercialization, financial sector innovation, trade on an increasingly global scale, and, finally, industrialization generated rich returns for holders of capital. As rents from the naked exercise of power declined, choking off a traditional source of elite enrichment, more secure property rights and state commitments strengthened the protection of hereditary private wealth. Even as economic structures, social norms, and political systems changed, income and wealth inequality remained high or found new ways to grow.

For thousands of years, civilization did not lend itself to peaceful equalization. Across a wide range of societies and different levels of development, stability favored economic inequality. This was as true of Pharaonic Egypt as it was of Victorian England, as true of the Roman Empire as of the United States. Violent shocks were of paramount importance in disrupting the established order, in compressing the distribution of income and wealth, in narrowing the gap between rich and poor. Throughout recorded history, the most powerful leveling invariably resulted from the most powerful shocks.

Four different kinds of violent ruptures have flattened inequality: mass-mobilization warfare, transformative revolution, state failure, and lethal pandemics. I call these the Four Horsemen of Leveling.

Just like their biblical counterparts, they went forth to “take peace from the earth” and “kill with sword, and with hunger, and with death, and with the beasts of the earth.” Sometimes acting individually and sometimes in concert with one another, they produced outcomes that to contemporaries often seemed nothing short of apocalyptic. Hundreds of millions perished in their wake. And by the time the dust had settled, the gap between the haves and the have-nots had shrunk, sometimes dramatically.

Only specific types of violence have consistently forced down inequality. Most wars did not have any systematic effect on the distribution of resources: although archaic forms of conflict that thrived on conquest and plunder were likely to enrich victorious elites and impoverish those on the losing side, less clearcut endings failed to have predictable consequences. For war to level disparities in income and wealth, it needed to penetrate society as a whole, to mobilize people and resources on a scale that was often only feasible in modern nation-states. This explains why the two world wars were among the greatest levelers in history. The physical destruction wrought by industrial-scale warfare, confiscatory taxation, government intervention in the economy, inflation, disruption to global flows of goods and capital, and other factors all combined to wipe out elites’ wealth and redistribute resources.

They also served as a uniquely powerful catalyst for equalizing policy change, providing powerful impetus to franchise extensions, unionization, and the expansion of the welfare state. The shocks of the world wars led to what is known as the “Great Compression,” massive attenuation of inequalities in income and wealth across developed countries. Mostly concentrated in the period from 1914 to 1945, it generally took several more decades fully to run its course.

Earlier mass mobilization warfare had lacked similarly pervasive repercussions. The wars of the Napoleonic era or the American Civil War had produced mixed distributional outcomes, and the farther we go back in time, the less pertinent evidence there is. The ancient Greek city-state culture, represented by Athens and Sparta, arguably provides us with the earliest examples of how intense popular military mobilization and egalitarian institutions helped constrain material inequality, albeit with mixed success.

The world wars spawned the second major leveling force, transformative revolution. Internal conflicts have not normally reduced inequality: peasant revolts and urban risings were common in premodern history but usually failed, and civil war in developing countries tends to render the income distribution more unequal rather than less. Violent societal restructuring needs to be exceptionally intense if it is to reconfigure access to material resources. Similarly to equalizing mass mobilization warfare, this was primarily a phenomenon of the twentieth century. Communists who expropriated, redistributed, and then often collectivized leveled inequality on a dramatic scale. The most transformative of these revolutions were accompanied by extraordinary violence, in the end matching the world wars in terms of body count and human misery. Far less bloody ruptures such as the French Revolution leveled on a correspondingly smaller scale.

Violence might destroy states altogether. State failure or systems collapse used to be a particularly reliable means of leveling. For most of history, the rich were positioned either at or near the top of the political power hierarchy or were connected to those who were. Moreover, states provided a measure of protection, however modest by modern standards, for economic activity beyond the subsistence level. When states unraveled, these positions, connections, and protections came under pressure or were altogether lost. Although everybody might suffer when states unraveled, the rich simply had much more to lose: declining or collapsing elite income and wealth compressed the overall distribution of resources. This has happened for as long as there have been states. The earliest known examples reach back 4,000 years to the end of Old Kingdom Egypt and the Akkadian empire in Mesopotamia. Even today, the experience of Somalia suggests that this once potent equalizing force has not completely disappeared.

State failure takes the principle of leveling by violent means to its logical extremes: instead of achieving redistribution and rebalancing by reforming and restructuring existing polities, it wipes the slate clean in a more comprehensive manner. The first three horsemen represent different stages not in the sense that they are likely to appear in sequence (whereas the biggest revolutions were triggered by the biggest wars, state collapse does not normally require similarly strong pressures) but in terms of intensity. What they all have in common is that they rely on violence to remake the distribution of income and wealth alongside the political and social order.

Human-caused violence has long had competition. In the past, plague, smallpox, and measles ravaged whole continents more forcefully than even the largest armies or most fervent revolutionaries could hope to do. In agrarian societies, the loss of a sizeable share of the population to microbes, sometimes a third or even more, made labor scarce and raised its price relative to that of fixed assets and other nonhuman capital, which generally remained intact. As a result, workers gained and landlords and employers lost as real wages rose and rents fell. Institutions mediated the scale of these shifts: elites commonly attempted to preserve existing arrangements through fiat and force but often failed to hold equalizing market forces in check.

Pandemics complete the quartet of horsemen of violent leveling. But were there also other, more peaceful mechanisms of lowering inequality? If we think of leveling on a large scale, the answer must be no. Across the full sweep of history, every single one of the major compressions of material inequality we can observe in the record was driven by one or more of these four levelers. Moreover, mass wars and revolutions did not merely act on those societies that were directly involved in these events: the world wars and exposure to communist challengers also influenced economic conditions, social expectations, and policymaking among bystanders. These ripple effects further broadened the effects of leveling rooted in violent conflict. This makes it difficult to disentangle developments after 1945 in much of the world from the preceding shocks and their continuing reverberations. Although falling income inequality in Latin America in the early 2000s might be the most promising candidate for nonviolent equalization, this trend has remained relatively modest in scope, and its sustainability is uncertain.

Other factors have a mixed record. From antiquity to the present, land reform has tended to reduce inequality most when associated with violence or the threat of violence, and least when not. Macroeconomic crises have only short-lived effects on the distribution of income and wealth. Democracy does not of itself mitigate inequality. Although the interplay of education and technological change undoubtedly influences dispersion of incomes, returns on education and skills have historically proven highly sensitive to violent shocks.

Finally, there is no compelling empirical evidence to support the view that modern economic development, as such, narrows inequalities. There is no repertoire of benign means of compression that has ever achieved results that are even remotely comparable to those produced by the Four Horsemen.

Yet shocks abate. When states failed, others sooner or later took their place. Demographic contractions were reversed after plagues subsided, and renewed population growth gradually returned the balance of labor and capital to previous levels. The world wars were relatively short, and their aftereffects have faded over time: top tax rates and union density are down, globalization is up, communism is gone, the Cold War is over, and the risk of World War III has receded. All of this makes the recent resurgence of inequality easier to understand. The traditional violent levelers currently lie dormant and are unlikely to return in the foreseeable future. No similarly potent alternative mechanisms of equalization have emerged.

Even in the most progressive advanced economies, redistribution and education are already unable fully to absorb the pressure of widening income inequality before taxes and transfers. Lower-hanging fruits beckon in developing countries, but fiscal constraints remain strong. There does not seem to be an easy way to vote, regulate, or teach our way to significantly greater equality. From a global historical perspective, this should not come as a surprise. So far as we can tell, environments that were free from major violent shocks and their broader repercussions hardly ever witnessed major compressions of inequality. Will the future be different?

WHAT THIS BOOK IS NOT ABOUT

Disparities in the distribution of income and wealth are not the only type of inequality of social or historical relevance: so are inequalities that are rooted in gender and sexual orientation; in race and ethnicity; and in age, ability, and beliefs, and so are inequalities of education, health, political voice, and life chances. The title of this book is therefore not as precise as it could be. Then again, a subtitle such as “violent shocks and the global history of income and wealth inequality from the Stone Age to the present and beyond” would not only have stretched the publisher’s patience but would also have been needlessly exclusive. After all, power inequalities have always played a central role in determining access to material resources: a more detailed title would be at once more precise and too narrow.

I do not endeavor to cover all aspects even of economic inequality. I focus on the distribution of material resources within societies, leaving aside questions of economic inequality between countries, an important and much-discussed topic. I consider conditions within particular societies without explicit reference to the many other sources of inequality just mentioned, factors whose influence on the distribution of income and wealth would be hard, if not impossible, to track and compare in the very long run. I am primarily interested in answering the question of why inequality fell, in identifying the mechanisms of leveling. Very broadly speaking, after our species had embraced domesticated food production and its common corollaries, sedentism and state formation, and had acknowledged some form of hereditary property rights, upward pressure on material inequality effectively became a given, a fundamental feature of human social existence. Consideration of the finer points of how these pressures evolved over the course of centuries and millennia, especially the complex synergies between what we might crudely label coercion and market forces, would require a separate study of even greater length.

Finally, I discuss violent shocks (alongside alternative mechanisms) and their effects on material inequality but do not generally explore the inverse relationship: the question of whether, and if so how, inequality helped generate these violent shocks. There are several reasons for my reluctance. Because high levels of inequality were a common feature of historical societies, it is not easy to explain specific shocks with reference to that contextual condition. Internal stability varied widely among contemporaneous societies having comparable levels of material inequality. Some societies that underwent violent ruptures were not particularly unequal: prerevolutionary China is one example.

Certain shocks were largely or entirely exogenous, most notably pandemics that leveled inequality by altering the balance of capital and labor. Even human-caused events such as the world wars profoundly affected societies that were not directly involved in these conflicts. Studies of the role of income inequality in precipitating civil war highlight the complexity of this relationship. None of this should be taken to suggest that domestic resource inequality did not have the potential to contribute to the outbreak of wars and revolutions or to state failure. It simply means that there is currently no compelling reason to assume a systematic causal connection between overall income and wealth inequality and the occurrence of violent shocks. As recent work has shown, analysis of more specific features that have a distributional dimension, such as competition within elite groups, may hold greater promise in accounting for violent conflict and breakdown.

For the purposes of this study, I treat violent shocks as discrete phenomena that act on material inequality. This approach is designed to evaluate the significance of such shocks as forces of leveling in the very long term, regardless of whether there is enough evidence to establish or deny a meaningful connection between these events and prior inequality. If my exclusive focus on one causal arrow, from shocks to inequality, encourages further engagement with the reverse, so much the better. It may never be feasible to produce a plausible account that fully endogenizes observable change in the distribution of income and wealth over time. Even so, possible feedback loops between inequality and violent shocks are certainly worth exploring in greater depth. My study can be no more than a building block for this larger project.

HOW IS IT DONE?

There are many ways of measuring inequality. In the following chapters, I generally use only the two most basic metrics, the Gini coefficient and percentage shares of total income or wealth. The Gini coefficient measures the extent to which the distribution of income or material assets deviates from perfect equality. If each member of a given population receives or holds exactly the same amount of resources, the Gini coefficient is 0; if one member controls everything and everybody else has nothing, it approximates 1. Thus the more unequal the distribution, the higher the Gini value. It can be expressed as a fraction of 1 or as a percentage; I prefer the former so as to distinguish it more clearly from income or wealth shares, which are generally given as percentages. Shares tell us which proportion of the total income or wealth in a given population is received or owned by a particular group that is defined by its position within the overall distribution. For example, the much-cited “1 percent” represent those units, often households, of a given population that enjoy higher incomes or dispose of greater assets than 99 percent of its units. Gini coefficients and income shares are complementary measures that emphasize different properties of a given distribution: whereas the former compute the overall degree of inequality, the latter provide much-needed insight into the shape of the distribution.
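As a minimal sketch of how these two metrics relate to a raw list of incomes (the function names are my own; the Gini computation uses the standard weighted-rank formula for a sorted sample):

```python
def gini(values):
    """Gini coefficient: 0 for perfect equality; approaches 1 as a
    single unit comes to control everything."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    # Weighted-rank formula: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n,
    # with ranks i = 1..n over the ascending-sorted values.
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

def top_share(values, fraction=0.01):
    """Share of total income or wealth held by the richest `fraction` of
    units (fraction=0.01 corresponds to the much-cited "1 percent")."""
    xs = sorted(values, reverse=True)
    k = max(1, round(len(xs) * fraction))
    return sum(xs[:k]) / sum(xs)

# Perfect equality yields a Gini of 0; one unit holding everything in a
# four-unit population yields (n - 1) / n = 0.75, approximating 1 as n grows.
print(gini([10, 10, 10, 10]))   # 0.0
print(gini([0, 0, 0, 100]))     # 0.75
```

The two measures are complementary in exactly the sense described above: the same Gini value can correspond to quite different top shares, depending on where in the distribution the inequality is concentrated.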

Both indices can be applied to different versions of the income distribution. Income prior to taxes and public transfers is known as “market” income, income after transfers is called “gross” income, and income net of all taxes and transfers is defined as “disposable” income. In the following, I refer only to market and disposable income. Whenever I use the term income inequality without further specification, I mean the former. For most of recorded history, market income inequality is the only type that can be known or estimated. Moreover, prior to the creation of extensive systems of fiscal redistribution in the modern West, differences in the distribution of market, gross, and disposable income were generally very small, much as in many developing countries today. In this book, income shares are invariably based on the distribution of market income. Both contemporary and historical data on income shares, especially those at the very top of the distribution, are usually derived from tax records that refer to income prior to fiscal intervention. On a few occasions, I also refer to ratios between shares or particular percentiles of the income distribution, an alternative measure of the relative weight of different brackets. More sophisticated indices of inequality exist but cannot normally be applied to long-term studies that range across highly diverse data sets.
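The three income concepts differ only in which fiscal flows are counted. A schematic illustration, with entirely hypothetical figures:

```python
# Hypothetical household, illustrating the three income concepts:
market_income = 50_000   # income before taxes and public transfers
transfers = 5_000        # public transfers received
taxes = 12_000           # all taxes paid

gross_income = market_income + transfers   # market income plus transfers
disposable_income = gross_income - taxes   # net of all taxes and transfers

print(market_income, gross_income, disposable_income)  # 50000 55000 43000
```

In a society with little fiscal redistribution, `transfers` and `taxes` are small relative to `market_income`, which is why the three distributions nearly coincide for most of recorded history.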

The measurement of material inequality raises two kinds of problems: conceptual and evidential. Two major conceptual issues merit attention here. First, most available indices measure and express relative inequality based on the share of total resources captured by particular segments of the population. Absolute inequality, by contrast, focuses on the difference in the amount of resources that accrue to these segments. These two approaches tend to produce very different results. Consider a population in which the average household in the top decile of the income distribution earns ten times as much as an average household in the bottom decile, say, $100,000 versus $10,000. National income subsequently doubles while the distribution of income remains unchanged. The Gini coefficient and income shares remain the same as before. From this perspective, incomes have gone up without raising inequality in the process. Yet at the same time, the income gap between the top and bottom deciles has doubled, from $90,000 to $180,000, ensuring much greater gains for affluent than for low-income households. The same principle applies to the distribution of wealth. In fact, there is hardly any credible scenario in which economic growth will fail to cause absolute inequality to rise. Metrics of relative inequality can therefore be said to be more conservative in outlook, as they serve to deflect attention from persistently growing income and wealth gaps in favor of smaller and multidirectional changes in the distribution of material resources. In this book, I follow convention in prioritizing standard measures of relative inequality such as the Gini coefficient and top income shares but draw attention to their limitations where appropriate.
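The decile example above can be restated numerically to show why the two approaches diverge under growth:

```python
# Average household incomes in the top and bottom deciles (the example above).
top, bottom = 100_000, 10_000
ratio_before = top / bottom   # 10.0 -- relative inequality
gap_before = top - bottom     # 90,000 -- absolute inequality

# National income doubles; the distribution is unchanged.
top, bottom = 2 * top, 2 * bottom
ratio_after = top / bottom    # still 10.0: Gini and shares are scale-invariant
gap_after = top - bottom      # 180,000: the absolute gap has doubled
```

Any relative measure that depends only on income ratios, including the Gini coefficient and top shares, is unchanged when every income is multiplied by the same factor, while every absolute gap grows in proportion.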

A different problem stems from the Gini coefficient of income distribution’s sensitivity to subsistence requirements and to levels of economic development. At least in theory, it is perfectly possible for a single person to own all the wealth that exists in a given population. However, nobody completely deprived of income would be able to survive. This means that the highest feasible Gini values for income are bound to fall short of the nominal ceiling of 1. More specifically, they are limited by the amount of resources in excess of those needed to meet minimum subsistence requirements. This constraint is particularly powerful in the low-income economies that were typical of most of human history and that still exist in parts of the world today. For instance, in a society having a GDP equivalent to twice minimal subsistence, the Gini coefficient could not rise above 0.5 even if a single individual somehow managed to monopolize all income beyond what everybody else needed for bare survival. At higher levels of output, the maximum degree of inequality is further held in check by changing definitions of what constitutes minimum subsistence and by largely impoverished populations’ inability to sustain advanced economies. Nominal Gini coefficients need to be adjusted accordingly to calculate what has been called the extraction rate, the extent to which the maximum amount of inequality that is theoretically possible in a given environment has been actualized. This is a complex issue that is particularly salient to any comparisons of inequality in the very long run but that has only very recently begun to attract attention. I address it in more detail in the appendix at the end of this book.
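The subsistence ceiling described above can be made explicit. Under a standard large-population approximation (a tiny elite captures the entire surplus while everyone else lives exactly at subsistence), the maximum feasible income Gini is (alpha - 1) / alpha, where alpha is mean income expressed as a multiple of subsistence; the extraction rate is then the observed Gini divided by this ceiling. A sketch, with function names of my own:

```python
def max_feasible_gini(mean_income, subsistence):
    """Inequality possibility frontier (large-population approximation):
    the highest income Gini compatible with everyone surviving."""
    alpha = mean_income / subsistence  # mean income as a multiple of subsistence
    return (alpha - 1) / alpha

def extraction_rate(observed_gini, mean_income, subsistence):
    """Fraction of the theoretically possible inequality actually realized."""
    return observed_gini / max_feasible_gini(mean_income, subsistence)

# The example in the text: GDP at twice minimal subsistence caps the Gini at 0.5.
print(max_feasible_gini(2.0, 1.0))     # 0.5
# A nominal Gini of 0.4 in such an economy implies an 80 percent extraction rate.
print(extraction_rate(0.4, 2.0, 1.0))  # 0.8
```

The formula makes the historical point concrete: a low-output agrarian economy with a modest nominal Gini could still be operating close to the maximum inequality it could physically sustain.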

This brings me to the second category: problems related to the quality of the evidence. The Gini coefficient and top income shares are broadly congruent measures of inequality: they generally (though not invariably) move in the same direction as they change over time. Both are sensitive to the shortcomings of the underlying data sources. Modern Gini coefficients are usually derived from household surveys from which putative national distributions are extrapolated. This format is not particularly suitable for capturing the very largest incomes. Even in Western countries, nominal Ginis need to be adjusted upward to take full account of the actual contribution of top incomes. In many developing countries, moreover, surveys are often of insufficient quality to support reliable national estimates. In such cases, wide confidence intervals not only impede comparison between countries but also can make it hard to track change over time.

Attempts to measure the overall distribution of wealth face even greater challenges, not only in developing countries, where a sizeable share of elite assets is thought to be concealed offshore, but even in data-rich environments such as the United States. Income shares are usually computed from tax records, whose quality and characteristics vary greatly across countries and over time and that are vulnerable to distortions motivated by tax evasion. Low participation rates in lower-income countries and politically driven definitions of what constitutes taxable income introduce additional complexities. Despite these difficulties, the compilation and online publication of a growing amount of information on top income shares in the “World Wealth and Income Database” has put our understanding of income inequality on a more solid footing and redirected attention from somewhat opaque single-value metrics such as the Gini coefficient to more articulated indices of resource concentration.

All these problems pale in comparison to those we encounter once we seek to extend the study of income and wealth inequality farther back in time. Regular income taxes rarely predate the twentieth century. In the absence of household surveys, we have to rely on proxy data to calculate Gini coefficients. Prior to about 1800, income inequality across entire societies can be estimated only with the help of social tables, rough approximations of the incomes obtained by different parts of the population that were drawn up by contemporary observers or inferred, however tenuously, by later scholars. More rewardingly, a growing number of data sets that in parts of Europe reach back to the High Middle Ages have shed light on conditions in individual cities or regions. Surviving archival records of wealth taxes in French and Italian cities, taxes on housing rental values in the Netherlands, and income taxes in Portugal allow us to reconstruct the underlying distribution of assets and sometimes even incomes. So do early modern records of the dispersion of agricultural land in France and of the value of probate estates in England. In fact, Gini coefficients can fruitfully be applied to evidence that is much more remote in time. Patterns of landownership in late Roman Egypt; variation in the size of houses in ancient and early medieval Greece, Britain, Italy, and North Africa and in Aztec Mexico; the distribution of inheritance shares and dowries in Babylonian society; and even the dispersion of stone tools in Çatalhöyük, one of the earliest known proto-urban settlements in the world, established almost 10,000 years ago, have all been analyzed in this manner. Archaeology has enabled us to push back the boundaries of the study of material inequality into the Paleolithic at the time of the last Ice Age.
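Applying the Gini coefficient to proxy evidence of the kind just described works the same way as applying it to incomes. The sketch below computes it from the mean-absolute-difference definition; the "house size" figures are invented for illustration, not the archaeological measurements referred to in the text.

```python
# Computing a Gini coefficient from proxy data such as house floor areas.
# The village figures below are invented for illustration only.

def gini(values):
    """Gini coefficient via the mean-absolute-difference definition:
    G = sum_ij |x_i - x_j| / (2 * n^2 * mean), computed in O(n log n)
    using the sorted order."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    weighted = sum((2 * i - n + 1) * x for i, x in enumerate(xs))
    return weighted / (n * total)

equal_village = [50] * 10               # every house the same size
unequal_village = [20] * 8 + [60, 200]  # two conspicuously large dwellings

print(gini(equal_village))              # 0.0
print(round(gini(unequal_village), 3))  # 0.452
```

A handful of outsized dwellings is enough to push the coefficient close to values seen in markedly stratified societies, which is why house-size distributions serve as a usable inequality proxy.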

We also have access to a whole range of proxy data that do not directly document distributions but that are nevertheless known to be sensitive to changes in the level of income inequality. The ratio of land rents to wages is a good example. In predominantly agrarian societies, changes in the price of labor relative to the value of the most important type of capital tend to reflect changes in the relative gains that accrued to different classes: a rising index value suggests that landlords prospered at the expense of workers, causing inequality to grow. The same is true of a related measure, the ratio of mean per capita GDP to wages. The larger the nonlabor share in GDP, the higher the index, and the more unequal incomes were likely to be. To be sure, both methods have serious weaknesses. Rents and wages may be reliably reported for particular locales but need not be representative of larger populations or entire countries, and GDP guesstimates for any premodern society inevitably entail considerable margins of error. Nevertheless, such proxies are generally capable of giving us a sense of the contours of inequality trends over time. Real incomes represent a more widely available but somewhat less instructive proxy. In western Eurasia, real wages, expressed in grain equivalent, have now been traced back as far as 4,000 years. This very long-term perspective makes it possible to identify instances of unusually elevated real incomes for workers, a phenomenon plausibly associated with lowered inequality. Even so, information on real wages that cannot be contextualized with reference to capital values or GDP remains a very crude and not particularly reliable indicator of overall income inequality.
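Both proxies reduce to simple ratios. The sketch below uses invented before-and-after figures (for instance, around a demographic shock that makes labor scarce); the point is only the direction of movement, not the magnitudes.

```python
# Two inequality proxies from the text, sketched with invented numbers:
# (1) land rent relative to wages, (2) mean per capita GDP relative to wages.
# A rising value of either index suggests growing income inequality.

def rent_wage_ratio(annual_land_rent: float, annual_wage: float) -> float:
    """Price of the key capital asset relative to the price of labor."""
    return annual_land_rent / annual_wage

def gdp_wage_ratio(gdp_per_capita: float, annual_wage: float) -> float:
    """The larger the nonlabor share in GDP, the higher this index."""
    return gdp_per_capita / annual_wage

# Hypothetical figures before and after a labor shortage:
print(rent_wage_ratio(80, 20))         # 4.0 -> landlords prosper relative to workers
print(rent_wage_ratio(60, 30))         # 2.0 -> leveling: labor has become dearer
print(gdp_wage_ratio(100, 25))         # 4.0 -> large nonlabor share in output
```

A halving of the rent/wage ratio, as in the hypothetical above, is exactly the kind of signal historians read as leveling, for example in the aftermath of the Black Death.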

Recent years have witnessed considerable advances in the study of premodern tax records and the reconstruction of real wages, rent/wage ratios, and even GDP levels. It is not an exaggeration to say that much of this book could not have been written twenty or even ten years ago. The scale, scope, and pace of progress in the study of historical income and wealth inequality give us much hope for the future of this field. There is no denying that long stretches of human history do not admit even the most rudimentary quantitative analysis of the distribution of material resources. Yet even in these cases we may be able to identify signals of change over time. Elite displays of wealth are the most promising, and, indeed, often the only, marker of inequality. When archaeological evidence of lavish elite consumption in housing, diet, or burials gives way to more modest remains or signs of stratification fade altogether, we may reasonably infer a degree of equalization.

In traditional societies, members of the wealth and power elites were often the only ones who controlled enough income or assets to suffer large losses, losses that are visible in the material record. Variation in human stature and other physiological features can likewise be associated with the distribution of resources, although other factors, such as pathogen loads, also played an important role. The more we move away from data that document inequality in a more immediate manner, the more conjectural our readings are bound to become. Yet global history is simply impossible unless we are prepared to stretch. This book is an attempt to do just that.

In so doing we face an enormous gradient in documentation, from detailed statistics concerning the factors behind the recent rise in American income inequality to vague hints at resource imbalances at the dawn of civilization, with a wide array of diverse data sets in between. To join all this together in a reasonably coherent analytical narrative presents us with a formidable challenge: in no small measure, this is the true challenge of inequality invoked in the title of this introduction. I have chosen to structure each part of this book in what seems to me the best way to address this problem. The opening part follows the evolution of inequality from our primate beginnings to the early twentieth century and is thus organized in conventional chronological fashion (chapters 1-3).

This changes once we turn to the Four Horsemen, the principal drivers of violent leveling. In the parts devoted to the first two members of this quartet, war and revolution, my survey starts in the twentieth century and subsequently moves back in time. There is a simple reason for this. Leveling by means of mass mobilization warfare and transformative revolution has primarily been a feature of modernity. The “Great Compression” of the 1910s to 1940s not only produced by far the best evidence of this process but also represents and indeed constitutes it in paradigmatic form (chapters 4-5).

In a second step, I look for antecedents of these violent ruptures, moving from the American Civil War all the way back to the experience of ancient China, Rome, and Greece, as well as from the French Revolution to the countless revolts of the premodern era (chapters 6 and 8). I follow the same trajectory in my discussion of civil war in the final part of chapter 6, from the consequences of such conflicts in contemporary developing countries to the end of the Roman Republic. This approach allows me to establish models of violent leveling that are solidly grounded in modern data before I explore whether they can also be applied to the more distant past.

In Part V, on plagues, I employ a modified version of the same strategy by moving from the best documented case, the Black Death of the Late Middle Ages (chapter 10), to progressively less well known examples, one of which (the Americas after 1492) happens to be somewhat more recent whereas the others are located in more ancient times (chapter 11). The rationale is the same: to establish the key mechanisms of violent leveling brought about by epidemic mass mortality with the help of the best available evidence before I search for analogous occurrences elsewhere.

Part IV, on state failure and systems collapse, takes this organizing principle to its logical conclusion. Chronology matters little in analyzing phenomena that were largely confined to premodern history, and there is nothing to be gained from following any particular time sequence. The dates of particular cases matter less than the nature of the evidence and the scope of modern scholarship, both of which vary considerably across space and time. I thus begin with a couple of well-attested examples before I move on to others that I discuss in less detail (chapter 9).

Part VI, on alternatives to violent leveling, is for the most part arranged by topic as I evaluate different factors (chapters 12-13) before I turn to counterfactual outcomes (chapter 14). The final part, which together with Part I frames my thematic survey, returns to a chronological format. Moving from the recent resurgence in inequality (chapter 15) to the prospects of leveling in the near and more distant future (chapter 16), it completes my evolutionary overview.

A study that brings together Hideki Tojo’s Japan and the Athens of Pericles or the Classic Lowland Maya and present-day Somalia may seem puzzling to some of my fellow historians, although less so, I hope, to readers from the social sciences. As I said, the challenge of exploring the global history of inequality is a serious one. If we want to identify forces of leveling across recorded history, we need to find ways to bridge the divide between different areas of specialization both within and beyond academic disciplines and to overcome huge disparities in the quality and quantity of the data. A long-term perspective calls for unorthodox solutions.

DOES IT MATTER?

All this raises a simple question. If it is so difficult to study the dynamics of inequality across very different cultures and in the very long run, why should we even try? Any answer to this question needs to address two separate but related issues: does economic inequality matter today, and why is its history worth exploring? Princeton philosopher Harry Frankfurt, best known for his earlier disquisition On Bullshit, opens his booklet On Inequality by disagreeing with Obama’s assessment quoted at the beginning of this introduction: “our most fundamental challenge is not the fact that the incomes of Americans are widely unequal. It is, rather, the fact that too many of our people are poor.” Poverty, to be sure, is a moving target: someone who counts as poor in the United States need not seem so in central Africa. Sometimes poverty is even defined as a function of inequality: in the United Kingdom, the official poverty line is set as a fraction of median income, although absolute standards are more common, such as the threshold of $1.25 in 2005 prices used by the World Bank or reference to the cost of a basket of consumer goods in America.

Nobody would disagree that poverty, however defined, is undesirable: the challenge lies in demonstrating that income and wealth inequality as such has negative effects on our lives, rather than the poverty or the great fortunes with which it may be associated.

The most hard-nosed approach concentrates on inequality’s effect on economic growth. Economists have repeatedly noted that it can be hard to evaluate this relationship and that the theoretical complexity of the problem has not always been matched by the empirical specification of existing research. Even so, a number of studies argue that higher levels of inequality are indeed associated with lower rates of growth. For instance, lower disposable income inequality has been found to lead not only to faster growth but also to longer growth phases. Inequality appears to be particularly harmful to growth in developed economies. There is even some support for the much-debated thesis that high levels of inequality among American households contributed to the credit bubble that helped trigger the Great Recession of 2008, as lower-income households drew on readily available credit (in part produced by wealth accumulation at the top) to borrow for the sake of keeping up with the consumption patterns of more affluent groups. Under more restrictive conditions of lending, by contrast, wealth inequality is thought to disadvantage low-income groups by blocking their access to credit.

Among developed countries, higher inequality is associated with less economic mobility across generations. Because parental income and wealth are strong indicators of educational attainment as well as earnings, inequality tends to perpetuate itself over time, and all the more so the higher it is. The disequalizing consequences of residential segregation by income are a related issue. In metropolitan areas in the United States since the 1970s, population growth in high- and low-income areas alongside shrinking middle-income areas has led to increasing polarization. Affluent neighborhoods in particular have become more isolated, a development likely to precipitate concentration of resources, including locally funded public services, which in turn affects the life chances of children and impedes intergenerational mobility.

In developing countries, at least certain kinds of income inequality increase the likelihood of internal conflict and civil war. High-income societies contend with less extreme consequences. In the United States, inequality has been said to act on the political process by making it easier for the wealthy to exert influence, although in this case we may wonder whether it is the presence of very large fortunes rather than inequality per se that accounts for this phenomenon. Some studies find that high levels of inequality are correlated with lower levels of self-reported happiness. Only health appears to be unaffected by the distribution of resources as such, as opposed to income levels: whereas health differences generate income inequality, the reverse remains unproven.

What all these studies have in common is that they focus on the practical consequences of material inequality, on instrumental reasons for why it might be deemed a problem. A different set of objections to a skewed distribution of resources is grounded in normative ethics and notions of social justice, a perspective well beyond the scope of my study but deserving of greater attention in a debate that is all too often dominated by economic concerns. Yet even on the more limited basis of purely instrumental reasoning there is no doubt that at least in certain contexts, high levels of inequality and growing disparities in income and wealth are detrimental to social and economic development.

But what constitutes a “high” level, and how do we know whether “growing” imbalances are a novel feature of contemporary society or merely bring us closer to historically common conditions? Is there, to use François Bourguignon’s term, a “normal” level of inequality to which countries that are experiencing widening inequality should aspire to return? And if, as in many developed economies, inequality is higher now than it was a few decades ago but is lower than a century ago, what does this mean for our understanding of the determinants of the distribution of income and wealth?

Inequality either grew or held fairly steady for much of recorded history, and significant reductions have been rare. Yet policy proposals designed to stem or reverse the rising tide of inequality tend to show little awareness or appreciation of this historical background. Is that as it should be? Perhaps our age has become so fundamentally different, so completely untethered from its agrarian and undemocratic foundations, that history has nothing left to teach us. And indeed, there is no question that much has changed: low-income groups in rich economies are generally better off than most people were in the past, and even the most disadvantaged residents of the least developed countries live longer than their ancestors lived. The experience of life at the receiving end of inequality is in many ways very different from what it used to be.

But it is not economic or more broadly human development that concerns us here, rather how the fruits of civilization are distributed, what causes them to be distributed the way they are, and what it would take to change these outcomes. I wrote this book to show that the forces that used to shape inequality have not in fact changed beyond recognition. If we seek to rebalance the current distribution of income and wealth in favor of greater equality, we cannot simply close our eyes to what it took to accomplish this goal in the past. We need to ask whether great inequality has ever been alleviated without great violence, how more benign influences compare to the power of this Great Leveler, and whether the future is likely to be very different, even if we may not like the answers.

Part 1

A brief history of inequality

Chapter 1

THE RISE OF INEQUALITY

PRIMORDIAL LEVELING

Has inequality always been with us? Our closest nonhuman relatives in the world today, the African great apes, gorillas, chimpanzees, and bonobos, are intensely hierarchical creatures. Adult gorilla males divide into a dominant few endowed with harems of females and many others having no consorts at all. Silverbacks dominate not only the females in their groups but also any males who stay on after reaching maturity. Chimpanzees, especially but not only males, expend tremendous energy on status rivalry. Bullying and aggressive dominance displays are matched by a wide range of submission behaviors by those on the lower rungs of the pecking order. In groups of fifty or a hundred, ranking is a central and stressful fact of life, for each member occupies a specific place in the hierarchy but is always looking for ways to improve it. And there is no escape: because males who leave their group to avoid overbearing dominants run the risk of being killed by males in other groups, they tend to stay put and compete or submit. Echoing the phenomenon of social circumscription that has been invoked to explain the creation of hierarchy among humans, this powerful constraint serves to shore up inequality.

Their closest relatives, the bonobos, may present a gentler image to the world but likewise feature alpha males and females. Considerably less violent and intent on bullying than chimpanzees, they nevertheless maintain clear hierarchical rankings. Although concealed ovulation and the lack of systematic domination of females by males reduce violent conflict over mating opportunities, hierarchy manifests in feeding competition among males.

Across these species, inequality is expressed in unequal access to food sources, the closest approximation of human-style income disparities, and, above all, in terms of reproductive success. Dominance hierarchy, topped by the biggest, strongest, and most aggressive males, which consume the most and have sexual relations with the most females, is the standard pattern.

It is unlikely that these shared characteristics evolved only after these three species had branched off from the ancestral line, a process that commenced about 11 million years ago with the emergence of gorillas and that continued 3 million years later with the split of the common ancestor of chimpanzees and bonobos from the earliest forerunners of what were to evolve into australopiths and, eventually, humans. Even so, marked social expressions of inequality may not always have been common among primates. Hierarchy is a function of group living, and our more distant primate relatives, who branched off earlier, are now less social and live either on their own or in very small or transient groups. This is true both of gibbons, whose ancestors split from those of the great apes some 22 million years ago, and of the orangutans, the first of the great apes to undergo speciation about 17 million years ago and now confined to Asia. Conversely, hierarchical sociality is typical of the African genera of this taxonomic family, including our own. This suggests that the most recent common ancestor of gorillas, chimpanzees, bonobos, and humans already displayed some version of this trait, whereas more distant precursors need not have done.

Analogy to other primate species may be a poor guide to inequality among earlier hominids and humans. The best proxy evidence we have is skeletal data on sexual size dimorphism, the extent to which mature members of one sex, in this case, males, are taller, heavier, and stronger than those of the other. Among gorillas, as among sea lions, intense inequality among males with and without harems as well as between males and females is associated with a high degree of male-biased size dimorphism. Judging from the fossil record, prehuman hominids, australopiths and paranthropi, reaching back more than 4 million years, appear to have been more dimorphic than humans. If the orthodox position, which has recently come under growing pressure, can be upheld, some of the earliest species, Australopithecus afarensis and anamensis, which emerged 3 to 4 million years ago, were defined by a male body mass advantage of more than 50 percent, whereas later species occupied an intermediate position between them and humans. With the advent of larger-brained Homo erectus more than 2 million years ago, sexual size dimorphism had already declined to the relatively modest amount we still observe today. Insofar as the degree of dimorphism was correlated with the prevalence of agonistic male-on-male competition for females or shaped by female sexual selection, reduced sex differences may be a sign of lesser reproductive variance among males. On this reading, evolution attenuated inequality both among males and between the sexes. Even so, higher rates of reproductive inequality for men than for women have persisted alongside moderate levels of reproductive polygyny.

Other developments that may have begun as long as 2 million years ago are also thought to have fostered greater equality. Changes in the brain and in physiology that promoted cooperative breeding and feeding would have countered aggression by dominants and would have softened hierarchies in larger groups. Innovations in the application of violence may have contributed to this process. Anything that helped subalterns resist dominants would have curtailed the powers of the latter and thus diminished overall inequality. Coalition-building among lower-status men was one means to this end, use of projectile weapons another. Fights at close quarters, whether with hands and teeth or with sticks and rocks, favored stronger and more aggressive men. Weapons began to play an equalizing role after they could be deployed over a greater distance.

Some 2 million years ago, anatomical changes in the shoulder made it possible for the first time to throw stones and other objects in an effective manner, a skill unavailable to earlier species and to nonhuman primates today. This adaptation not only improved hunting abilities but also made it easier for gammas to challenge alphas. The manufacturing of spears was the next step, and enhancements such as fire-hardened tips and, later, stone tips followed. Controlled use of fire dates back perhaps 800,000 years, and heat treatment technology is at least 160,000 years old. The appearance of darts or arrow tips made of stone, first attested about 70,000 years ago in South Africa, was merely the latest phase in a drawn-out process of projectile weapons development. No matter how primitive they may seem to modern observers, such tools privileged skill over size, strength, and aggressiveness and encouraged first strikes and ambushes as well as cooperation among weaker individuals. The evolution of cognitive skills was a vital complement necessary for more accurate throwing, improved weapons design, and more reliable coalition building. Full language capabilities, which would have facilitated more elaborate alliances and reinforced notions of morality, may date back as few as 100,000 or as many as 300,000 years. Much of the chronology of these social changes remains unclear: they may have been strung out over the better part of the last 2 million years or may have been more concentrated among anatomically modern humans, our own species of Homo sapiens, which arose in Africa at least 200,000 years ago.

What matters most in the present context is the cumulative outcome, the improved ability of lower-status individuals to confront alpha males in ways that are not feasible among nonhuman primates. When dominants became embedded in groups whose members were armed with projectiles and capable of balancing their influence by forming coalitions, overt dominance through brute force and intimidation was no longer a viable option. If this conjecture, for this is all it can be, is correct, then violence and, more specifically, novel strategies of organizing and threatening violent action, played an important and perhaps even critical role in the first great leveling in human history. By that time, human biological and social evolution had given rise to an egalitarian equilibrium. Groups were not yet large enough, productive capabilities not yet differentiated enough, and intergroup conflict and territoriality not yet developed enough to make submission to the few seem the least bad option for the many. Whereas animalian forms of domination and hierarchy had been eroded, they had not yet been replaced by new forms of inequality based on domestication, property, and war. That world has been largely but not completely lost. Defined by low levels of resource inequality and a strong egalitarian ethos, the few remaining foraging populations in the world today give us a sense, however limited, of what the dynamics of equality in the Middle and Upper Paleolithic may have looked like.

Powerful logistical and infrastructural constraints help contain inequality among hunter-gatherers. A nomadic lifestyle that does not feature pack animals severely limits the accumulation of material possessions, and the small size and fluid and flexible composition of foraging groups are not conducive to stable asymmetric relationships beyond basic power disparities of age and gender. Moreover, forager egalitarianism is predicated on the deliberate rejection of attempts to dominate. This attitude serves as a crucial check to the natural human propensity to form hierarchies: active equalization is employed to maintain a level playing field. Numerous means of enforcing egalitarian values have been documented by anthropologists, graduated by severity. Begging, scrounging, and stealing help ensure a more equal distribution of resources. Sanctions against authoritarian behavior and self-aggrandizement range from gossip, criticism, ridicule, and disobedience to ostracism and even physical violence, including homicide. Leadership consequently tends to be subtle, dispersed among multiple group members, and transient; the least assertive have the best chances to influence others. This distinctive moral economy has been called “reverse dominance hierarchy”: operative among adult men (who commonly dominate women and children), it represents the ongoing and preemptive neutralization of authority.

Among the Hadza, a group of a few hundred hunter-gatherers in Tanzania, camp members forage individually and strongly prefer their own households in distributing the acquired food. At the same time, food sharing beyond one’s own household is expected and common, especially when resources can readily be spotted by others. Hadza may try to conceal honey because it is easier to hide, but if found out, they are compelled to share. Scrounging is tolerated and widespread. Thus even though individuals clearly prefer to keep more for themselves and their immediate kin, norms interfere: sharing is common because the absence of domination makes sharing hard to resist. Large perishable items such as big game may even be shared beyond the camp group. Saving is not valued, to the extent that available resources tend to be consumed without delay and not even shared with people who happen to be absent at that moment. As a result, the Hadza have only minimal private possessions: jewelry, clothes, a digging stick, and sometimes a cooking pot for women and a bow and arrows, clothes and jewelry, and perhaps a few tools for men. Many of these goods are not particularly durable, and owners do not form strong attachments to them. Property beyond these basic items does not exist, and territory is not defended. The lack or dispersion of authority makes it hard to arrive at group decisions, let alone enforce them. In all these respects, the Hadza are quite representative of extant foraging groups more generally.

A foraging mode of subsistence and an egalitarian moral economy combine into a formidable obstacle to any form of development for the simple reason that economic growth requires some degree of inequality in income and consumption to encourage innovation and surplus production. Without growth, there was hardly any surplus to appropriate and pass on. The moral economy prevented growth, and the lack of growth prevented the production and concentration of surplus. This must not be taken to suggest that foragers practice some form of communism: consumption is not equalized, and individuals differ not just in terms of their somatic endowments but also with respect to their access to support networks and material resources. As I show in the next section, forager inequality is not nonexistent but merely very low compared to inequality in societies that rely on other modes of subsistence.

We also need to allow for the possibility that contemporary hunter-gatherers may differ in important ways from our pre-agrarian ancestors. Surviving forager groups are utterly marginalized and confined to areas that are beyond the reach of, or of little interest to, farmers and herders, environments that are well suited to a lifestyle that eschews the accumulation of material resources and firm claims to territory. Prior to the domestication of plants and animals for food production, foragers were much more widely spread out across the globe and had access to more abundant natural resources. In some cases, moreover, contemporary foraging groups may respond to a dominant world of more hierarchical farmers and pastoralists, defining themselves in contradistinction to outside norms. Remaining foragers are not timeless or “living fossils,” and their practices need to be understood within specific historical contexts.

For this reason, prehistoric populations need not always have been as egalitarian as the experience of contemporary hunter-gatherers might suggest. Observable material inequalities in burial contexts that date from before the onset of the Holocene, which began about 11,700 years ago, are rare but do exist. The most famous example of unearned status and inequality comes from Sungir, a Pleistocene site 120 miles north of Moscow whose remains date from about 30,000 to 34,000 years ago, a time corresponding to a relatively mild phase of the last Ice Age. It contains the remains of a group of hunters and foragers who killed and consumed large mammals such as bison, horse, reindeer, antelope, and especially mammoth alongside wolf, fox, brown bear, and cave lion.

Three human burials stand out. One features an adult man who was buried with some 3,000 beads made of mammoth ivory that had probably been sewn onto his fur clothing as well as around twenty pendants and twenty-five mammoth ivory rings. A separate grave was the final resting place of a girl of about ten years and a roughly twelve-year-old boy. Both children’s clothing was adorned with an even larger number of ivory beads, about 10,000 overall, and their grave goods included a wide range of prestige items such as spears made of straightened mammoth tusk and various art objects.

Massive effort must have been expended on these deposits: modern scholars have estimated that it would have taken anywhere from fifteen to forty-five minutes to carve a single bead, which translates to a total of 1.6 to 4.7 years of work for one person carving forty hours a week. A minimum of seventy-five arctic foxes needed to be caught to extract the 300 canines attached to a belt and headgear in the children’s grave, and considering the difficulty of extracting them intact, the actual number may well have been higher. Although a substantial spell of relative sedentism would have given the members of this group enough spare time to accomplish all this, the question remains why they would have wished to do so in the first place. These three persons do not appear to have been buried with everyday clothing and objects. That the beads for the children were smaller than those for the man implies that these beads had been manufactured specifically for the children, whether in life or, more likely, just for their burial. For reasons unknown to us, these individuals were considered special. Yet the two children were too young to have earned their privileged treatment: perhaps they owed it to family ties to someone who mattered more than others. The presence of possibly fatal injuries in both the man and the boy and of femoral shortening that would have disabled the girl in life merely adds to the mystery.
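The person-years estimate can be checked with simple arithmetic. The sketch below assumes a total of roughly 13,000 beads (the 3,000 from the man’s grave plus the 10,000 from the children’s), the 15-to-45-minute range per bead, and a 40-hour working week, all taken from the figures above; the 52-week year is my own assumption.

```python
# Rough check of the bead-carving estimate for the Sungir burials.
# Assumed inputs, from the text: ~3,000 + ~10,000 beads, 15-45 minutes
# per bead, a 40-hour week. The 52-week year is an added assumption.
TOTAL_BEADS = 3_000 + 10_000
HOURS_PER_YEAR = 40 * 52  # one carver, forty hours a week

def carving_years(minutes_per_bead: float) -> float:
    """Person-years needed to carve all beads at a given pace."""
    total_hours = TOTAL_BEADS * minutes_per_bead / 60
    return total_hours / HOURS_PER_YEAR

low, high = carving_years(15), carving_years(45)
print(f"{low:.1f} to {high:.1f} person-years")  # prints "1.6 to 4.7 person-years"
```

The result matches the 1.6-to-4.7-year range quoted in the text.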

Although the splendor of the Sungir burials has so far remained without parallel in the Paleolithic record, other rich graves have been found farther west. In Dolní Věstonice in Moravia, at roughly the same time, three individuals were buried with intricate headgear and resting on ocher-stained ground. Later examples are somewhat more numerous. The cave of Arene Candide on the Ligurian coast housed a deep pit grave for a lavishly adorned adolescent male put to rest on a bed of red ocher about 28,000 or 29,000 years ago. Hundreds of perforated shells and deer canines found around his head would originally have been attached to some organic headgear. Pendants made of mammoth ivory, four batons made of elk antlers, and an exceptionally long blade made of exotic flint that had been placed in his right hand added to the assemblage.

A young woman buried in Saint-Germain-la-Rivière some 16,000 years ago bore ornaments of shell and teeth: the latter, about seventy perforated red deer canines, must have been imported from 200 miles away. About 10,000 years ago, in the early Holocene but in a foraging context, a three-year-old child was laid to rest with 1,500 shell beads at the La Madeleine rock shelter in the Dordogne.

It is tempting to interpret these findings as the earliest harbingers of inequalities to come. Evidence of advanced and standardized craft production, time investment in highly repetitive tasks, and the use of raw materials sourced from far away offers us a glimpse of economic activities more advanced than those found among contemporary hunter-gatherers. It also hints at social disparities not normally associated with a foraging existence: lavish graves for children and adolescents point to ascribed and perhaps even inherited status. The existence of hierarchical relations is more difficult to infer from this material but is at least a plausible option.

But there is no sign of durable inequalities. Increases in complexity and status differentiation appear to have been temporary in nature. Egalitarianism need not be a stable category: social behavior could vary depending on changing circumstances or even recurring seasonal pressures. And although the earliest coastal adaptations, cradles of social evolution in which access to maritime food resources such as shellfish encouraged territoriality and more effective leadership, may reach back as far as 100,000 years, there is, at least as yet, no related evidence of emergent hierarchy and consumption disparities. For all we can tell, social or economic inequality in the Paleolithic remained sporadic and transient.

THE GREAT DISEQUALIZATION

Inequality took off only after the last Ice Age had come to an end and climatic conditions entered a period of unusual stability. The Holocene, the first interglacial warm period for more than 100,000 years, created an environment that was more favorable to economic and social development. As these improvements allowed humans to extract more energy and grow in numbers, they also laid the ground for an increasingly unequal distribution of power and material resources. This led to what I call the “Great Disequalization,” a transition to new modes of subsistence and new forms of social organization that eroded forager egalitarianism and replaced it with durable hierarchies and disparities in income and wealth.

For these developments to occur, there had to be productive assets that could be defended against encroachment and from which owners could draw a surplus in a predictable manner. Food production by means of farming and herding fulfills both requirements and came to be the principal driver of economic, social, and political change.

However, domestication of plants and animals was not an indispensable prerequisite. Under certain conditions, foragers were also able to exploit undomesticated natural resources in an analogous fashion. Territoriality, hierarchy, and inequality could arise where fishing was feasible or particularly productive only in certain locations. This phenomenon, which is known as maritime or riverine adaptation, is well documented in the ethnographic record. From about 500 CE, pressure on fish stocks as a result of population growth along the West Coast of North America from Alaska to California encouraged foraging populations to establish control over highly localized salmon streams. This was sometimes accompanied by a shift from mostly uniform dwellings to stratified societies that featured large houses for chiefly families, clients, and slaves.

Detailed case studies have drawn attention to the close connection between resource scarcity and the emergence of inequality. From about 400 to 900 CE, the site of Keatley Creek in British Columbia housed a community of a few hundred members near the Fraser River that capitalized on the local salmon runs. Judging from the archaeological remains, salmon consumption declined around 800, and mammalian meat took its place. At this time, signs of inequality appear in the record. A large share of the fish bone recovered from the pits of the largest houses comes from mature Chinook and sockeye salmon, a prize catch rich in fat and calories. Prestige items such as rare types of stone are found there. Two of the smallest houses, by contrast, contained bones of only younger and less nutritious fish. As in many other societies at this level of complexity, inequality was both celebrated and mitigated by ceremonial redistribution: roasting pits that were large enough to prepare food for sizable crowds suggest that the rich and powerful organized feasts for the community. A thousand years later, potlatch rituals in which leaders competed among themselves through displays of generosity were a common feature across the Pacific Northwest. Similar changes took place at the Bridge River site in the same area: from about 800, as the owners of large buildings began to accumulate prestige goods and abandoned communal food preparation outdoors, poorer residents attached themselves to these households, and inequality became institutionalized.

On other occasions, it was technological progress that precipitated disequalizing social and economic change. For thousands of years, the Chumash on the Californian coast, in what is now Santa Barbara and Ventura counties, had lived as egalitarian foragers who used simple boats and gathered acorns. Around 500 to 700, the introduction of large oceangoing plank canoes that could carry a dozen men and venture more than sixty miles out to sea allowed the Chumash to catch larger fish and to establish themselves as middlemen in the shell trade along the coast. They sold flint obtained from the Channel Islands to inland groups in exchange for acorns, nuts, and edible grasses. This generated a hierarchical order in which polygamous chiefs controlled canoes and access to territory, led their men in war, and presided over ritual ceremonies. In return, they received payments of food and shells from their followers.

In such environments, foraging societies could attain relatively high levels of complexity. As reliance on concentrated local resources grew, mobility declined, and occupational specialization, strictly defined ownership of assets, perimeter defense, and intense competition between neighboring groups that commonly involved the enslavement of captives fostered hierarchy and inequality.

Among foragers, adaptations of this kind were possible only in specific ecological niches and did not normally spread beyond them. Only the domestication of food resources had the potential to transform economic activity and social relations on a global scale: in its absence, stark inequalities might have remained confined to small pockets along coasts and rivers, surrounded by a whole world of more egalitarian foragers. But this was not to be.

A variety of edible plants began to be domesticated on different continents, first in Southwest Asia about 11,500 years ago, then in China and South America 10,000 years ago, in Mexico 9,000 years ago, in New Guinea more than 7,000 years ago, and in South Asia, Africa, and North America some 5,000 years ago. The domestication of animals, when it did occur, sometimes preceded and sometimes followed these innovations. The shift from foraging to farming could be a drawn-out process that did not always follow a linear trajectory.

This was especially true of the Natufian culture and its Pre-Pottery Neolithic successors in the Levant, the first to witness this transition. From about 14,500 years ago, warmer and wetter weather allowed regional forager groups to grow in size and to operate from more permanent settlements, hunting abundant game and collecting wild cereal grains in sufficient quantities to require at least small storage facilities. The material evidence is very limited but shows signs of what leading experts have called an “incipient social hierarchy.” Archaeologists have discovered one larger building that might have served communal uses and a few special basalt mortars that would have taken great effort to manufacture. According to one count, about 8 percent of the recovered skeletons from the Early Natufian period, about 14,500 to 12,800 years ago, wore seashells, sometimes brought in from hundreds of miles away, and decorations made of bone or teeth. At one site, three males were buried with shell headdresses, one of them fringed with shells four rows deep. Only a few graves contained stone tools and figurines. The presence of large roasting pits and hearths may point to redistributive feasts of the type held much later in the American Northwest.

Yet whatever degree of social stratification and inequality had developed under these benign environmental conditions faded during a cold phase from about 12,800 to 11,700 years ago known as the Younger Dryas, when the remaining foragers returned to a more mobile lifestyle as local resources dwindled or became less predictable. The return to climatic stability around 11,700 years ago coincided with the earliest evidence for the cultivation of wild cereals such as einkorn and emmer wheat and barley. During what is known as the early Pre-Pottery Neolithic (about 11,500 to 10,500 years ago), settlements expanded and food eventually came to be stored in individual households, a practice that points to changing concepts of ownership. That some exotic materials such as obsidian appeared for the first time may reflect a desire to express and shore up elevated status.

The later Pre-Pottery Neolithic (about 10,500 to 8,300 years ago) has yielded more specific information. About 9,000 years ago, the village of Çayönü in southeastern Turkey comprised different zones whose buildings and finds differed in size and quality. Larger and better-built structures feature unusual and exotic artifacts and tend to be located in close proximity to a plaza and a temple. Whereas only a small share of graves include obsidian, beads, or tools, three of the four richest in-house burials in Çayönü took place in houses next to the plaza.

All of this may be regarded as markers of elite standing. There can be no doubt that most of the inequality we observe in the following millennia was made possible by farming. But other paths existed. I have already mentioned aquatic adaptations that allowed substantial political and economic disparities to arise in the absence of food domestication. In other cases, the introduction of the domesticated horse as a conveyance could have disequalizing effects even in the absence of food production. In the eighteenth and nineteenth centuries, the Comanche in the borderlands of the American Southwest formed a warrior culture that relied on horses of European origin to conduct warfare and raids over long distances. Buffalo and other wild mammals were their principal food source, complemented by gathered wild plants and maize obtained via trade or plunder. These arrangements supported high levels of inequality: captive boys were employed to tend to the horses of the rich, and the number of horses owned divided Comanche households rather sharply into the “rich” (tsaanaakatu), the “poor” (tahkapu), and the “very poor” (tubitsi tahkapu).

More generally, foraging, horticultural, and agricultural societies were not always systematically associated with different levels of inequality: some foraging groups could be more unequal than some farming communities. A survey of 258 Native American societies in North America suggests that the size of the surplus, not domestication as such, was the key determinant of levels of material inequality: whereas two-thirds of societies that had no or hardly any surplus did not manifest resource inequality, four in five of those that generated moderate or large surpluses did. This correlation is much stronger than between different modes of subsistence on the one hand and inequality on the other.

A collaborative study of twenty-one small-scale societies at different levels of development (hunter-gatherers, horticulturalists, herders, and farmers) and in different parts of the world identifies two crucial determinants of inequality: ownership rights in land and livestock and the ability to transmit wealth from one generation to the next.

Researchers looked at three different types of wealth: embodied (mostly body strength and reproductive success), relational (exemplified by partners in labor), and material (household goods, land, and livestock). In their sample, embodied endowments were the most important wealth category among foragers and horticulturalists, and material wealth was the least important one, whereas the opposite was true of herders and farmers. The relative weight of different wealth classes is an important factor mediating the overall degree of inequality. Physical constraints on embodied wealth are relatively stringent, especially for body size and somewhat less so for strength, hunting returns, and reproductive success. Relational wealth, though more flexible, was also more unevenly distributed among farmers and pastoralists, and measures of inequality in land and livestock in these two groups reached higher levels than those for utensils or boat shares among foragers and horticulturalists. The combination of diverse inequality constraints that apply to different types of wealth and the relative significance of particular types of wealth accounts for observed differences by mode of subsistence. Average composite wealth Gini coefficients were as low as 0.25 to 0.27 for hunter-gatherers and horticulturalists but were much higher for herders (0.42) and agriculturalists (0.48). For material wealth alone, the main divide appears to lie between foragers (0.36) and all others (0.51 to 0.57).
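The Gini coefficient used in these comparisons measures how far a distribution departs from perfect equality, running from 0 (everyone holds the same) toward 1 (one party holds everything). A minimal sketch of the computation follows; the household wealth figures in it are invented for illustration, not data from the twenty-one-society study.

```python
# Illustrative Gini coefficient computation. The wealth figures below
# are hypothetical examples, not data from the study cited in the text.
def gini(values):
    """Gini coefficient via the sorted-values closed form:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with 1-based ranks."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

equal = [10, 10, 10, 10]   # perfectly equal holdings
skewed = [1, 2, 3, 94]     # one household owns almost everything
print(gini(equal))   # 0.0
print(gini(skewed))  # roughly 0.7, close to the agriculturalist figure
```

On this scale, the forager-to-farmer gap reported above (composite Ginis of about 0.25 versus 0.48) is a move from mild to pronounced concentration.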

Transmissibility of wealth is another crucial variable. The degree of intergenerational wealth transmission was about twice as high for farmers and herders as for the others, and the material possessions available to them were much more suitable for transmission than were the assets of foragers and horticulturalists. These systematic differences exercise a strong influence on the inequality of life chances, measured in terms of the likelihood that a child of parents in the top composite wealth decile ends up in the same decile compared to that of a child of parents in the poorest decile. Defined in this way, intergenerational mobility was generally moderate: even among foragers and horticulturalists, offspring of the top decile were at least three times as likely to reproduce this standing as those of the bottom decile were to ascend to it.

For farmers, however, the odds were much better (about eleven times), and they were better still for herders (about twenty times). These discrepancies can be attributed to two factors. About half of this effect is explained by technology, which determines the relative importance and characteristics of different wealth types. Institutions governing the mode of wealth transmission account for the other half, as agrarian and pastoralist norms favor vertical transmission to kin.

According to this analysis, inequality and its persistence over time have been the result of a combination of three factors: the relative importance of different classes of assets, how suitable they are for passing on to others, and actual rates of transmission. Thus groups in which material wealth plays a minor role, does not readily lend itself to transmission, and is discouraged from being inherited are bound to experience lower levels of overall inequality than groups in which material wealth is the dominant asset class, is highly transmissible, and is permitted to be left to the next generation.

In the long run, transmissibility is critical: if wealth is passed on between generations, random shocks related to health, parity, and returns on capital and labor that create inequality will be preserved and accumulate over time instead of allowing distributional outcomes to regress to the mean.
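This accumulation logic can be illustrated with a toy simulation, entirely my own construction: every generation each household receives a random multiplicative shock to its wealth, and we compare a world where wealth is fully inherited with one where each generation starts from scratch.

```python
import random
import statistics

# Toy illustration of the transmissibility argument (hypothetical numbers,
# not the study's data): with inheritance, random shocks compound across
# generations and dispersion grows; without it, dispersion stays flat.

def dispersion(values):
    """Coefficient of variation: spread of wealth relative to the mean."""
    return statistics.pstdev(values) / statistics.mean(values)

def simulate(inherit, households=200, generations=25, seed=1):
    rng = random.Random(seed)
    wealth = [1.0] * households
    for _ in range(generations):
        shocks = [rng.uniform(0.7, 1.3) for _ in range(households)]
        if inherit:
            # Each generation builds on what it inherited: shocks compound.
            wealth = [w * s for w, s in zip(wealth, shocks)]
        else:
            # No transmission: every generation starts over from its own shock.
            wealth = shocks
    return dispersion(wealth)

print(f"no inheritance:   {simulate(False):.2f}")  # stays near one shock's spread
print(f"full inheritance: {simulate(True):.2f}")   # far larger after 25 generations
```

The inherited-wealth run ends up several times more dispersed than the no-inheritance run, which is the mechanism the paragraph describes: transmission preserves and stacks lucky and unlucky draws instead of letting them wash out.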

In keeping with the observations made in the aforementioned survey of Native American societies, the empirical findings derived from this sample of twenty-one small-scale societies likewise suggest that domestication is not a sufficient precondition for significant disequalization. Reliance on defensible natural resources appears to be a more critical factor, because these can generally be bequeathed to the next generation. The same is true of investments such as plowing, terracing, and irrigation. The heritability of such productive assets and their improvements fosters inequality in two ways: by enabling it to increase over time and by reducing intergenerational variance and mobility.

A much broader survey of more than a thousand societies at different levels of development confirms the central role of transmission. According to this global data set, about a third of simple forager societies have inheritance rules for movable property, but only one in twelve recognizes the transmission of real estate. By contrast, almost all societies that practice intensive forms of agriculture are equipped with rules that cover both. Complex foragers and horticulturalists occupy an intermediate position.

Inheritance presupposes the existence of property rights. We can only conjecture the circumstances of their creation: Samuel Bowles has argued that farming favored rights in property that were impractical or unfeasible for foragers because farm resources such as crops, buildings, and animals could easily be delimited and defended, prerequisites not shared by the dispersed natural resources on which foragers tend to rely. Exceptions such as aquatic adaptations and horse cultures are fully consistent with this explanation.

. . .

from

The Great Leveler. Violence and the History of Inequality from the Stone Age to the Twenty-First Century

by Walter Scheidel

get it at Amazon.com

BASIC INCOME AND DEPRESSION. Restoring the Future – Johann Hari.

Giving people back time, and a sense of confidence in the future.

The point of a welfare state is to establish a safety net below which nobody should ever be allowed to fall. The poorer you are, the more likely you are to become depressed or anxious, and the more likely you are to become sick in almost every way.

There is a direct relationship between poverty and the number of mood-altering drugs that people take, the antidepressants they take just to get through the day. If we want to really treat these problems, we need to deal with poverty.

Instead of using a net to catch people when they fall, Basic Income raises the floor on which everyone stands.

The world has changed fundamentally. We won’t regain security by going backward, especially as robots and technology render more and more jobs obsolete, but we can go forward, to a basic income for everyone.

There was one more obstacle hanging over my attempts to overcome depression and anxiety, and it seemed larger than anything I had addressed up to now. If you’re going to try to reconnect in the ways I’ve been describing, if you’re going to (say) develop a community, democratize your workplace, or set up groups to explore your intrinsic values, you will need time, and you need confidence.

But we are being constantly drained of both. Most people are working all the time, and they are insecure about the future. They are exhausted, and they feel as if the pressure is being ratcheted up every year. It’s hard to join a big struggle when it feels like a struggle to make it to the end of the day. Asking people to take on more, when they’re already run down, seems almost like a taunt.

But as I researched this book, I learned about an experiment that is designed to give people back time, and a sense of confidence in the future.

In the middle of the 1970s, a group of Canadian government officials chose, apparently at random, a small town called Dauphin in the rural province of Manitoba. It was, they knew, nothing special to look at. The nearest city, Winnipeg, was a four-hour drive away. It lay in the middle of the prairies, and most of the people living there were farmers growing a crop called canola. Its seventeen thousand people worked as hard as they could, but they were still struggling. When the canola crop was good, everyone did well, the local car dealership sold cars, and the bar sold booze. When the canola crop was bad, everyone suffered.

And then one day the people of Dauphin were told they had been chosen to be part of an experiment, due to a bold decision by the country’s Liberal government. For a long time, Canadians had been wondering if the welfare state they had been developing, in fits and starts over the years, was too clunky and inefficient and didn’t cover enough people. The point of a welfare state is to establish a safety net below which nobody should ever be allowed to fall: a baseline of security that would prevent people from becoming poor and prevent anxiety. But it turned out there was still a lot of poverty, and a lot of insecurity, in Canada. Something wasn’t working.

So somebody had what seemed like an almost stupidly simple idea. Up to now, the welfare state had worked by trying to plug gaps, by catching the people who fell below a certain level and nudging them back up. But if insecurity is about not having enough money to live on, they wondered, what would happen if we just gave everyone enough, with no strings attached? What if we simply mailed every single Canadian citizen, young, old, all of them, a check every year that was enough for them to live on? It would be set at a carefully chosen rate. You’d get enough to survive, but not enough to have luxuries. They called it a universal basic income. Instead of using a net to catch people when they fall, they proposed to raise the floor on which everyone stands.

This idea had even been mooted by right-wing politicians like Richard Nixon, but it had never been tried before. So the Canadians decided to do it, in one place. That’s how for several years, the people of Dauphin were given a guarantee: Each of you will be unconditionally given the equivalent of US$19,000 (in today’s money) by the government. You don’t have to worry. There’s nothing you can do that will take away this basic income. It’s yours by right. And then they stood back to see what would happen.

At that time, over in Toronto, there was a young economics student named Evelyn Forget, and one day, one of her professors told the class about this experiment. She was fascinated. But then, three years into the experiment, power in Canada was transferred to a Conservative government, and the program was abruptly shut down. The guaranteed income vanished. To everyone except the people who got the checks, and one other person, it was quickly forgotten.

Thirty years later, that young economics student, Evelyn, had become a professor at the medical school of the University of Manitoba, and she kept bumping up against some disturbing evidence. It is a well-established fact that the poorer you are, the more likely you are to become depressed or anxious, and the more likely you are to become sick in almost every way. In the United States, if you have an income below $20,000, you are more than twice as likely to become depressed as somebody who makes $70,000 or more. And if you receive a regular income from property you own, you are ten times less likely to develop an anxiety disorder than if you don’t get any income from property. “One of the things I find just astonishing,” she told me, “is the direct relationship between poverty and the number of mood-altering drugs that people take, the antidepressants they take just to get through the day.” If you want to really treat these problems, Evelyn believed, you need to deal with these questions.

And so Evelyn found herself wondering about that old experiment that had taken place decades earlier. What were the results? Did the people who were given that guaranteed income get healthier? What else might have changed in their lives? She began to search for academic studies written back then. She found nothing. So she began to write letters and make calls. She knew that the experiment was being studied carefully at the time, that mountains of data were gathered. That was the whole point: it was a study. Where did it go?

After a lot of detective work, stretching over five years, she finally got an answer. She was told that the data gathered during the experiment was hidden away in the National Archives, on the verge of being thrown in the trash. “I got there, and found most of it in paper. It was actually sitting in boxes,” she told me. “There were eighteen hundred cubic feet. That’s eighteen hundred bankers’ boxes, full of paper.” Nobody had ever added up the results. When the Conservatives came to power, they didn’t want anyone to look further; they believed the experiment was a waste of time and contrary to their moral values.

So Evelyn and a team of researchers began the long task of figuring out what the basic income experiment had actually achieved, all those years before. At the same time, they started to track down the people who had lived through it, to discover the long-term effects.

The first thing that struck Evelyn, as she spoke to the people who’d been through the program, was how vividly they remembered it. Everyone had a story about how it had affected their lives. They told her that, primarily, “the money acted as an insurance policy. It just sort of removed the stress of worrying about whether or not you can afford to keep your kids in school for another year, whether or not you could afford to pay for the things that you had to pay for.”

This had been a conservative farming community, and one of the biggest changes was how women saw themselves. Evelyn met with one woman who had taken her check and used it to become the first female in her family to get a postsecondary education. She trained to be a librarian and rose to be one of the most respected people in the community. She showed Evelyn pictures of her two daughters graduating, and she talked about how proud she was she had been able to become a role model for them.

Other people talked about how it lifted their heads above constant insecurity for the first time in their lives. One woman had a disabled husband and six kids, and she made a living by cutting people’s hair in her front room. She explained that the universal income meant for the first time the family had “some cream in the coffee”: small things that made life a little better.

These were moving stories, but the hard facts lay in the number crunching. After years of compiling the data, here are some of the key effects Evelyn discovered:

  • Students stayed at school longer, and performed better there.
  • The number of low-birth-weight babies declined, as more women delayed having children until they were ready.
  • Parents with newborn babies stayed at home longer to care for them, and didn’t hurry back to work.
  • Overall work hours fell modestly, as people spent more time with their kids, or learning.

But there was one result that struck me as particularly important.

Evelyn went through the medical records of the people taking part, and she found that, as she explained to me, there were “fewer people showing up at their doctor’s office complaining about mood disorders.” Depression and anxiety in the community fell significantly. When it came to severe depression and other mental health disorders that were so bad the patient had to be hospitalized, there was a drop of 9 percent in just three years.

Why was that? “It just removed the stress, or reduced the stress, that people dealt with in their everyday lives,” Evelyn concludes. You knew you’d have a secure income next month, and next year, so you could create a picture of yourself in the future that was stable.

It had another unanticipated effect, she told me. If you know you have enough money to live on securely, no matter what happens, you can turn down a job that treats you badly, or that you find humiliating. “It makes you less of a hostage to the job you have, and some of the jobs that people work just in order to survive are terrible, demeaning jobs,” she says. It gave you “that little bit of power to say, I don’t need to stay here.” That meant that employers had to make work more appealing. And over time, it was poised to reduce inequality in the town, which we would expect to reduce the depression caused by extreme status differences.

For Evelyn, all this tells us something fundamental about the nature of depression. “If it were just a brain disorder,” she told me, “if it was just a physical ailment, you wouldn’t expect to see such a strong correlation with poverty,” and you wouldn’t see such a significant reduction from granting a guaranteed basic income. “Certainly,” she said, “it makes the lives of individuals who receive it more comfortable, which works as an antidepressant.”

As Evelyn looks out over the world today, and how it has changed from the Dauphin of the mid-1970s, she thinks the need for a program like this, across all societies, has only grown. Back then, “people still expected to graduate from high school and to go get a job and work at the same company [or] at least in the same industry until they’d be sixty-five, and then they’d be retired with a nice gold watch and a nice pension.” But “people are struggling to find that kind of stability in labor today, I don’t think those days are ever coming back. We live in a globalized world. The world has changed, fundamentally.” We won’t regain security by going backward, especially as robots and technology render more and more jobs obsolete, but we can go forward, to a basic income for everyone. As Barack Obama suggested in an interview late in his presidency, a universal income may be the best tool we have for recreating security, not with bogus promises to rebuild a lost world, but by doing something distinctively new.

Buried in those dusty boxes of data in the Canadian national archives, Evelyn might have found one of the most important antidepressants for the twenty-first century.

I wanted to understand the implications of this more, and to explore my own concerns and questions about it, so I went to see a brilliant Dutch economic historian named Rutger Bregman. He is the leading European champion of the idea of a universal basic income. We ate burgers and inhaled caffeinated drinks and ended up talking late into the night, discussing the implications of all this. “Time and again,” he said, “we blame a collective problem on the individual. So you’re depressed? You should get a pill. You don’t have a job? Go to a job coach, we’ll teach you how to write a résumé or [to join] LinkedIn. But obviously, that doesn’t go to the root of the problem. Not many people are thinking about what’s actually happened to our labor market, and our society, that these [forms of despair] are popping up everywhere.”

Even middle-class people are living with a chronic “lack of certainty” about what their lives will be like in even a few months’ time, he says. The alternative approach, a guaranteed income, is partly about removing this humiliation and replacing it with security. It has now been tried in many places on a small scale, as he documents in his book Utopia for Realists. There’s always a pattern, he shows. When it’s first proposed, people say, what, just give out money? That will destroy the work ethic. People will just spend it on alcohol and drugs and watching TV. And then the results come in.

For example, in the Great Smoky Mountains, there’s a Native American tribal group of eight thousand people who decided to open a casino. But they did it a little differently. They decided they were going to split the profits equally among everyone in the group: everyone would get a check for (as it turned out) $6,000 a year, rising to $9,000 later. It was, in effect, a universal basic income for everyone. Outsiders told them they were crazy. But when the program was studied in detail by social scientists, it turned out that this guaranteed income triggered one big change. Parents chose to spend a lot more time with their children, and because they were less stressed, they were more able to be present with their kids. The result? Behavioral problems like ADHD and childhood depression fell by 40 percent. I couldn’t find any other instance of a reduction in psychiatric problems in children by that amount in a comparable period of time. They did it by freeing up the space for parents to connect with their kids.

All over the world, from Brazil to India, these experiments keep finding the same result. Rutger told me: “When I ask people, ‘What would you personally do with a basic income?’ about 99 percent of people say, ‘I have dreams, I have ambitions, I’m going to do something ambitious and useful.’” But when he asks them what they think other people would do with a basic income, they say, oh, they’ll become lifeless zombies, they’ll binge-watch Netflix all day.

This program does trigger a big change, he says, but not the one most people imagine. The biggest change, Rutger believes, will be in how people think about work. When Rutger asks people what they actually do at work, and whether they think it is worthwhile, he is amazed by how many people readily volunteer that the work they do is pointless and adds nothing to the world. The key to a guaranteed income, Rutger says, is that it empowers people to say no. For the first time, they will be able to leave jobs that are degrading, or humiliating, or excruciating. Obviously, some boring things will still have to be done. That means those employers will have to offer either better wages, or better working conditions. In one swoop, the worst jobs, the ones that cause the most depression and anxiety, will have to radically improve, to attract workers.

People will be free to create businesses based on things they believe in, to run projects to improve their community, to look after their kids and their elderly relatives. Those are all real work, but much of the time, the market doesn’t reward this kind of work. When people are free to say no, Rutger says, “I think the definition of work would become: to add something of value, to make the world a little more interesting, or a bit more beautiful.”

This is, we have to be candid, an expensive proposal: a real guaranteed income would take a big slice of the national wealth of any developed country. At the moment, it’s a distant goal. But every civilizing proposal started off as a utopian dream, from the welfare state, to women’s rights, to gay equality. President Obama said it could happen in the next twenty years. If we start to argue and campaign for it now, as an antidepressant, as a way of dealing with the pervasive stress that is dragging so many of us down, it will, over time, also help us to see one of the factors that are causing all this despair in the first place. It’s a way, Rutger explained to me, of restoring a secure future to people who are losing the ability to see one for themselves, a way of restoring to all of us the breathing space to change our lives, and our culture.

I was conscious, as I thought back over these seven provisional hints at solutions to our depression and anxiety, that they require huge changes, in ourselves, and in our societies. When I felt that way, a niggling voice would come into my head. It said, nothing will ever change. The forms of social change you’re arguing for are just a fantasy. We’re stuck here. Have you watched the news? You think positive changes are a-coming?

When these thoughts came to me, I always thought of one of my closest friends.

In 1993, the journalist Andrew Sullivan was diagnosed as HIV-positive. It was the height of the AIDS crisis. Gay men were dying all over the world. There was no treatment in sight. Andrew’s first thought was: I deserve this. I brought it on myself. He had been raised in a Catholic family in a homophobic culture in which, as a child, he thought he was the only gay person in the whole world, because he never saw anyone like him on TV, or on the streets, or in books. He lived in a world where if you were lucky, being gay was a punchline, and if you were unlucky, it got you a punch in the face.

So now he thought, ‘I had it coming. This fatal disease is the punishment I deserve.’

For Andrew, being told he was going to die of AIDS made him think of an image. He had once gone to see a movie and something had gone wrong with the projector: the picture displayed at a weird, unwatchable angle. It stayed like that for a few minutes. His life now, he realized, was like sitting in that cinema, except this picture would never be right again.

Not long after, he left his job as editor of one of the leading magazines in the United States, the New Republic. His closest friend, Patrick, was dying of AIDS, the fate Andrew was now sure awaited him.

So Andrew went to Provincetown, the gay enclave at the tip of Cape Cod in Massachusetts, to die. That summer, in a small house near the beach, he began to write a book. He knew it would be the last thing he ever did, so he decided to write something advocating a crazy, preposterous idea, one so outlandish that nobody had ever written a book about it before. He was going to propose that gay people should be allowed to get married, just like straight people. He thought this would be the only way to free gay people from the self-hatred and shame that had trapped Andrew himself. It’s too late for me, he thought, but maybe it will help the people who come after me.

The book, Virtually Normal, came out a year later. Patrick died when it had been in the bookstores for only a few days, and Andrew was widely ridiculed for suggesting something as absurd as gay marriage. Andrew was attacked not just by right-wingers, but by many gay left-wingers, who said he was a sellout, a wannabe heterosexual, a freak, for believing in marriage. A group called the Lesbian Avengers turned up to protest at his events with his face in the crosshairs of a gun. Andrew looked out at the crowd and despaired. This mad idea, his last gesture before dying, was clearly going to come to nothing.

When I hear people saying that the changes we need to make in order to deal with depression and anxiety can’t happen, I imagine going back in time, to the summer of 1993, to that beach house in Provincetown, and telling Andrew something:

Okay, Andrew, you’re not going to believe me, but this is what’s going to happen next. Twenty-five years from now, you’ll be alive. I know, it’s amazing, but wait, that’s not the best part. This book you’ve written, it’s going to spark a movement. And this book, it’s going to be quoted in a key Supreme Court ruling declaring marriage equality for gay people. And I’m going to be with you and your future husband the day after you receive a letter from the president of the United States telling you that this fight for gay marriage that you started has succeeded in part because of you. He’s going to light up the White House like the rainbow flag that day. He’s going to invite you to have dinner there, to thank you for what you’ve done. Oh, and by the way, that president? He’s going to be black.

It would have seemed like science fiction. But it happened. It’s not a small thing to overturn two thousand years of gay people being jailed and scorned and beaten and burned. It happened for one reason only. Because enough brave people banded together and demanded it.

Every single person reading this is the beneficiary of big civilizing social changes that seemed impossible when somebody first proposed them. Are you a woman? My grandmothers weren’t even allowed to have their own bank accounts until they were in their forties, by law. Are you a worker? The weekend was mocked as a utopian idea when labor unions first began to fight for it. Are you black, or Asian, or disabled? You don’t need me to fill in this list.

So I told myself: if you hear a thought in your head telling you that we can’t deal with the social causes of depression and anxiety, you should stop and realize that’s a symptom of the depression and anxiety itself.

Yes, the changes we need now are huge. They’re about the size of the revolution in how gay people were treated. But that revolution happened.

There’s a huge fight ahead of us to really deal with these problems. But that’s because it’s a huge crisis. We can deny that, but then we’ll stay trapped in the problem. Andrew taught me: The response to a huge crisis isn’t to go home and weep. It’s to go big. It’s to demand something that seems impossible, and not rest until you’ve achieved it.

Every now and then, Rutger, the leading European campaigner for a universal basic income, will read a news story about somebody who has made a radical career choice. A fifty-year-old man realizes he’s unfulfilled as a manager so he quits, and becomes an opera singer. A forty-five-year-old woman quits Goldman Sachs and goes to work for a charity. “It is always framed as something heroic,” Rutger told me, as we drank our tenth Diet Coke between us. People ask them, in awe: “Are you really going to do what you want to do?” Are you really going to change your life, so you are doing something that fulfills you?

It’s a sign, Rutger says, of how badly off track we’ve gone, that having fulfilling work is seen as a freakish exception, like winning the lottery, instead of how we should all be living. Giving everyone a guaranteed basic income, he says, “is actually all about making it so we tell everyone, ‘Of course you’re going to do what you want to do. You’re a human being. You only live once. What would you want to do instead, something you don’t want to do?’”

. . .

from

Lost Connections. Uncovering the Real Causes of Depression and the Unexpected Solutions

by Johann Hari

get it at Amazon.com

The Spirit Level. Why equality is better for everyone – Richard Wilkinson and Kate Pickett.

“For the first time in history, the poor are on average fatter than the rich.”
How is it that we have created so much mental and emotional suffering despite levels of wealth and comfort unprecedented in human history? The luxury and extravagance of our lives is so great that it threatens the planet.

At the pinnacle of human material and technical achievement, we find ourselves anxiety-ridden, prone to depression, worried about how others see us, unsure of our friendships, driven to consume and with little or no community life. Our societies are, despite their material success, increasingly burdened by their social failings.

If we are to gain further improvements in the real quality of life, we need to shift attention from material standards and economic growth to ways of improving the psychological and social wellbeing of whole societies. It is possible to improve the quality of life for everyone. We shall set out the evidence and our reasons for interpreting it the way we do, so that you can judge for yourself.

Social theories are partly theories about ourselves; indeed, they might almost be regarded as part of the self-awareness or self-consciousness of societies. The knowledge that we cannot carry on as we have, that change is necessary, is perhaps grounds for optimism: maybe we do, at last, have the chance to make a better world.

The truth is that both our broken society and broken economy resulted from the growth of inequality. The problems in rich countries are not caused by the society not being rich enough (or even by being too rich) but by the scale of material differences between people within each society being too big. What matters is where we stand in relation to others in our own society.

Why do we mistrust people more in the UK than in Japan? Why do Americans have higher rates of teenage pregnancy than the French? What makes the Swedes thinner than the Greeks? The answer: inequality.

This groundbreaking book, based on years of research, provides hard evidence to show:

  • How almost everything, from life expectancy to depression levels, from violence to illiteracy, is affected not by how wealthy a society is, but by how equal it is.
  • That societies with a bigger gap between rich and poor are bad for everyone in them, including the well-off.
  • How we can find positive solutions and move towards a happier, fairer future.

Urgent, provocative and genuinely uplifting, The Spirit Level has been heralded as providing a new way of thinking about ourselves and our communities, and could change the way you see the world.

Richard Wilkinson has played a formative role in international research on the social determinants of health. He studied economic history at the London School of Economics before training in epidemiology and is Professor Emeritus at the University of Nottingham Medical School, Honorary Professor at University College London and Visiting Professor at the University of York.

Kate Pickett is Professor of Epidemiology at the University of York and a National Institute for Health Research Career Scientist. She studied physical anthropology at Cambridge, nutritional sciences at Cornell and epidemiology at the University of California, Berkeley.

People usually exaggerate the importance of their own work and we worry about claiming too much. But this book is not just another set of nostrums and prejudices about how to put the world to rights. The work we describe here comes out of a very long period of research (over fifty person-years between us) devoted, initially, to trying to understand the causes of the big differences in life expectancy, the ‘health inequalities’ between people at different levels in the social hierarchy in modern societies. The focal problem initially was to understand why health gets worse at every step down the social ladder, so that the poor are less healthy than those in the middle, who in turn are less healthy than those further up.

Like others who work on the social determinants of health, our training in epidemiology means that our methods are those used to trace the causes of diseases in populations, trying to find out why one group of people gets a particular disease while another group doesn’t, or to explain why some disease is becoming more common. The same methods can, however, also be used to understand the causes of other kinds of problems, not just health.

Epidemiology is the study and analysis of the distribution (who, when, and where) and determinants of health and disease conditions in defined populations.

Just as the term ‘evidence-based medicine’ is used to describe current efforts to ensure that medical treatment is based on the best scientific evidence of what works and what does not, we thought of calling this book ‘Evidence-based Politics’. The research which underpins what we describe comes from a great many research teams in different universities and research organizations. Replicable methods have been used to study observable and objective outcomes, and peer-reviewed research reports have been published in academic, scientific journals.

This does not mean that there is no guesswork. Results always have to be interpreted, but there are usually good reasons for favouring one interpretation over another. Initial theories and expectations are often called into question by later research findings which make it necessary to think again. We would like to take you on the journey we have travelled, signposted by crucial bits of evidence and leaving out only the various culs-de-sac and wrong turnings that wasted so much time, to arrive at a better understanding of how we believe it is possible to improve the quality of life for everyone in modern societies. We shall set out the evidence and our reasons for interpreting it the way we do, so that you can judge for yourself.

At an intuitive level people have always recognized that inequality is socially corrosive. But there seemed little reason to think that levels of inequality in developed societies differed enough to expect any measurable effects. The reasons which first led one of us to look for effects seem now largely irrelevant to the striking picture which has emerged. Many discoveries owe as much to luck as judgement.

The reason why the picture we present has not been put together until now is probably that much of the data has only become available in recent years. With internationally comparable information not only on incomes and income distribution but also on different health and social problems, it could only have been a matter of time before someone came up with findings like ours. The emerging data have allowed us, and other researchers, to analyse how societies differ, to discover how one factor is related to another, and to test theories more rigorously.

It is easy to imagine that discoveries are more rapidly accepted in the natural than in the social sciences, as if physical theories are somehow less controversial than theories about the social world. But the history of the natural sciences is littered with painful personal disputes, which started off as theoretical disagreements but often lasted for the rest of people’s lives. Controversies in the natural sciences are usually confined to the experts: most people do not have strong views on rival theories in particle physics. But they do have views on how society works. Social theories are partly theories about ourselves; indeed, they might almost be regarded as part of the self-awareness or self-consciousness of societies. While natural scientists do not have to convince individual cells or atoms to accept their theories, social theorists are up against a plethora of individual views and powerful vested interests.

In 1847, Ignaz Semmelweis discovered that if doctors washed their hands before attending women in childbirth it dramatically reduced deaths from puerperal fever. But before his work could have much benefit he had to persuade people, principally his medical colleagues, to change their behaviour. His real battle was not his initial discovery but what followed from it. His views were ridiculed and he was driven eventually to insanity and suicide. Much of the medical profession did not take his work seriously until Louis Pasteur and Joseph Lister had developed the germ theory of disease, which explained why hygiene was important.

We live in a pessimistic period. As well as being worried by the likely consequences of global warming, it is easy to feel that many societies are, despite their material success, increasingly burdened by their social failings. And now, as if to add to our woes, we have the economic recession and its aftermath of high unemployment. But the knowledge that we cannot carry on as we have, that change is necessary, is perhaps grounds for optimism: maybe we do, at last, have the chance to make a better world. The extraordinarily positive reception of the hardback edition of this book confirms that there is a widespread appetite for change and a desire to find positive solutions to our problems.

We have made only minor changes to this edition. Details of the statistical sources, methods and results, from which we thought most readers would want to be spared, are now provided in an appendix for those with a taste for data. Chapter 13, which is substantially about causation, has been slightly reorganized and strengthened. We have also expanded our discussion of what has made societies substantially more or less equal in the past. Because we conclude that these changes have been driven by changes in political attitudes, we think it is a mistake to discuss policy as if it were a matter of finding the right technical fix. As there are really hundreds of ways that societies can become more equal if they choose to, we have not nailed our colours to one or other set of policies. What we need is not so much a clever solution as a society which recognizes the benefits of greater equality.

If correct, the theory and evidence set out in this book tells us how to make substantial improvements in the quality of life for the vast majority of the population. Yet unless it is possible to change the way most people see the societies they live in, the theory will be stillborn. Public opinion will only support the necessary political changes if something like the perspective we outline in this book permeates the public mind.

We have therefore set up a not-for-profit organization called The Equality Trust (described at the end of this book) to make the kind of evidence set out in the following pages better known and to suggest that there is a way out of the woods for us all.

PART ONE

Material Success, Social Failure

1 The end of an era

“I care for riches, to make gifts to friends, or lead a sick man back to health with ease and plenty. Else small aid is wealth for daily gladness; once a man be done with hunger, rich and poor are all as one.” Euripides, Electra

It is a remarkable paradox that, at the pinnacle of human material and technical achievement, we find ourselves anxiety-ridden, prone to depression, worried about how others see us, unsure of our friendships, driven to consume and with little or no community life. Lacking the relaxed social contact and emotional satisfaction we all need, we seek comfort in overeating, obsessive shopping and spending, or become prey to excessive alcohol, psychoactive medicines and illegal drugs.

How is it that we have created so much mental and emotional suffering despite levels of wealth and comfort unprecedented in human history? Often what we feel is missing is little more than time enjoying the company of friends, yet even that can seem beyond us. We talk as if our lives were a constant battle for psychological survival, struggling against stress and emotional exhaustion, but the truth is that the luxury and extravagance of our lives is so great that it threatens the planet.

Research from the Harwood Institute for Public Innovation (commissioned by the Merck Family Foundation) in the USA shows that people feel that ‘materialism’ somehow comes between them and the satisfaction of their social needs. A report entitled Yearning for Balance, based on a nationwide survey of Americans, concluded that they were ‘deeply ambivalent about wealth and material gain’. A large majority of people wanted society to ‘move away from greed and excess toward a way of life more centred on values, community, and family’. But they also felt that these priorities were not shared by most of their fellow Americans, who, they believed, had become ‘increasingly atomized, selfish, and irresponsible’. As a result they often felt isolated. However, the report says, when brought together in focus groups to discuss these issues, people were ‘surprised and excited to find that others share[d] their views’. Rather than uniting us with others in a common cause, the unease we feel about the loss of social values and the way we are drawn into the pursuit of material gain is often experienced as if it were a purely private ambivalence which cuts us off from others.

Mainstream politics no longer taps into these issues and has abandoned the attempt to provide a shared vision capable of inspiring us to create a better society. As voters, we have lost sight of any collective belief that society could be different.

Instead of a better society, the only thing almost everyone strives for is to better their own position as individuals within the existing society.

The contrast between the material success and social failure of many rich countries is an important signpost. It suggests that, if we are to gain further improvements in the real quality of life, we need to shift attention from material standards and economic growth to ways of improving the psychological and social wellbeing of whole societies. However, as soon as anything psychological is mentioned, discussion tends to focus almost exclusively on individual remedies and treatments. Political thinking seems to run into the sand.

It is now possible to piece together a new, compelling and coherent picture of how we can release societies from the grip of so much dysfunctional behaviour. A proper understanding of what is going on could transform politics and the quality of life for all of us. It would change our experience of the world around us, change what we vote for, and change what we demand from our politicians.

In this book we show that the quality of social relations in a society is built on material foundations. The scale of income differences has a powerful effect on how we relate to each other. Rather than blaming parents, religion, values, education or the penal system, we will show that the scale of inequality provides a powerful policy lever on the psychological wellbeing of all of us. Just as it once took studies of weight gain in babies to show that interacting with a loving care-giver is crucial to child development, so it has taken studies of death rates and of income distribution to show the social needs of adults and to demonstrate how societies can meet them.

Long before the financial crisis which gathered pace in the later part of 2008, British politicians, commenting on the decline of community or the rise of various forms of anti-social behaviour, would sometimes refer to our ‘broken society’. The financial collapse shifted attention to the broken economy, and while the broken society was sometimes blamed on the behaviour of the poor, the broken economy was widely attributed to the rich.

Stimulated by the prospects of ever bigger salaries and bonuses, those in charge of some of the most trusted financial institutions threw caution to the wind and built houses of cards which could stand only within the protection of a thin speculative bubble. But the truth is that both the broken society and the broken economy resulted from the growth of inequality.

WHERE THE EVIDENCE LEADS

We shall start by outlining the evidence which shows that we have got close to the end of what economic growth can do for us. For thousands of years the best way of improving the quality of human life was to raise material living standards. When the wolf was never far from the door, good times were simply times of plenty. But for the vast majority of people in affluent countries the difficulties of life are no longer about filling our stomachs, having clean water and keeping warm. Most of us now wish we could eat less rather than more. And, for the first time in history, the poor are on average fatter than the rich.

Economic growth, for so long the great engine of progress, has, in the rich countries, largely finished its work. Not only have measures of wellbeing and happiness ceased to rise with economic growth but, as affluent societies have grown richer, there have been long-term rises in rates of anxiety, depression and numerous other social problems. The populations of rich countries have got to the end of a long historical journey.

Figure 1.1 Only in its early stages does economic development boost life expectancy.

The course of the journey we have made can be seen in Figure 1.1. It shows the trends in life expectancy in relation to Gross National Income per head in countries at various stages of economic development. Among poorer countries, life expectancy increases rapidly during the early stages of economic development, but then, starting among the middle-income countries, the rate of improvement slows down. As living standards rise and countries get richer and richer, the relationship between economic growth and life expectancy weakens. Eventually it disappears entirely and the rising curve in Figure 1.1 becomes horizontal, showing that, for rich countries, getting richer adds nothing further to life expectancy. That has already happened in the thirty or so richest countries, nearest the top right-hand corner of Figure 1.1.

The reason why the curve in Figure 1.1 levels out is not because we have reached the limits of life expectancy. Even the richest countries go on enjoying substantial improvements in health as time goes by. What has changed is that the improvements have ceased to be related to average living standards. With every ten years that passes, life expectancy among the rich countries increases by between two and three years. This happens regardless of economic growth, so that a country as rich as the USA no longer does better than Greece or New Zealand, although they are not much more than half as rich. Rather than moving out along the curve in Figure 1.1, what happens as time goes by is that the curve shifts upwards: the same levels of income are associated with higher life expectancy. Looking at the data, you cannot help but conclude that as countries get richer, further increases in average living standards do less and less for health.

While good health and longevity are important, there are other components of the quality of life. But just as the relationship between health and economic growth has levelled off, so too has the relationship with happiness. Like health, happiness rises in the early stages of economic growth and then levels off. This is a point made strongly by the economist Richard Layard in his book on happiness.

Figure 1.2 Happiness and average incomes (data for UK unavailable).

Figures on happiness in different countries are probably strongly affected by culture. In some societies not saying you are happy may sound like an admission of failure, while in others claiming to be happy may sound self-satisfied and smug. But, despite the difficulties, Figure 1.2 shows the ‘happiness curve’ levelling off in the richest countries in much the same way as life expectancy. In both cases the important gains are made in the earlier stages of economic growth, but the richer a country gets, the less getting still richer adds to the population’s happiness. In these graphs the curves for both happiness and life expectancy flatten off at around $25,000 per capita, but there is some evidence that the income level at which this occurs may rise over time.

The evidence that happiness levels fail to rise further as rich countries get still richer does not come only from comparisons of different countries at a single point in time (as shown in Figure 1.2). In a few countries, such as Japan, the USA and Britain, it is possible to look at changes in happiness over sufficiently long periods of time to see whether they rise as a country gets richer. The evidence shows that happiness has not increased even over periods long enough for real incomes to have doubled. The same pattern has also been found by researchers using other indicators of wellbeing such as the ‘measure of economic welfare’ or the ‘genuine progress indicator’, which try to calculate net benefits of growth after removing costs like traffic congestion and pollution.

So whether we look at health, happiness or other measures of wellbeing there is a consistent picture. In poorer countries, economic development continues to be very important for human wellbeing. Increases in their material living standards result in substantial improvements both in objective measures of wellbeing like life expectancy, and in subjective ones like happiness. But as nations join the ranks of the affluent developed countries, further rises in income count for less and less.

This is a predictable pattern. As you get more and more of anything, each addition to what you have, whether loaves of bread or cars, contributes less and less to your wellbeing. If you are hungry, a loaf of bread is everything, but when your hunger is satisfied, many more loaves don’t particularly help you and might become a nuisance as they go stale.

Sooner or later in the long history of economic growth, countries inevitably reach a level of affluence where ‘diminishing returns’ set in and additional income buys less and less additional health, happiness or wellbeing. A number of developed countries have now had almost continuous rises in average incomes for over 150 years and additional wealth is not as beneficial as it once was.

The trends in different causes of death confirm this interpretation. It is the diseases of poverty which first decline as countries start to get richer. The great infectious diseases such as tuberculosis, cholera or measles, which are still common in the poorest countries today, gradually cease to be the most important causes of death. As they disappear, we are left with the so-called diseases of affluence, the degenerative cardiovascular diseases and cancers. While the infectious diseases of poverty are particularly common in childhood and frequently kill even in the prime of life, the diseases of affluence are very largely diseases of later life.

One other piece of evidence confirms that the reason why the curves in Figures 1.1 and 1.2 level off is because countries have reached a threshold of material living standards after which the benefits of further economic growth are less substantial. It is that the diseases which used to be called the ‘diseases of affluence’ became the diseases of the poor in affluent societies. Diseases like heart disease, stroke and obesity used to be more common among the rich. Heart disease was regarded as a businessman’s disease and it used to be the rich who were fat and the poor who were thin. But from about the 1950s onwards, in one developed country after another, these patterns reversed. Diseases which had been most common among the better-off in each society reversed their social distribution to become more common among the poor.

THE ENVIRONMENTAL LIMITS TO GROWTH

At the same time as the rich countries reach the end of the real benefits of economic growth, we have also had to recognize the problems of global warming and the environmental limits to growth. The dramatic reductions in carbon emissions needed to prevent runaway climate change and rises in sea levels may mean that even present levels of consumption are unsustainable, particularly if living standards in the poorer, developing world are to rise as they need to. In Chapter 15 we shall discuss the ways in which the perspective outlined in this book fits in with policies designed to reduce global warming.

INCOME DIFFERENCES WITHIN AND BETWEEN SOCIETIES

We are the first generation to have to find new answers to the question of how we can make further improvements to the real quality of human life. What should we turn to if not to economic growth? One of the most powerful clues to the answer to this question comes from the fact that we are affected very differently by the income differences within our own society from the way we are affected by the differences in average income between one rich society and another.

In Chapters 4-12 we focus on a series of health and social problems like violence, mental illness, teenage births and educational failure, which within each country are all more common among the poor than the rich. As a result, it often looks as if the effect of higher incomes and living standards is to lift people out of these problems. However, when we make comparisons between different societies, we find that these social problems have little or no relation to levels of average incomes in a society.

Take health as an example. Instead of looking at life expectancy across both rich and poor countries as in Figure 1.1, look just at the richest countries. Figure 1.3 shows just the rich countries and confirms that among them some countries can be almost twice as rich as others without any benefit to life expectancy. Yet within any of them death rates are closely and systematically related to income.

Figure 1.3 Life expectancy is unrelated to differences in average income between rich countries.

Figure 1.4 shows the relation between death rates and income levels within the USA. The death rates are for people in zip code areas classified by the typical household income of the area in which they live. On the right are the richer zip code areas with lower death rates, and on the left are the poorer ones with higher death rates. Although we use American data to illustrate this, similar health gradients, of varying steepness, run across almost every society. Higher incomes are related to lower death rates at every level in society.

Figure 1.4 Death rates are closely related to differences in income within societies.

Note that this is not simply a matter of the poor having worse health than everyone else. What is so striking about Figure 1.4 is how regular the health gradient is right across society: it is a gradient which affects us all.

Within each country, people’s health and happiness are related to their incomes. Richer people tend, on average, to be healthier and happier than poorer people in the same society. But comparing rich countries it makes no difference whether on average people in one society are almost twice as rich as people in another.

What sense can we make of this paradox that differences in average income or living standards between whole populations or countries don’t matter at all, but income differences within those same populations matter very much indeed? There are two plausible explanations. One is that what matters in rich countries may not be your actual income level and living standard, but how you compare with other people in the same society. Perhaps average standards don’t matter and what does is simply whether you are doing better or worse than other people, where you come in the social pecking order.

The other possibility is that the social gradient in health shown in Figure 1.4 results not from the effects of relative income or social status on health, but from the effects of social mobility, sorting the healthy from the unhealthy. Perhaps the healthy tend to move up the social ladder and the unhealthy end up at the bottom.

This issue will be resolved in the next chapter. We shall see whether compressing, or stretching out, the income differences in a society matters. Do more and less equal societies suffer the same overall burden of health and social problems?

2 Poverty or inequality?

“Poverty is not a certain small amount of goods, nor is it just a relation between means and ends; above all it is a relation between people. Poverty is a social status. It has grown as an invidious distinction between classes.”

Marshall Sahlins, Stone Age Economics

HOW MUCH INEQUALITY?

In the last chapter we saw that economic growth and increases in average incomes have ceased to contribute much to wellbeing in the rich countries. But we also saw that within societies health and social problems remain strongly associated with incomes. In this chapter we will see whether the amount of income inequality in a society makes any difference.

Figure 2.1 How much richer are the richest 20 per cent than the poorest 20 per cent in each country?

Figure 2.1 shows how the size of income differences varies from one developed country to another. At the top are the most equal countries and at the bottom are the most unequal. The length of the horizontal bars shows how much richer the richest 20 per cent of the population is in each country compared to the poorest 20 per cent.

Within countries such as Japan and some of the Scandinavian countries at the top of the chart, the richest 20 per cent are less than four times as rich as the poorest 20 per cent. At the bottom of the chart are countries in which these differences are at least twice as big, including two in which the richest 20 per cent get about nine times as much as the poorest. Among the most unequal are Singapore, USA, Portugal and the United Kingdom. (The figures are for household income, after taxes and benefits, adjusted for the number of people in each household.)

There are lots of ways of measuring income inequality and they are all so closely related to each other that it doesn’t usually make much difference which you use. Instead of the top and bottom 20 per cent, we could compare the top and bottom 10 or 30 per cent. Or we could have looked at the proportion of all incomes which go to the poorer half of the population. Typically, the poorest half of the population get something like 20 or 25 per cent of all incomes and the richest half get the remaining 75 or 80 per cent.

Other more sophisticated measures include one called the Gini coefficient. It measures inequality across the whole society rather than simply comparing the extremes. If all income went to one person (maximum inequality) and everyone else got nothing, the Gini coefficient would be equal to 1. If income was shared equally and everyone got exactly the same (perfect equality), the Gini would equal 0. The lower its value, the more equal a society is. The most common values tend to be between 0.3 and 0.5. Another measure of inequality is called the Robin Hood Index because it tells you what proportion of a society’s income would have to be taken from the rich and given to the poor to get complete equality.
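To make these definitions concrete, here is a minimal Python sketch of the two headline measures just described: the Gini coefficient and the ratio of the richest to the poorest 20 per cent. This is our own illustration with invented incomes, not a calculation from the book's data.

```python
def gini(incomes):
    """Gini coefficient: 0 = perfect equality, approaching 1 = maximal inequality.

    One standard formulation: the mean absolute difference between all
    pairs of incomes, divided by twice the mean income.
    """
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * n * mean)

def quintile_ratio(incomes):
    """Total income of the richest 20% divided by that of the poorest 20%."""
    ordered = sorted(incomes)
    k = len(ordered) // 5
    return sum(ordered[-k:]) / sum(ordered[:k])

# A perfectly equal society scores 0 on the Gini coefficient.
equal = [100] * 10
print(gini(equal))  # 0.0

# An invented, skewed income distribution for illustration.
skewed = [10, 10, 20, 30, 40, 50, 60, 80, 100, 200]
print(quintile_ratio(skewed))  # 15.0: the top fifth gets 15 times the bottom fifth
```

Note that with a finite sample the Gini cannot quite reach 1: if one person out of ten receives everything, the coefficient is 0.9.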

To avoid being accused of picking and choosing our measures, our approach in this book has been to take measures provided by official agencies rather than calculating our own. We use the ratio of the income received by the top to the bottom 20 per cent whenever we are comparing inequality in different countries: it is easy to understand and it is one of the measures provided ready-made by the United Nations. When comparing inequality in US states, we use the Gini coefficient: it is the most common measure, it is favoured by economists and it is available from the US Census Bureau. In many academic research papers we and others have used two different inequality measures in order to show that the choice of measures rarely has a significant effect on results.

DOES THE AMOUNT OF INEQUALITY MAKE A DIFFERENCE?

Having got to the end of what economic growth can do for the quality of life and facing the problems of environmental damage, what difference do the inequalities shown in Figure 2.1 make?

It has been known for some years that poor health and violence are more common in more unequal societies. However, in the course of our research we became aware that almost all problems which are more common at the bottom of the social ladder are more common in more unequal societies. It is not just ill-health and violence, but also, as we will show in later chapters, a host of other social problems. Almost all of them contribute to the widespread concern that modern societies are, despite their affluence, social failures.

To see whether these problems were more common in more unequal countries, we collected internationally comparable data on health and as many social problems as we could find reliable figures for.

The list we ended up with included:

  • level of trust
  • mental illness (including drug and alcohol addiction)
  • life expectancy and infant mortality
  • obesity
  • children’s educational performance
  • teenage births
  • homicides
  • imprisonment rates
  • social mobility (not available for US states)

Occasionally what appear to be relationships between different things may arise spuriously or by chance. In order to be confident that our findings were sound we also collected data for the same health and social problems (or as near as we could get to the same) for each of the fifty states of the USA. This allowed us to check whether or not problems were consistently related to inequality in these two independent settings. As Lyndon Johnson said, ‘America is not merely a nation, but a nation of nations.’

To present the overall picture, we have combined all the health and social problem data for each country, and separately for each US state, to form an Index of Health and Social Problems for each country and US state. Each item in the indexes carries the same weight so, for example, the score for mental health has as much influence on a society’s overall score as the homicide rate or the teenage birth rate. The result is an index showing how common all these health and social problems are in each country and each US state. Things such as life expectancy are reverse scored, so that on every measure higher scores reflect worse outcomes. When looking at the Figures, the higher the score on the Index of Health and Social Problems, the worse things are. (For information on how we selected countries shown in the graphs we present in this book, please see the Appendix.)
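The construction just described can be sketched in a few lines of Python. This is our own illustration of one common approach, standardizing each indicator to a z-score so that all carry equal weight; the book's exact standardization and indicator set may differ, and the data below are invented.

```python
import statistics

# Indicators where a higher value is a *good* outcome; these are
# reverse-scored so that higher always means worse (an assumption
# mirroring the text's treatment of life expectancy).
REVERSED = {"life expectancy", "level of trust", "social mobility"}

def index_scores(data):
    """Combine indicators into an equal-weighted index.

    data: {indicator name: {place: value}}.
    Returns {place: index score}, where each indicator is converted to
    a z-score (so it carries the same weight as every other) and a
    place's index is the mean of its z-scores.
    """
    places = set.intersection(*(set(v) for v in data.values()))
    z = {}
    for name, values in data.items():
        mean = statistics.mean(values[p] for p in places)
        sd = statistics.stdev(values[p] for p in places)
        sign = -1 if name in REVERSED else 1
        z[name] = {p: sign * (values[p] - mean) / sd for p in places}
    return {p: statistics.mean(z[name][p] for name in data) for p in places}

# Invented data for three hypothetical places A, B, C.
data = {
    "homicides": {"A": 1, "B": 2, "C": 3},
    "life expectancy": {"A": 80, "B": 75, "C": 70},
}
scores = index_scores(data)
# C fares worst on both measures, so it gets the highest (worst) index score.
```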

Figure 2.2 Health and social problems are closely related to inequality among rich countries.

We start by showing, in Figure 2.2, that there is a very strong tendency for ill-health and social problems to occur less frequently in the more equal countries. The greater the inequality (further to the right on the horizontal axis), the higher the score on our Index of Health and Social Problems. Health and social problems are indeed more common in countries with bigger income inequalities. The two are extraordinarily closely related: chance alone would almost never produce a scatter in which countries lined up like this.

Figure 2.3 Health and social problems are only weakly related to national average income among rich countries.

To emphasize that the prevalence of poor health and social problems in whole societies really is related to inequality rather than to average living standards, we show in Figure 2.3 the same index of health and social problems but this time in relation to average incomes (National Income per person). It shows that there is no similarly clear trend towards better outcomes in richer countries. This confirms what we saw in Figures 1.1 and 1.2 in the first chapter. However, as well as knowing that health and social problems are more common among the less well-off within each society (as shown in Figure 1.4), we now know that the overall burden of these problems is much higher in more unequal societies.

To check that these results are not just some odd fluke, let us see whether similar patterns also occur when we look at the fifty states of the USA. We were able to find data on almost exactly the same health and social problems for US states as we used in our international index.

Figure 2.4 Health and social problems are related to inequality in US states.

Figure 2.4 shows that the Index of Health and Social Problems is strongly related to the amount of inequality in each state, while Figure 2.5 shows that there is no clear relation between it and average income levels.

Figure 2.5 Health and social problems are only weakly related to average income in US states.

The evidence from the USA confirms the international picture. The position of the US in the international graph (Figure 2.2) shows that the high average income level in the US as a whole does nothing to reduce its health and social problems relative to other countries.

We should note that part of the reason why our index combining data for ten different health and social problems is so closely related to inequality is that combining them tends to emphasize what they have in common and downplays what they do not. In Chapters 4-12 we will examine whether each problem taken on its own is related to inequality and will discuss the various reasons why they might be caused by inequality.

This evidence cannot be dismissed as some statistical trick done with smoke and mirrors. What the close fit shown in Figure 2.2 suggests is that a common element related to the prevalence of all these health and social problems is indeed the amount of inequality in each country. All the data come from the most reputable sources: the World Bank, the World Health Organization, the United Nations, the Organization for Economic Cooperation and Development (OECD) and others.

Could these relationships be the result of some unrepresentative selection of problems? To answer this we also used the ‘Index of child wellbeing in rich countries’ compiled by the United Nations Children’s Fund (UNICEF). It combines forty different indicators covering many different aspects of child wellbeing. (We removed the measure of child relative poverty from it because it is, by definition, closely related to inequality.)

Figure 2.6 The UNICEF index of child wellbeing in rich countries is related to inequality.

Figure 2.6 shows that child wellbeing is strongly related to inequality, and Figure 2.7 shows that it is not at all related to average income in each country.

Figure 2.7 The UNICEF index of child wellbeing is not related to Gross National Income per head in rich countries.

SOCIAL GRADIENTS

As we mentioned at the end of the last chapter, there are perhaps two widespread assumptions as to why people nearer the bottom of society suffer more problems. Either the circumstances people live in cause their problems, or people end up nearer the bottom of society because they are prone to problems which drag them down. The evidence we have seen in this chapter puts these issues in a new light.

Let’s first consider the view that society is a great sorting system with people moving up or down the social ladder according to their personal characteristics and vulnerabilities. While things such as having poor health, doing badly at school or having a baby when still a teenager all load the dice against your chances of getting up the social ladder, sorting alone does nothing to explain why more unequal societies have more of all these problems than less unequal ones. Social mobility may partly explain whether problems congregate at the bottom, but not why more unequal societies have more problems overall.

The view that social problems are caused directly by poor material conditions such as bad housing, poor diets, lack of educational opportunities and so on implies that richer developed societies would do better than the others. But this is a long way from the truth: some of the richest countries do worst.

It is remarkable that these measures of health and social problems in the two different settings, and of child wellbeing among rich countries, all tell so much the same story.

The problems in rich countries are not caused by the society not being rich enough (or even by being too rich) but by the scale of material differences between people within each society being too big. What matters is where we stand in relation to others in our own society.

Of course a small proportion of the least well-off people even in the richest countries sometimes find themselves without enough money for food. However, surveys of the 12.6 per cent of Americans living below the federal poverty line (an absolute income level rather than a relative standard such as half the average income) show that 80 per cent of them have air-conditioning, almost 75 per cent own at least one car or truck and around 33 per cent have a computer, a dishwasher or a second car.

What this means is that when people lack money for essentials such as food, it is usually a reflection of the strength of their desire to live up to the prevailing standards. You may, for instance, feel it more important to maintain appearances by spending on clothes while stinting on food. We knew of a young man who was unemployed and had spent a month’s income on a new mobile phone because he said girls ignored people who hadn’t got the right stuff. As Adam Smith emphasized, it is important to be able to present oneself creditably in society without the shame and stigma of apparent poverty.

However, just as the gradient in health ran right across society from top to bottom, the pressures of inequality and of wanting to keep up are not confined to a small minority who are poor. Instead, the effects are, as we shall see, widespread in the population.

. . .

from

The Spirit Level. Why equality is better for everyone

by Richard Wilkinson and Kate Pickett

get it at Amazon.com

THIS TIME IS NO DIFFERENT. IMF’s dire warning on global economy – Liam Dann * Why a New Multilateralism Now? – David Lipton.

Merry Christmas and happy new financial crisis.

History suggests we are due for another financial crisis and right now the world is in no shape to cope with one.

With ingenuity and international cooperation, we can make the most of new technologies and new challenges, and create a shared and sustained prosperity.


With interest rates still low, central banks simply don’t have the firepower they did in 2008 to deal with a deep recession.

The official outlook for New Zealand’s economy remains solid with GDP growth expected to stay safely north of 2.5 per cent.

But these kinds of forecasts will mean little if the world heads into a serious financial crisis.

NZ Herald

Why a New Multilateralism Now

David Lipton, IMF First Deputy Managing Director

Good morning.

Thank you for the introduction.

I appreciate the invitation to speak here today. This conference is tackling issues that have a great bearing on the stability of the world economy. Having just passed the 10th anniversary of the start of the Global Financial Crisis, and now looking forward, I’d like to address what I see as this morning’s key topic: the next financial crisis.

History suggests that an economic downturn lurks somewhere over the horizon. Many are already speculating as to exactly when, where, and why it might arise. While we can’t know all that, we ought to be focusing right now on how to forestall its arrival and how to limit it to a “garden variety” recession when it arrives, meaning, how to avoid creating another systemic crisis. Over the past two years, the IMF has called on governments to put in place policies aimed at just that goal, as we have put it, “fix the roof while the sun shines.” But like many of you, I see storm clouds building, and fear the work on crisis prevention is incomplete.

Before asking what should be done, let’s analyze whether the international community has the wherewithal to respond to the next crisis, should it occur. And here I mean both individual countries, and the international organizations tasked to act as first responders. Should we be confident that the resources, policy instruments, and regulatory frameworks at our disposal will prove potent enough to counter and contain the next recession? Consider the main policy options.

Policy Options for the Next Recession

On monetary policy, much has been said about whether central banks will be able to respond to a deep or prolonged downturn. For example, past U.S. recessions have been met with 500 basis points or more of easing by the Fed. With policy rates so low at present in so many places, that response will not be available. Central banks would likely end up exploring ever more unconventional measures. But with their effectiveness uncertain, we ought to be concerned about the potency of monetary policy.

We read every day that for fiscal policy, the room for maneuver has been narrowing in many countries. Public debt has risen and, in many countries, deficits remain too high to stabilize or reduce debt. Now to be fair, we can presume that if the next slowdown creates unemployment and slack, multipliers will grow larger, likely restoring some potency to fiscal policy, even at high debt levels. But we should not expect governments to end up with the ample space to respond to a downturn that they had ten years ago. Moreover, with high sovereign debt levels, decisions to adopt stimulus may be a hard sell politically.

Given the enduring public resentments born of the Global Financial Crisis, a recession deep enough to endanger the finances of homeowners or small businesses would likely lead to a strong political call to help relieve debt burdens. That could further stress already stretched public finances.

And if recession once again impairs banks, the recourse to bailouts is now limited in law, following financial regulatory reforms that call for bail-ins of owners and lenders. Those new systems for bail-ins remain underfunded and untested.

Finally, the impairment of key U.S. capital markets during the global financial crisis, which might have produced crippling spillovers across the globe, was robustly contained by unorthodox Fed action supported by Treasury backstop funding. That capacity is also unlikely to be readily available again.

The point is that national policy options and public financial resources may be much more constrained than in the past. The right lesson to take from that possibility is for each country to be much more careful to sustain growth, to limit vulnerabilities, and to prepare for whatever may come.

But the reality is that many countries are not pursuing policies that will bolster their growth in a sustainable fashion. The expansion actually has become less balanced across regions over the past year, and we are witnessing a buildup of vulnerabilities: higher sovereign and corporate debt, tighter financial conditions, incomplete reform efforts, and rising geopolitical tensions.

Five Key Policy Challenges

So, let me turn to five key challenges that could affect the next downturn, areas where governments face a choice to take proactive steps now, or not, and where inaction would probably make matters worse.

The first challenge is the simple and familiar admonition: “First, do no harm.” This is worthy advice for doctors and economic policymakers. Let me mention some examples.

In the case of U.S. fiscal policy over the past year, the combination of spending increases and tax cuts was intended to provide a shot of adrenalin to the U.S. economy and improve investment incentives. However, coming at a time when advanced recovery meant little need for stimulus, this choice runs the three risks of increasing the potential need for Fed tightening; raising deficits and public debt; and spending resources that might better be put aside to combat the next downturn.

Another example is the recent escalation of tariffs and trade tensions. Fortunately, the U.S. and China agreed in Buenos Aires to call a ceasefire. That was a positive development. There certainly are shortcomings in the global trading system, and countries experiencing disruption from trade have some legitimate concerns about a number of trade practices. But the only safe way to address these issues is through dialogue and cooperation.

The IMF has been advocating de-escalation and dialogue for some time. That is because the alternative is hard to contemplate. We estimate that if all of the tariffs that have been threatened are put in place, as much as three-quarters of a percent of global GDP would be lost by 2020. That would be a self-inflicted wound.

So it is vital that this ceasefire leads to a durable agreement that avoids an intensification or spread of tensions.

Now to the second challenge, which is closely tied to the trade issue: China’s emergence as an economic powerhouse. In many ways, this is one of the success stories of our era, showing that global integration can lead to rapid growth, poverty elimination, and new global supply chains lifting up other countries.

But as Winston Churchill once said of the U.S. during World War II, “the price of greatness is responsibility.”

China’s Global Role

Chinese policies that may have been globally inconsequential and thus acceptable when China joined the WTO and had a $1 trillion economy are now consequential to much of the world. That’s because China now is a globally integrated $13 trillion economy whose actions have global reverberations. If China is to continue to benefit from globalization and support the aspirations of developing countries, it will need to focus on how to limit adverse spillovers from its own policies and invest in ensuring that globalization can be sustainable.

Moreover, China would likely gain at home by addressing many of the policy issues that have been contentious: for example, through stronger protections for intellectual property, which will benefit China as it becomes a world leader in technologies; reduced trade barriers, especially related to investment rules and government procurement procedures, which will produce cost-reducing and productivity-enhancing competition that will benefit the Chinese people in the long run; and an acceleration of market-oriented economic reforms that will help China make more efficient use of scarce resources.

This notion of global responsibility applies to Europe as well, and this is the third challenge. Our forecasts show growth in the euro area and the UK falling short of previous projections, and modest potential growth going forward.

The future of the European economy will be shaped by the way the EU addresses its architectural and macroeconomic challenges and by Brexit. The recent EMU agreement on reforms is welcome. Going forward, the Euro area would gain by pushing further to shore up its institutional foundations.

The absence of a common fiscal policy limits Europe’s ability to share risks and respond to shocks that can radiate through its financial system. And crisis response will be constrained because too much power remains vested in national regulators and supervisors at the expense of an integrated approach across the continent.

All of this prevents Europe from playing a global role commensurate with the size and importance of the euro area economy.

The Task for Emerging Markets

The fourth challenge is in the emerging markets. For all of their extraordinary dynamism, we have seen a divergence among emerging markets over the past year: between those that have not shored up their defenses against shocks, including preparation for the normalization of interest rates in the advanced economies; and those that have taken advantage of the global recovery to address their underlying vulnerabilities.

Capital outflows over the past several months have shown how markets are judging the perceived weaknesses in individual countries. If global conditions become more complicated, these outflows could increase and become more volatile.

The fifth and final challenge is the topic you will take up this afternoon: the role of multilateral institutions.

We know that these institutions have played a crucial role in keeping the global economy on track. In the nearly 75 years since the IMF was set up, our world has undergone multiple transformations, from post-war reconstruction and the Bretton Woods system of fixed exchange rates to the era of flexible rates; the rise of emerging economies; the collapse of the Soviet Union and transition to market economies; as well as a series of financial crises: the Mexican debt crisis, the Asian Crisis, and the Global Financial Crisis.

At each stage, we at the IMF have been called upon to evolve and even remake ourselves.

Now, we see a rising tide of doubt about globalization and discontent with multilateralism in some advanced economies. Just as with the IMF, it is fair for the international community to ask for modernization in its institutions and organizations, and to seek reforms to ensure that institutions serve their core purposes effectively.

This applies to groupings such as the G20, as well as international organizations.

So, it was heartening to see the G20 Leaders call for reform of the WTO when they came together in Buenos Aires. This reform initiative, which has the potential to modernize the global trading system and restore support for cooperative approaches, should now go forward.

The policy challenges we face are clear. As I have suggested, governments have their work cut out for them and may have to contend with less potent policy tools. It is essential they do what they can now to address vulnerabilities and avoid actions that exacerbate the next downturn.

The Multilateral Response

But we should prepare for the possibility that weaker national tools may mean limited effectiveness, and thus may result in greater reliance on multilateral responses and on the global financial safety net.

The IMF’s lending capacity was increased during the global financial crisis to about one trillion dollars – a forceful response from the membership at a time of dire need. One lesson from that crisis was that the IMF went into it under-resourced; we should try to avoid that next time.

From that point of view it was encouraging that the G20 in Buenos Aires underlined its continued commitment to strengthen the safety net, with a strong and adequately financed IMF at its center. It is important that the leaders pledged to conclude the next discussion of our funding, the quota review, next year.

But the stakes are bigger than any one decision about IMF funding. IMF Managing Director Christine Lagarde has called for a “new multilateralism,” one that is dedicated to improving the lives of all this world’s citizens. That ensures that the economic benefits of globalization are shared much more broadly. That focuses on governments and institutions that are both accountable and working together for the common good. And that can take on the many transnational challenges that no one government alone, not even a few governments working together, can handle: climate change, cyber-crime, massive refugee flows, failures of governance, and corruption.

Working together, we will be better able to prevent a damaging downturn in the coming years and a dystopian future in the coming decades. With ingenuity and international cooperation, we can make the most of new technologies and new challenges, and create a shared and sustained prosperity.

Thank you.

Education Impossible, Poverty and Inequality, New Zealand’s Neoliberal Legacy – Principal of one of NZ’s most challenging schools.

‘I shut my door and burst into tears’.

I’ve travelled the world. I have seen hard and I have done rough. But this was something else. It was not how a school should function.

Very few of our kids were actually functioning as they should. It broke my heart every single day.

Our teachers are doctors, psychologists, counsellors, behavioural therapists, and, for a small part of their day, educators.

These are beautiful kids, they can be anything they want to be, they just need to know we believe in them.

At 9am on my first day as principal of a small primary school, I shut my office door and burst into tears. After just 30 minutes on the job, I’d been sworn at by a child, abused by a parent, and a teacher had threatened to walk out. It only got worse. I had kids breaking windows. There were four or five fist fights a day. The police were on call.

I’ve travelled the world. I have seen hard and I have done rough. But this was something else. It was not how a school should function.

The behaviour issues meant there was no such thing as learning. For six weeks, I went home crying every night and said ‘I’m not going back.’ But I did. Because nothing has ever beaten me, and I was furious that this was happening to our children.
I wrote a list of every child in our school. We identified all of their needs. Seventy five per cent of our kids had high-level health, wellbeing, behaviour or academic issues. Very few of our kids were actually functioning as they should. It broke my heart every single day.

So our staff meetings weren’t about appraisals or the curriculum, they were about survival. How do we get to day two? How do we get to day three? We had to go back to basics before we could even start teaching.
I began to understand what was going on in the community. In one family, the kids’ clothes were dirty because they had no power. In another, the fridge didn’t work, there were rats in the walls, and the ceiling leaked. Their kids were constantly sick. It was clear the landlord didn’t care: I suspected in his eyes they weren’t “good enough” people to have the house repaired.

Some kids weren’t at school because their parents had no money for petrol. The stress was immense, and there were a lot of mental health issues. Tough, then, to bring your child up with a lust for life.

When I realised that this was bigger than me, I reached out to everyone who could help. The kids lacked resilience. If someone said “boo” to them in the playground there was a fight, or someone was crying. So we ran programmes about friendship and anger management. The kids have learnt to brainstorm, and problem solve, and communicate. Now we don’t have fights. It meant that in term two, we could start teaching the curriculum.

We realised the other big issue was hunger. When you get to the bottom of why a kid is acting up, it’s often because they haven’t eaten anything that day. Now, with KidsCan’s help, I watch 15 to 20 kids sitting around the school’s breakfast table every morning, chatting over a hot meal of baked beans. It’s a really positive start to the day. They know there’s no shame in needing food, if anyone is hungry they can go into the kitchen and help themselves to snacks too.

The change is huge. They have energy. In term one we struggled hugely with exercise. Everyone would opt out. Now, every morning we pump the sound system and everyone walks or runs laps of the field to ready us for learning. They go for it! We don’t have a single kid that opts out of exercise now, because they’ve got food in their bellies, and that makes them feel happier and more secure.

The day we handed out KidsCan’s jackets and shoes was incredible, the kids couldn’t believe that someone would give them something. I’ve never seen them that excited. They said, “oh Miss, this is the coolest thing I own.” They literally walked higher and taller and prouder. The parents were gobsmacked; many said they just couldn’t have afforded them. And because they have that extra money, it seems the families’ out-of-school lives are better too.

In term one we were too terrified to take the kids out of school. But in term three I took them out to the zoo. We all wore our jackets and I said to them, “we’re a team, we’re a unit”. I could not have been more proud. It was an amazing day with not a single behaviour issue. I hardly see students in my office in trouble anymore. I see them for stickers, and pencils for good writing, and for a hug if they need it.

But caring for these kids does take its toll. Our teachers are doctors, psychologists, counsellors, behavioural therapists, and, for a small part of their day, educators. My biggest fear for our kids is that their needs are far greater than the Government recognises. They don’t understand what’s really happening in our schools. They’ve just been setting standards for kids, and comparing them as if they all arrive at school on an equal footing. They don’t.

These are beautiful kids. They just need to know we believe in them. Sometimes I’ll purposefully leave my class and put a child in charge. Once, one boy said “Miss, he’s the worst person to put in charge!”
I said, “No he’s not, I’ve picked him, and he’s going to do it, just you watch.” I came back in and everything was perfect.
We built that child up. He sat there with a full tummy, feeling warm, with us supporting him, all those things together create success. Many of the kids recognise they don’t have as much money as others, so they think they’re not as good as them. I want them to throw all that away and know they can be anything they want to be.

Stuff.co.nz

To sponsor a Kiwi kid in need for $20 a month, visit KidsCan

HOW SHALL WE LIVE? How Universal Basic Income Solves Widespread Insecurity and Radical Inequality – Daniel Nettle.

Answering the four big objections from critics of UBI.

“A host of positive psychological changes inevitably will result from widespread economic security.” Martin Luther King Jr.

Security is one of the most basic human emotional needs.

Contrary to the predictions of mid-twentieth century economists, the age of universal wellbeing has not materialised.

We are washed up on the end of one big idea, failed Neoliberalism, waiting for something else to come along. At best we are dealing with one symptom at a time. Each piecemeal intervention increases the complexity of the state; divides citizens down into finer and finer ad hoc groups each eligible for different transactions; requires more bureaucratic monitoring; and often has unintended and perverse knock-on effects.

Each conditional government welfare scheme generates a bureaucracy of assessment and the need for constant eligibility monitoring, at vast expense.

Something more systemic is needed; an idea with bigger and bolder scope. That big, bold idea just might be the Universal Basic Income.

For UBI to go mainstream, a positive case will need to be made that also draws on easily available simple social heuristics. If we can’t make it make intuitive sense, it will be confined forever to the world of policy nerds.

The health and wellbeing benefits observed in trials of UBI and minimum income guarantees, even over quite short periods, have been so massive that it is hard not to conclude that security does something interesting to human beings, out of all proportion to the monetary value of the transfer, just as Martin Luther King predicted.

Daniel Nettle is Professor of Behavioural Science at Newcastle University. His varied research career has spanned a number of topics, from the behaviour of starlings to the origins of social inequality in human societies. His research is highly interdisciplinary and sits at the boundaries of the social, psychological and biological sciences.

“Can we not find a method of combining the advantages of anarchism and socialism? It seems to me that we can. The plan we are advocating amounts essentially to this: that a certain small income, sufficient for necessaries, should be secured to all, whether they work or not.” Bertrand Russell

Today should be the best time ever to be alive. Thanks to many decades of increasing productive efficiency, the real resources available to enable us to do the things we value (the avocados, the bicycles, the musical instruments, the bricks and glass) are more abundant and of better quality than ever. Thus, at least in the industrialised world, we should be living in the Age of Aquarius, the age where the most urgent problem is self-actualisation, not mere subsistence: not ‘How can we live?’, but ‘How shall we live?’.

Why then, does it not feel like the best time ever? Contrary to the predictions of mid-twentieth-century economists, the age of universal wellbeing has not really materialised. Working hours are as high as they were for our parents, if not higher, and the quality of work is no better for most people. Many people work several jobs they do not enjoy, just to keep a roof over their heads, food on the table, and the lights on. In fact, many people are unable to satisfy these basic wants despite being in work: the greater part of the UK welfare bill, leaving aside retirement pensions, is spent on supporting people who have jobs, not the unemployed. Thousands of people sleep on the streets of Britain every night. Personal debt is at unprecedented levels. Many people feel too harried to even think about self-actualisation.

Twin spectres stalk the land, and help explain the gap between what our grandparents hoped for and what has materialised. These are the spectres of inequality and insecurity. Insecurity, in this context, means not being able to be sure that one will be able to meet one’s basic needs at some point in the future, either because cost may go up, or income may fluctuate. Insecurity is psychologically damaging: most typologies put security as one of the most basic human emotional needs. Insecurity dampens entrepreneurial activity: one of the big reasons that people don’t follow up their innovative ideas is that these are by definition risky, and they worry about keeping bread on the table whilst they try them out. Insecurity deters people from investing in increasing their skills: what if they cannot eat before the investment starts to pay off? It encourages rational short-termism: who would improve a house or a neighbourhood that might be taken away from them in a few months’ time for reasons beyond their control? It also increases the likelihood of anti-social behaviour: I would not steal a loaf of bread if I knew there was no danger of going hungry anyway, but faced with the danger of starvation tomorrow, I would seriously consider it. Insecurity is a problem that affects those who have little to start with especially acutely: hence the link between insecurity and inequality.

Big problems require big ideas. Our current generation of politicians don’t really have ideas big enough to deal with the problems of widespread insecurity and marked inequality. Big ideas come along every few decades. The last one was about forty years ago: neoliberalism, the idea that market competition between private-sector corporations would deliver the social outcomes we all wanted, as long as government got out of the way as far as possible. Interestingly, neoliberalism was not such an obviously good idea that politicians of all stripes ‘just got it’. It took several decades of carefully orchestrated, deliberate communication and advocacy, which was not at all successful at first, to eventually make it seem, across the political spectrum, that the idea was so commonsensical as to be obvious. I don’t think any of the early advocates of neoliberalism could possibly have dreamed that after thirty years of implementation of their big idea, available incomes would have stagnated or declined for the median family; public faith in corporate capitalism would have seeped away; even the UK Conservative party would have to concede that market mechanisms did not really work as envisaged; or that the major UK political parties would both be advocating government-imposed price caps in an area, the supply of energy, where the neoliberal market model had been followed to its logical conclusion. It feels like we are washed up on the end of one big idea, waiting for something else to come along.

Our current politicians propose to deal with symptoms piecemeal, a minimum-wage increase here, a price cap there, rent-control in the other place; tax credits for those people; financial aid to buy a house for those others. At best we are dealing with one symptom at a time. Each piecemeal intervention increases the complexity of the state; divides citizens down into finer and finer ad hoc groups each eligible for different transactions; requires more bureaucratic monitoring; and often has unintended and perverse knock-on effects. For example, helping young people to buy a house with government financial aid only maintains the high levels of house prices. Vendors can simply factor into the price the transfer from government that they will receive. The policy would be much less popular if millions of pounds of taxpayer money were just given directly to large property development corporations, but that might as well be what the policy did. No, something more systemic is needed; an idea with bigger and bolder scope. That big, bold idea just might be the Universal Basic Income.

A Universal Basic Income (UBI) is a regular financial payment made to all eligible adults, whether they work or not, regardless of their other income. People can know that it will always be there, now and in the future. It should not be a fortune, but it should ideally be enough that no-one ever needs to be hungry or cold.

All developed societies agree on the need to protect citizens from desperate want that may befall them, usually for reasons beyond their control. However, the ways we currently make these transfers are incredibly complex. Guy Standing reports that in the USA, there are at least 126 different federal assistance schemes, not to mention state-level ones. In the UK, individuals have had until recently to be separately assessed for unemployment support, ill-health support, carer support, working tax credits (which amount to low-income support), and so on. The new Universal Credit system only partly simplifies this thicket. Each conditional scheme generates a bureaucracy of assessment and the need for constant eligibility monitoring, at vast expense.

Moreover, conditional transfers always generate incentive problems. If you go back into work after being unemployed, you lose benefits. If you are a carer and the person you care for recovers, you are financially penalised: you do better by keeping them ill. If your wages or hours go up, you lose out in benefit reductions. Under the UK’s new Universal Credit system, the marginal tax rate (the amount you lose of every extra pound you earn in the job market if you are a recipient) is around 80%, and that scheme was a reform designed to increase the incentive to work! Moreover, the 80% figure does not factor in the fact that if you move briefly out of eligibility, for example for some seasonal work, you are uncertain about when and whether you would be able to get back in afterwards, should you need to. This is a disincentive for taking the work.
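The arithmetic behind that incentive problem can be sketched in a few lines of Python. This is a toy model, not the actual Universal Credit calculation: the 80% taper is simply the combined withdrawal figure cited above, and the 32% ordinary tax rate (income tax plus National Insurance for a basic-rate payer) is borrowed from the essay's later discussion of tax bands.

```python
# Toy comparison (not the official Universal Credit rules) of how much a
# person keeps from each extra pound earned: a conditional benefit with an
# ~80% effective withdrawal rate versus an unconditional UBI, where only
# ordinary tax (assumed 32% here) applies because the UBI is never withdrawn.

def kept_under_taper(extra, taper=0.80):
    """Extra earnings kept when ~80p of every pound is clawed back."""
    return round(extra * (1 - taper), 2)

def kept_under_ubi(extra, tax_rate=0.32):
    """Extra earnings kept when the transfer is unconditional."""
    return round(extra * (1 - tax_rate), 2)

print(kept_under_taper(100))  # 20.0 - a fifth of every extra pound
print(kept_under_ubi(100))    # 68.0 - more than three times as much
```

The gap between those two numbers is the disincentive the essay describes: under the conditional scheme, a claimant keeps £20 of an extra £100 earned; under a UBI, £68.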

It is very hard to eliminate these perversities within any system of conditional, circumstance-specific transfers.

The UBI, then, seems like a good idea. It is far from a new one. It has fragmentary roots in the eighteenth and nineteenth centuries. In the twentieth century, there was one wave of enthusiasm in the 1920s, and another in the late 1960s and 1970s. The second wave generated a positive consensus, specific policy proposals, and a certain amount of pilot activity, but other paths ended up being taken. The idea has never quite died, though. It is now back in political consciousness in a very big way.

Why, when the UBI seems such a good idea, when it has been cognitively available to us for so long, when so many very clever people have modelled it and found it desirable, is there no developed society on earth in which it has been fully implemented? Partly this is because democratic governments, indeed societies in general, are poor at far-reaching systemic reform, instead finding it easier to tinker with and tune existing systems. It’s only the political outsiders who dare propose massive change; they have less to lose. But it is also because human psychology is an obstacle to the UBI, and this is what interests me in this essay. As Pascal Boyer and Michael Bang Petersen have recently argued, when we (non-specialists) think about how the economy ought to be organised, we don’t derive our conclusions from formal theory, simulations, or systematic research evidence. No, we generally fall back on simple social heuristics, like ‘if someone takes a benefit, they ought to pay a commensurate cost’; ‘more for you is less for me’; or ‘people should only get help when they are in need’. These simple social heuristics are all well and good for the problems they developed to solve: basically, regulating everyday dyadic or small-group social interactions. But they don’t automatically lead us to the right conclusions when trying to design optimal institutions for a complex system like a modern capitalist economy.

Certain aspects of the UBI idea violate one of these simple social heuristics. In fact, the UBI sometimes manages to violate two different and contradictory simple social heuristics simultaneously, as we shall see. These violations are like notes played slightly out of tune: they just seem wrong, before one has had to think much about it. Politicians are afraid of these reactions; they don’t like going out to campaign and meeting the same immediate objections all the time. If you want to build a consensus for the UBI, you have to analyse these jarring notes with some care, and develop a counter-strategy. For UBI to go mainstream, a positive case will need to be made that also draws on easily available simple social heuristics. If we can’t make it make intuitive sense, it will be confined forever to the world of policy nerds.

Fortunately, the challenge can be met. Our simple social heuristics do not constitute a formally consistent system, like arithmetic (why would they?). Instead, they are a diverse bunch of often contradictory gut feelings and moral reactions, each triggered by particular contextual cues. For example, we do have strong intuitions that people should not take a benefit without paying a commensurate cost, but these intuitions only get triggered when certain sets of features are present in the situation. These features include: the resource is scarce enough that every additional unit of it is valuable to me; the resource was created by deliberate individual effort; the person taking the benefit is somehow dissimilar to me, so their interests are not closely tied in to mine; and it is feasible to monitor who is getting what at reasonable cost.

The features do not always obtain: the resource might be more plentiful than anyone really needs; its acquisition might be mainly due to luck; the other people might be fundamentally similar to me, or their interests closely bound up with mine; or the cost of monitoring who got what might be prohibitive. In such situations, humans everywhere merrily and intuitively sign up to the proposition: the resource should be shared out somehow. There are a number of ways this can happen: pure communal sharing, where each qualifying individual just takes what they like, or equality matching, where every qualifying individual is allotted an equal share as of right. Every society has domains in which communal sharing or equality matching is deployed in preference to market pricing (the rule ‘you should only take a benefit if you pay a commensurate cost’).

Hunter-gatherers deal with large game, chancy and producing a huge surfeit when it comes, by communal sharing. Even in the more private-property-focussed Western societies, communal sharing is ubiquitous. Households, for example. If I buy a litre of milk, I don’t give my wife a bill at the end of the week for whatever she uses. Su casa es mi casa. Communal sharing or equality matching happens beyond households too. It is anathema to suggest that the residents of Summerhill Square might charge passersby for the air they breathe whilst walking through. Very few people think that those who pay more taxes should get more votes. When proposals are made to move a resource from the domain of the communally shared or equality matched to the priced, there is outcry: witness the response that greets proposals for road tolls in places where use of the roads is currently free, or to charge money at the gates of the town park. The case for the UBI is the case for moving part, by no means all, of our money the other way, out of conditionality and into the domain of the equality-matched. Getting your head around it involves framing your understanding of our current economic situation in such a way as to trigger the appropriate equality-matching intuitions. Here, as in many other political domains, those who determine the framing of the problem get to have a big influence on the outcome.

Whenever one talks about the UBI, one hears the same objections, including:

1. How can we afford such a scheme?

2. Why should I give my money to people for them to do nothing in return?

3. Why would anyone work if they were given money for free?

4. Why should we give money to the rich, who don’t need it?

The first of these objections is the easiest to dispose of. There have been detailed recent costings for the UK, which vary in their assumptions, but the consensus is that introduction of a modest initial UBI scheme would require surprisingly little disruption to our current tax and expenditure system; perhaps modest tax rises, perhaps no change, perhaps tax cuts. If this surprises you, let me give you the following back-of-an-envelope calculations. There are around 65 million people in the UK, of whom 63% are aged between 16 and 64. Assuming that the over 65s will continue with their current pension arrangements instead of the UBI, that gives us at most 41 million adults to cater for, plus about 12 million under-16s. Let’s say we want to give £80 per week to each of the adults. This would cost £171 billion per annum. And let’s further say that we want to give £40 per week, to the mother or other caregiver, for each child under 16. That’s another £25 billion, giving a nice round £200 billion in total.

Of course, £200 billion a year is an eye-watering sum. But UK government expenditure in 2017 was £814 billion, so we are only talking about one quarter of what the government spends anyway. Increasing government expenditure by one quarter might be a rather rash move, but this would not be the net increase, because the UBI would produce savings elsewhere. The welfare bill for 2017, less retirement pensions, was £153 billion. It’s unrealistic to expect a UBI scheme to reduce this to zero: most UBI advocates argue for retaining some extra provision for the disabled, and also retaining, for the time being, means-tested benefits to pay housing rental in some cases (the cost of housing is so high in parts of the UK that many people would become homeless if this disappeared overnight). But certainly, we might hope to eliminate up to £100 billion, or 2/3, of the non-pensions welfare bill, including a very large part of the administrative cost. So we are already half-way there.

At present, most UK adults are taxed at a zero rate on the first £8,164 of earned income, 12% from £8,164 to £11,500, and 32% above £11,500. What this means, in effect, is that anyone earning £11,500 or more is effectively being given a freebie from the state of £3,680, compared to being standardly taxed at 32% from the first pound. This figure, £3,680 per year, is, you will note, not so very far off my proposed initial UBI of £4,160 anyway. Personal tax allowances cost the government around £100 billion per annum in foregone revenue. If my proposed UBI were to be introduced, it would be reasonable to ask people to pay their taxes from the first pound. For people like me who earn more than £11,500 per annum, the introduction of the UBI would then be largely neutral, my tax bill going up by around £4,000, offset by £4,000 coming separately into my bank account as UBI.

So, if you will allow me very broad approximations, moving to a modest UBI would cost about £200 billion per annum, to be funded by about £100 billion of welfare savings, and about £100 billion from abolishing personal tax allowances, so pretty much fiscally neutral.
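Those broad approximations are easy to check. Here is a minimal sketch in Python using only the essay's own round numbers (the essay's assumptions, not official statistics):

```python
# Back-of-envelope check of the essay's UBI costing, all figures in GBP
# and taken from the essay's own rough assumptions.

WEEKS = 52
adults = 41_000_000            # ~63% of a 65m population, aged 16-64
children = 12_000_000          # under-16s
adult_ubi_pw = 80              # £80/week per adult
child_ubi_pw = 40              # £40/week per child, paid to the caregiver

gross_cost = (adults * adult_ubi_pw + children * child_ubi_pw) * WEEKS
# ≈ £171bn (adults) + £25bn (children) ≈ £196bn, rounded in the text to £200bn
print(f"gross cost: £{gross_cost / 1e9:.0f}bn")

welfare_savings = 100_000_000_000    # ~2/3 of the £153bn non-pension welfare bill
allowance_savings = 100_000_000_000  # abolished personal tax allowances
net_cost = gross_cost - welfare_savings - allowance_savings
# Slightly negative on these round numbers, i.e. roughly fiscally neutral
print(f"net cost: £{net_cost / 1e9:.0f}bn")
```

On these deliberately crude inputs the gross cost comes out just under £200 billion, and the two offsetting savings bring the net figure to approximately zero, which is all the "fiscally neutral" claim amounts to.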

And this is just a business-as-usual analysis of the likely financial consequences. What advocates believe is that there will be positive knock-on effects: people will be able to move to more productive and enjoyable jobs, or start entrepreneurial activities; people will have no financial disincentives to take casual work or increase their hours; and the expensive negative psychological consequences of insecurity (anxiety, depression, addiction, maybe even crime) will improve. Thus, what you end up with will be a net saving for the government, not a net cost.

The initial scheme discussed above, and other proposals like it, are not immediately very redistributive. Those currently receiving full Universal Credit would only end up with about the same as their current entitlement; and, as I mentioned above, for well-off people like me, the UBI would be almost exactly offset by the increase in my tax bill. So what is the point of such a reform? The answer has to do with security. I see UBI not so much as an immediate solution to inequality (you would have to set it very high to have a big direct effect on the inequality figures), but as a prophylactic against insecurity. For a wealthy person such as myself, there’s not much financial difference between getting a personal tax allowance and receiving a UBI, until my life is hit with a shock. I am well-off now, but I might not always be. Say I suddenly lose my job, or need to care for my wife. I know the UBI will continue to be there, every week, without any action required on my part. I can factor it into my worst expectations. The same is not true of the transfer effected by my personal tax allowance. And this, briefly, is the best response to objection 4, ‘Why should we give money to the rich, who don’t need it?’ Well, as long as they remain rich, they are net payers into the system, since their tax bill exceeds their UBI, so we are giving them money only in an accounting sense. But it is still better to have them make a large tax payment in and concurrently take a small UBI payment out, rather than just make their tax rate a bit lower, because they might suddenly become non-rich at any moment. The UBI is ready for that moment should it come. To counter objection 4, we need to activate the social heuristics: ‘anyone could have bad luck’ and ‘everyone is potentially in the same boat’.

There is a large difference between the knowledge that £80 a week will always come into my bank account, this week, next month, and for the rest of my life; and the knowledge that, if things go badly for me, I can do a complex application process, be subjected to a humiliating and lengthy bureaucratic examination, following which, after a delay of up to six weeks during which I will receive nothing, about £80 per week may or may not start to appear in my bank account, and could be withdrawn at any moment if I am ten minutes late for an interview, or am deemed to not be sick enough or not be trying hard enough to look for work.

It is ironic that the system we often refer to as ‘social security’ provides the exact opposite of that: continual, unplannable-for uncertainty, akin to a sword of Damocles.

The insecure, such as those waiting for benefits decisions or enduring benefits sanctions, have short-term liquidity problems. They lose their homes and possessions, or end up having to borrow money at very high interest rates. This is expensive and spirals them into abject poverty. Reducing insecurity could have an indirect effect on inequality, by stopping this spiral. And the health and wellbeing benefits observed in trials of UBI and minimum income guarantees, even over quite short periods, have been so massive that it is hard not to conclude that security does something interesting to human beings, out of all proportion to the monetary value of the transfer, just as Martin Luther King predicted.

What about objection 2 (‘Why should I give my money to people for them to do nothing in return?’)? The objection has two parts: there’s a part about my money being my money, and a part about giving to other people without them doing anything in return. Both parts are important.

First, the my money part. All societies distinguish between individually owned resources and communal resources, though they draw the line in different places. Across societies, alienating an individually-owned resource from someone is morally wrong; but depriving people of a communal resource is equally so. The kinds of cues that trigger intuitions of individual ownership are: my having transformed the material extensively through deliberate action; the resource having been given to me by someone in return for something specific; or the resource having been in my sole possession and use for some time. The kinds of cues that trigger intuitions of communal ownership are: the resource being very abundant; its use being hard to monitor and police; a little of it being essential for everyone’s survival; and the having of it being mainly due to luck. So I think a first move you need to make in making the UBI make sense is to loosen the hold of the individual ownership schema on the money in your wage packet.

The money in my wage packet certainly feels like a good candidate for individual ownership. I have worked hard to get where I have, and this leads to the intuition that every penny in my wage packet is mine and should not be given away to other people without a specific reciprocal service rendered. I suppose I should grudgingly admit that I have had some help from others in earning my salary as an academic; I mean, it’s not quite all my own sweat. Following the logic of individual ownership, I should really have paid for all these inputs at point of use, but somehow I didn’t always do so. There’s the statistical computing language R, the backbone of all my research, developed by people I didn’t know and made freely available without me lifting a finger. Maybe 1p in every pound I earn is really owed to the R Foundation for Statistical Computing.

Then, come to think of it, there is the computer itself, developed by a mixture of public and private investment mainly before I was born. It’s unthinkable that I could be a productive modern professor without this input available. So really I should attribute 2p of each pound I earn to having had that available. Come to think of it, I could not really earn anything as a professor without the existence of an affluent society in which enough people are freed from daily subsistence activities as to want to spend their time studying behavioural science. So I guess I owe the Industrial Revolution say 5p; and then another 3p to those Europeans who invented a rather good system of universities for students to come and study at. Oh, and I do use the scientific method rather a lot (say 4p distributed across a wide range of people in many countries over the last couple of hundred years, and another 2p specifically for the intellectual work of creating my discipline). And a couple of pence in the pound for the philosophers of the Enlightenment; without them to make the world safe for my kind I would at best be a priest on low wages. And then there’s the Romans. What did the Romans ever do for me? Well, there’s the sanitation. And the roads…

As soon as we complete this exercise, we are forced to concede that what seems like my money meets the triggers for individual ownership only in part (my individual labour produced it). In large part, it is a windfall of cumulative cultural evolution. I just got lucky to be born into a shared cultural and technological heritage. I can’t pay back all those parties whose cultural activities contributed to my luck, since many of them are long gone (and besides, they are innumerable and diverse). But accepting that what I earn is partly due to an abundant social windfall created by a whole society over time, whose use and scope is hard to monitor, and which I acquired by sheer luck, loosens the hold of the intuition that all my money belongs exclusively to me. It’s a short step from ‘a part of what I receive from society is due to our common, difficult to monitor, abundant social luck’ to ‘a part of what I receive should be shared out’.

So now we turn to the part about why I should give anything to strangers without requiring them to pay any particular cost in return. A popular pro-UBI argument here, which goes back to Thomas Paine, is that people should be recompensed for the natural heritage that has been alienated from them. The land has been enclosed and privatised; the water has been bottled and sold; you can’t just chop down the trees, hunt game or build a house where you want, as you would have been able to do at the dawn of society. The UBI is this recompense, the royalty, if you will, on an inheritance that was once socially shared but has been taken away by civilization. This reasoning is fine, but a bit lofty and philosophical. I prefer a quiverful of different, more forward-looking arguments.

First, social transfers of some kind are necessary, and monitoring them under the current system is really costly. The UK government recently announced that it needed to review whether its rules on disability benefit claims had been applied correctly to recent claimants. This review is estimated to cost £3.7 billion. That’s enough to give my proposed UBI to everyone in the town of Hexham for over 8 years. Not the cost of the benefit, not the cost of administering the benefit, just the cost of one review of whether the benefit has in fact been correctly administered, for a benefit that only a small fraction of the UK population claims anyway. Scale that up and you appreciate the madness of how we currently administer social transfers.

Second, I do derive all kinds of payoffs from the welfare of others, even strangers. What are they? Well, I enjoy strolling around my city. I enjoy living in a nice orderly street. I enjoy going to the theatre. If my co-citizens were so hungry and desperate that they turned to assaulting their fellows, smashing property, not tending their yards, and abandoning the arts, my personal wellbeing would be directly reduced. I like writing books and giving lectures. It’s therefore in my direct interest that as many people as possible have the resources to read or attend these. Businesses can only flourish if there are people well enough off to be customers. This was the great insight of Henry Ford: he realised he could make a lot more money once he paid his workers enough that they would be able to buy his cars. It’s the reverse-Ponzi-scheme trick, or perpetual motion machine, of modern consumer capitalism: enough money needs to get from those at the top of the pyramid down to those at the bottom that the people there can buy goods and services, which means the money comes back up to the top again. Otherwise the whole thing grinds to a nasty halt.

One way of thinking about this is to say that, in a community, because of the fundamentally social character of human life, the wellbeing of each individual creates a spill-over benefit for the others. It’s what economists call a positive externality. Because of the changes in behaviour that will follow from my neighbour not being in completely dire straits, my life improves a tiny little bit as theirs does. This improvement is very real and substantial, but hard to tie to any one act my neighbour does, and hence hard to monitor or account for in a ledger.

Third, the marginal wellbeing returns to keeping all of my money are diminishing. Diminishing marginal returns mean that if the first few hundred pounds of income massively improve my wellbeing, then the next few hundred improve it slightly less, and so on. A few years ago, Karthik Panchanathan, Tage Rai, Alan Fiske and I produced a simple model of what resource distribution a selfish actor should prefer when there are positive social externalities and diminishing wellbeing returns. We imagined a simple world where there are two actors, me and someone else. We put a value s on the positive externality that flows to me as the other person’s wellbeing increases by one unit. Now we ask: if I can decide how all the available resources get divided up, what allocation should I prefer? The exact numerical answer depends on the value of s and the degree to which marginal returns diminish, but generally, the result is the following. I should want to keep everything up until the point where I myself have got off the steepest part of the increasing wellbeing curve. Above that, it becomes rational for me to want the other actor to have the next chunk of resource, since the positive social externality coming to me from their large increase in wellbeing (they are still on the steep bit of the curve, remember) outweighs the rather small increase in my wellbeing I get from keeping it (since I am on the flatter bit of the curve). There is no ‘problem of cheating’ in this model, since we assume that the positive externalities arise from behavioural changes that the other party will simply want to make anyway as their state improves. It’s a model of mutual benefit, or interdependence, rather than tit-for-tat.
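The logic of this model can be sketched numerically. This is only a toy illustration, not the published model: I have assumed a square-root wellbeing curve and invented the function names; s is the spill-over rate described above.

```python
import math

def my_total_wellbeing(keep, total, s):
    # My own wellbeing from what I keep, plus s units of spill-over
    # for each unit of the other actor's wellbeing. sqrt gives
    # diminishing marginal returns: the curve flattens as resources grow.
    return math.sqrt(keep) + s * math.sqrt(total - keep)

def preferred_allocation(total, s, step=0.01):
    # Grid search for the split a purely selfish actor should prefer.
    candidates = (k * step for k in range(int(total / step) + 1))
    return max(candidates, key=lambda keep: my_total_wellbeing(keep, total, s))

# With 100 units to divide and s = 0.5, the optimum is to keep 80:
# past that point, the other actor's steep wellbeing gains spill back
# to me more than the flat gains from keeping the resource myself.
print(preferred_allocation(100, 0.5))  # ≈ 80.0
```

Under these assumptions the closed-form answer is keep = total / (1 + s²), so a larger spill-over s pushes even the selfish optimum towards an equal split, which is the qualitative result described above.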

This is the reasoning I would use with a well-off person to advocate funding a UBI from their taxes. The money you put into other people’s UBIs will directly increase your individual wellbeing, because in a society where no-one is desperate, it’s easier for the things you really value and derive benefit from to flourish. Furthermore, as already discussed, UBI offers security to you too. You may not need it right now, but you may in the future. Both of these are self-interest arguments, where self-interest is construed sufficiently broadly. You have to be careful about basing all policy arguments on self-interest: it can end up signalling that self-interest is the only normal reason for action, which could become a self-fulfilling prophecy. Nonetheless, perhaps here having self-interest on side helps buttress nobler motives. Experience shows that the long-term success of social policies is tied to the relatively well-off seeing themselves as getting something from them. Where schemes are perceived to benefit only an ‘underclass’, different in kind from the people footing the bill, support easily drains away in the next downturn.

Objection 3 (‘Why would anyone work if they were given money for free?’) is based on the reasonable intuition that conditionality is important in motivating others to do something. One does not generally say to the plumber: ‘Here’s £100. I’m hoping that at some point you will fix my tap’. However nice the plumber might be, the incentives are a bit wrong here. And if people withdrew their supply of labour, the very affluence that can fund the UBI would be undermined.

The best way to loosen this objection is to remind one’s interlocutor of two things. First, the UBI is only ever going to be basic, and people want more than basic out of life. If people’s life ambitions were limited to gaining some modest income of £5000 or £10000 per annum and then stopping, then frankly, the behaviour of the vast majority of people in western societies for the last century would be completely incomprehensible. Lottery winners almost universally continue to work, though often not in their previous jobs. Academics don’t work less when they become full professors: they work harder. The very same critics who say that people won’t do anything if given money for free also often advocate the awarding of huge salaries, millions of pounds per annum, to CEOs and other leaders. Admittedly, those huge salaries are conditional on working, whereas the UBI is not. But the fact that the salary allegedly needs to be so huge to attract candidates implies that people are motivated not just by getting a little bit of money, but by getting a lot. So those who advocate large salaries must believe that the motivation for more money holds up at levels of income way above the basic (at least for the right sort of people, but hey, maybe all people are the right sort).

Second, more important than the amount of labour people supply is the productivity of that labour. By this, I mean people choosing to do activities that are socially useful, in which they are happy, and that they are good at. That has to be key to maximising social wellbeing as well as economic stability in future. There is plenty of evidence from pilot schemes of the effect of the UBI (or similar policies) on labour supply. In the 1970s North American schemes, reductions in work hours were real but very modest. No-one stopped working altogether (and these were minimum income guarantee schemes, which provide stronger disincentives for work than a fully unconditional UBI). The slight reductions in labour supply overall were mainly explained by the behaviour of specific groups: parents took more time out of the labour market to look after their children; and young people were more likely to stay on in education, to improve their skills. Need I point out that these are all things that the state currently subsidizes people to do, at considerable cost, because they are felt to be socially desirable? In short, as Michael Howard has put it: ‘In the pilot schemes people withdrew from the labour market, but the kind of labour market withdrawal you got was the kind you would welcome’. In more recent trials of a full UBI in India and Namibia, overall economic activity actually went up, as more people were able to afford to access job markets, or began entrepreneurial activities on their own account. I believe that under a UBI scheme, work would continue, and become better: innovation, worthwhile work, scholarship, and the arts would flourish, whilst degrading or miserable jobs would have to pay people more or treat them better. Hardly the end of civilization as we know it, then.

If people persist with their intuition that UBI incentivizes people to do nothing, then the argument of last resort is the following: If you think it is stupid to give money to people even if they do nothing (UBI), then you ought to think it really stupid to give people money only on condition that they do nothing (the current means-tested benefits system). How much sense does that make?

There is one other great obstacle to acceptance of the UBI. People can’t figure out whether it is a left-wing idea or a right-wing one, so neither side takes it fully to heart. At first it seems left-wing: making the welfare system more humane and less conditional, and transferring money from those with the most income to those with less, furthers a long-standing socialist or social-democratic concern with inequality and social justice. The neoliberal big idea has failed. A big idea based on collective action must replace it, and the UBI is part of that idea.

But good UBI arguments have come from the right, too. Free-market economist Milton Friedman flirted with the idea, and the most serious federal-level US policy initiative, the Family Assistance Plan (born about 1968, died about 1973), was proposed by a Republican president (Nixon) and largely killed off by the Democratic Party. The right-wing (or libertarian) argument is that UBI massively simplifies the state, and could allow it to relinquish a lot of its micro-control over our lives. For example, if a UBI is there providing a protective floor for everyone, does the state also need to regulate minimum wages so closely? Couldn’t people protected from dire exploitation by the UBI make up their own minds about what paid labour they wish to do under what conditions? Perhaps, going further down this line, the UBI, plus control of law and order, is pretty much all the state needs to do, internally at any rate. We’ve given everyone enough to avoid starvation and be able to participate in economic life in a minimally sufficient way. After that, they are on their own: they can contract for the goods and services they choose in the market. This argument makes UBI the missing piece that completes, not replaces, the neoliberal vision.

In another essay, I have written about the difficulty of inter-disciplinarity. Valuable integrative ideas can languish in the academic uncanny valley, not obviously owned by one discipline or another, and thus fail to have their potential recognized by anyone. Ideas that are quite good from two points of view, perversely, end up being championed by neither side, and thus have less immediate success than ideas that only appeal to one camp or the other. But what happens to the best of these ideas, in the end, is interesting: They go quite abruptly from all parties saying ‘that makes no sense’, to all parties saying ‘well, everyone knows that!’. There’s a similar adage in public policy: Important policy reforms are politically impossible, until just about the point where they are politically inevitable. We’ve seen plenty of examples of this in the slow and halting march of progress. Perhaps that is what will happen with UBI. We will look back and wonder what took us quite so long. Until then, and this is what scholars are uniquely placed to do, we have to keep the idea alive.

Excerpt from Hanging On To The Edges by Daniel Nettle (2018).

Evonomics

Why should someone who is anti-austerity care about debt – Simon Wren-Lewis.

For a country that can create its own currency there is never any necessity to default.

Most of the posts I have written about austerity have been aimed at countering the idea that in a recession you need to bring down government deficits and therefore debt. But what if you accept all that (you are anti-austerity)? Why should you care about debt at all? Why do we have fiscal rules based on deficits? Why not spend what the government needs to spend, and not worry if this results in a larger budget deficit?

The story often given is that the markets will impose some limit on what the government will be able to borrow, because if debt gets ‘too high’ in relation to GDP markets will start demanding a higher return. You can see why that argument is problematic by asking why interest rates on government debt would need to be higher. The most obvious reason is default risk. But for a country that can create its own currency there is never any necessity to default.

Being anti-austerity does not mean we can forget about debt completely, as long as we are using interest rates rather than fiscal policy to control demand.

. . . Mainly Macro

Modern Monetary Theory. IMF continues to tread the ridiculous path – Bill Mitchell.

Last week, the IMF released its so-called Fiscal Monitor October 2018.

Apparently the British government, which issues its own currency, has ‘shareholders’ who care about its Profit and Loss statement and the flow implications of the latter for the Balance Sheet of the Government.

Anyone who knows anything quickly realises this is a ruse. There is no meaningful application of the ‘finances’ pertaining to a private corporation to the ‘finances’ of a currency-issuing government.

A currency-issuing government’s ‘balance sheet’ provides no help in our understanding of what spending capacities such a government has.

A currency-issuing government can always service any liabilities that are denominated in its own currency.

. . . Professor Bill Mitchell’s blog

New Zealand’s political leadership has failed for decades on housing policy – Shamubeel Eaqub. 

New Zealand’s political leadership has failed for decades on housing policy, leading to the rise of a Victorian-style landed gentry, social cohesion coming under immense pressure and a cumulative undersupply of half a million houses over the last 30 years.

House prices are at the highest level they have ever been. And they have risen really, really fast since the 90s, but more so since the early 2000s and have far outstripped every fundamental that we can think of.

After nearly a century of rising home ownership in New Zealand, since 1991 home ownership has been falling. In the last census, the home ownership rate was at its lowest level since 1956. And on my estimate for the end of 2016, it’s at the lowest level since 1946.

We’ve gone back a long way in terms of the promise and the social pact in New Zealand that home ownership is good, and if you work hard you’re going to be able to afford a house.

The reality is that that social pact, that rite of passage, has not been true for many, many decades. The solutions are going to be difficult and they are going to take time.

Before you come and tell me that you paid 20% interest rates: yes, interest rates are much lower now. But the really big problem is that house prices have risen so much that it’s almost impossible to save for the deposit. People could have saved a deposit and paid off their mortgage in about 20-30 years in the early 1990s. Fast forward to today, and that’s more like 50 years. How long do you want to work to pay off your mortgage?

What we’re talking about is the rise of Generation Rent. Those who manage to buy houses are in mortgage slavery for a long period of time.

There is a widening societal gap. If younger generations want to access housing, it’s not enough to have a job, nor enough to have a good job. You must now have parents that are wealthy, and home-owners too. The idea of New Zealand being an egalitarian country is no longer true. The kind of societal divide we’re talking about is very Victorian. We’re in fact talking about the rise of a landed gentry.

For those born after the 1980s, the chance of doing better than your parents is less than 50%.

What we’re creating is a country where opportunities, when it comes to things like housing, are going to be more limited for our children than they were for us. I worry that what we’re creating in New Zealand is a social divide that is only going to keep growing. Housing is only one manifestation of this divide.

There has been a change in philosophy in what underpins the housing market. One very good example is what we have done with our social housing sector.

Housing NZ started building social housing in the late 1930s and stock accumulated over the next 50-60 years to a peak in 1991.

Since then we have not added more social housing. On a per capita basis we have the lowest level of social housing in New Zealand since the 1940s.

This is an ideological position where we do not want to create housing supply for the poor. We don’t want to. This is not about politicians. This is a reflection on us. It is our ideology, it is our politics. Our politicians are doing our bidding. The society that we’re living in today does not want to invest in the bottom half of our society.

The really big kicker has been credit. Significant reductions in mortgage rates over time have driven demand for housing. But we have misallocated our credit. We’re creating more and more debt, but most of that debt is chasing the existing houses. We’re buying and selling from each other rather than creating something new. The housing boom could not have happened on its own; the banking sector facilitated it. More and more credit has been created, and more of that credit is now going towards buying and selling houses from each other rather than funding businesses or building houses.

One of the saddest stories at the moment is that, even though we have an acute housing shortage in Auckland, the hardest thing to find funding for now is new development. When the banks pull away credit, the first thing that goes is the riskiest element of the market.

Seasonally adjusted house sales in Auckland are at the lowest level since 2011. This is worrying because what happens in the property market expands to the economy, consents and the construction sector.

I fully expect a construction bust next year. We are going to have a construction bust before we have a housing bust. We haven’t built enough houses for a very long period of time. And if we’re going to keep not building enough houses, I’m not confident that whatever correction we have in the housing market is going to last.

New money created in the economy is largely chasing the property market. Household debt to GDP has been rising steadily since the 1990s. People are taking on more debt, but banks have started to cut back on the amount of credit available overall.

For every unit of economic growth over the course of the last 10, 20 years, we needed more and more debt to create that growth. We are more and more addicted to debt to create our economic growth.

Credit is now going backwards. If credit is not going to be available in aggregate, we know the biggest losers are in fact going to be businesses and property development.

It means we are not going to be building a lot of the projects that have been consented, and we know the construction cycle is going to come down. I despair.

I despair that we still talk so much more about buying and selling houses than actually starting businesses. The cultural sclerosis that we see in New Zealand has as much to do with the problem of the housing market as our rules around the Resource Management Act or our banking sector do.

On demand, we know there’s been significant growth in New Zealand’s population. Even though it feels like all of that population growth has come from net migration, the reality is that it’s actually natural population growth that’s created the bulk of the demand.

But net migration has created a volatility that we can’t deal with. A lot of the cyclicality in New Zealand’s housing market and demand, comes from net migration and we simply cannot respond.

We do know that there is money that’s global that is looking for a safe haven, and New Zealand is part of that story. We don’t have very good data in New Zealand because we refuse to collect it. There is a lack of leadership regarding our approach to foreign investment in our housing market.

Looking at what’s happening in Canada and Australia would indicate roughly 10% of house sales in Auckland are to foreign buyers. Yes it matters, but when 90% of your sales are going to locals, I think it’s a bit of a red herring.

The historical context of where demand for housing comes from shows that the biggest chunk is natural population growth. The second biggest was changes in household size as families got smaller; more recently that has stopped, i.e. kids refusing to leave home.

There has been a massive variation in what happens with net migration.

New Zealand needs about 21,000 houses a year to keep up with population growth and changes that are taking place. But over the course of the last four years, we’ve needed more like 26,000. We’re nowhere near building those kinds of houses.

This means we need to think about demand management from a policy perspective. It’s more about cyclical management rather than structural management.

Population growth has always been there. Whether it’s from migration or not doesn’t matter. The problem is our housing market, our land supply, our infrastructure supply, can’t keep up with any of it.

While immigration is a side problem, it is nevertheless an important conversation to have because of the volatility it can create. I struggle with the fact that we have no articulated population strategy in New Zealand. We have immigration because we have immigration. That’s not a very good reason.

Why do we want immigration, how big do we want to be, do you want 15 million people or do you want five?

What sort of people do we want? Are we just using immigration as shorthand for not educating our kids because we can’t fill the skills shortages that we have in our industries?

Let’s not pretend that it’s all about people wanting to live in houses.

You’d be very hard pressed to argue that people want to buy houses in Epsom at a 3% rental yield for investment purposes. They want to buy houses in Epsom at 3% rental yield because they want to speculate on the capital gains. Let’s be honest with ourselves.

If your floating mortgage rate is 5.5% and you’re getting 3% from your rent, what does that tell you about your investment? It tells you that you’re not really doing it for cash-flow purposes. You’re doing it because you expect capital gains, and you expect those capital gains to compensate you.
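The cash-flow arithmetic can be made concrete. The house price here is purely illustrative (my assumption, not a figure from the talk), and the calculation assumes an interest-only, fully leveraged purchase:

```python
price = 1_000_000        # illustrative Epsom house price (NZ$); my assumption
rental_yield = 0.03      # 3% gross rental yield
mortgage_rate = 0.055    # 5.5% floating rate, interest-only for simplicity

annual_rent = price * rental_yield        # NZ$30,000 coming in
annual_interest = price * mortgage_rate   # NZ$55,000 going out
cash_flow = annual_rent - annual_interest

# A shortfall of NZ$25,000 a year: the purchase only pays off if
# expected capital gains exceed it. Speculation, not cash flow.
print(round(cash_flow))  # -25000
```

Scale the price up or down and the conclusion is unchanged, because both rent and interest are proportional to it: at these rates the yield gap is 2.5% of the price, every year.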

The real story in Auckland is that a lot of additional demand is coming from investment.

Land supply in New Zealand is slow, particularly in places like Auckland. But it’s not just about sections, it’s also about density. The Unitary Plan was a win for Auckland. The reality is that if we only do greenfields, we will just see more people sitting in traffic at the end of Drury.

The majority of new housing supply is large houses, while the majority of new households being formed are 1-2 person households.

Between the last two censuses, most of the housing stock built in New Zealand had four bedrooms or more. In contrast, the majority of households that were created were single people or couples. We have ageing populations, we have the empty nesters, we have young people who are having kids later…and we’re building stand-alone houses, with four bedrooms.

We have to think very hard about how to create supply not just for the top end. Even though we know that in theory building just enough houses is good for everybody, when you’re starting from a point of not enough houses, the bottom end gets screwed for longer. We have to think very hard about whether we want to use things like inclusionary zoning; we have to think very hard about what we want to do with social housing.

Right now we’re not building houses for everybody in our community. We are failing by building the wrong sorts of houses in our communities.

Right at the top are land costs. If we think about what has been driving up the cost of housing, the biggest factor is the value of land. It’s true that we should also look at what’s happening in the rental market and with the costs of construction. But those are not the main drivers of the very unaffordable house prices that we see in New Zealand today.

The biggest constraint is in land, and that is where the speculation is taking place.

We know we’re not building enough. In the 1930s to 1940s we had very different types of governments and ideology. We actually built more houses per capita back then than we have in the last 30 years.

In the late 40s to early 70s, with the rise of the welfare state and the build-up of infrastructure, we built massive numbers of houses on a per capita basis.

But since the oil shock and the 1980s reforms, we have never structurally managed to build as many houses as we did pre-1980. The cumulative gap between the trend of the last 30 years and what we had seen in the 40s, 50s and 60s is around half a million houses.

So there is something that is fundamentally and structurally different in what we have done in terms of housing supply in New Zealand over a very long period of time.

The way that we do our planning rules, the advent of the RMA, the way that we fund and govern our local government: all of these things have changed. So the nature of the provision of infrastructure, the provision of land, the provision of consents, all of these things have changed massively. But the net result is we’re not building as many houses, and that is a fundamental problem.

In Auckland there is a massive gap between the house-building targets set by government over the past three years and the number of consents issued. On top of this, the targets themselves were still not high enough.

Somehow we’re still not able to respond to the growth that Auckland is facing. Consistently we have underestimated how many people want to live in a place like Auckland.

But it’s not just Auckland. Carterton outgrows its projections every year; it’s got a fantastic train line and people want to live there, so it’s not surprising.

But we are failing. We have been failing and we continue to fail. We have to be far more responsive and we have to have a much longer time horizon to have the provision for housing that’s needed.

There is in fact no real plan. The Unitary Plan is fantastic in that it actually plans for just enough houses to meet the population projections. But we can confidently say that projection is going to prove pessimistic; we’re going to have way more people in Auckland.

Trump and Brexit have marked a shift in politics and a polarisation in the public’s view of it. In New Zealand I think one of the catalysts could be Generation Rent. In the last census, 51% of adults (those aged over 15) rented. It is no longer a minority that rents, but a majority of individuals.

I’m not saying we’ll see the same kind of uprising in New Zealand, but what we saw in Brexit was that discontent was the majority of voters. If young people had actually turned up to vote, Brexit wouldn’t have happened. The same is true for New Zealand.

It is strange that there was no sense of crisis or urgency. For a lot of the voters, things are just fine. The people for whom it’s not fine aren’t voting, and they feel disengaged.

The kind of politics that we will start to see in the next 10 years is something much more activist, the ‘urgency of now’.

The promise of democracy is to create an economy that is fit for everyone. It is about creating opportunities for everyone. Right now, particularly when it comes to housing, we are failing. We are not creating a democratic community when it comes to our housing supply because young people are locked out, because young people are going to suffer, and we know there are some big differences across the different parts of New Zealand.

When we’re starting from a position of crisis, it’s not going to be enough to simply create more housing to appease the public. We have to be far more activist in making sure that we’re creating housing that is fit for purpose, not just for the general populace, but for the bottom half who are clearly losing out from what is going on.

We know what the causes are. I’m sick of arguing about why we’re here. We know why we’re here: because we haven’t had enough political leadership to deal with the problems that are there.

We can’t implement the solutions unless we have political leadership, political cohesion, and endurance over the political cycle. This is a big challenge, but a big opportunity.

Shamubeel Eaqub

***

  • There has been a cumulative 500,000 gap in housing supply over the last 30 years.
  • Eaqub predicted a construction bust next year, led by banks tightening lending.
  • It’s remarkable that NZ authorities do not have proper data on foreign buyers. While he estimates 10% of purchases in Auckland are made by foreign investors, he said the main focus should be on the other 90% made by locals.
  • However, migration creates cyclical volatility that we can’t deal with; it is unbelievable that New Zealand doesn’t have a stated population policy.
  • New Zealand is still not building the right-sized houses – the majority of properties built in recent years have had four-plus bedrooms, while households have grown smaller.
  • The majority of New Zealand’s adult population is now renting. This could be the catalyst for a Brexit/Trump-style rising up of formerly disengaged voters – young people in our case – to engage at this year’s election.
  • New Zealand’s home ownership level is now at its lowest point since 1946.
  • We have a cultural sclerosis of buying and selling existing houses to one another.

interest.co.nz

Undoing poverty’s negative effect on brain development with cash transfers – Cameron McLeod. 

An upcoming experiment into brain development and poverty by Kimberly G Noble, associate professor of neuroscience and education at Columbia University’s Teachers College, asks whether poverty may affect the development, “the size, shape, and functioning,” of a child’s brain, and whether “a cash stipend to parents” would prevent this kind of damage.

Noble writes that “poverty places the young child’s brain at much greater risk of not going through the paces of normal development.” Children raised in poverty perform less well in school, are less likely to graduate from high school, and are less likely to continue on to college. Children raised in poverty are also more likely to be underemployed as adults. Research in sociology and neuroscience has shown that a childhood spent in poverty can result in “significant differences in the size, shape and functioning” of the brain. Can the damage done to children’s brains be negated by the intervention of a subsidy for brain health?

This most recent study’s fundamental difference from past efforts is that it explores what kind of effect “directly supplementing” the incomes of families will have on brain development. “Cash transfers, as opposed to counseling, child care and other services, have the potential to empower families to make the financial decisions they deem best for themselves and their children.” Noble’s hypothesis is that a “cascade of positive effects” will follow from the cash transfers, and that if proved correct, this has implications for public policy and “the potential to…affect the lives of millions of disadvantaged families with young children.”

Brain Trust, Kimberly G. Noble

  • Children who live in poverty tend to perform worse than peers in school on a bevy of different tests. They are less likely to graduate from high school and then continue on to college and are more apt to be underemployed once they enter the workforce.
  • Research that crosses neuroscience with sociology has begun to show that educational and occupational disadvantages that result from growing up poor can lead to significant differences in the size, shape and functioning of children’s brains.
  • Poverty’s potential to hijack normal brain development has led to plans for studying whether a simple intervention might reverse these injurious effects. A study now in the planning stages will explore if a modest subsidy can enhance brain health.

BasicIncome.org

***

The goal of Dr. Noble’s research is to better characterize socioeconomic disparities in children’s cognitive and brain development. Ongoing studies in her lab address the timing of neurocognitive disparities in infancy and early childhood, as well as the particular exposures and experiences that account for these disparities, including access to material resources, richness of language exposure, parenting style and exposure to stress. Finally, she is interested in applying this work to the design of interventions that aim to target gaps in school readiness, including early literacy, math, and self-regulation skills. She is honored to be part of a national team of social scientists and neuroscientists planning the first clinical trial of poverty reduction, which aims to estimate the causal impact of income supplementation on children’s cognitive, emotional and brain development in the first three years of life.

Columbia University

***

A short review on the link between poverty, children’s cognition and brain development, 13th March 2017

In the latest issue of Scientific American, Kimberly Noble, associate professor in neuroscience and education, reviews her work and introduces an ambitious research project that may help us understand the cause-and-effect connection between poverty and children’s brain development.

For the past 15 years, Noble and her colleagues have gathered evidence to explain how socioeconomic disparities may underlie differences in children’s cognition and brain development. In the course of their research they have found for example that children living in poverty tend to have reduced cognitive skills – including language, memory skills and cognitive control (Figure 1).

Figure 1. Wealth effect

More recently, they published evidence showing that the socio-economic status of parents (as assessed using parental education, income and occupation) can also predict children’s brain structure.

By measuring the cortical surface area of children’s brains (ie the area of the surface of the cortex, the outer layer of the brain which contains all the neurons), they found that lower family income was linked to smaller cortical surface area, especially in brain regions involved in language and cognitive control abilities (Figure 2 – in magenta).

Figure 2. A Brain on Poverty

In the same research, they also found that longer parental education was linked to increased hippocampus volume in children, a brain structure essential for memory processes.

Overall, Noble’s work adds to a growing body of research showing the negative relationship between poverty and brain development, and these findings may explain (at least in part) why children from poor families are less likely to obtain good grades at school, graduate from high school or attend college.

What is less known however, is the causal mechanism underlying this relationship. As Noble describes, differences in school and neighbourhood quality, chronic stress in the family home, less nurturing parenting styles or a combination of all these factors might explain the impact of poverty on brain development and cognition.

To better understand the causal effect of poverty, Noble has teamed up with economists and developmental psychologists and together, they will soon launch a large-scale experiment, or “randomised controlled trial”. As part of this experiment, 1000 US women from low-income backgrounds will be recruited soon after giving birth and will be followed over a three-year period. Half of the women will receive $333 per month (the “experimental” group) and the other half will receive $20 per month (the “control” group). Mothers and children will be monitored throughout the study, and mothers will be able to spend the money as they wish, without any constraints.

By comparing children belonging to the experimental group to those in the control group, researchers will be able to observe how increases in family income may directly benefit cognition and brain development. They will also be able to test whether the way mothers use the extra income is a relevant factor to explain these benefits.

Noble concludes that “although income may not be the only factor that determines a child’s developmental trajectory, it may be the easiest one to alter” through social policy. And given that 25% of American children and 12% of British children are affected by poverty (as reported by UNICEF in 2012), policies designed to alleviate poverty may have the capacity to reach and improve the life chances of millions of children.

NGN looks forward to seeing the results of this large-scale experiment. We expect that this project, together with other research studies, will improve our understanding of the link between poverty and child development, and will help design better interventions to support disadvantaged children.

Nature Groups

***

Socioeconomic inequality and children’s brain development. 

Research addresses issues at the intersection of psychology, neuroscience and public policy.

By Kimberly G. Noble, MD, PhD

Kimberly Noble, MD, PhD, is an associate professor of neuroscience and education at Teachers College, Columbia University. She received her undergraduate, graduate and medical degrees at the University of Pennsylvania. As a neuroscientist and board-certified pediatrician, she studies how inequality relates to children’s cognitive and brain development. Noble’s work has been supported by several federal and foundation grants, and she was named a “Rising Star” by the Association for Psychological Science. Together with a team of social scientists and neuroscientists from around the United States, she is planning the first clinical trial of poverty reduction to assess the causal impact of income on cognitive and brain development in early childhood.

Kimberly Noble website.

What can neuroscience tell us about why disadvantaged children are at risk for low achievement and poor mental health? How early in infancy does socioeconomic disadvantage leave an imprint on the developing brain, and what factors explain these links? How can we best apply this work to inform interventions? These and other questions are the focus of the research my colleagues and I have been addressing for the last several years.

What is socioeconomic status and why is it of interest to neuroscientists?

The developing human brain is remarkably malleable to experience. Of course, a child’s experience varies tremendously based on his or her family’s circumstances (McLoyd, 1998). And so, as neuroscientists, we can use family circumstance as a lens through which to better understand how experience relates to brain development.

Family socioeconomic status, or SES, is typically considered to include parental educational attainment, occupational prestige and income (McLoyd, 1998); subjective social status, or where one sees oneself on the social hierarchy, may also be taken into account (Adler, Epel, Castellazzo & Ickovics, 2000). A large literature has established that disparities in income and human capital are associated with substantial differences in children’s learning and school performance. For example, socioeconomic differences are observed across a range of important cognitive and achievement measures for children and adolescents, including IQ, literacy, achievement test scores and high school graduation rates (Brooks-Gunn & Duncan, 1997). These differences in achievement in turn result in dramatic differences in adult economic well-being and labor market success.

However, although outcomes such as school success are clearly critical for understanding disparities in development and cognition, they tell us little about the underlying neural mechanisms that lead to these differences. Distinct brain circuits support discrete cognitive skills, and differentiating between underlying neural substrates may point to different causal pathways and approaches for intervention (Farah et al., 2006; Hackman & Farah, 2009; Noble, McCandliss, & Farah, 2007; Raizada & Kishiyama, 2010). Studies that have used a neurocognitive framework to investigate disparities have documented that children and adolescents from socioeconomically disadvantaged backgrounds tend to perform worse than their more advantaged peers on several domains, most notably in language, memory, self-regulation and socio-emotional processing (Hackman & Farah, 2009; Hackman, Farah, & Meaney, 2010; Noble et al., 2007; Noble, Norman, & Farah, 2005; Raizada & Kishiyama, 2010).

Family socioeconomic circumstance and children’s brain structure

More recently, we and other neuroscientists have extended this line of research to examine how family socioeconomic circumstances relate to differences in the structure of the brain itself. For example, in the largest study of its kind to date, we analyzed the brain structure of 1099 children and adolescents recruited from socioeconomically diverse homes from ten sites across the United States (Noble, Houston et al., 2015). We were specifically interested in the structure of the cerebral cortex, or the outer layer of brain cells that does most of the cognitive “heavy lifting.” We found that both parental educational attainment and family income accounted for differences in the surface area, or size of the “nooks and crannies” of the cerebral cortex. These associations were found across much of the brain, but were particularly pronounced in areas that support language and self-regulation — two of the very skills that have been repeatedly documented to show large differences along socioeconomic lines.

Several points about these findings are worth noting. First, genetic ancestry, or the proportion of ancestral descent for each of six major continental populations, was held constant in the analyses. Thus, although race and SES tend to be confounded in the U.S., we can say that the socioeconomic disparities in brain structure that we observed were independent of genetically-defined race. Second, we observed dramatic individual differences, or variation from person to person. That is, there were many children and adolescents from disadvantaged homes who had larger cortical surface areas, and many children from more advantaged homes who had smaller surface areas. This means that our research team could in no way accurately predict a child’s brain size simply by knowing his or her family income alone. Finally, the relationship between family income and surface area was nonlinear, such that the steepest gradient was seen at the lowest end of the income spectrum. That is, dollar for dollar, differences in family income were associated with proportionately greater differences in brain structure among the most disadvantaged families.

More recently, we also examined the thickness of the cerebral cortex in the same sample (Piccolo, et al., 2016). In general, as we get older, our cortices tend to get thinner. Specifically, cortical thickness decreases rapidly in childhood and early adolescence, followed by a more gradual thinning, and ultimately plateauing in early- to mid-adulthood (Raznahan et al., 2011; Schnack et al., 2014; Sowell et al., 2003). Our work suggests that family socioeconomic circumstance may moderate this trajectory. 

Specifically, at lower levels of family SES, we observed relatively steep age-related decreases in cortical thickness earlier in childhood, and subsequent leveling off during adolescence. In contrast, at higher levels of family SES, we observed more gradual age-related reductions in cortical thickness through at least late adolescence. We speculated that these findings may reflect an abbreviated period of cortical thinning in lower SES environments, relative to a more prolonged period of cortical thinning in higher SES environments. It is possible that socioeconomic disadvantage is a proxy for experiences that narrow the sensitive period, or time window for certain aspects of brain development that are malleable to environmental influences, thereby accelerating maturation (Tottenham, 2015).

Are these socioeconomic differences in brain structure clinically meaningful? Early work would suggest so. In our work, we have found that differences in cortical surface area partially accounted for links between family income and children’s executive function skills (Noble, Houston et al., 2015). Independent work in other labs has suggested that differences in brain structure may account for between 15 and 44 percent of the family income-related achievement gap in adolescence (Hair, Hanson, Wolfe & Pollak, 2015; Mackey et al., 2015). This line of research is still in its infancy, however, and several outstanding questions remain to be addressed.

How early are socioeconomic disparities in brain development detectable?

By the start of school, it is apparent that dramatic socioeconomic disparities in children’s cognitive functioning are already evident, and indeed, several studies have found that socioeconomic disparities in language (Fernald, Marchman & Weisleder, 2013; Noble, Engelhardt et al., 2015; Rowe & Goldin-Meadow, 2009) and memory (Noble, Engelhardt et al., 2015) are already present by the second year of life. But methodologies that assess brain function or structure may be more sensitive to differences than are tests of behavior. This raises the question of just how early we can detect socioeconomic disparities in the structure or function of children’s brains.

 One group reported socioeconomic differences in resting electroencephalogram (EEG) activity — which indexes electrical activity of the brain as measured at the scalp — as early as 6–9 months of age (Tomalski et al., 2013). Recent work by our group, however, found no correlation between SES and the same EEG measures within the first four days following birth (Brito, Fifer, Myers, Elliott & Noble, 2016), raising the possibility that some of these differences in brain function may emerge in part as a result of early differences in postnatal experience. Of course, a longitudinal study assessing both the prenatal and postnatal environments would be necessary to formally test this hypothesis. Furthermore, another group recently reported that, among a group of African-American, female infants imaged at 5 weeks of age, socioeconomic disadvantage was associated with smaller cortical and deep gray matter volumes (Betancourt et al., 2015). It is thus also likely that at least some socioeconomic differences in brain development are the result of socioeconomic differences in the prenatal environment (e.g., maternal diet, stress) and/or genetic differences.

Disentangling links among socioeconomic disparities, modifiable experiences and brain development represents a clear priority for future research. Are the associations between SES and brain development the result of differences in experiences that can serve as the targets of intervention, such as differences in nutrition, housing and neighborhood quality, parenting style, family stress and/or education? Certainly, the preponderance of social science evidence would suggest that such differences in experience are likely to account at least in part for differences in child and adolescent development (Duncan & Magnuson, 2012). However, few studies have directly examined links among SES, experience and the brain (Luby et al., 2013). In my lab, we are actively focusing on these issues, with specific interest in how chronic stress and the home language environment may, in part, explain our findings.

How can this work inform interventions?

Quite a few interventions aim to reduce socioeconomic disparities in children’s achievement. Whether school-based or home-based, many are quite effective, though they frequently face challenges: high-quality interventions are expensive, difficult to scale up and often suffer from “fadeout,” the phenomenon whereby the positive effects of the intervention dwindle with time once children are no longer receiving services.

What about the effects of directly supplementing family income? Rather than providing services, such “cash transfer” interventions have the potential to empower families to make the financial decisions they deem best for themselves and their children. Experimental and quasi-experimental studies in the social sciences, both domestically and in the developing world, have suggested the promise of direct income supplementation (Duncan & Magnuson, 2012).

To date, linkages between poverty and brain development have been entirely correlational in nature; the field of neuroscience is silent on the causal connections between poverty and brain development. As such, I am pleased to be part of a team of social scientists and neuroscientists who are currently planning and raising funds to launch the first-ever randomized experiment testing the causal connections between poverty reduction and brain development.

The ambition of this study is large, though the premise is simple. We plan to recruit 1,000 low-income U.S. mothers at the time of their child’s birth. Mothers will be randomized to receive a large monthly income supplement or a nominal monthly income supplement. Families will be tracked longitudinally to definitively assess the causal impact of this unconditional cash transfer on cognitive and brain development in the first three years following birth, when we believe the developing brain is most malleable to experience.
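
The logic of the trial is simple enough to sketch in code. The snippet below is a hypothetical illustration of a two-arm randomized design, not the study team’s actual protocol or analysis code; the function names, the arm labels and the outcome measure are invented for the example.

```python
import random
import statistics

def randomize(participants, seed=0):
    """Assign each participant at random to the large ("high", $333/month)
    or nominal ("low", $20/month) supplement arm."""
    rng = random.Random(seed)
    return {p: rng.choice(["high", "low"]) for p in participants}

def group_means(outcomes, assignment):
    """Average an outcome measure (e.g. a cognitive score) within each arm."""
    by_arm = {"high": [], "low": []}
    for p, score in outcomes.items():
        by_arm[assignment[p]].append(score)
    return {arm: statistics.mean(scores) for arm, scores in by_arm.items()}
```

Because the assignment is random, a systematic difference between the two arms’ average outcomes can be attributed to the income supplement itself rather than to pre-existing differences between the families, which is what makes the causal claim possible.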

We hypothesize that increased family income will trigger a cascade of positive effects throughout the family system. As a result, across development, children will be better positioned to learn foundational skills. If our hypotheses are borne out, this proposed randomized trial has the potential to inform social policies that affect the lives of millions of disadvantaged families with young children. While income may not be the only or even the most important factor in determining children’s developmental trajectories, it may be the most manipulable from a policy perspective.

American Psychological Association

Abuse breeds child abusers – Jarrod Gilbert. 

Often when I’m doing research I dance a silly jig when I gleefully unearth a gem of information hitherto unknown or long forgotten. In studying the violent deaths of kids that doesn’t happen.

There was no dance of joy when I discovered New Zealanders are more likely to be homicide victims in their first tender years than at any other time in their lives. But nothing numbs you like the photographs of dead children.

Little bodies lying there limp with little hands and little fingers, covered in scratches and an array of bruises, some dark black and some fading, looking as vulnerable dead as they were when they were alive.

James Whakaruru’s misery ended when he was killed in 1999. He had endured four years of life and that was all he could take. He was hit with a small hammer, a jug cord and a vacuum cleaner hose. During one beating his mind was so confused he stared blankly ahead. His tormentor responded by poking him in the eyes. It was a stomping that eventually switched out his little light. It was a case that even the Mongrel Mob condemned, calling the cruelty “amongst the lowest of any act”.

An inquiry by the Commissioner for Children found a number of failings by state agencies, which were all too aware of the boy’s troubled existence. The Commissioner said James became a hero because changes made to Government agencies would save lives in the future. Yet such horrors have continued. My colleague Greg Newbold has found that on average nine children (under 15) have been killed each year as a result of maltreatment since 1992, and the rate has not abated in recent years. In 2015, there were 14 such deaths, one of which was three-year-old Moko Rangitoheriri, or baby Moko as we knew him when he gained posthumous celebrity.

Moko’s life was the same as James’s, and he too died in agony; he endured weeks of being beaten, kicked, and smeared with faeces. That was the short life he knew. Most of us will struggle to comprehend these acts but we are desperate to stop them. Desperate to ensure state agencies are capable of intervening to protect those who can not protect themselves and, through no fault of their own, are subjected to cruelty by those who are meant to protect them.

The reasons for intervening don’t stop with the imperative to save young lives. For every child killed there are dozens who live wretched existences and from this cohort of unfortunates will come the next generation of abusers.  Solving the problems of today, then, is not just a moral imperative but is also about producing a positive ripple effect.

And this is why, in the cases of James Whakaruru and baby Moko, the best and most efficient time for intervention was not the period leading up to their abuse, but many years before they were born. The men involved in each of those killings came from the same family. And it seems their lives were transient and tragic: one spent time in the now infamous Epuni Boys home, which is ground zero for calls for an inquiry into state care abuse (and, incidentally, the birthplace of the Mongrel Mob).

Once young victims themselves, those boys crawled into adulthood and became violent men capable of imparting cruelty onto kids in their care.

This cycle of abuse is well known, yet state spending on the problem is poorly aligned to it, and our targeting of the problem is reactionary and punitive rather than proactive and preventative.

Of the $1.4 billion we spend on family and sexual violence annually, less than 10 per cent is spent on interventions, of which just 1.5 per cent is spent on primary prevention. The morality of that is questionable, the economics even more so.
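
In dollar terms, those shares work out roughly as follows. This is a back-of-the-envelope sketch; the sentence leaves ambiguous whether the 1.5 per cent is a share of the total spend or of the intervention spend, so the figures below assume the former.

```python
# Rough scale of the spending shares quoted above.
total = 1.4e9                       # annual spend on family and sexual violence, NZ$
interventions = 0.10 * total        # "less than 10 per cent": an upper bound
primary_prevention = 0.015 * total  # "just 1.5 per cent" (assumed share of total)

print(f"interventions:      < ${interventions / 1e6:.0f}m")
print(f"primary prevention:   ${primary_prevention / 1e6:.0f}m")
```

That is, under $140 million goes to interventions and about $21 million to primary prevention, out of $1.4 billion.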

Not only must things be approached differently but there needs to be greater urgency in our thinking. It’s perhaps trite to say, but if nine New Zealanders were killed every year in acts of terrorism politicians would never stop talking about it and it would be priority number one.

In an election year, that’s exactly where this issue should be. If the kids in violent homes had a voice, that’s what they’d be saying.

But if the details of such deaths don’t move our political leaders to urgent action, I rather fear nothing will. Maybe they should be made to look at the photographs.

• Dr Jarrod Gilbert is a sociologist at the University of Canterbury and the lead researcher at Independent Research Solutions.

No, wealth isn’t created at the top. It is merely devoured there – Rutger Bregman. 

This piece is about one of the biggest taboos of our times. About a truth that is seldom acknowledged, and yet, on reflection, cannot be denied. The truth that we are living in an inverse welfare state.

These days, politicians from the left to the right assume that most wealth is created at the top. By the visionaries, by the job creators, and by the people who have “made it”. By the go-getters oozing talent and entrepreneurialism that are helping to advance the whole world.

Now, we may disagree about the extent to which success deserves to be rewarded – the philosophy of the left is that the strongest shoulders should bear the heaviest burden, while the right fears high taxes will blunt enterprise – but across the spectrum virtually all agree that wealth is created primarily at the top.

So entrenched is this assumption that it’s even embedded in our language. When economists talk about “productivity”, what they really mean is the size of your paycheck. And when we use terms like “welfare state”, “redistribution” and “solidarity”, we’re implicitly subscribing to the view that there are two strata: the makers and the takers, the producers and the couch potatoes, the hardworking citizens – and everybody else.

In reality, it is precisely the other way around. In reality, it is the waste collectors, the nurses, and the cleaners whose shoulders are supporting the apex of the pyramid. They are the true mechanism of social solidarity. Meanwhile, a growing share of those we hail as “successful” and “innovative” are earning their wealth at the expense of others. The people getting the biggest handouts are not down around the bottom, but at the very top. Yet their perilous dependence on others goes unseen. Almost no one talks about it. Even for politicians on the left, it’s a non-issue.

To understand why, we need to recognise that there are two ways of making money. The first is what most of us do: work. That means tapping into our knowledge and know-how (our “human capital” in economic terms) to create something new, whether that’s a takeout app, a wedding cake, a stylish updo, or a perfectly poured pint. To work is to create. Ergo, to work is to create new wealth.

But there is also a second way to make money. That’s the rentier way: by leveraging control over something that already exists, such as land, knowledge, or money, to increase your wealth. You produce nothing, yet profit nonetheless. By definition, the rentier makes his living at others’ expense, using his power to claim economic benefit.

For those who know their history, the term “rentier” conjures associations with heirs to estates, such as the 19th century’s large class of useless rentiers, well-described by the French economist Thomas Piketty. These days, that class is making a comeback. (Ironically, however, conservative politicians adamantly defend the rentier’s right to lounge around, deeming inheritance tax to be the height of unfairness.) But there are also other ways of rent-seeking. From Wall Street to Silicon Valley, from big pharma to the lobby machines in Washington and Westminster, zoom in and you’ll see rentiers everywhere.

There is no longer a sharp dividing line between working and rentiering. In fact, the modern-day rentier often works damn hard. Countless people in the financial sector, for example, apply great ingenuity and effort to amass “rent” on their wealth. Even the big innovations of our age – businesses like Facebook and Uber – are interested mainly in expanding the rentier economy. The problem with most rich people, therefore, is not that they are couch potatoes. Many a CEO toils 80 hours a week to multiply his allowance. It’s hardly surprising, then, that they feel wholly entitled to their wealth.

It may take quite a mental leap to see our economy as a system that shows solidarity with the rich rather than the poor. So I’ll start with the clearest illustration of modern freeloaders at the top: bankers. Studies conducted by the International Monetary Fund and the Bank for International Settlements – not exactly leftist thinktanks – have revealed that much of the financial sector has become downright parasitic. Instead of creating wealth, it gobbles it up whole.

Don’t get me wrong. Banks can help to gauge risks and get money where it is needed, both of which are vital to a well-functioning economy. But consider this: economists tell us that the optimum level of total private-sector debt is around 100% of GDP. Beyond that point, further growth of the financial sector doesn’t add wealth; it subtracts it. So here’s the bad news. In the United Kingdom, private-sector debt is now at 157.5%. In the United States the figure is 188.8%.
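The figures above can be restated as a quick back-of-the-envelope calculation. This is a sketch only: the 100% “optimum” is the rule of thumb the text cites, not a precise law, and the ratios are the ones quoted in the paragraph.

```python
# Back-of-the-envelope check of the debt-to-GDP figures cited above.
# The 100% "optimum" is the benchmark the text cites, not a hard law.
BENCHMARK = 100.0  # optimum private-sector debt, as a % of GDP

debt_to_gdp = {
    "United Kingdom": 157.5,  # figures as quoted in the text
    "United States": 188.8,
}

for country, ratio in debt_to_gdp.items():
    excess = ratio - BENCHMARK
    print(f"{country}: {ratio:.1f}% of GDP, {excess:.1f} points over the benchmark")
```

On these numbers, UK private-sector debt sits 57.5 percentage points above the benchmark and US debt 88.8 points above it, which is the sense in which the article calls the sector’s size wealth-destroying rather than wealth-creating.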

In other words, a big part of the modern banking sector is essentially a giant tapeworm gorging on a sick body. It’s not creating anything new, merely sucking others dry. Bankers have found a hundred and one ways to accomplish this. The basic mechanism, however, is always the same: offer loans like it’s going out of style, which in turn inflates the price of things like houses and shares, then earn a tidy percentage off those overblown prices (in the form of interest, commissions, brokerage fees, or what have you), and if the shit hits the fan, let Uncle Sam mop it up.

The financial innovation concocted by all the math whizzes working in modern banking (instead of at universities or companies that contribute to real prosperity) basically boils down to maximising the total amount of debt. And debt, of course, is a means of earning rent. So for those who believe that pay ought to be proportionate to the value of work, the conclusion we have to draw is that many bankers should be earning a negative salary; a fine, if you will, for destroying more wealth than they create.

Bankers are the most obvious class of closet freeloaders, but they are certainly not alone. Many a lawyer and accountant wields a similar revenue model. Take tax evasion. Untold hardworking, academically degreed professionals make a good living at the expense of the populations of other countries. Or take the tide of privatisations over the past three decades, which has been all but a carte blanche for rentiers. One of the richest people in the world, Carlos Slim, earned his billions by obtaining a monopoly on the Mexican telecom market and then hiking prices sky high. The same goes for the Russian oligarchs who rose after the Berlin Wall fell, buying up valuable state-owned assets for a song to live off the rent.

But here’s the rub. Most rentiers are not as easily identified as the greedy banker or manager. Many are disguised. On the face of it, they look like industrious folks, because for part of the time they really are doing something worthwhile. And precisely that makes us overlook their massive rent-seeking.

Take the pharmaceutical industry. Companies like GlaxoSmithKline and Pfizer regularly unveil new drugs, yet most real medical breakthroughs are made quietly at government-subsidised labs. Private companies mostly manufacture medications that resemble what we’ve already got. They get these patented and, with a hefty dose of marketing, a legion of lawyers, and a strong lobby, can live off the profits for years. In other words, the vast revenues of the pharmaceutical industry are the result of a tiny pinch of innovation and fistfuls of rent.

Even paragons of modern progress like Apple, Amazon, Google, Facebook, Uber and Airbnb are woven from the fabric of rentierism. Firstly, because they owe their existence to government discoveries and inventions (every sliver of fundamental technology in the iPhone, from the internet to batteries and from touchscreens to voice recognition, was invented by researchers on the government payroll). And secondly, because they tie themselves into knots to avoid paying taxes, retaining countless bankers, lawyers, and lobbyists for this very purpose.

Even more important, many of these companies function as “natural monopolies”, operating in a positive feedback loop of increasing growth and value as more and more people contribute free content to their platforms. Companies like this are incredibly difficult to compete with, because as they grow bigger, they only get stronger.

Aptly characterising this “platform capitalism” in an article, Tom Goodwin writes: “Uber, the world’s largest taxi company, owns no vehicles. Facebook, the world’s most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world’s largest accommodation provider, owns no real estate.”

So what do these companies own? A platform. A platform that lots and lots of people want to use. Why? First and foremost, because they’re cool and they’re fun – and in that respect, they do offer something of value. However, the main reason why we’re all happy to hand over free content to Facebook is because all of our friends are on Facebook too, because their friends are on Facebook … because their friends are on Facebook.

Most of Mark Zuckerberg’s income is just rent collected off the millions of picture and video posts that we give away daily for free. And sure, we have fun doing it. But we also have no alternative – after all, everybody is on Facebook these days. Zuckerberg has a website that advertisers are clamouring to get onto, and that doesn’t come cheap. Don’t be fooled by endearing pilot schemes offering free internet in Zambia. Stripped down to essentials, Facebook is an ordinary ad agency. In fact, in 2015 Google and Facebook pocketed an astounding 64% of all online ad revenue in the US.

But don’t Google and Facebook make anything useful at all? Sure they do. The irony, however, is that their best innovations only make the rentier economy even bigger. They employ scores of programmers to create new algorithms so that we’ll all click on more and more ads. Uber has usurped the whole taxi sector just as Airbnb has upended the hotel industry and Amazon has overrun the book trade. The bigger such platforms grow the more powerful they become, enabling the lords of these digital feudalities to demand more and more rent.

Think back a minute to the definition of a rentier: someone who uses their control over something that already exists in order to increase their own wealth. The feudal lord of medieval times did that by building a tollgate along a road and making everybody who passed by pay. Today’s tech giants are doing basically the same thing, but transposed to the digital highway. Using technology funded by taxpayers, they build tollgates between you and other people’s free content and all the while pay almost no tax on their earnings.

This is the so-called innovation that has Silicon Valley gurus in raptures: ever bigger platforms that claim ever bigger handouts. So why do we accept this? Why does most of the population work itself to the bone to support these rentiers?

I think there are two answers. Firstly, the modern rentier knows to keep a low profile. There was a time when everybody knew who was freeloading. The king, the church, and the aristocrats controlled almost all the land and made peasants pay dearly to farm it. But in the modern economy, making rentierism work is a great deal more complicated. How many people can explain a credit default swap, or a collateralized debt obligation? Or the revenue model behind those cute Google Doodles? And don’t the folks on Wall Street and in Silicon Valley work themselves to the bone, too? Well then, they must be doing something useful, right?

Maybe not. The typical workday of Goldman Sachs’ CEO may be worlds away from that of King Louis XIV, but their revenue models both essentially revolve around obtaining the biggest possible handouts. “The world’s most powerful investment bank,” wrote the journalist Matt Taibbi about Goldman Sachs, “is a great vampire squid wrapped around the face of humanity, relentlessly jamming its blood funnel into anything that smells like money.”

But far from squids and vampires, the average rich freeloader manages to masquerade quite successfully as a decent hard worker. He goes to great lengths to present himself as a “job creator” and an “investor” who “earns” his income by virtue of his high “productivity”. Most economists, journalists, and politicians from left to right are quite happy to swallow this story. Time and again language is twisted around to cloak funneling and exploitation as creation and generation.

However, it would be wrong to think that all this is part of some ingenious conspiracy. Many modern rentiers have convinced even themselves that they are bona fide value creators. When current Goldman Sachs CEO Lloyd Blankfein was asked about the purpose of his job, his straight-faced answer was that he is “doing God’s work”. The Sun King would have approved.

The second thing that keeps rentiers safe is even more insidious. We’re all wannabe rentiers. They have made millions of people complicit in their revenue model. Consider this: What are our financial sector’s two biggest cash cows? Answer: the housing market and pensions. Both are markets in which many of us are deeply invested.

Recent decades have seen more and more people contract debts to buy a home, and naturally it’s in their interest if house prices continue to scale new heights (read: inflate bubble upon bubble). The same goes for pensions. Over the past few decades we’ve all scrimped and saved up a mountainous pension piggy bank. Now pension funds are under immense pressure to ally with the biggest exploiters in order to ensure they pay out enough to please their investors.

The fact of the matter is that feudalism has been democratised. To a greater or lesser extent, we all depend on handouts. En masse, we have been made complicit in this exploitation by the rentier elite, resulting in a political covenant between the rich rent-seekers and the homeowners and retirees.

Don’t get me wrong, most homeowners and retirees are not benefiting from this situation. On the contrary, the banks are bleeding them far beyond the extent to which they themselves profit from their houses and pensions. Still, it’s hard to point fingers at a kleptomaniac when you have sticky fingers too.

So why is this happening? The answer can be summed up in three little words: Because it can.

Rentierism is, in essence, a question of power. That the Sun King Louis XIV was able to exploit millions was purely because he had the biggest army in Europe. It’s no different for the modern rentier. He’s got the law, politicians and journalists squarely in his court. That’s why bankers get fined peanuts for preposterous fraud, while a mother on government assistance gets penalised within an inch of her life if she checks the wrong box.

The biggest tragedy of all, however, is that the rentier economy is gobbling up society’s best and brightest. Where once upon a time Ivy League graduates chose careers in science, public service or education, these days they are more likely to opt for banks, law firms, or trumped-up ad agencies like Google and Facebook. When you think about it, it’s insane. We are forking over billions in taxes to help our brightest minds on and up the corporate ladder so they can learn how to score ever more outrageous handouts.

One thing is certain: countries where rentiers gain the upper hand gradually fall into decline. Just look at the Roman Empire. Or Venice in the 15th century. Look at the Dutch Republic in the 18th century. Just as a parasite stunts a child’s growth, the rentier drains a country of its vitality.

What innovation remains in a rentier economy is mostly just concerned with further bolstering that very same economy. This may explain why the big dreams of the 1970s, like flying cars, curing cancer, and colonising Mars, have yet to be realised, while bankers and ad-makers have at their fingertips technologies a thousand times more powerful.

Yet it doesn’t have to be this way. Tollgates can be torn down, financial products can be banned, tax havens dismantled, lobbies tamed, and patents rejected. Higher taxes on the ultra-rich can make rentierism less attractive, precisely because society’s biggest freeloaders are at the very top of the pyramid. And we can more fairly distribute our earnings on land, oil, and innovation through a system of, say, employee shares, or a universal basic income. 

But such a revolution will require a wholly different narrative about the origins of our wealth. It will require ditching the old-fashioned faith in a “solidarity” in which a miserable underclass is borne aloft on the market-salaried shoulders of society’s strongest. All we need to do is to give real hard-working people what they deserve.

And, yes, by that I mean the waste collectors, the nurses, the cleaners – theirs are the shoulders that carry us all.

The Guardian

The 1930s were humanity’s darkest, bloodiest hour. Are you paying attention? – Jonathan Freedland. 

Even to mention the 1930s is to evoke the period when human civilisation entered its darkest, bloodiest chapter. No case needs to be argued; just to name the decade is enough. It is a byword for mass poverty, violent extremism and the gathering storm of world war. “The 1930s” is not so much a label for a period of time as it is rhetorical shorthand – a two-word warning from history.

Witness the impact of an otherwise boilerplate broadcast by the Prince of Wales last December that made headlines. “Prince Charles warns of return to the ‘dark days of the 1930s’ in Thought for the Day message.” Or consider the reflex response to reports that Donald Trump was to maintain his own private security force even once he had reached the White House. The Nobel prize-winning economist Paul Krugman’s tweet was typical: “That 1930s show returns.”

Because that decade was scarred by multiple evils, the phrase can be used to conjure up serial spectres. It has an international meaning, with a vocabulary that centres on Hitler and Nazism and the failure to resist them: from brownshirts and Goebbels to appeasement, Munich and Chamberlain. And it has a domestic meaning, with a lexicon and imagery that refers to the Great Depression: the dust bowl, soup kitchens, the dole queue and Jarrow. It was this second association that gave such power to a statement from the usually dry Office for Budget Responsibility, following then-chancellor George Osborne’s autumn statement in 2014. The OBR warned that public spending would be at its lowest level since the 1930s; the political damage was enormous and instant.

In recent months, the 1930s have been invoked more than ever, not to describe some faraway menace but to warn of shifts under way in both Europe and the United States. The surge of populist, nationalist movements in Europe, and their apparent counterpart in the US, has stirred unhappy memories and has, perhaps inevitably, had commentators and others reaching for the historical yardstick to see if today measures up to 80 years ago.

Why is it the 1930s to which we return, again and again? For some sceptics, the answer is obvious: it’s the only history anybody knows. According to this jaundiced view of the British school curriculum, Hitler and Nazis long ago displaced Tudors and Stuarts as the core, compulsory subjects of the past. When we fumble in the dark for a historical precedent, our hands keep reaching for the 30s because they at least come with a little light.

The more generous explanation centres on the fact that that period, taken together with the first half of the 1940s, represents a kind of nadir in human affairs. The Depression was, as Larry Elliott wrote last week, “the biggest setback to the global economy since the dawn of the modern industrial age”, leaving 34 million Americans with no income. The hyperinflation experienced in Germany – when a thief would steal a laundry-basket full of cash, chucking away the money in order to keep the more valuable basket – is the stuff of legend. And the Depression paved the way for history’s bloodiest conflict, the second world war, which left, by some estimates, a mind-numbing 60 million people dead. At its centre was the Holocaust, the industrialised slaughter of 6 million Jews by the Nazis: an attempt at the annihilation of an entire people.

In these multiple ways, then, the 1930s function as a historical rock bottom, a demonstration of how low humanity can descend. The decade’s illustrative power as a moral ultimate accounts for why it is deployed so fervently and so often.

Less abstractly, if we keep returning to that period, it’s partly because it can justifiably claim to be the foundation stone of our modern world. The international and economic architecture that still stands today – even if it currently looks shaky and threatened – was built in reaction to the havoc wreaked in the 30s and immediately afterwards. The United Nations, the European Union, the International Monetary Fund, Bretton Woods: these were all born of a resolve not to repeat the mistakes of the 30s, whether those mistakes be rampant nationalism or beggar-my-neighbour protectionism. The world of 2017 is shaped by the trauma of the 1930s.

The international and economic architecture that still stands today was built in reaction to the havoc of the 1930s

One telling, human illustration came in recent global polling for the Journal of Democracy, which showed an alarming decline in the number of people who believed it was “essential” to live in a democracy. From Sweden to the US, from Britain to Australia, only one in four of those born in the 1980s regarded democracy as essential. Among those born in the 1930s, the figure was at or above 75%. Put another way, those who were born into the hurricane have no desire to feel its wrath again.

Most of these dynamics are long established, but now there is another element at work. As the 30s move from living memory into history, as the hurricane moves further away, so what had once seemed solid and fixed – specifically, the view that that was an era of great suffering and pain, whose enduring value is as an eternal warning – becomes contested, even upended.

Witness the remarks of Steve Bannon, chief strategist in Donald Trump’s White House and the former chairman of the far-right Breitbart website. In an interview with the Hollywood Reporter, Bannon promised that the Trump era would be “as exciting as the 1930s”. (In the same interview, he said “Darkness is good” – citing Satan, Darth Vader and Dick Cheney as examples.)

“Exciting” is not how the 1930s are usually remembered, but Bannon did not choose his words by accident. He is widely credited with the authorship of Trump’s inaugural address, which twice used the slogan “America first”. That phrase has long been off-limits in US discourse, because it was the name of the movement – packed with nativists and antisemites, and personified by the celebrity aviator Charles Lindbergh – that sought to keep the US out of the war against Nazi Germany and to make an accommodation with Hitler. Bannon, who considers himself a student of history, will be fully aware of that 1930s association – but embraced it anyway.

That makes him an outlier in the US, but one with powerful allies beyond America’s shores. Timothy Snyder, professor of history at Yale and the author of On Tyranny: Twenty Lessons from the Twentieth Century, notes that European nationalists are also keen to overturn the previously consensual view of the 30s as a period of shame, never to be repeated. Snyder mentions Hungary’s prime minister, Viktor Orbán, who avowedly seeks the creation of an “illiberal” state, and who, says Snyder, “looks fondly on that period as one of healthy national consciousness”.

The more arresting example is, perhaps inevitably, Vladimir Putin. Snyder notes Putin’s energetic rehabilitation of Ivan Ilyin, a philosopher of Russian fascism influential eight decades ago. Putin has exhumed Ilyin both metaphorically and literally, digging up and moving his remains from Switzerland to Russia.

Among other things, Ilyin wrote that individuality was evil; that the “variety of human beings” represented a failure of God to complete creation; that what mattered was not individual people but the “living totality” of the nation; that Hitler and Mussolini were exemplary leaders who were saving Europe by dissolving democracy; and that fascist holy Russia ought to be governed by a “national dictator”. Ilyin spent the 30s exiled from the Soviet Union, but Putin has brought him back, quoting him in his speeches and laying flowers on his grave.

European nationalists are keen to overturn the view of the 1930s as a period of shame, never to be repeated.

Still, Putin, Orbán and Bannon apart, when most people compare the current situation to that of the 1930s, they don’t mean it as a compliment. And the parallel has felt irresistible, so that when Trump first imposed his travel ban, for example, the instant comparison was with the door being closed to refugees from Nazi Germany in the 30s. (Theresa May was on the receiving end of the same comparison when she quietly closed off the Dubs route to child refugees from Syria.)

When Trump attacked the media as purveyors of “fake news”, the ready parallel was Hitler’s slamming of the newspapers as the Lügenpresse, the lying press (a term used by today’s German far right). When the Daily Mail branded a panel of high court judges “enemies of the people”, for their ruling that parliament needed to be consulted on Brexit, those who were outraged by the phrase turned to their collected works of European history, looking for the chapters on the 1930s.

The Great Depression

So the reflex is well-honed. But is it sound? Does any comparison of today and the 1930s hold up?

The starting point is surely economic, not least because the one thing everyone knows about the 30s – and which is common to both the US and European experiences of that decade – is the Great Depression. The current convulsions can be traced back to the crash of 2008, but the impact of that event and the shock that defined the 30s are not an even match. When discussing our own time, Krugman speaks instead of the Great Recession: a huge and shaping event, but one whose impact – measured, for example, in terms of mass unemployment – is not on the same scale. US joblessness reached 25% in the 1930s; even in the depths of 2009 it never broke the 10% barrier.

The political sphere reveals another mismatch between then and now. The 30s were characterised by ultra-nationalist and fascist movements seizing power in leading nations: Germany, Italy and Spain most obviously. The world is waiting nervously for the result of France’s presidential election in May: victory for Marine Le Pen would be seized on as the clearest proof yet that the spirit of the 30s is resurgent.

There is similar apprehension that Geert Wilders, who speaks of ridding the country of “Moroccan scum”, has led the polls ahead of Holland’s general election on Wednesday. And plenty of liberals will be perfectly content for the Christian Democrat Angela Merkel to prevail over her Social Democratic rival, Martin Schulz, just so long as the far-right Alternative für Deutschland gains no ground. Still, so far and as things stand, in Europe only Hungary and Poland have governments that seem doctrinally akin to those that flourished in the 30s.

That leaves the US, which dodged the bullet of fascistic rule in the 30s – although at times the success of the America First movement, which at its peak could count on more than 800,000 paid-up members, suggested such an outcome was far from impossible. (Hence the intended irony in the title of Sinclair Lewis’s 1935 novel, It Can’t Happen Here.)

Donald Trump has certainly had Americans reaching for their history textbooks, fearful that his admiration for strongmen, his contempt for restraints on executive authority, and his demonisation of minorities and foreigners means he marches in step with the demagogues of the 30s.

But even those most anxious about Trump still focus on the form the new presidency could take rather than the one it is already taking. David Frum, a speechwriter to George W. Bush, wrote a much-noticed essay for the Atlantic titled “How to build an autocracy”. It was billed as setting out “the playbook Donald Trump could use to set the country down a path towards illiberalism”. He was not arguing that Trump had already embarked on that route, just that he could (so long as the media came to heel and the public grew weary and worn down, shrugging in the face of obvious lies and persuaded that greater security was worth the price of lost freedoms).

Similarly, Trump has unloaded rhetorically on the free press – castigating them, Mail-style, as “enemies of the people” – but he has not closed down any newspapers. He meted out the same treatment via Twitter to a court that blocked his travel ban, rounding on the “so-called judge” – but he did eventually succumb to the courts’ verdict and withdrew his original executive order. He did not have the dissenting judges sacked or imprisoned; he has not moved to register or intern every Muslim citizen in the US; he has not suggested they wear identifying symbols.

These are crumbs of comfort; they are not intended to minimise the real danger Trump represents to the fundamental norms that underpin liberal democracy. Rather, the point is that we have not reached the 1930s yet. Those sounding the alarm are suggesting only that we may be travelling in that direction – which is bad enough.

Two further contrasts between now and the 1930s, one from each end of the sociological spectrum, are instructive. First, and particularly relevant to the US, is to ask: who is on the streets? In the 30s, much of the conflict was played out at ground level, with marchers and quasi-military forces duelling for control. The clashes of the Brownshirts with communists and socialists played a crucial part in the rise of the Nazis. (A turning point in the defeat of Oswald Mosley, Britain’s own little Hitler, came with his humbling in London’s East End, at the 1936 battle of Cable Street.)

But those taking to the streets today – so far – have tended to be opponents of the lurch towards extreme nationalism. In the US, anti-Trump movements – styling themselves, in a conscious nod to the 1930s, as “the resistance” – have filled city squares and plazas. The Women’s March led the way on the first day of the Trump presidency; then those protesters and others flocked to airports in huge numbers a week later, to obstruct the refugee ban. Those demonstrations have continued, and they supply an important contrast with 80 years ago. Back then, it was the fascists who were out first – and in force.

Snyder notes another key difference. “In the 1930s, all the stylish people were fascists: the film critics, the poets and so on.” He is speaking chiefly about Germany and Italy, and doubtless exaggerates to make his point, but he is right that today “most cultural figures tend to be against”. There are exceptions – Le Pen has her celebrity admirers, but Snyder speaks accurately when he says that now, in contrast with the 30s, there are “few who see fascism as a creative cultural force”.

Fear and loathing

So much for where the lines between then and now diverge. Where do they run in parallel?

The exercise is made complicated by the fact that ultra-nationalists are, so far, largely out of power where they ruled in the 30s – namely, Europe – and in power in the place where they were shut out in that decade, namely the US. It means that Trump has to be compared either to US movements that were strong but ultimately defeated, such as the America First Committee, or to those US figures who never governed on the national stage.

In that category stands Huey Long, the Louisiana strongman, who ruled that state as a personal fiefdom (and who was widely seen as the inspiration for the White House dictator at the heart of the Lewis novel).

“He was immensely popular,” says Tony Badger, former professor of American history at the University of Cambridge. Long would engage in the personal abuse of his opponents, often deploying colourful language aimed at mocking their physical characteristics. The judges were a frequent Long target, to the extent that he hounded one out of office – with fateful consequences.

Long went over the heads of the hated press, communicating directly with the voters via a medium he could control completely. In Trump’s day, that is Twitter, but for Long it was the establishment of his own newspaper, the Louisiana Progress (later the American Progress) – which Long had delivered via the state’s highway patrol and which he commanded be printed on rough paper, so that, says Badger, “his constituents could use it in the toilet”.

All this was tolerated by Long’s devotees because they lapped up his message of economic populism, captured by the slogan: “Share Our Wealth”. Tellingly, that resonated not with the very poorest – who tended to vote for Roosevelt, just as those earning below $50,000 voted for Hillary Clinton in 2016 – but with “the men who had jobs or had just lost them, whose wages had eroded and who felt they had lost out and been left behind”. That description of Badger’s could apply just as well to the demographic that today sees Trump as its champion.

Long never made it to the White House. In 1935, one month after announcing his bid for the presidency, he was assassinated, shot by the son-in-law of the judge Long had sought to remove from the bench. It’s a useful reminder that, no matter how hate-filled and divided we consider US politics now, the 30s were full of their own fear and loathing.

“I welcome their hatred,” Roosevelt would say of his opponents on the right. Nativist xenophobia was intense, even if most immigration had come to a halt with legislation passed in the previous decade. Catholics from eastern Europe were the target of much of that suspicion, while Lindbergh and the America Firsters played on enduring antisemitism.

This, remember, was in the midst of the Great Depression, when one in four US workers was out of a job. And surely this is the crucial distinction between then and now, between the Long phenomenon and Trump. As Badger summarises: “There was a real crisis then, whereas Trump’s is manufactured.”

And yet, scholars of the period are still hearing the insistent beep of their early warning systems. An immediate point of connection is globalisation, which is less novel than we might think. For Snyder, the 30s marked the collapse of the first globalisation, defined as an era in which a nation’s wealth becomes ever more dependent on exports. That pattern had been growing steadily more entrenched since the 1870s (just as the second globalisation took wing in the 1970s). Then, as now, it had spawned a corresponding ideology – a faith in liberal free trade as a global panacea – with, perhaps, the English philosopher Herbert Spencer in the role of the End of History essayist Francis Fukuyama. By the 1930s, and thanks to the Depression, that faith in globalisation’s ability to spread the wealth evenly had shattered. This time around, disillusionment has come a decade or so ahead of schedule.

The second loud alarm is clearly heard in the hostility to those deemed outsiders. Of course, the designated alien changes from generation to generation, but the impulse is the same: to see the family next door not as neighbours but as agents of some heinous worldwide scheme, designed to deprive you of peace, prosperity or what is rightfully yours. In 30s Europe, that was Jews. In 30s America, it was eastern Europeans and Jews. In today’s Europe, it’s Muslims. In America, it’s Muslims and Mexicans (with a nod from the so-called alt-right towards Jews). Then and now, the pattern is the same: an attempt to refashion the pain inflicted by globalisation and its discontents as the wilful act of a hated group of individuals. No need to grasp difficult, abstract questions of economic policy. We just need to banish that lot, over there.

The third warning sign, and it’s a necessary companion of the second, is a growing impatience with the rule of law and with democracy. “In the 1930s, many, perhaps even most, educated people had reached the conclusion that democracy was a spent force,” says Snyder. There were plenty of socialist intellectuals ready to profess their admiration for the efficiency of Soviet industrialisation under Stalin, just as rightwing thinkers were impressed by Hitler’s capacity for state action. In our own time, that generational plunge in the numbers regarding democracy as “essential” suggests a troubling echo.

Today’s European nationalists exhibit a similar impatience, especially with the rule of law: think of the Brexiters’ insistence that nothing can be allowed to impede “the will of the people”. As for Trump, it’s striking how very rarely he mentions democracy, still less praises it. “I alone can fix it” is his doctrine – the creed of the autocrat.

The geopolitical equivalent is a departure from, or even contempt for, the international rules-based system that has held since 1945 – in which trade, borders and the seas are loosely and imperfectly policed by multilateral institutions such as the UN, the EU and the World Trade Organisation. Admittedly, the international system was weaker to start with in the 30s, but it lay in pieces by the decade’s end: both Hitler and Stalin decided that the global rules no longer applied to them, that they could break them with impunity and get on with the business of empire-building.

If there’s a common thread linking 21st-century European nationalists to each other and to Trump, it is a similar, shared contempt for the structures that have bound together, and restrained, the principal world powers since the last war. Naturally, Le Pen and Wilders want to follow the Brexit lead and leave, or else break up, the EU. And, no less naturally, Trump supports them – as well as regarding Nato as “obsolete” and the UN as an encumbrance to US power (even if his subordinates rush to foreign capitals to say the opposite).

For historians of the period, the 1930s are always worthy of study because the decade proves that systems – including democratic republics – which had seemed solid and robust can collapse. That fate is possible, even in advanced, sophisticated societies. The warning never gets old.

But when we contemplate our forebears from eight decades ago, we should recall one crucial advantage we have over them. We have what they lacked. We have the memory of the 1930s. We can learn the period’s lessons and avoid its mistakes. Of course, cheap comparisons coarsen our collective conversation. But having a keen ear tuned to the echoes of a past that brought such horror? That is not just our right. It is surely our duty.

The Guardian

How a Ruthless Network of Super-Rich Ideologues Killed Choice and Destroyed People’s Faith in Politics – George Monbiot. 

Neoliberalism: the deep story that lies beneath Donald Trump’s triumph.

The events that led to Donald Trump’s election started in England in 1975. At a meeting a few months after Margaret Thatcher became leader of the Conservative party, one of her colleagues, or so the story goes, was explaining what he saw as the core beliefs of conservatism. She snapped open her handbag, pulled out a dog-eared book, and slammed it on the table. “This is what we believe,” she said. A political revolution that would sweep the world had begun.

The book was The Constitution of Liberty by Friedrich Hayek. Its publication, in 1960, marked the transition from an honest, if extreme, philosophy to an outright racket. The philosophy was called neoliberalism. It saw competition as the defining characteristic of human relations. The market would discover a natural hierarchy of winners and losers, creating a more efficient system than could ever be devised through planning or by design. Anything that impeded this process, such as significant tax, regulation, trade union activity or state provision, was counter-productive. Unrestricted entrepreneurs would create the wealth that would trickle down to everyone.

This, at any rate, is how it was originally conceived. But by the time Hayek came to write The Constitution of Liberty, the network of lobbyists and thinkers he had founded was being lavishly funded by multimillionaires who saw the doctrine as a means of defending themselves against democracy. Not every aspect of the neoliberal programme advanced their interests. Hayek, it seems, set out to close the gap.

He begins the book by advancing the narrowest possible conception of liberty: an absence of coercion. He rejects such notions as political freedom, universal rights, human equality and the distribution of wealth, all of which, by restricting the behaviour of the wealthy and powerful, intrude on the absolute freedom from coercion he demands.

Democracy, by contrast, “is not an ultimate or absolute value”. In fact, liberty depends on preventing the majority from exercising choice over the direction that politics and society might take.

He justifies this position by creating a heroic narrative of extreme wealth. He conflates the economic elite, spending their money in new ways, with philosophical and scientific pioneers. Just as the political philosopher should be free to think the unthinkable, so the very rich should be free to do the undoable, without constraint by public interest or public opinion.

The ultra rich are “scouts”, “experimenting with new styles of living”, who blaze the trails that the rest of society will follow. The progress of society depends on the liberty of these “independents” to gain as much money as they want and spend it how they wish. All that is good and useful, therefore, arises from inequality. There should be no connection between merit and reward, no distinction made between earned and unearned income, and no limit to the rents they can charge.

Inherited wealth is more socially useful than earned wealth: “the idle rich”, who don’t have to work for their money, can devote themselves to influencing “fields of thought and opinion, of tastes and beliefs”. Even when they seem to be spending money on nothing but “aimless display”, they are in fact acting as society’s vanguard.

Hayek softened his opposition to monopolies and hardened his opposition to trade unions. He lambasted progressive taxation and attempts by the state to raise the general welfare of citizens. He insisted that there is “an overwhelming case against a free health service for all” and dismissed the conservation of natural resources. It should come as no surprise to those who follow such matters that he was awarded the Nobel prize for economics.

By the time Thatcher slammed his book on the table, a lively network of thinktanks, lobbyists and academics promoting Hayek’s doctrines had been established on both sides of the Atlantic, abundantly financed by some of the world’s richest people and businesses, including DuPont, General Electric, the Coors brewing company, Charles Koch, Richard Mellon Scaife, Lawrence Fertig, the William Volker Fund and the Earhart Foundation. Using psychology and linguistics to brilliant effect, the thinkers these people sponsored found the words and arguments required to turn Hayek’s anthem to the elite into a plausible political programme.

Thatcherism and Reaganism were not ideologies in their own right: they were just two faces of neoliberalism. Their massive tax cuts for the rich, crushing of trade unions, reduction in public housing, deregulation, privatisation, outsourcing and competition in public services were all proposed by Hayek and his disciples. But the real triumph of this network was not its capture of the right, but its colonisation of parties that once stood for everything Hayek detested.

Bill Clinton and Tony Blair did not possess a narrative of their own. Rather than develop a new political story, they thought it was sufficient to triangulate. In other words, they extracted a few elements of what their parties had once believed, mixed them with elements of what their opponents believed, and developed from this unlikely combination a “third way”.

It was inevitable that the blazing, insurrectionary confidence of neoliberalism would exert a stronger gravitational pull than the dying star of social democracy. Hayek’s triumph could be witnessed everywhere from Blair’s expansion of the private finance initiative to Clinton’s repeal of the Glass-Steagall Act, which had regulated the financial sector. For all his grace and touch, Barack Obama, who didn’t possess a narrative either (except “hope”), was slowly reeled in by those who owned the means of persuasion.

As I warned in April, the result is first disempowerment then disenfranchisement. If the dominant ideology stops governments from changing social outcomes, they can no longer respond to the needs of the electorate. Politics becomes irrelevant to people’s lives; debate is reduced to the jabber of a remote elite. The disenfranchised turn instead to a virulent anti-politics in which facts and arguments are replaced by slogans, symbols and sensation. The man who sank Hillary Clinton’s bid for the presidency was not Donald Trump. It was her husband.

The paradoxical result is that the backlash against neoliberalism’s crushing of political choice has elevated just the kind of man that Hayek worshipped. Trump, who has no coherent politics, is not a classic neoliberal. But he is the perfect representation of Hayek’s “independent”; the beneficiary of inherited wealth, unconstrained by common morality, whose gross predilections strike a new path that others may follow. The neoliberal thinktankers are now swarming round this hollow man, this empty vessel waiting to be filled by those who know what they want. The likely result is the demolition of our remaining decencies, beginning with the agreement to limit global warming.

Those who tell the stories run the world. Politics has failed through a lack of competing narratives. The key task now is to tell a new story of what it is to be a human in the 21st century. It must be as appealing to some who have voted for Trump and Ukip as it is to the supporters of Clinton, Bernie Sanders or Jeremy Corbyn.

A few of us have been working on this, and can discern what may be the beginning of a story. It’s too early to say much yet, but at its core is the recognition that – as modern psychology and neuroscience make abundantly clear – human beings, by comparison with any other animals, are both remarkably social and remarkably unselfish. The atomisation and self-interested behaviour neoliberalism promotes run counter to much of what comprises human nature.

Hayek told us who we are, and he was wrong. Our first step is to reclaim our humanity.

Evonomics.com

Preparing Our Economy And Society For Automation & AI – Robert Reich. 

Professor Reich comes to Google to discuss the impact of automation & artificial intelligence on our economy. He also provides a recommendation on how we can ensure future technologies benefit the entire economy, not just those at the top.

Social Europe

How To Win Back Obama, Sanders, and Trump Voters – Les Leopold. 

It is imperative we think big to destroy Neoliberalism’s grip. 

Hillary Clinton underperformed Barack Obama by 290,000 votes in Pennsylvania, by 222,000 votes in Wisconsin and by a whopping 500,000 votes in Michigan. We don’t know how many of these voters also supported Sanders along the way, but it is highly likely that millions took that journey. Winning them back is the key to the battle for economic and social justice.

The current resistance to Trump is truly remarkable. Not since the anti-Vietnam War and Civil Rights protests have we seen so many people in the streets ― women, Muslim ban protesters, scientists protesting on behalf of facts, people just protesting ― with more to come. Even three New England Patriots are refusing to attend their Super Bowl White House event.

As Trump’s lunacy and destructiveness grow day-by-day, defensive struggles are an absolute must. But defensiveness alone is not likely to win back Obama/Sanders/Trump voters.

It is possible that those voters will soon get buyer’s remorse and join our defensive struggles. Or maybe a dozen more Nordstrom-like ethics violations could lead to impeachment. But such hopes leave political agency in Trump’s hands rather than in our own.

Instead of waiting for Trump to implode, we should be engaging directly with the Obama/Sanders/Trump voters. But doing so requires an understanding of the economic forces that fueled both the Sanders and Trump revolts.

We live in an era of runaway inequality. In 1970 the gap between CEO pay and average worker pay was about 45 to 1. That’s a hefty gap. It means that if you could afford one house and one car, a top CEO could afford 45 homes and 45 cars. Today, the gap is an unfathomable 844 to 1 and rising. 844 houses to your one!

That money gushing to the top is the direct result of 40 years of neoliberalism, a philosophy that captured both political parties. It calls for:

– Tax cuts (especially for the wealthy)

– Government deregulation (especially for Wall Street)

– Cuts in social spending (especially for programs and infrastructure that benefit the rest of us)

– Free trade (which gives corporations the tools to destroy unions and hold down worker wages)

Supposedly, this plan would create a massive profit and investment boom, job creation and rising incomes for all. Of course, it failed miserably for the vast majority of us, while succeeding beyond belief for the super rich.

The failure, however, involved far more than rising income gaps. Financial deregulation unleashed Wall Street to financially strip-mine the wealth from our workplaces and our communities into the pockets of Wall Street and corporate elites. This outrageous process has nothing to do with talent or hard work. It also is not an economic act of God. Rather it is the direct result of weakening rules that protect us from the financial predators. This is why the richest country in the history of the world has a crumbling infrastructure, the largest prison population, the most costly health care system, the most student debt and the most income inequality.

Sanders and Trump led revolts against runaway inequality. Both claimed the established order had to be changed radically. Sanders nearly defeated the Clinton machine with a social democratic platform of free higher education, Medicare for All, turning the screws on Wall Street, and taking big money out of politics. Trump, like Sanders, attacked trade deals and claimed he would bring jobs back to America. But he also led the hard right’s racist, sexist and xenophobic calls for immigration restrictions, walls, climate change denial and the curtailment of women’s rights.

Resisting Trump alone will not stop the hard right. We need to continue the Sanders attack on the neoliberal order by offering a compelling vision for social and economic justice.

Right now Trump is a clear and present danger to us all. But an equally dangerous problem is that the regime of runaway inequality will grow worse as the hard right consolidates its political power. Long before Trump entered the political fray, the hard right was upending neoliberal Democrats. Since 2009, when Obama took office, the Democrats have lost 919 state legislative seats. The Republicans now control 68 percent of all state legislative chambers, and control both state chambers and the governorship in 24 states while the Democrats have such tri-partite control in only 6 states. The Democrats are losing, in large part, because they can’t untangle themselves from financial and corporate elites.

Here’s a terrifying thought: Once the Republicans capture 38 states they can amend the Constitution.

Building the Educational Infrastructure

During America’s first epic battle against Wall Street, the Populist movement of the late 19th century fielded 6,000 educators to help small farmers, black and white, learn how to reverse runaway inequality. Because of their efforts a powerful movement grew to take back our country from the moneyed interests — an effort that ultimately culminated in social security, the regulation of Wall Street and large corporations, and the protection of working people on the job.

Today, we need a vast army of educators to spread the word about how runaway inequality links all of us together. Toward that end, groups all over the country are conducting workshops that lead to the following takeaways:

– Runaway inequality will not cure itself. There is no hidden mechanism in the economy that will right the ship. Financial and corporate elites are gaining more and more at our expense.

– The financial strip-mining of our economy impacts all of us and all of our issues — from climate change, to mass incarceration, to job loss, to declining incomes, to labor rights, to student loans.

– It will take an organized mass movement to take back our country from the hard right. That means no matter what our individual identity (labor unionist, environmentalist, racial justice activist, feminist, etc.), we also need to take on the identity of movement builder. We all must come together or we all lose.

– We can start the building process right now by sharing educational information with our friends, colleagues and neighbors.

Many of the participants in these workshops are Obama/Sanders/Trump voters and they would tell you the educational process works. Good things happen when people come together in a safe space to discuss their common concerns. There’s something special about face-to-face discussions that social media alone cannot replace.

In order to shift the balance of power away from the hard right, we need more educators ― thousands more leading tens of thousands of discussions.

Armed with facts ― not alternative facts ― we can build the foundation for a new movement to take back our country both from Trump and from the financial strip-miners.

Sustainability

By taking a page from the Tea Party playbook, the Resist Trump efforts are aiming at the mid-term elections as well as at state and local contests. Let’s hope for success. Let’s also hope it leads to a broader movement to reverse runaway inequality. Substantive change, however, rather than just a return to the pre-Trump era of massive inequality, requires a long and sustained movement the likes of which we haven’t seen in our lifetimes.

Sustained movement building on a large scale is foreign to us. For more than a generation we’ve grown accustomed to the neoliberal vision that has narrowed our sense of the possible. It taught us that it’s ok for students to go deeply into debt; that it’s natural to have the largest prison population in the world; that it’s inevitable to have a crumbling public sector; and that it’s an economic law for corporations to shift our jobs to low-wage areas with poor environmental protections. Worst of all, it conditioned us to think small about movement building ― to work in our own issue silos and not link up together.

Occupy Wall Street, Elizabeth Warren and Bernie Sanders woke us up from our stupor. Let’s not slide back by failing to engage, face to face, with the Obama/Sanders/Trump voters who are eager for real change.

By Les Leopold, director of the Labor Institute in New York

Common Dreams

When Shareholder Capitalism Came to Town – Steven Pearlstein.  

The rise in inequality can be blamed on the shift from managerial to shareholder capitalism.

It was only 20 years ago that the world was in the thrall of American-style capitalism. Not only had it vanquished communism, but it was widening its lead over Japan Inc. and European-style socialism. America’s companies were widely viewed as the most innovative and productive, its capital markets the most efficient, its labor markets the most flexible and meritocratic, its product markets the most open and competitive, its tax and regulatory regimes the most accommodating to economic growth.

Today, that sense of confidence and economic hegemony seems a distant memory. We have watched the bursting of two financial bubbles, struggled through two long recessions, and suffered a lost decade in terms of incomes of average American households.

We continue to rack up large trade deficits even as many of the country’s biggest corporations shift more of their activity and investment overseas. Economic growth has slowed, and the top 10 percent of households have captured whatever productivity gains there have been. Economic mobility has declined to the point that, by international comparison, it is only middling. A series of accounting and financial scandals, coupled with ever-escalating pay for chief executives and hedge-fund managers, has generated widespread cynicism about business. Other countries are beginning to turn to China, Germany, Sweden, and even Israel as models for their economies.

No wonder, then, that large numbers of Americans have begun to question the superiority of our brand of free-market capitalism. This disillusionment is reflected in the rise of the Tea Party and the Occupy Wall Street movements and the increasing polarization of our national politics. It is also reflected on the shelves of bookstores and on the screens of movie theaters.

Embedded in these critiques is not simply a collective disappointment in the inability of American capitalism to deliver on its economic promise of wealth and employment opportunity. Running through them is also a nagging question about the larger purpose of the market economy and how it serves society.

In the current, cramped model of American capitalism, with its focus on maximizing output growth and shareholder value, there is ample recognition of the importance of financial capital, human capital, and physical capital but no consideration of social capital.

Social capital is the trust we have in one another, and the sense of mutual responsibility for one another, that gives us the comfort to take risks, make long-term investments, and accept the inevitable dislocations caused by the economic gales of creative destruction. Social capital provides the necessary grease for the increasingly complex machinery of capitalism and for the increasingly contentious machinery of democracy. Without it, democratic capitalism cannot survive.

It is our social capital that is now badly depleted. This erosion manifests in the weakened norms of behavior that once restrained the most selfish impulses of economic actors and provided an ethical basis for modern capitalism.

A capitalism in which Wall Street bankers and traders think peddling dangerous loans or worthless securities to unsuspecting customers is just “part of the game.”

A capitalism in which top executives believe it is economically necessary that they earn 350 times what their front-line workers do. 

A capitalism that thinks of employees as expendable inputs. 

A capitalism in which corporations perceive it as both their fiduciary duty to evade taxes and their constitutional right to use unlimited amounts of corporate funds to purchase control of the political system. 

That is a capitalism whose trust deficit is every bit as corrosive as budget and trade deficits.

As economist Luigi Zingales of the University of Chicago concludes in his recent book, A Capitalism for the People, American capitalism has become a victim of its own success. In the years after the demise of communism, “the intellectual hegemony of capitalism, however, led to complacency and extremism: complacency through the degeneration of the system, extremism in the application of its ideological premises,” he writes. “‘Greed is good’ became the norm rather than the frowned-upon exception. Capitalism lost its moral higher ground.”

Pope Francis recently gave voice to this nagging sense that our free-market system had lost its moral bearings. “Some people continue to defend trickle-down theories, which assume that economic growth, encouraged by a free market, will inevitably succeed in bringing about greater justice and inclusiveness in the world,” wrote the new pope in an 84-page apostolic exhortation. “This opinion, which has never been confirmed by the facts, expresses a crude and naïve trust in the goodness of those wielding economic power and in the sacralized workings of the prevailing economic system.”

Our challenge now is to restore both the economic and moral legitimacy of American capitalism. And there is no better place to start than with a reconsideration of the purpose of the corporation.

“MAXIMIZING SHAREHOLDER VALUE”

In the recent history of bad ideas, few have had a more pernicious effect than the one that corporations should be managed to maximize “shareholder value.”

Indeed, much of what we perceive to be wrong with the American economy these days—the slowing growth and rising inequality, the recurring scandals and wild swings from boom to bust, the inadequate investment in research and development and worker training—has its roots in this misguided ideology.

It is an ideology, moreover, that has no basis in history, in law, or in logic. What began in the 1970s and 1980s as a useful corrective to self-satisfied managerial mediocrity has become a corrupting self-interested dogma peddled by finance professors, Wall Street money managers, and overcompensated corporate executives.

Let’s start with the history. The earliest corporations, in fact, were generally chartered not for private but for public purposes, such as building canals or transit systems. Well into the 1960s, corporations were broadly viewed as owing something in return to the community that provided them with special legal protections and the economic ecosystem in which they could grow and thrive.

Legally, no statutes require that companies be run to maximize profits or share prices. In most states, corporations can be formed for any lawful purpose. Lynn Stout, a Cornell law professor, has been looking for years for a corporate charter that even mentions maximizing profits or share price. So far, she hasn’t found one. Companies that put shareholders at the top of their hierarchy do so by choice, Stout writes, not by law.

Nor does the law require, as many believe, that executives and directors owe a special fiduciary duty to the shareholders who own the corporation. The director’s fiduciary duty, in fact, is owed simply to the corporation, which is owned by no one, just as you and I are owned by no one—we are all “persons” in the eyes of the law. Corporations own themselves.

What shareholders possess is a contractual claim to the “residual value” of the corporation once all its other obligations have been satisfied—and even then the directors are given wide latitude to make whatever use of that residual value they choose, just as long as they’re not stealing it for themselves.

It is true, of course, that only shareholders have the power to elect the corporate directors. But given that directors are almost always nominated by the management and current board and run unopposed, it requires the peculiar imagination of corporate counsel to leap from the shareholders’ power to “elect” directors to a sweeping mandate that directors and the executives must put the interests of shareholders above all others.

Given this lack of legal or historical support, it is curious how “maximizing shareholder value” has evolved into such a widely accepted norm of corporate behavior.

Milton Friedman, the University of Chicago free-market economist, is often credited with first articulating the idea in a 1970 New York Times Magazine essay in which he argued that “there is one and only one social responsibility of business—to use its resources and engage in activities designed to increase its profits.” Anything else, he argued, is “unadulterated socialism.”

A decade later, Friedman’s was still a minority view among corporate leaders. In 1981, as Ralph Gomory and Richard Sylla recount in a recent article in Daedalus, the Business Roundtable, representing the nation’s largest firms, issued a statement recognizing a broader purpose of the corporation: “Corporations have a responsibility, first of all, to make available to the public quality goods and services at fair prices, thereby earning a profit that attracts investment to continue and enhance the enterprise, provide jobs and build the economy.” The statement went on to talk about a “symbiotic relationship” between business and society not unlike that voiced nearly 30 years earlier by General Motors chief executive Charlie Wilson, when he reportedly told a Senate committee that “what is good for the country is good for General Motors, and vice versa.”

By 1997, however, the Business Roundtable was striking a tone that sounded a whole lot more like Professor Friedman than CEO Wilson. “The principal objective of a business enterprise is to generate economic returns to its owners,” it declared in its statement on corporate responsibility. “If the CEO and the directors are not focused on shareholder value, it may be less likely the corporation will realize that value.”

The most likely explanation for this transformation involves three broad structural changes that were going on in the U.S. economy—globalization, deregulation, and rapid technological change. Over a number of decades, these three forces have conspired to rob what were once the dominant American corporations of the competitive advantages they had during the “golden era” of the 1950s and 1960s in both U.S. and global markets. Those advantages—and the operating profits they generated—were so great that they could spread the benefits around to all corporate stakeholders. The postwar prosperity was so widely shared that it rarely occurred to stockholders, consumers, or communities to wonder if they were being shortchanged.

It was only when competition from foreign suppliers or recently deregulated upstarts began to squeeze out those profits—often with the help of new technologies—that these once-mighty corporations were forced to make difficult choices. In the early going, their executives found that it was easier to disappoint shareholders than customers, workers, or even their communities. The result, during the 1970s, was a lost decade for investors.

Beginning in the mid-1980s, however, a number of companies with lagging stock prices found themselves targets for hostile takeovers launched by rival companies or corporate raiders employing newfangled “junk bonds” to finance unsolicited bids. Disappointed shareholders were only too willing to sell out to the raiders. So it developed that the mere threat of a hostile takeover was sufficient to force executives and directors across the corporate landscape to embrace a focus on profits and share prices. Almost overnight they tossed aside their more complacent and paternalistic management style, and with it a host of inhibitions against laying off workers, cutting wages and benefits, closing plants, spinning off divisions, taking on debt, and moving production overseas. Some even joined in waging hostile takeovers themselves.

Spurred on by this new “market for corporate control,” companies traded in their old managerial capitalism for a new shareholder capitalism, which continues to dominate the business sector to this day. Those high-yield bonds, once labeled as “junk” and peddled by upstart and ethically challenged investment banks, are now a large and profitable part of the business of every Wall Street firm. The unsavory raiders have now morphed into respected private-equity and hedge-fund managers, some of whom proudly call themselves “activist investors.” And corporate executives who once arrogantly ignored the demands of Wall Street now profess they have no choice but to dance to its tune.

THE INSTITUTIONS SUPPORTING SHAREHOLDER VALUE

An elaborate institutional infrastructure has developed to reinforce shareholder capitalism and its generally accepted corporate mandate to maximize short-term profits and share price. This infrastructure includes free-market-oriented think tanks and university faculties that continue to spin out elaborate theories about the efficiency of financial markets.

An earlier generation of economists had looked at the stock-market boom and bust that led to the Great Depression and concluded that share prices often reflected irrational herd behavior on the part of investors. But in the 1960s, a different theory began to take hold at intellectual strongholds such as the University of Chicago that quickly spread to other economics departments and business schools. The essence of the “efficient market” hypothesis, first articulated by Eugene Fama (a 2013 Nobel laureate), is that the current stock price reflects all the public and private information known about a company and therefore is a reliable gauge of the company’s true economic value. For a generation of finance professors, it was only a short logical leap from this hypothesis to a broader conclusion that the share price is therefore the best metric around which to organize a company’s strategy and measure its success.

With the rise of behavioral economics and the onset of two stock-market bubbles, the efficient-market hypothesis has more recently come under serious criticism. Another 2013 Nobel winner, Robert Shiller, demonstrated the various ways in which financial markets
are predictably irrational. Curiously, however, the efficient-market hypothesis is still widely accepted by business schools—and, in particular, their finance departments—which continue to preach the shareholder-first ideology.

Surveys by the Aspen Institute’s Center for Business Education, for example, find that most MBA students believe that maximizing value for shareholders is a company’s most important responsibility, and that this conviction strengthens as they proceed toward their degrees, with many schools offering courses that teach techniques for manipulating short-term earnings and share prices. The assumption is so entrenched that even business-school deans who have publicly rejected the ideology acknowledge privately that they’ve given up trying to convince their faculties to take a more balanced approach.

Equally important in sustaining the shareholder focus are corporate lawyers, in-house as well as outside counsels, who now reflexively advise companies against actions that would predictably lower a company’s stock price.

For many years, much of the jurisprudence coming out of the Delaware courts—where most big corporations have their legal home—was built around the “business judgment” rule, which held that corporate directors have wide discretion in determining a firm’s goals and strategies, even if their decisions reduce profits or share prices. But in 1986, the Delaware Court of Chancery ruled that directors of the cosmetics company Revlon had to put the interests of shareholders first and accept the highest price offered for the company. As Lynn Stout has written, and the Delaware courts subsequently confirmed, the decision was a narrowly drawn exception to the business-judgment rule that applies only once a company has decided to put itself up for sale. But it has been widely—and mistakenly—used ever since as a legal rationale for the primacy of shareholder interests and the legitimacy of share-price maximization.

Reinforcing this mistaken belief are the shareholder lawsuits now routinely filed against public companies by class-action lawyers any time the stock price takes a sudden dive. Most of these are frivolous and, particularly since passage of reform legislation in 1995, many are dismissed. But even those that are dismissed generate cost and hassle, while the few that go to trial risk exposing the company to significant embarrassment, damages, and legal fees.

The bigger damage from these lawsuits comes from the subtle way they affect corporate behavior. Corporate lawyers, like many of their clients, crave certainty when it comes to legal matters. So they’ve developed what might be described as a “safe harbor” mentality—an undue reliance on well-established bright lines in advising clients to shy away from actions that might cause the stock price to fall and open the company up to a shareholder lawsuit. Such actions include making costly long-term investments, or admitting mistakes, or failing to follow the same ruthless strategies as their competitors. One effect of this safe-harbor mentality is to reinforce the focus on short-term share price.

The most extensive infrastructure supporting the shareholder-value ideology is to be found on Wall Street, which remains thoroughly fixated on quarterly earnings and short-term trading. Companies that refuse to give quarterly-earnings guidance are systematically shunned by some money managers, while those that miss their earnings targets by even small amounts see their stock prices hammered.

Recent investigations into insider trading have revealed the elaborate strategies and tactics used by some hedge funds to get advance information about a quarterly earnings report in order to turn enormous profits by trading on it. And corporate executives continue to spend enormous amounts of time and attention on industry analysts whose forecasts and ratings have tremendous impact on share prices.

In a now-infamous press interview in the summer of 2007, former Citigroup chairman Charles Prince provided a window into the hold that Wall Street has over corporate behavior. At 
the time, Citi’s share price had lagged behind that of the other big banks, and there was speculation in the financial press that Prince would be fired if he didn’t quickly find a way to catch up. In the interview with the Financial Times, Prince seemed to confirm that speculation. When asked why he was continuing to make loans for high-priced corporate takeovers despite evidence that the takeover boom was losing steam, he basically said he had no choice—as long as other banks were making big profits from such loans, Wall Street would force him, or anyone else in his job, to make them as well. “As long as the music is playing,” Prince explained, “you’ve got to get up and dance.”

It isn’t simply the stick of losing their jobs, however, that causes corporate executives to focus on maximizing shareholder value. There are also plenty of carrots to be found in those generous—some would say gluttonous—pay packages, the value of which is closely tied to the short-term performance of company stock.

The idea of loading up executives with stock options also dates to the transition to shareholder capitalism. The academic critique of managerial capitalism was that the lagging performance of big corporations was a manifestation of what economists call a “principal-agent” problem. In this case, the “principals” were the shareholders and their directors, and the misbehaving “agents” were the executives who were spending too much of their time, and the shareholders’ money, worrying about employees, customers, and the community at large.

In what came to be one of the most widely cited academic papers of all time, business-school professors Michael Jensen of Harvard and William Meckling of the University of Rochester wrote in 1976 that the best way to align the interests of managers with those of the shareholders was to tie a substantial amount of the managers’ compensation to the share price. In a subsequent 1989 paper written with Kevin Murphy, Jensen went even further, arguing that the reason corporate executives acted more like “bureaucrats than value-maximizing entrepreneurs” was that they didn’t get to keep enough of the extra value they created.

With that academic foundation, and the enthusiastic support of executive-compensation specialists, stock-based compensation took off. Given the tens and, in more than a few cases, the hundreds of millions of dollars lavished on individual executives, the focus
on boosting share price is hardly surprising. The ultimate irony, of course, is that the result
 of this lavish campaign to more closely align incentives and interests is that the “agents” have done considerably better than the “principals.”

Roger Martin, the former dean of the Rotman School of Management at the University of Toronto, calculates that from 1933 until 1976—roughly speaking, the era of “managerial capitalism” in which managers sought to balance the interests of shareholders with those of employees, customers, and society at large—the total real compound annual return on the stocks of the S&P 500 was 7.5 percent. From 1976 until 2011—roughly the period of “shareholder capitalism”—the comparable return has been 6.5 percent. Meanwhile, according to Martin’s calculation, the ratio of chief-executive compensation to corporate profits increased eightfold between 1980 and 2000, almost all of it coming in the form of stock-based compensation.
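The gap between those two return figures is larger than it looks once compounding is taken into account. A quick sketch makes the point (the era boundaries and rates are the ones quoted above; this is an illustration of compound growth, not Martin’s actual calculation):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# How $1 of real S&P 500 wealth would compound over each era at the
# quoted rates (illustrative only).
managerial = 1.075 ** (1976 - 1933)    # 7.5% a year for 43 years
shareholder = 1.065 ** (2011 - 1976)   # 6.5% a year for 35 years
print(round(managerial, 1), round(shareholder, 1))  # 22.4 9.1
```

A single percentage point of annual return, maintained over decades, multiplies terminal wealth severalfold, which is why the era comparison stings despite the seemingly small gap.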

HOW SHAREHOLDER PRIMACY HAS RESHAPED CORPORATE BEHAVIOR

All of this reinforcing infrastructure—the academic underpinning, the business-school indoctrination, the threat of shareholder lawsuits, the Wall Street quarterly earnings machine, the executive compensation—has now succeeded in hardwiring the shareholder-value ideology into the economy and business culture. It has also set in motion a dynamic in which corporate and investor time horizons have become shorter and shorter. The average holding period for corporate stocks, which for decades was six years, is now down to less than six months. The average tenure of a Fortune 500 chief executive is now down to less than six years. Given those realities, it should be no surprise that the willingness of corporate executives to sacrifice short-term profits to make long-term investments is rapidly disappearing.

A recent study by McKinsey & Company, the blue-chip consulting firm, and Canada’s public pension board found alarming levels of short-termism in the corporate executive suite. According
to the study, nearly 80 percent of top executives and directors reported feeling the most pressure to demonstrate a strong financial performance over a period of two years or less, with only 7 percent feeling considerable pressure to deliver strong performance over a period of five years or more. It also found that 55 percent of chief financial officers would forgo an attractive investment project today if it would cause the company to even marginally miss its quarterly-earnings target.

The shift on Wall Street from long-term investing to short-term trading presents a dilemma for those directing a company solely for shareholders: Which group of shareholders is it whose interests the corporation is supposed to optimize? Should it be the hedge funds that are buying and selling millions of shares in a matter of seconds to earn hedge fund–like returns? Or the “activist investors” who have just bought a third of the shares? Or should it be the retired teacher in Dubuque who has held the stock for decades as part of her retirement savings and wants a decent return with minimal downside risk?

One way to deal with this quandary would be for corporations to give shareholders a
bigger voice in corporate decision-making. But it turns out that even as they proclaim
their dedication to shareholder value, executives and directors have been doing everything possible to minimize shareholder involvement and influence in corporate governance. This curious hypocrisy is most recently revealed in the all-out effort by the business lobby to limit shareholders’ “say on pay” and their right to nominate a competing slate of directors.

For too many corporations, “maximizing shareholder value” has also provided justification
for bamboozling customers, squeezing employees, avoiding taxes, and leaving communities in the lurch. For any one profit-maximizing company, such ruthless behavior may be perfectly rational. But when competition forces all companies to behave in this fashion, neither they nor we wind up better off.

Take the simple example of outsourcing production to lower-cost countries overseas. Certainly it makes sense for any one company to aggressively pursue such a strategy. But
if every company does it, these companies may eventually find that so many American consumers have suffered job loss and wage cuts that they no longer can buy the goods they are producing, even at the cheaper prices. The companies may also find that government no longer has sufficient revenue to educate its remaining American workers or dredge the ports through which its imported goods are delivered to market.

Economists have a name for such unintended spillover effects—negative externalities—and normally the most effective response is some form of government action, such as regulation, taxes, or income transfers. But one of the hallmarks of the current political environment is that every tax, every regulation, and every new safety-net program is bitterly opposed by the corporate lobby as an assault on profits and job creation. Not only must the corporation commit to putting shareholders first—as they see it, the society must as well. And with the Supreme Court’s decision in Citizens United, corporations are now free to spend unlimited sums of money on political campaigns to elect politicians sympathetic to this view.

Perhaps the most ridiculous aspect of shareholder-über-alles is how at odds it is with every modern theory about managing people. David Langstaff, then-chief executive of TASC, a Virginia-based government-contracting firm, put it this way in a recent speech at a conference hosted by the Aspen Institute and the business school at Northwestern University: “If you are the sole proprietor of a business, do you think that you can motivate your employees for maximum performance by encouraging them simply to make more money for you?” Langstaff asked rhetorically. “That is effectively what an enterprise is saying when it states that its purpose is to maximize profit for its investors.”

Indeed, a number of economists have been trying to figure out the cause of the recent slowdown in both the pace of innovation and the growth in worker productivity. There are lots of possible culprits, but surely one candidate is that American workers have come to understand that whatever financial benefit may result from their ingenuity or increased efficiency is almost certain to be captured by shareholders and top executives.

The new focus on shareholders also hasn’t been a big winner with the public. Gallup polls show that people’s trust in and respect for big corporations have been on a long, slow decline in recent decades—at the moment, only Congress and health-maintenance organizations rank lower. When was the last time you saw a corporate chief executive lionized on the cover of a newsweekly? Odds are it was probably the late Steve Jobs of Apple, who wound up creating more wealth for more shareholders than anyone on the planet by putting shareholders near the bottom of his priority list.

RISING DOUBTS ABOUT SHAREHOLDER PRIMACY

The usual defense you hear of “maximizing shareholder value” from corporate chief executives is that at many firms—not theirs!—it has been poorly understood and badly executed. These executives make clear they don’t confuse today’s stock price or this quarter’s earnings with shareholder value, which they understand to be profitability and stock appreciation over the long term. They are also quick to acknowledge that no enterprise can maximize long-term value for its shareholders without attracting great employees, providing great products and services to customers, and helping to support efficient governments and healthy communities.

Even Michael Jensen has felt the need to reformulate his thinking. In a 2001 paper, he wrote, “A firm cannot maximize value if it ignores the interest of its stakeholders.” He offered a proposal he called “enlightened stakeholder theory,” one that “accepts maximization of the long run value of the firm as the criterion for making the requisite tradeoffs among its stakeholders.”

But if optimizing shareholder value implicitly requires firms to take good care of customers, employees, and communities, then by the same logic you could argue that optimizing customer satisfaction would require firms to take good care of employees, communities, and shareholders. More broadly, optimizing any function inevitably requires the same tradeoffs or messy balancing of interests that executives of an earlier era claimed to have done.

The late, great management guru Peter Drucker long argued that if one stakeholder group should be first among equals, surely it should be the customer. “The purpose of business is to create and keep a customer,” he famously wrote.

Roger Martin picked up on Drucker’s theme in “Fixing the Game,” his book-length critique of shareholder value. Martin cites the experience of companies such as Apple, Johnson & Johnson, and Procter & Gamble, companies that put customers first, and whose long-term shareholders have consistently done better than those of companies that claim to put shareholders first. The reason, Martin says, is that customer focus minimizes undue risk taking, maximizes reinvestment, and creates, over the long run, a larger pie.

Having spoken with more than a few top executives over the years, I can tell you that many would be thrilled if they could focus on customers rather than shareholders. In private, they chafe under the quarterly earnings regime forced on them by asset managers and the financial press. They fear and loathe “activist” investors. They are disheartened by their low public esteem. Few, however, have dared to challenge the shareholder-first ideology in public.

But recently, some cracks have appeared.

In 2006, Ian Davis, then–managing director of McKinsey, gave a lecture at the University of Pennsylvania’s Wharton School in which he declared, “Maximization of shareholder value is in danger of becoming irrelevant.”

Davis’s point was that global corporations have to operate not just in the United States but in the rest of the world where people either don’t understand the concept of putting shareholders first or explicitly reject it—and companies that trumpet it will almost surely draw the attention of hostile regulators and politicians.

“Big businesses have to be forthright in saying what their role is in society, and they will never do it by saying, ‘We maximize shareholder value.’”

A few years later, Jack Welch, the former chief executive of General Electric, made headlines when he told the Financial Times, “On the face of it, shareholder value is the dumbest idea in the world.” What he meant, he scrambled to explain a few days later, is that shareholder value is an outcome, not a strategy. But coming from the corporate executive (“Neutron Jack”) who had embodied ruthlessness in the pursuit of competitive dominance, his comment was viewed as a recognition that the single-minded pursuit of shareholder value had gone too far. “That’s not a strategy that helps you know what to do when you come to work every day,” Welch told Bloomberg Businessweek. “It doesn’t energize or motivate anyone. So basically my point is, increasing the value of your company in both the short and long term is an outcome of the implementation of successful strategies.”

Tom Rollins, the founder of the Teaching Company, offers as an alternative what he calls the “CEO” strategy, standing for customers, employees, and owners. Rollins starts by noting that at the foundation of all microeconomics are voluntary trades or exchanges that create “surplus” for both buyer and seller that in most cases exceed their minimum expectations. The same logic, he argues, ought to apply to the transactions between a company and its employees, customers, and owners/shareholders.

The problem with a shareholder-first strategy, Rollins argues, is that it ignores this basic tenet of economics. It views any surplus earned by employees and customers as both unnecessary and costly. After all, if the market would allow the firm to hire employees for 10 percent less, or charge customers 10 percent more, then by not driving the hardest possible bargain with employees and customers, shareholder profit is not maximized.

But behavioral research into the importance of “reciprocity” in social relationships strongly suggests that if employees and customers believe they are not getting any surplus from a transaction, they are unlikely to want to continue to engage in additional transactions with the firm. Other studies show that having highly satisfied customers and highly engaged employees leads directly to higher profits. As Rollins sees it, if firms provide above-market returns—surplus—to customers and employees, then customers and employees are likely to reciprocate and provide surplus value to firms and their owners.

Harvard Business School professor Michael Porter and Kennedy School senior fellow Mark Kramer have also rejected the false choice between a company’s social and value-maximizing responsibilities that is implicit in the shareholder-value model. “The solution lies in the principle of shared value, which involves creating economic value
in a way that also creates value for society by addressing its needs and challenges,” they wrote in the Harvard Business Review in 2011. In the past, economists have theorized that
for profit-maximizing companies to provide societal benefits, they had to sacrifice economic success by adding to their costs or forgoing revenue. What they overlooked, Porter and Kramer wrote, was that by ignoring social goals—safe workplaces, clean environments, effective school systems, adequate infrastructure—companies wound up adding to their overall costs while failing to exploit profitable business opportunities. “Businesses must reconnect company success with social progress,” Porter and Kramer wrote. “Shared value is not social responsibility, philanthropy or even sustainability, but a new way to achieve economic success. It is not on the margin of what companies do, but at the center.”

SMALL STEPS TOWARD A MORE BALANCED CAPITALISM

If it were simply the law that was responsible for the undue focus on shareholder value, it would be relatively easy to alter it. Changing a behavioral norm, however—particularly one so accepted and reinforced by so much supporting infrastructure—is a tougher challenge. The process will, of necessity, be gradual, requiring carrots as well as sticks. The goal should not be to impose a different focus for corporate decision-making as inflexible as maximizing shareholder value has become but rather to make it acceptable for executives and directors to experiment with and adopt a variety of goals and purposes.

Companies would surely be responsive if investors and money managers would make clear that they have a longer time horizon or are looking for more than purely bottom-line results. There has long been a small universe of “socially responsible” investing made up of mutual funds, public and union pension funds, and research organizations that monitor corporate behavior and publish scorecards based on an assessment of how companies treat customers, workers, the environment, and their communities. While some socially responsible funds, asset managers, and investors have consistently achieved returns comparable to, or even slightly superior to, those of competitors focused strictly on financial returns, there is no evidence of any systematic advantage. Nor has there been a large hedge fund or private-equity fund that made it to the top with a socially responsible investment strategy. You can do well by doing good, but it’s no sure thing that you’ll do better.

Nineteen states—the latest is Delaware, where a million businesses are legally registered—have recently established a new kind of corporate charter, the “benefit corporation,” that explicitly commits companies to be managed for the benefit of all stakeholders. About 550 companies, including Patagonia and Seventh Generation, now have B charters, while 960 have been certified as meeting the standards set out by the nonprofit B Lab. Although almost all of today’s B corps are privately held, supporters of the concept hope that a number of sizable firms will become B corps and that their stocks will then be traded on a separate exchange.

One big challenge facing B corps and the socially responsible investment community is
that the criteria they use to assess corporate behavior exhibit an unmistakable liberal bias that makes it easy for many investors, money managers, and executives to dismiss them
as ideological and naïve. Even a company run for the benefit of multiple stakeholders will at various points be forced to make tough choices, such as reducing payroll, trimming costs, closing facilities, switching suppliers, or doing business in places where corruption is rampant or environmental regulations are weak. As chief executives are quick to point out, companies that ignore short-term profitability run the risk of never making it to the long term.

Among the growing chorus of critics of “shareholder value,” a consensus is emerging around a number of relatively modest changes in tax and corporate governance laws that, at a minimum, could help lengthen time horizons of corporate decision-making. A group of business leaders assembled by the Aspen Institute to address the problem of “short-termism” recommended a recalibration of the capital-gains tax to provide investors with lower tax rates for longer-term investments. A small transaction tax, such as the one proposed by the European Union, could also be used to dampen the volume and importance of short-term trading.

The financial-services industry and some academics have argued that such measures, by reducing market liquidity, will inevitably increase the cost of capital and result in markets that are more volatile, not less. A lower tax rate for long-term investing has also been shown to have a “lock-in” effect that discourages investors from moving capital to companies offering the prospect of the highest return. But such conclusions are implicitly based on the questionable assumption that markets without such tax incentives are otherwise rational and operate with perfect efficiency. They also sidestep fundamental questions about the role played by financial markets in the broader economy. Once you assume, as they do, that the sole purpose of financial markets is to channel capital into investments that earn the highest financial return to private investors, then maximizing shareholder value becomes the only logical corporate strategy.

There is also a lively debate on the question of whether companies should offer earnings guidance to investors and analysts—estimates of what earnings per share will be for the coming quarter. The argument against such guidance is that it reinforces the undue focus of both executives and investors on short-term earnings results, discouraging long-term investment and incentivizing earnings manipulation. The counterargument is that even in the absence of company guidance, investors and executives inevitably play the same game by fixating on the “consensus” earnings estimates of Wall Street analysts. Given that reality, they argue, isn’t it better that those analyst estimates are informed as much as possible by information provided by the companies themselves?

In weighing these conflicting arguments, the Aspen group concluded that investors and analysts would be better served if companies provided information on a wider range of metrics with which to assess and predict business performance over a longer time horizon than the next quarter. While it might take Wall Street and its analysts some time to adjust to this richer and more nuanced form of communication, it would give the markets a better understanding of what drives each business while taking some of the focus off the quarterly numbers game.

In addressing the question of which shareholders should have the most say over company strategies and objectives, there have been suggestions for giving long-term investors greater power in selecting directors, approving mergers and asset sales, and setting executive compensation. The idea has been championed by McKinsey & Company managing director Dominic Barton and John Bogle, the former chief executive of the Vanguard Group, and is under active consideration by European securities regulators. Such enhanced voting rights, however, would have to be carefully structured so that they encourage a sense of stewardship on the part of long-term investors without giving company insiders or a few large shareholders the opportunity to run roughshod over other shareholders.

The short-term focus of corporate executives and directors is heavily reinforced by the demands of asset managers at mutual funds, pension funds, hedge funds, and endowments, who are evaluated and compensated on the basis of the returns they generated over the last year and the last quarter. Even while most big companies have now taken steps to stretch out over several years the incentive pay plans of top corporate executives to encourage those executives to take a longer-term perspective, the outsize quarterly and annual bonuses on Wall Street keep the economy’s time horizons fixated on the short term. At a minimum, federal regulators could require asset managers to disclose how their compensation is determined. They might also require funds to justify, on the basis of actual performance, the use of short-term metrics when managing long-term money such as pensions and college endowments.

The Securities and Exchange Commission also could nudge companies to put greater emphasis on long-term strategy and performance in their communications with shareholders. For starters, companies could be required to state explicitly in their annual reports whether their priority is to maximize shareholder value or to balance shareholder interests with other interests in some fashion—certainly shareholders deserve to know that in advance. The commission might require companies to annually disclose the size of their workforce in each country and information on the pay and working conditions of the company’s employees and those of its major contractors. Disclosure of any major shifts in where work is done could also be required, along with the rationale. There could be a requirement for companies to perform and disclose regular environmental audits and to acknowledge other potential threats to their reputation and brand equity. In proxy statements, public companies could be required to explain the ways in which executive compensation is aligned with long-term strategy and performance.

If I had to guess, however, my hunch would be that employees, not outside investors and regulators, will finally free the corporate sector from the straitjacket
of shareholder value. Today, young people—particularly those with high-demand skills—are drawn to work that doesn’t simply pay well but also has meaning and social value. You can already see that reflected in what students are choosing to study and where highly sought graduates are choosing to work. As the economy improves and the baby-boom generation retires, companies that have reputations as ruthless maximizers of short-term profits will find themselves on the losing end of the global competition for talent.

In an era of plentiful capital, it will be skills, knowledge, and creativity that will be in short supply, with those who have them calling the corporate tune. Who knows? In the future, there might even be conferences at which hedge-fund managers and chief executives get together to gripe about the tyranny of “maximizing employee satisfaction” and vow to do something about it.

The American Prospect

Steven Pearlstein is a Pulitzer Prize-winning columnist for The Washington Post and a professor of public affairs at George Mason University. 

Has Income Inequality Finally Got To The Top Of The IMF Agenda? – Christian Proaño. 

Outgoing US President Barack Obama has named the reduction of economic inequality as the “defining challenge of our time”. This is true not only for the United States – the richest country and, at the same time, the one with the highest wealth inequality – but also for a large number of countries around the world, independent of their level of economic development. So, for instance, the average Gini coefficient of disposable household income across OECD countries reached its highest level since the mid-1980s, rising from 0.315 in 2010 to 0.318 in 2014 (OECD 2016).
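For readers unfamiliar with the measure, the Gini coefficient summarizes how unequal a distribution is on a scale from 0 (everyone has the same income) to 1 (one person has everything). A minimal sketch of the standard computation follows; the function name and sample figures are purely illustrative, not taken from the OECD data:

```python
def gini(incomes):
    """Gini coefficient via the mean-absolute-difference formula:
    G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean(x))."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Equivalent closed form over the sorted values (1-based rank i):
    # G = (2 * sum_i i * x_i) / (n * total) - (n + 1) / n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# A perfectly equal distribution scores 0; concentration pushes it toward 1.
print(gini([100, 100, 100, 100]))  # → 0.0
print(gini([0, 0, 0, 400]))        # → 0.75
```

The OECD figures quoted above apply this kind of calculation to disposable household income, i.e. income after taxes and transfers.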

Extreme economic inequality is undesirable for many reasons.

First and foremost, extreme income and wealth inequality is likely to jeopardize moral equality (“all people are created equal”), undermining the very basis of democratic societies. When a large share of wealth is in the hands of a few privileged people, equal access to nominally public goods such as education or an independent judicial system may not be guaranteed. As economic inequality may exacerbate inequality of opportunity, it is likely to solidify the social stratification and divisions in a country, making its society more prone to extremist political movements, as the recent US elections and other global political developments have shown.

And second, pronounced economic inequality may contribute to the instability of the global macroeconomic system through the buildup of large imbalances either through excessively credit-financed consumption, as in the US, or through deficient domestic aggregate demand and oversized net exports, as in China and Germany.

For a long period, however, and partly due to the emergence of a large affluent middle class in the US and most industrialized countries, economic inequality was a second-order issue there. The emergence of the neoliberal era under Ronald Reagan and Margaret Thatcher, however, not only eroded many social institutions, such as the trade unions demanding a more equal distribution of income in those countries, but also significantly shaped the IMF’s perspective from then on.

The poor performance of the IMF’s Washington Consensus policy prescriptions during that era is nowadays widely acknowledged.

Social Europe

The Ghost of Poverty This Christmas – Bryan Bruce. 

In 1843 Charles Dickens released his classic tale A Christmas Carol.

Creatives are like sponges. They soak up what’s happening in society and squeeze the gathered material into their work. Dickens was a master of it.

A year earlier he’d read a British parliamentary report on the condition of children working in mines for 10 hours a day – naked, starving and sick. The cause of this misery, he recognised, was greed – a few people getting very rich at the expense of the many. (Sound familiar?)

So, in that magical way it takes a genius to do, Dickens poured all of Victorian Britain’s mean-spiritedness into his fictional character Ebenezer Scrooge, the miserly old man who hates Christmas.
Until, that is, he is visited on Christmas Eve by three Ghosts (Of Christmas Past and Present and Yet To Come) who reveal to him how giving can be much more rewarding than taking.

173 years on, a lot of Kiwis have got that message. They help their friends and neighbours whenever they can, they run food banks and free used-clothing and furniture outlets, and they open their marae to the homeless.

But none of these things would be necessary if the meanness of Scrooge had not become institutionalised into the Neoliberal economic policies successive New Zealand governments have promoted over the last 30 years.
Yes, it’s true that children no longer work in factories or down mines – but that’s simply proof (if proof be needed) that things can change if we vote to alter them.

What I suspect is that if Dickens could return like one of his ghosts to visit us today, he’d look in dismay at the long lines of poor outside the City Missions this Christmas and tell us that we are going backwards towards the selfish society he railed against – where the poor were dependent on the good will of strangers for food and the essentials of life.

That we have lost sight of what is really important is clear:
• 85,000 of our children are living in severe hardship.
• 14% of our kids (155,000) are experiencing material hardship, which means they are living without seven or more items necessary for their wellbeing.
• 28% of our children (295,000) are living in low-income homes and experiencing material hardship as a result.

So thank you to all of the good people throughout our country who know this widening gap between the haves and have-nots isn’t right and do so much to help those less fortunate than themselves.
But let’s also make a new year’s resolution: to encourage our friends and families and everyone we know to vote for a better deal for all our children next year.
10% of New Zealanders now own 60% of the wealth of our country while the bottom 20% own nothing of worth at all.
Let’s make the scrooges of New Zealand pay their fair share.

My very best wishes to all of you this Christmas Eve.
Take care.
Bryan Bruce. 

Brexit Britain turns against globalisation and modern technology, blaming it for low UK wages and inequality – The Independent. 

Post-Brexit Britain is in the throes of a major backlash against globalisation, blaming dwindling wages and rife inequality on the opening of the world’s economy, an exclusive poll for The Independent has revealed.

The survey by ComRes even exposes a new backward-looking dislike of modern technology in the UK, with the public blaming advances for a widening gap between the rich and poor.

People believe the gap has also been widened by the low interest rates employed by governments in many countries now suffering resurgent populist movements.

The Independent

This is the most dangerous time for our planet – Stephen Hawking. 

We can’t go on ignoring inequality, because we have the means to destroy our world but not to escape it.

What matters now, far more than the choices made by these two electorates, is how the elites react. Should we, in turn, reject these votes as outpourings of crude populism that fail to take account of the facts, and attempt to circumvent or circumscribe the choices that they represent? I would argue that this would be a terrible mistake.

The concerns underlying these votes about the economic consequences of globalisation and accelerating technological change are absolutely understandable. The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.

This in turn will accelerate the already widening economic inequality around the world. The internet and the platforms that it makes possible allow very small groups of individuals to make enormous profits while employing very few people. This is inevitable, it is progress, but it is also socially destructive.

We need to put this alongside the financial crash, which brought home to people that a very few individuals working in the financial sector can accrue huge rewards and that the rest of us underwrite that success and pick up the bill when their greed leads us astray. So taken together we are living in a world of widening, not diminishing, financial inequality, in which many people can see not just their standard of living, but their ability to earn a living at all, disappearing. It is no wonder then that they are searching for a new deal, which Trump and Brexit might have appeared to represent.

The Guardian 

Global Wealth Inequality – What you never knew you never knew – YouTube. 

Let’s Change The Rules! 

Rich countries give $130 billion annually in development aid to poor countries while sucking $2 trillion out by various means. Every year!

YouTube 

It’s not just the Pharmaceuticals screwing you Americans! Your doctors are in on the scam too. – Robert Reich. 

The real threat to the public’s health is drugs priced so high that an estimated fifty million Americans—more than a quarter of them with chronic health conditions—did not fill their prescriptions in 2012, according to the National Consumers League. The law allows pharmaceutical companies to pay doctors for prescribing their drugs. Over a five-month period in 2013, doctors received some $380 million in speaking and consulting fees from drug companies and device makers. Some doctors pocketed over half a million dollars each, and others received millions of dollars in royalties from products they had a hand in developing. Doctors claim these payments have no effect on what they prescribe. But why would pharmaceutical companies shell out all this money if it did not provide them a healthy return on their investment?

Drug companies pay the makers of generic drugs to delay their cheaper versions. These so-called pay-for-delay agreements, perfectly legal, generate huge profits both for the original manufacturers and for the generics—profits that come from consumers, from health insurers, and from government agencies paying higher prices than would otherwise be the case. The tactic costs Americans an estimated $3.5 billion a year. Europe doesn’t allow these sorts of payoffs. The major American drugmakers and generics have fought off any attempts to stop them. The drug companies claim they need these additional profits to pay for researching and developing new drugs. Perhaps this is so. But that argument neglects the billions of dollars drug companies spend annually for advertising and marketing—often tens of millions of dollars to promote a single drug. They also spend hundreds of millions every year lobbying. In 2013, their lobbying tab came to $225 million, which was more than the lobbying expenditures of America’s military contractors. In addition, Big Pharma spends heavily on political campaigns. In 2012 it shelled out more than $36 million, making it one of the biggest political contributors of all American industries.

The average American is unaware of this system—the patenting of drugs from nature, the renewal of patents based on insignificant changes, the aggressive marketing of prescription drugs, bans on purchases from foreign pharmacies, payments to doctors to prescribe specific drugs, and pay-for-delay—as well as the laws and administrative decisions that undergird all of it. Yet, as I said, because of this system, Americans pay more for drugs, per person, than citizens of any other nation on earth. The critical question is not whether government should play a role. Without government, patents would not exist, and pharmaceutical companies would have no incentive to produce new drugs. The issue is how government organizes the market. So long as big drugmakers have a disproportionate say in those decisions, the rest of us pay through the nose.
Robert Reich, from his book ‘Saving Capitalism’ 

Those who claim to be on the side of freedom while ignoring the growing imbalance of economic and political power in America and other advanced economies are not in fact on the side of freedom. They are on the side of those with the power. – Robert Reich

As economic and political power have once again moved into the hands of a relative few large corporations and wealthy individuals, “freedom” is again being used to justify the multitude of ways they entrench and enlarge that power by influencing the rules of the game.

These include escalating campaign contributions, as well as burgeoning “independent” campaign expenditures, often in the form of negative advertising targeting candidates whom they oppose; growing lobbying prowess, both in Washington and in state capitals; platoons of lawyers and paid experts to defend against or mount lawsuits, so that courts interpret the laws in ways that favor them; additional lawyers and experts to push their agendas in agency rule-making proceedings; the prospect of (or outright offers of) lucrative private-sector jobs for public officials who define or enforce the rules in ways that benefit them; public relations campaigns designed to convince the public of the truth and wisdom of policies they support and the falsity and deficiency of policies they don’t; think tanks and sponsored research that confirm their positions; and ownership of, or economic influence over, media outlets that further promote their goals.

Robert Reich, from his book ‘Saving Capitalism’

Investment in the Early Years Prevents Crime. 

Instead of competing with the United States in some sort of bizarre race to the bottom for the highest number of people incarcerated per head of population, we need to attend to what works to prevent crime.

While the idea of scaring young people out of crime (boot camps, prison visits etc) might have popular appeal, it is a total failure as it has been found to increase the probability of crime. What does work is reducing the stress on low income families and providing them with sufficient material resources to ensure their kids have opportunities to thrive.

Increasing the incomes of low-income, low-opportunity parents using unconditional cash assistance reduces children’s likelihood of engaging in criminal activity. Yes, giving parents more money, and trusting them to identify where the pressure points in their family life are, both improves a family’s economic position and reduces stress, the key pathway between poverty and poor outcomes for children.

A substantial body of research supports unconditional cash assistance in countries similar to New Zealand.

Jess Berentson-Shaw. Morgan Foundation 

The Police (and pretty much everyone else) Know how to Prevent Crime, Why Don’t the Politicians?

Jess Berentson-Shaw – The Morgan Foundation 

It was nicely done, really; Judith Collins neatly deflected the question posed by a delegate at the Police Association’s conference about when the government was going to start addressing the causes of crime, notably child poverty. She repelled that pretty brave question by simply saying child poverty was not the Government’s problem to fix.

The following week the Government announced it will be spending $1 billion to provide 1800 additional prison beds. While that is a one-off cost, it also costs $92,000 a year to house each prisoner, so the ongoing cost will be over $150m a year. Labour also announced it would be funding 1000 more frontline police at a cost of $180m per year.
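The ongoing figure follows directly from the numbers quoted. A quick back-of-the-envelope check (variable names are just for illustration):

```python
beds = 1800                  # additional prison beds announced
cost_per_prisoner = 92_000   # NZ$ per prisoner per year

annual_cost = beds * cost_per_prisoner
print(f"${annual_cost / 1e6:.1f}m per year")  # → $165.6m per year
```

So "over $150m" a year is, if anything, a conservative rounding of the quoted figures.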

This is all ambulance-at-the-bottom-of-the-cliff stuff. No-one has attended to what that member of the Police force was saying: deal with low incomes and low opportunities in childhood and you don’t need to spend billions on new prisons, or constantly have to fund more frontline police.

It is unlikely that merely lacking money directly leads a child to commit crime. Rather, a child who grows up poor is more likely to be a low achiever in education, with more behavioural and/or mental health issues, and it is these factors that influence their likelihood of running up against the criminal justice system.

While the absolute numbers of appearances for all ethnicities have gone down, both apprehensions and appearances in court show a serious overrepresentation of Māori young people. Low income is noted to be a key factor in the criminal offending of young people. 

Morgan Foundation 

Inequality As Policy: Selective Trade Protectionism Favors Higher Earners. – Dean Baker. 

Offshoring manufacturing may have hurt many working people in America, but professionals and intellectual property have been robustly protected. 

Globalization and technology are routinely cited as drivers of inequality over the last four decades. While the relative importance of these causes is disputed, both are often viewed as natural and inevitable products of the working of the economy, rather than as the outcomes of deliberate policy. 

In fact, both the course of globalization and the distribution of rewards from technological innovation are very much the result of policy. Insofar as they have led to greater inequality, this has been the result of conscious policy choices.

Starting with globalization, there was nothing pre-determined about a pattern of trade liberalization that put U.S. manufacturing workers in direct competition with their much lower paid counterparts in the developing world. Instead, that competition was the result of trade pacts written to make it as easy as possible for U.S. corporations to invest in the developing world to take advantage of lower labor costs, and then ship their products back to the United States. The predicted and actual result of this pattern of trade has been to lower wages for manufacturing workers and non-college educated workers more generally, as displaced manufacturing workers crowd into other sectors of the economy.

New Economic Thinking 

“I’ve seen a lot of death, but not this thing. This is shocking and this is what makes you feel you are not living in a civilized world”

These horrifying pictures are the product of global economic inequality: victims of a world where 71% of the world’s population owns only 3% of global wealth. People come from all over Africa. Some are fleeing extremist violence from groups like Nigeria’s Boko Haram or Somalia’s al-Shabaab. Others are simply people without opportunity or any hope of bettering their lives and their families in home countries where jobs are nonexistent and money is funneled to the ruling elites, who guard their wealth jealously.

Occupy Democrats 

The reason is “a sense of hopelessness and helplessness of the young people. They believe the sun isn’t going to come up any more.”

Paul Little: What the Dickens is happening?

Dickens was a campaigner – he campaigned against, among other things, pollution, child poverty, poor public health, homelessness and the exploitation of women and children. He died nearly 150 years ago.

Great writer, but he doesn’t seem to have made much of a difference.

NZ Herald 

Political economy, social policy, psychology, ending Neoliberalism, Trump . . . and other facts of our times.
