
New Zealand’s political leadership has failed for decades on housing policy – Shamubeel Eaqub. 

New Zealand’s political leadership has failed for decades on housing policy, leading to the rise of a Victorian-style landed gentry, social cohesion coming under immense pressure and a cumulative undersupply of half a million houses over the last 30 years.

House prices are at the highest level they have ever been. And they have risen really, really fast since the 90s, but more so since the early 2000s and have far outstripped every fundamental that we can think of.

After nearly a century of rising home ownership in New Zealand, home ownership has been falling since 1991. In the last census, the home ownership rate was at its lowest level since 1956. And by my estimate, at the end of 2016 it was at its lowest level since 1946.

We’ve gone back a long way in terms of the promise and the social pact in New Zealand that home ownership is good, and if you work hard you’re going to be able to afford a house.

The reality is that that social pact, that rite of passage, has not been true for many, many decades. The solutions are going to be difficult and they are going to take time.

Before you come and tell me that you paid 20% interest rates, the reality is that, yes, interest rates are much lower now. But the really big problem is that house prices have risen so much that it is almost impossible to save for the deposit. In the early 1990s, people could have saved a deposit and paid off a house in about 20-30 years. Fast forward to today, and that’s more like 50 years. How long do you want to work to pay off your mortgage?
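As a rough sketch of the arithmetic behind those figures, the back-of-the-envelope calculation below shows how deposit-saving time and the repayment term stretch out as prices rise relative to incomes. The price-to-income ratios, savings rate, repayment share and interest rates used here are illustrative assumptions, not Eaqub’s actual inputs.

```python
import math

def years_to_own(price_to_income, deposit_share=0.20, savings_share=0.15,
                 repay_share=0.30, rate=0.055):
    """Years to save a deposit plus years to repay the loan, assuming fixed
    shares of household income go first to saving and then to repayments."""
    deposit_years = price_to_income * deposit_share / savings_share
    loan = price_to_income * (1 - deposit_share)   # loan size, in years of income
    # Annuity formula solved for the term n: payment = loan*rate / (1 - (1+rate)**-n)
    repay_years = -math.log(1 - loan * rate / repay_share) / math.log(1 + rate)
    return deposit_years + repay_years

# Early-1990s-style market: house price around 3x income, 10% mortgage rates.
print(round(years_to_own(3.0, rate=0.10), 1))                      # ~21 years
# Present-day Auckland-style market: price around 9x income, 5.5% rates,
# and even devoting 45% of income to repayments it takes far longer.
print(round(years_to_own(9.0, repay_share=0.45, rate=0.055), 1))   # ~52 years
```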

What we’re talking about is the rise of Generation Rent. Those who manage to buy houses are in mortgage slavery for a long period of time.

There is a widening societal gap. If younger generations want to access housing, it’s not enough to have a job, nor enough to have a good job. You must now have parents that are wealthy, and home-owners too. The idea of New Zealand being an egalitarian country is no longer true. The kind of societal divide we’re talking about is very Victorian. We’re in fact talking about the rise of a landed gentry.

For those who were born after the 1980s, the chance of doing better than your parents is less than 50%.

What we’re creating is a country where, when it comes to things like housing, opportunities are going to be more limited for our children than they were for us. I worry that what we’re creating in New Zealand is a social divide that is only going to keep growing. Housing is only one manifestation of this divide.

There has been a change in philosophy in what underpins the housing market. One very good example is what we have done with our social housing sector.

Housing NZ started building social housing in the late 1930s and stock accumulated over the next 50-60 years to a peak in 1991.

Since then we have not added more social housing. On a per capita basis, New Zealand now has less social housing than at any time since the 1940s.

This is an ideological position where we do not want to create housing supply for the poor. We don’t want to. This is not about politicians. This is a reflection on us. It is our ideology, it is our politics. Our politicians are doing our bidding. The society that we’re living in today does not want to invest in the bottom half of our society.

The really big kicker has been credit. Significant reductions in mortgage rates over time have driven demand for housing. But we have misallocated our credit: we’re creating more and more debt, and most of that debt is chasing existing houses. We’re buying and selling from each other rather than creating something new. The housing boom could not have happened on its own; the banking sector facilitated it. More and more credit is being created, and more of that credit now goes towards buying and selling houses from each other rather than funding businesses or building new ones.

One of the saddest stories at the moment is that, even though we have an acute housing shortage in Auckland, the hardest thing to find funding for now is new development. When the banks pull away credit, the first thing that goes is the riskiest element of the market.

Seasonally adjusted house sales in Auckland are at the lowest level since 2011. This is worrying because what happens in the property market expands to the economy, consents and the construction sector.

I fully expect a construction bust next year. We are going to have a construction bust before we have a housing bust. We haven’t built enough houses for a very long period of time. And if we’re going to keep not building enough houses, I’m not confident that whatever correction we have in the housing market is going to last.

New money created in the economy is largely chasing the property market. Household debt to GDP has been rising steadily since the 1990s. People are taking on more debt, but banks have started to cut back on the amount of credit available overall.

For every unit of economic growth over the course of the last 10, 20 years, we needed more and more debt to create that growth. We are more and more addicted to debt to create our economic growth.

Credit is now going backwards. If credit is not going to be available in aggregate, we know the biggest losers are in fact going to be businesses and property development.

It means we are not going to be building a lot of the projects that have been consented, and we know the construction cycle is going to come down. I despair.

I despair that we still talk so much more about buying and selling houses than about actually starting businesses. The cultural sclerosis that we see in New Zealand has as much to do with the problems of the housing market as our rules around the Resource Management Act or our banking sector do.

On demand, we know there’s been significant growth in New Zealand’s population. Even though it feels like all of that population growth has come from net migration, the reality is that it’s actually natural population growth that’s created the bulk of the demand.

But net migration has created a volatility that we can’t deal with. A lot of the cyclicality in New Zealand’s housing market and demand, comes from net migration and we simply cannot respond.

We do know that there is money that’s global that is looking for a safe haven, and New Zealand is part of that story. We don’t have very good data in New Zealand because we refuse to collect it. There is a lack of leadership regarding our approach to foreign investment in our housing market.

Looking at what’s happening in Canada and Australia would indicate roughly 10% of house sales in Auckland are to foreign buyers. Yes it matters, but when 90% of your sales are going to locals, I think it’s a bit of a red herring.

The historical context of where demand for housing comes from shows that the biggest chunk is natural population growth. The second biggest was changes in household size as families got smaller; more recently that has stopped, i.e. kids are refusing to leave home.

There has been a massive variation in what happens with net migration.

New Zealand needs about 21,000 houses a year to keep up with population growth and the changes that are taking place. But over the course of the last four years, we’ve needed more like 26,000. We’re nowhere near building that many houses.

This means we need to think about demand management from a policy perspective. It’s more about cyclical management than structural management.

Population growth has always been there. Whether it’s from migration or not doesn’t matter. The problem is our housing market, our land supply, our infrastructure supply, can’t keep up with any of it.

While immigration is a side issue, it is nevertheless an important conversation to have because of the volatility it can create. I struggle with the fact that we have no articulated population strategy in New Zealand. We have immigration because we have immigration. That’s not a very good reason.

Why do we want immigration, how big do we want to be, do you want 15 million people or do you want five?

What sort of people do we want? Are we just using immigration as shorthand for not educating our kids because we can’t fill the skills shortages that we have in our industries?

Let’s not pretend that it’s all about people wanting to live in houses.

You’d be very hard pressed to argue that people want to buy houses in Epsom at a 3% rental yield for investment purposes. They want to buy houses in Epsom at 3% rental yield because they want to speculate on the capital gains. Let’s be honest with ourselves.

If your floating mortgage rate is 5.5% and you’re getting 3% from your rent, what does that tell you about your investment? It tells you that you’re not really doing it for cash-flow purposes. You’re doing it because you expect capital gains, and you expect those capital gains to compensate you.
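A minimal sketch of that cash-flow logic is below. The purchase price and loan-to-value ratio are hypothetical assumptions; only the 3% yield and 5.5% floating rate come from the talk.

```python
price = 1_500_000        # assumed Epsom-style purchase price (NZD) - illustrative
gross_yield = 0.03       # 3% rental yield, as quoted
mortgage_rate = 0.055    # 5.5% floating mortgage rate, as quoted
ltv = 0.80               # assumed 80% loan-to-value

rent = price * gross_yield              # 45,000 a year in rent
interest = price * ltv * mortgage_rate  # 66,000 a year in interest alone
shortfall = interest - rent             # 21,000 a year of negative cash flow

print(f"rent {rent:,.0f}, interest {interest:,.0f}, shortfall {shortfall:,.0f}")
# The purchase only makes sense if expected capital gains outweigh the shortfall.
```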

The real story in Auckland is that a lot of additional demand is coming from investment.

Land supply in New Zealand is slow, particularly in places like Auckland. But it’s not just in terms of sections, it’s also about density. The Unitary Plan was a win for Auckland. The reality is that if we only do greenfields, we will just see more people sitting out in traffic at the end of Drury.

The majority of new housing supply is large houses, while the majority of new households being formed are 1-2 person households.

Between the last two censuses, most of the housing stock built in New Zealand had four bedrooms or more. In contrast, the majority of households that were created were singles or couples. We have an ageing population, we have the empty nesters, we have young people who are having kids later…and we’re building stand-alone houses with four bedrooms.

We have to think very hard about how to create supply not just for the top end. Even though we know that, in theory, building enough houses is good for everybody, when you’re starting from a point of not enough houses the bottom end gets screwed for longer. We have to think very hard about whether we want to use things like inclusionary zoning; we have to think very hard about what we want to do with social housing.

Right now we’re not building houses for everybody in our community. We are failing by building the wrong sorts of houses in our communities.

Right at the top is land costs. If we think about what has been driving up the cost of housing, the biggest factor is the value of land. It’s true that we should also look at what’s happening in the rental market and with the costs of construction. But those are not the things that have been the main driver of the very unaffordable house prices that we see in New Zealand today.

The biggest constraint is in land, and that is where the speculation is taking place.

We know we’re not building enough. In the 1930s to 1940s we had very different types of governments and ideology. We actually built more houses per capita back then than we have in the last 30 years.

From the late 40s to the early 70s, with the rise of the welfare state and the build-up of infrastructure, we built massive numbers of houses on a per capita basis.

But since the oil shock and the 1980s reforms, we have never structurally managed to build as many houses as we did pre-1980. That cumulative gap between the trend that we have seen in the last 30 years, versus what we had seen in the 40s, 50s and 60s, is around half a million houses.

So there is something that is fundamentally and structurally different in what we have done in terms of housing supply in New Zealand over a very long period of time.

The way that we do our planning rules, the advent of the RMA, the way that we fund and govern our local government: all of these things have changed. So the nature of the provision of infrastructure, the provision of land, the provision of consents, all of these have changed massively. But the net result is we’re not building as many houses, and that is a fundamental problem.

In Auckland there is a massive gap between targets set by government for house building over the past three years and the amount of consents issued. On top of this, the targets themselves were still not high enough.

Somehow we’re still not able to respond to the growth that Auckland is facing. Consistently we have underestimated how many people want to live in a place like Auckland.

But it’s not just Auckland. Carterton surprises forecasters every year, but it shouldn’t: it’s got a fantastic train line and people live there.

But we are failing. We have been failing and we continue to fail. We have to be far more responsive and we have to have a much longer time horizon to have the provision for housing that’s needed.

There is in fact no real plan. The Unitary Plan is fantastic in that it actually plans for just enough houses to meet the population projections. But we can confidently say that projection is going to prove pessimistic: we’re going to have way more people in Auckland.

Trump and Brexit have marked a shift in politics and a polarisation in the public’s view of politics. In New Zealand I think one of the catalysts could be Generation Rent. In the last census, 51% of adults (those aged over 15) rented. It is no longer a minority that rents, but a majority of individuals.

I’m not saying we’ll see the same kind of uprising in New Zealand, but what we saw in Brexit was that discontent was the majority of voters. If young people had actually turned up to vote, Brexit wouldn’t have happened. The same is true for New Zealand.

It is strange that there was no sense of crisis or urgency. For a lot of the voters, things are just fine. For the people for whom it’s not fine, they’re not voting and they feel disengaged.

The kind of politics that we will start to see in the next 10 years is something much more activist, the ‘urgency of now’.

The promise of democracy is to create an economy that is fit for everyone. It is about creating opportunities for everyone. Right now, particularly when it comes to housing, we are failing. We are not creating a democratic community when it comes to our housing supply because young people are locked out, because young people are going to suffer, and we know there are some big differences across the different parts of New Zealand.

It’s not going to be enough, when we’re starting from a position of crisis, simply to create more housing that will appease the public. We have to be far more activist in making sure that we’re creating housing that is fit for purpose, not just for the general populace, but for the bottom half who are clearly losing out from what is going on.

We know what the causes are. I’m sick of arguing about why we’re here. We know why we’re here: because we haven’t had enough political leadership to deal with the problems that are there.

We can’t implement the solutions unless we have political leadership, political cohesion, and endurance over the political cycle. This is a big challenge, but a big opportunity.

Shamubeel Eaqub

***

  • There has been a cumulative 500,000 gap in housing supply over the last 30 years.
  • Eaqub predicted a construction bust next year, led by banks tightening lending.
  • It’s remarkable NZ authorities do not have proper data on foreign buyers. While he estimates 10% of purchases in Auckland are made by foreign investors, he said the main focus should be on the other 90% bought by locals.
  • However, migration creates cyclical volatility that we can’t deal with; it is unbelievable that New Zealand doesn’t have a stated population policy.
  • New Zealand is still not building the right-sized houses – the majority of properties built in recent years have had four-plus bedrooms, while household sizes have grown smaller.
  • The majority of New Zealand’s adult population is now renting. This could be the catalyst for a Brexit/Trump-style rising up of formerly disengaged voters – young people in our case – to engage at this year’s election.
  • New Zealand’s home ownership level is now at its lowest point since 1946.
  • We have a cultural sclerosis of buying and selling existing houses to one another.

interest.co.nz

Undoing poverty’s negative effect on brain development with cash transfers – Cameron McLeod. 

An upcoming experiment into brain development and poverty by Kimberly G Noble, associate professor of neuroscience and education at Columbia University’s Teachers College, asks whether poverty may affect the development, “the size, shape, and functioning,” of a child’s brain, and whether “a cash stipend to parents” would prevent this kind of damage.

Noble writes that “poverty places the young child’s brain at much greater risk of not going through the paces of normal development.” Children raised in poverty perform less well in school, are less likely to graduate from high school, and are less likely to continue on to college. Children raised in poverty are also more likely to be underemployed as adults. Sociological research and research done in the area of neuroscience has shown that a childhood spent in poverty can result in “significant differences in the size, shape and functioning” of the brain. Can the damage done to children’s brains be negated by the intervention of a subsidy for brain health?

This most recent study’s fundamental difference from past efforts is that it explores what kind of effect “directly supplementing” the incomes of families will have on brain development. “Cash transfers, as opposed to counseling, child care and other services, have the potential to empower families to make the financial decisions they deem best for themselves and their children.” Noble’s hypothesis is that a “cascade of positive effects” will follow from the cash transfers, and that if proved correct, this has implications for public policy and “the potential to…affect the lives of millions of disadvantaged families with young children.”

Brain Trust, Kimberly G. Noble

  • Children who live in poverty tend to perform worse than peers in school on a bevy of different tests. They are less likely to graduate from high school and then continue on to college and are more apt to be underemployed once they enter the workforce.
  • Research that crosses neuroscience with sociology has begun to show that educational and occupational disadvantages that result from growing up poor can lead to significant differences in the size, shape and functioning of children’s brains.
  • Poverty’s potential to hijack normal brain development has led to plans for studying whether a simple intervention might reverse these injurious effects. A study now in the planning stages will explore if a modest subsidy can enhance brain health.

BasicIncome.org

***

The goal of Dr. Noble’s research is to better characterize socioeconomic disparities in children’s cognitive and brain development. Ongoing studies in her lab address the timing of neurocognitive disparities in infancy and early childhood, as well as the particular exposures and experiences that account for these disparities, including access to material resources, richness of language exposure, parenting style and exposure to stress. Finally, she is interested in applying this work to the design of interventions that aim to target gaps in school readiness, including early literacy, math, and self-regulation skills. She is honored to be part of a national team of social scientists and neuroscientists planning the first clinical trial of poverty reduction, which aims to estimate the causal impact of income supplementation on children’s cognitive, emotional and brain development in the first three years of life.

Columbia University

***

A short review on the link between poverty, children’s cognition and brain development, 13th March 2017

In the latest issue of Scientific American, Kimberly Noble, associate professor of neuroscience and education, reviews her work and introduces an ambitious research project that may help understand the cause-and-effect connection between poverty and children’s brain development.

For the past 15 years, Noble and her colleagues have gathered evidence to explain how socioeconomic disparities may underlie differences in children’s cognition and brain development. In the course of their research they have found for example that children living in poverty tend to have reduced cognitive skills – including language, memory skills and cognitive control (Figure 1).

Figure 1. Wealth effect

More recently, they published evidence showing that the socio-economic status of parents (as assessed using parental education, income and occupation) can also predict children’s brain structure.

By measuring the cortical surface area of children’s brains (ie the area of the surface of the cortex, the outer layer of the brain which contains all the neurons), they found that lower family income was linked to smaller cortical surface area, especially in brain regions involved in language and cognitive control abilities (Figure 2 – in magenta).

Figure 2. A Brain on Poverty

In the same research, they also found that longer parental education was linked to increased hippocampus volume in children, a brain structure essential for memory processes.

Overall, Noble’s work adds to a growing body of research showing the negative relation between poverty and brain development and these findings may explain (at least in part) why children from poor families are less likely to obtain good grades at school, graduate from high-school or attend college.

What is less known however, is the causal mechanism underlying this relationship. As Noble describes, differences in school and neighbourhood quality, chronic stress in the family home, less nurturing parenting styles or a combination of all these factors might explain the impact of poverty on brain development and cognition.

To better understand the causal effect of poverty, Noble has teamed up with economists and developmental psychologists and together they will soon launch a large-scale experiment, or “randomised controlled trial”. As part of this experiment, 1,000 US women from low-income backgrounds will be recruited soon after giving birth and will be followed over a three-year period. Half of the women will receive $333 per month (the “experimental” group) and the other half will receive $20 per month (the “control” group). Mothers and children will be monitored throughout the study, and mothers will be able to spend the money as they wish, without any constraints.
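To make the design concrete, here is a minimal sketch of how the trial’s headline comparison could be analysed once the data are in. The effect size, outcome scale and noise level below are invented for illustration; only the design itself (1,000 mothers, $333 versus $20 per month, random assignment) comes from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                              # mothers recruited soon after giving birth
high_cash = rng.random(n) < 0.5       # random assignment: $333/month vs $20/month

# Simulated age-3 cognitive scores (arbitrary units); assume a modest true benefit.
scores = rng.normal(100, 15, n) + np.where(high_cash, 3.0, 0.0)

effect = scores[high_cash].mean() - scores[~high_cash].mean()
se = np.sqrt(scores[high_cash].var(ddof=1) / high_cash.sum()
             + scores[~high_cash].var(ddof=1) / (~high_cash).sum())
print(f"estimated effect: {effect:.2f} points (standard error {se:.2f})")
# Because assignment is random, this difference in means estimates the causal
# effect of the extra income - something correlational studies cannot provide.
```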

By comparing children belonging to the experimental group to those in the control group, researchers will be able to observe how increases in family income may directly benefit cognition and brain development. They will also be able to test whether the way mothers use the extra income is a relevant factor to explain these benefits.

Noble concludes that “although income may not be the only factor that determines a child’s developmental trajectory, it may be the easiest one to alter” through social policy. And given that 25% of American children and 12% of British children are affected by poverty (as reported by UNICEF in 2012), policies designed to alleviate poverty may have the capacity to reach and improve the life chances of millions of children.

NGN is looking forward to seeing the results of this large-scale experiment. We expect that this project, in association with other research studies, will improve our understanding of the link between poverty and child development, and will help design better interventions to support disadvantaged children.

Nature Groups

***

Socioeconomic inequality and children’s brain development. 

Research addresses issues at the intersection of psychology, neuroscience and public policy.

By Kimberly G. Noble, MD, PhD

Kimberly Noble, MD, PhD, is an associate professor of neuroscience and education at Teachers College, Columbia University. She received her undergraduate, graduate and medical degrees at the University of Pennsylvania. As a neuroscientist and board-certified pediatrician, she studies how inequality relates to children’s cognitive and brain development. Noble’s work has been supported by several federal and foundation grants, and she was named a “Rising Star” by the Association for Psychological Science. Together with a team of social scientists and neuroscientists from around the United States, she is planning the first clinical trial of poverty reduction to assess the causal impact of income on cognitive and brain development in early childhood.

Kimberly Noble website.

What can neuroscience tell us about why disadvantaged children are at risk for low achievement and poor mental health? How early in infancy does socioeconomic disadvantage leave an imprint on the developing brain, and what factors explain these links? How can we best apply this work to inform interventions? These and other questions are the focus of the research my colleagues and I have been addressing for the last several years.

What is socioeconomic status and why is it of interest to neuroscientists?

The developing human brain is remarkably malleable to experience. Of course, a child’s experience varies tremendously based on his or her family’s circumstances (McLoyd, 1998). And so, as neuroscientists, we can use family circumstance as a lens through which to better understand how experience relates to brain development.

Family socioeconomic status, or SES, is typically considered to include parental educational attainment, occupational prestige and income (McLoyd, 1998); subjective social status, or where one sees oneself on the social hierarchy, may also be taken into account (Adler, Epel, Castellazzo & Ickovics, 2000). A large literature has established that disparities in income and human capital are associated with substantial differences in children’s learning and school performance. For example, socioeconomic differences are observed across a range of important cognitive and achievement measures for children and adolescents, including IQ, literacy, achievement test scores and high school graduation rates (Brooks-Gunn & Duncan, 1997). These differences in achievement in turn result in dramatic differences in adult economic well-being and labor market success.

However, although outcomes such as school success are clearly critical for understanding disparities in development and cognition, they tell us little about the underlying neural mechanisms that lead to these differences. Distinct brain circuits support discrete cognitive skills, and differentiating between underlying neural substrates may point to different causal pathways and approaches for intervention (Farah et al., 2006; Hackman & Farah, 2009; Noble, McCandliss, & Farah, 2007; Raizada & Kishiyama, 2010). Studies that have used a neurocognitive framework to investigate disparities have documented that children and adolescents from socioeconomically disadvantaged backgrounds tend to perform worse than their more advantaged peers on several domains, most notably in language, memory, self-regulation and socio-emotional processing (Hackman & Farah, 2009; Hackman, Farah, & Meaney, 2010; Noble et al., 2007; Noble, Norman, & Farah, 2005; Raizada & Kishiyama, 2010).

Family socioeconomic circumstance and children’s brain structure

More recently, we and other neuroscientists have extended this line of research to examine how family socioeconomic circumstances relate to differences in the structure of the brain itself. For example, in the largest study of its kind to date, we analyzed the brain structure of 1099 children and adolescents recruited from socioeconomically diverse homes from ten sites across the United States (Noble, Houston et al., 2015). We were specifically interested in the structure of the cerebral cortex, or the outer layer of brain cells that does most of the cognitive “heavy lifting.” We found that both parental educational attainment and family income accounted for differences in the surface area, or size of the “nooks and crannies” of the cerebral cortex. These associations were found across much of the brain, but were particularly pronounced in areas that support language and self-regulation — two of the very skills that have been repeatedly documented to show large differences along socioeconomic lines.

Several points about these findings are worth noting. First, genetic ancestry, or the proportion of ancestral descent for each of six major continental populations, was held constant in the analyses. Thus, although race and SES tend to be confounded in the U.S., we can say that the socioeconomic disparities in brain structure that we observed were independent of genetically-defined race. Second, we observed dramatic individual differences, or variation from person to person. That is, there were many children and adolescents from disadvantaged homes who had larger cortical surface areas, and many children from more advantaged homes who had smaller surface areas. This means that our research team could in no way accurately predict a child’s brain size simply by knowing his or her family income alone. Finally, the relationship between family income and surface area was nonlinear, such that the steepest gradient was seen at the lowest end of the income spectrum. That is, dollar for dollar, differences in family income were associated with proportionately greater differences in brain structure among the most disadvantaged families.

More recently, we also examined the thickness of the cerebral cortex in the same sample (Piccolo, et al., 2016). In general, as we get older, our cortices tend to get thinner. Specifically, cortical thickness decreases rapidly in childhood and early adolescence, followed by a more gradual thinning, and ultimately plateauing in early- to mid-adulthood (Raznahan et al., 2011; Schnack et al., 2014; Sowell et al., 2003). Our work suggests that family socioeconomic circumstance may moderate this trajectory. 

Specifically, at lower levels of family SES, we observed relatively steep age-related decreases in cortical thickness earlier in childhood, and subsequent leveling off during adolescence. In contrast, at higher levels of family SES, we observed more gradual age-related reductions in cortical thickness through at least late adolescence. We speculated that these findings may reflect an abbreviated period of cortical thinning in lower SES environments, relative to a more prolonged period of cortical thinning in higher SES environments. It is possible that socioeconomic disadvantage is a proxy for experiences that narrow the sensitive period, or time window for certain aspects of brain development that are malleable to environmental influences, thereby accelerating maturation (Tottenham, 2015).

Are these socioeconomic differences in brain structure clinically meaningful? Early work would suggest so. In our work, we have found that differences in cortical surface area partially accounted for links between family income and children’s executive function skills (Noble, Houston et al., 2015). Independent work in other labs has suggested that differences in brain structure may account for between 15 and 44 percent of the family income-related achievement gap in adolescence (Hair, Hanson, Wolfe & Pollak, 2015; Mackey et al., 2015). This line of research is still in its infancy, however, and several outstanding questions remain to be addressed.

How early are socioeconomic disparities in brain development detectable?

By the start of school, it is apparent that dramatic socioeconomic disparities in children’s cognitive functioning are already evident, and indeed, several studies have found that socioeconomic disparities in language (Fernald, Marchman & Weisleder, 2013; Noble, Engelhardt et al., 2015; Rowe & Goldin-Meadow, 2009) and memory (Noble, Engelhardt et al., 2015) are already present by the second year of life. But methodologies that assess brain function or structure may be more sensitive to differences than are tests of behavior. This raises the question of just how early we can detect socioeconomic disparities in the structure or function of children’s brains.

 One group reported socioeconomic differences in resting electroencephalogram (EEG) activity — which indexes electrical activity of the brain as measured at the scalp — as early as 6–9 months of age (Tomalski et al., 2013). Recent work by our group, however, found no correlation between SES and the same EEG measures within the first four days following birth (Brito, Fifer, Myers, Elliott & Noble, 2016), raising the possibility that some of these differences in brain function may emerge in part as a result of early differences in postnatal experience. Of course, a longitudinal study assessing both the prenatal and postnatal environments would be necessary to formally test this hypothesis. Furthermore, another group recently reported that, among a group of African-American, female infants imaged at 5 weeks of age, socioeconomic disadvantage was associated with smaller cortical and deep gray matter volumes (Betancourt et al., 2015). It is thus also likely that at least some socioeconomic differences in brain development are the result of socioeconomic differences in the prenatal environment (e.g., maternal diet, stress) and/or genetic differences.

Disentangling links among socioeconomic disparities, modifiable experiences and brain development represents a clear priority for future research. Are the associations between SES and brain development the result of differences in experiences that can serve as the targets of intervention, such as differences in nutrition, housing and neighborhood quality, parenting style, family stress and/or education? Certainly, the preponderance of social science evidence would suggest that such differences in experience are likely to account at least in part for differences in child and adolescent development (Duncan & Magnuson, 2012). However, few studies have directly examined links among SES, experience and the brain (Luby et al., 2013). In my lab, we are actively focusing on these issues, with specific interest in how chronic stress and the home language environment may, in part, explain our findings.

How can this work inform interventions?

Quite a few interventions aim to reduce socioeconomic disparities in children’s achievement. Whether school-based or home-based, many are quite effective, though frequently face challenges: High-quality interventions are expensive, difficult to scale up and often suffer from “fadeout,” or the phenomenon whereby the positive effects of the intervention dwindle with time once children are no longer receiving services.

What about the effects of directly supplementing family income? Rather than providing services, such “cash transfer” interventions have the potential to empower families to make the financial decisions they deem best for themselves and their children. Experimental and quasi-experimental studies in the social sciences, both domestically and in the developing world, have suggested the promise of direct income supplementation (Duncan & Magnuson, 2012).

To date, linkages between poverty and brain development have been entirely correlational in nature; the field of neuroscience is silent on the causal connections between poverty and brain development. As such, I am pleased to be part of a team of social scientists and neuroscientists who are currently planning and raising funds to launch the first-ever randomized experiment testing the causal connections between poverty reduction and brain development.

The ambition of this study is large, though the premise is simple. We plan to recruit 1,000 low-income U.S. mothers at the time of their child’s birth. Mothers will be randomized to receive a large monthly income supplement or a nominal monthly income supplement. Families will be tracked longitudinally to definitively assess the causal impact of this unconditional cash transfer on cognitive and brain development in the first three years following birth, when we believe the developing brain is most malleable to experience.

We hypothesize that increased family income will trigger a cascade of positive effects throughout the family system. As a result, across development, children will be better positioned to learn foundational skills. If our hypotheses are borne out, this proposed randomized trial has the potential to inform social policies that affect the lives of millions of disadvantaged families with young children. While income may not be the only or even the most important factor in determining children’s developmental trajectories, it may be the most manipulable from a policy perspective.

American Psychological Association

Abuse breeds child abusers – Jarrod Gilbert. 

Often when I’m doing research I dance a silly jig when I gleefully unearth a gem of information hitherto unknown or long forgotten. In studying the violent deaths of kids that doesn’t happen.

There was no dance of joy when I discovered New Zealanders are more likely to be homicide victims in their first tender years than at any other time in their lives. But nothing numbs you like the photographs of dead children.

Little bodies lying there limp with little hands and little fingers, covered in scratches and an array of bruises, some dark black and some fading, looking as vulnerable dead as they were when they were alive.

James Whakaruru’s misery ended when he was killed in 1999. He had endured four years of life and that was all he could take. He was hit with a small hammer, a jug cord and a vacuum cleaner hose. During one beating his mind was so confused he stared blankly ahead. His tormentor responded by poking him in the eyes. It was a stomping that eventually switched out his little light. It was a case that even the Mongrel Mob condemned, calling the cruelty “amongst the lowest of any act”.

An inquiry by the Commissioner for Children found a number of failings by state agencies, which were all too aware of the boy’s troubled existence. The Commissioner said James became a hero because changes made to Government agencies would save lives in the future. Yet such horrors have continued. My colleague Greg Newbold has found that since 1992, on average, nine children (under 15) have been killed each year as a result of maltreatment, and the rate has not abated in recent years. In 2015, there were 14 such deaths, one of which was three-year-old Moko Rangitoheriri, or baby Moko as we knew him when he gained posthumous celebrity.

Moko’s life was the same as James’s, and he too died in agony; he endured weeks of being beaten, kicked, and smeared with faeces. That was the short life he knew. Most of us will struggle to comprehend these acts but we are desperate to stop them. Desperate to ensure state agencies are capable of intervening to protect those who cannot protect themselves and who, through no fault of their own, are subjected to cruelty by those who are meant to protect them.

The reasons for intervening don’t stop with the imperative to save young lives. For every child killed there are dozens who live wretched existences and from this cohort of unfortunates will come the next generation of abusers.  Solving the problems of today, then, is not just a moral imperative but is also about producing a positive ripple effect.

And this is why, in the cases of James Whakaruru and baby Moko, the best and most efficient time for intervention was not the period leading up to their abuse, but many years before they were born. The men involved in each of those killings came from the same family. And it seems their lives were transient and tragic: one spent time in the now infamous Epuni Boys home, which is ground zero for calls for an inquiry into state care abuse (and, incidentally, the birthplace of the Mongrel Mob).

Once young victims themselves, those boys crawled into adulthood and became violent men capable of imparting cruelty onto kids in their care.

This cycle of abuse is well known, yet state spending on the problem is poorly aligned to it, and our targeting of the problem is reactionary and punitive rather than proactive and preventative.

Of the $1.4 billion we spend on family and sexual violence annually, less than 10 per cent is spent on interventions, of which just 1.5 per cent is spent on primary prevention. The morality of that is questionable, the economics even more so.

Not only must things be approached differently but there needs to be greater urgency in our thinking. It’s perhaps trite to say, but if nine New Zealanders were killed every year in acts of terrorism politicians would never stop talking about it and it would be priority number one.

In an election year, that’s exactly where this issue should be. If the kids in violent homes had a voice, that’s what they’d be saying.

But if the details of such deaths don’t move our political leaders to urgent action, I rather fear nothing will. Maybe they should be made to look at the photographs.

• Dr Jarrod Gilbert is a sociologist at the University of Canterbury and the lead researcher at Independent Research Solutions.

No, wealth isn’t created at the top. It is merely devoured there – Rutger Bregman. 

This piece is about one of the biggest taboos of our times. About a truth that is seldom acknowledged, and yet, on reflection, cannot be denied. The truth that we are living in an inverse welfare state.

These days, politicians from the left to the right assume that most wealth is created at the top. By the visionaries, by the job creators, and by the people who have “made it”. By the go-getters oozing talent and entrepreneurialism that are helping to advance the whole world.

Now, we may disagree about the extent to which success deserves to be rewarded – the philosophy of the left is that the strongest shoulders should bear the heaviest burden, while the right fears high taxes will blunt enterprise – but across the spectrum virtually all agree that wealth is created primarily at the top.

So entrenched is this assumption that it’s even embedded in our language. When economists talk about “productivity”, what they really mean is the size of your paycheck. And when we use terms like “welfare state”, “redistribution” and “solidarity”, we’re implicitly subscribing to the view that there are two strata: the makers and the takers, the producers and the couch potatoes, the hardworking citizens – and everybody else.

In reality, it is precisely the other way around. In reality, it is the waste collectors, the nurses, and the cleaners whose shoulders are supporting the apex of the pyramid. They are the true mechanism of social solidarity. Meanwhile, a growing share of those we hail as “successful” and “innovative” are earning their wealth at the expense of others. The people getting the biggest handouts are not down around the bottom, but at the very top. Yet their perilous dependence on others goes unseen. Almost no one talks about it. Even for politicians on the left, it’s a non-issue.

To understand why, we need to recognise that there are two ways of making money. The first is what most of us do: work. That means tapping into our knowledge and know-how (our “human capital” in economic terms) to create something new, whether that’s a takeout app, a wedding cake, a stylish updo, or a perfectly poured pint. To work is to create. Ergo, to work is to create new wealth.

But there is also a second way to make money. That’s the rentier way: by leveraging control over something that already exists, such as land, knowledge, or money, to increase your wealth. You produce nothing, yet profit nonetheless. By definition, the rentier makes his living at others’ expense, using his power to claim economic benefit.

For those who know their history, the term “rentier” conjures associations with heirs to estates, such as the 19th century’s large class of useless rentiers, well-described by the French economist Thomas Piketty. These days, that class is making a comeback. (Ironically, however, conservative politicians adamantly defend the rentier’s right to lounge around, deeming inheritance tax to be the height of unfairness.) But there are also other ways of rent-seeking. From Wall Street to Silicon Valley, from big pharma to the lobby machines in Washington and Westminster, zoom in and you’ll see rentiers everywhere.

There is no longer a sharp dividing line between working and rentiering. In fact, the modern-day rentier often works damn hard. Countless people in the financial sector, for example, apply great ingenuity and effort to amass “rent” on their wealth. Even the big innovations of our age – businesses like Facebook and Uber – are interested mainly in expanding the rentier economy. The problem with most rich people is therefore not that they are couch potatoes. Many a CEO toils 80 hours a week to multiply his allowance. It’s hardly surprising, then, that they feel wholly entitled to their wealth.

It may take quite a mental leap to see our economy as a system that shows solidarity with the rich rather than the poor. So I’ll start with the clearest illustration of modern freeloaders at the top: bankers. Studies conducted by the International Monetary Fund and the Bank for International Settlements – not exactly leftist thinktanks – have revealed that much of the financial sector has become downright parasitic. Instead of creating wealth, it gobbles wealth up whole.

Don’t get me wrong. Banks can help to gauge risks and get money where it is needed, both of which are vital to a well-functioning economy. But consider this: economists tell us that the optimum level of total private-sector debt is 100% of GDP. Based on this equation, if the financial sector only grows, it won’t equal more wealth, but less. So here’s the bad news. In the United Kingdom, private-sector debt is now at 157.5%. In the United States the figure is 188.8%.

In other words, a big part of the modern banking sector is essentially a giant tapeworm gorging on a sick body. It’s not creating anything new, merely sucking others dry. Bankers have found a hundred and one ways to accomplish this. The basic mechanism, however, is always the same: offer loans like it’s going out of style, which in turn inflates the price of things like houses and shares, then earn a tidy percentage off those overblown prices (in the form of interest, commissions, brokerage fees, or what have you), and if the shit hits the fan, let Uncle Sam mop it up.

The financial innovation concocted by all the math whizzes working in modern banking (instead of at universities or companies that contribute to real prosperity) basically boils down to maximising the total amount of debt. And debt, of course, is a means of earning rent. So for those who believe that pay ought to be proportionate to the value of work, the conclusion we have to draw is that many bankers should be earning a negative salary; a fine, if you will, for destroying more wealth than they create.

Bankers are the most obvious class of closet freeloaders, but they are certainly not alone. Many a lawyer and accountant wields a similar revenue model. Take tax evasion. Untold numbers of hardworking, academically degreed professionals make a good living at the expense of the populations of other countries. Or take the tide of privatisations over the past three decades, which have been all but a carte blanche for rentiers. One of the richest people in the world, Carlos Slim, earned his millions by obtaining a monopoly over the Mexican telecom market and then hiking prices sky high. The same goes for the Russian oligarchs who rose after the Berlin Wall fell, who bought up valuable state-owned assets for a song to live off the rent.

But here’s the rub. Most rentiers are not as easily identified as the greedy banker or manager. Many are disguised. On the face of it, they look like industrious folks, because for part of the time they really are doing something worthwhile. Precisely that makes us overlook their massive rent-seeking.

Take the pharmaceutical industry. Companies like GlaxoSmithKline and Pfizer regularly unveil new drugs, yet most real medical breakthroughs are made quietly at government-subsidised labs. Private companies mostly manufacture medications that resemble what we’ve already got. They get these patented and, with a hefty dose of marketing, a legion of lawyers, and a strong lobby, can live off the profits for years. In other words, the vast revenues of the pharmaceutical industry are the result of a tiny pinch of innovation and fistfuls of rent.

Even paragons of modern progress like Apple, Amazon, Google, Facebook, Uber and Airbnb are woven from the fabric of rentierism. Firstly, because they owe their existence to government discoveries and inventions (every sliver of fundamental technology in the iPhone, from the internet to batteries and from touchscreens to voice recognition, was invented by researchers on the government payroll). And second, because they tie themselves into knots to avoid paying taxes, retaining countless bankers, lawyers, and lobbyists for this very purpose.

Even more important, many of these companies function as “natural monopolies”, operating in a positive feedback loop of increasing growth and value as more and more people contribute free content to their platforms. Companies like this are incredibly difficult to compete with, because as they grow bigger, they only get stronger.

Aptly characterising this “platform capitalism” in an article, Tom Goodwin writes: “Uber, the world’s largest taxi company, owns no vehicles. Facebook, the world’s most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world’s largest accommodation provider, owns no real estate.”

So what do these companies own? A platform. A platform that lots and lots of people want to use. Why? First and foremost, because they’re cool and they’re fun – and in that respect, they do offer something of value. However, the main reason why we’re all happy to hand over free content to Facebook is because all of our friends are on Facebook too, because their friends are on Facebook … because their friends are on Facebook.

Most of Mark Zuckerberg’s income is just rent collected off the millions of picture and video posts that we give away daily for free. And sure, we have fun doing it. But we also have no alternative – after all, everybody is on Facebook these days. Zuckerberg has a website that advertisers are clamouring to get onto, and that doesn’t come cheap. Don’t be fooled by endearing pilots with free internet in Zambia. Stripped down to essentials, it’s an ordinary ad agency. In fact, in 2015 Google and Facebook pocketed an astounding 64% of all online ad revenue in the US.

But don’t Google and Facebook make anything useful at all? Sure they do. The irony, however, is that their best innovations only make the rentier economy even bigger. They employ scores of programmers to create new algorithms so that we’ll all click on more and more ads. Uber has usurped the whole taxi sector just as Airbnb has upended the hotel industry and Amazon has overrun the book trade. The bigger such platforms grow the more powerful they become, enabling the lords of these digital feudalities to demand more and more rent.

Think back a minute to the definition of a rentier: someone who uses their control over something that already exists in order to increase their own wealth. The feudal lord of medieval times did that by building a tollgate along a road and making everybody who passed by pay. Today’s tech giants are doing basically the same thing, but transposed to the digital highway. Using technology funded by taxpayers, they build tollgates between you and other people’s free content and all the while pay almost no tax on their earnings.

This is the so-called innovation that has Silicon Valley gurus in raptures: ever bigger platforms that claim ever bigger handouts. So why do we accept this? Why does most of the population work itself to the bone to support these rentiers?

I think there are two answers. Firstly, the modern rentier knows to keep a low profile. There was a time when everybody knew who was freeloading. The king, the church, and the aristocrats controlled almost all the land and made peasants pay dearly to farm it. But in the modern economy, making rentierism work is a great deal more complicated. How many people can explain a credit default swap, or a collateralized debt obligation?  Or the revenue model behind those cute Google Doodles? And don’t the folks on Wall Street and in Silicon Valley work themselves to the bone, too? Well then, they must be doing something useful, right?

Maybe not. The typical workday of Goldman Sachs’ CEO may be worlds away from that of King Louis XIV, but their revenue models both essentially revolve around obtaining the biggest possible handouts. “The world’s most powerful investment bank,” wrote the journalist Matt Taibbi about Goldman Sachs, “is a great vampire squid wrapped around the face of humanity, relentlessly jamming its blood funnel into anything that smells like money.”

But far from squids and vampires, the average rich freeloader manages to masquerade quite successfully as a decent hard worker. He goes to great lengths to present himself as a “job creator” and an “investor” who “earns” his income by virtue of his high “productivity”. Most economists, journalists, and politicians from left to right are quite happy to swallow this story. Time and again language is twisted around to cloak funneling and exploitation as creation and generation.

However, it would be wrong to think that all this is part of some ingenious conspiracy. Many modern rentiers have convinced even themselves that they are bona fide value creators. When current Goldman Sachs CEO Lloyd Blankfein was asked about the purpose of his job, his straight-faced answer was that he is “doing God’s work”. The Sun King would have approved.

The second thing that keeps rentiers safe is even more insidious. We’re all wannabe rentiers. They have made millions of people complicit in their revenue model. Consider this: What are our financial sector’s two biggest cash cows? Answer: the housing market and pensions. Both are markets in which many of us are deeply invested.

Recent decades have seen more and more people contract debts to buy a home, and naturally it’s in their interest if house prices continue to scale new heights (read: burst bubble upon bubble). The same goes for pensions. Over the past few decades we’ve all scrimped and saved up a mountainous pension piggy bank. Now pension funds are under immense pressure to ally with the biggest exploiters in order to ensure they pay out enough to please their investors.

The fact of the matter is that feudalism has been democratised. To a lesser or greater extent, we are all depending on handouts. En masse, we have been made complicit in this exploitation by the rentier elite, resulting in a political covenant between the rich rent-seekers and the homeowners and retirees.

Don’t get me wrong, most homeowners and retirees are not benefiting from this situation. On the contrary, the banks are bleeding them far beyond the extent to which they themselves profit from their houses and pensions. Still, it’s hard to point fingers at a kleptomaniac when you have sticky fingers too.

So why is this happening? The answer can be summed up in three little words: Because it can.

Rentierism is, in essence, a question of power. That the Sun King Louis XIV was able to exploit millions was purely because he had the biggest army in Europe. It’s no different for the modern rentier. He’s got the law, politicians and journalists squarely in his court. That’s why bankers get fined peanuts for preposterous fraud, while a mother on government assistance gets penalised within an inch of her life if she checks the wrong box.

The biggest tragedy of all, however, is that the rentier economy is gobbling up society’s best and brightest. Where once upon a time Ivy League graduates chose careers in science, public service or education, these days they are more likely to opt for banks, law firms, or trumped up ad agencies like Google and Facebook. When you think about it, it’s insane. We are forking over billions in taxes to help our brightest minds on and up the corporate ladder so they can learn how to score ever more outrageous handouts.

One thing is certain: countries where rentiers gain the upper hand gradually fall into decline. Just look at the Roman Empire. Or Venice in the 15th century. Look at the Dutch Republic in the 18th century. Like a parasite stunts a child’s growth, so the rentier drains a country of its vitality.

What innovation remains in a rentier economy is mostly just concerned with further bolstering that very same economy. This may explain why the big dreams of the 1970s, like flying cars, curing cancer, and colonising Mars, have yet to be realised, while bankers and ad-makers have at their fingertips technologies a thousand times more powerful.

Yet it doesn’t have to be this way. Tollgates can be torn down, financial products can be banned, tax havens dismantled, lobbies tamed, and patents rejected. Higher taxes on the ultra-rich can make rentierism less attractive, precisely because society’s biggest freeloaders are at the very top of the pyramid. And we can more fairly distribute our earnings on land, oil, and innovation through a system of, say, employee shares, or a universal basic income. 

But such a revolution will require a wholly different narrative about the origins of our wealth. It will require ditching the old-fashioned faith in “solidarity” with a miserable underclass that deserves to be borne aloft on the market-level salaried shoulders of society’s strongest. All we need to do is to give real hard-working people what they deserve.

And, yes, by that I mean the waste collectors, the nurses, the cleaners – theirs are the shoulders that carry us all.

The Guardian

The 1930s were humanity’s darkest, bloodiest hour. Are you paying attention? – Jonathan Freedland. 

Even to mention the 1930s is to evoke the period when human civilisation entered its darkest, bloodiest chapter. No case needs to be argued; just to name the decade is enough. It is a byword for mass poverty, violent extremism and the gathering storm of world war. “The 1930s” is not so much a label for a period of time as it is rhetorical shorthand – a two-word warning from history.

Witness the impact of an otherwise boilerplate broadcast by the Prince of Wales last December that made headlines. “Prince Charles warns of return to the ‘dark days of the 1930s’ in Thought for the Day message.” Or consider the reflex response to reports that Donald Trump was to maintain his own private security force even once he had reached the White House. The Nobel prize-winning economist Paul Krugman’s tweet was typical: “That 1930s show returns.”

Because that decade was scarred by multiple evils, the phrase can be used to conjure up serial spectres. It has an international meaning, with a vocabulary that centres on Hitler and Nazism and the failure to resist them: from brownshirts and Goebbels to appeasement, Munich and Chamberlain. And it has a domestic meaning, with a lexicon and imagery that refers to the Great Depression: the dust bowl, soup kitchens, the dole queue and Jarrow. It was this second association that gave such power to a statement from the usually dry Office for Budget Responsibility, following then-chancellor George Osborne’s autumn statement in 2014. The OBR warned that public spending would be at its lowest level since the 1930s; the political damage was enormous and instant.

In recent months, the 1930s have been invoked more than ever, not to describe some faraway menace but to warn of shifts under way in both Europe and the United States. The surge of populist, nationalist movements in Europe, and their apparent counterpart in the US, has stirred unhappy memories and has, perhaps inevitably, had commentators and others reaching for the historical yardstick to see if today measures up to 80 years ago.

Why is it the 1930s to which we return, again and again? For some sceptics, the answer is obvious: it’s the only history anybody knows. According to this jaundiced view of the British school curriculum, Hitler and Nazis long ago displaced Tudors and Stuarts as the core, compulsory subjects of the past. When we fumble in the dark for a historical precedent, our hands keep reaching for the 30s because they at least come with a little light.

The more generous explanation centres on the fact that that period, taken together with the first half of the 1940s, represents a kind of nadir in human affairs. The Depression was, as Larry Elliott wrote last week, “the biggest setback to the global economy since the dawn of the modern industrial age”, leaving 34 million Americans with no income. The hyperinflation experienced in Germany – when a thief would steal a laundry-basket full of cash, chucking away the money in order to keep the more valuable basket – is the stuff of legend. And the Depression paved the way for history’s bloodiest conflict, the second world war which left, by some estimates, a mind-numbing 60 million people dead. At its centre was the Holocaust, the industrialised slaughter of 6 million Jews by the Nazis: an attempt at the annihilation of an entire people.

In these multiple ways, then, the 1930s function as a historical rock bottom, a demonstration of how low humanity can descend. The decade’s illustrative power as a moral ultimate accounts for why it is deployed so fervently and so often.

Less abstractly, if we keep returning to that period, it’s partly because it can justifiably claim to be the foundation stone of our modern world. The international and economic architecture that still stands today – even if it currently looks shaky and threatened – was built in reaction to the havoc wreaked in the 30s and immediately afterwards. The United Nations, the European Union, the International Monetary Fund, Bretton Woods: these were all born of a resolve not to repeat the mistakes of the 30s, whether those mistakes be rampant nationalism or beggar-my-neighbour protectionism. The world of 2017 is shaped by the trauma of the 1930s.

One telling, human illustration came in recent global polling for the Journal of Democracy, which showed an alarming decline in the number of people who believed it was “essential” to live in a democracy. From Sweden to the US, from Britain to Australia, only one in four of those born in the 1980s regarded democracy as essential. Among those born in the 1930s, the figure was at or above 75%. Put another way, those who were born into the hurricane have no desire to feel its wrath again.

Most of these dynamics are long established, but now there is another element at work. As the 30s move from living memory into history, as the hurricane moves further away, so what had once seemed solid and fixed – specifically, the view that that was an era of great suffering and pain, whose enduring value is as an eternal warning – becomes contested, even upended.

Witness the remarks of Steve Bannon, chief strategist in Donald Trump’s White House and the former chairman of the far-right Breitbart website. In an interview with the Hollywood Reporter, Bannon promised that the Trump era would be “as exciting as the 1930s”. (In the same interview, he said “Darkness is good” – citing Satan, Darth Vader and Dick Cheney as examples.)

“Exciting” is not how the 1930s are usually remembered, but Bannon did not choose his words by accident. He is widely credited with the authorship of Trump’s inaugural address, which twice used the slogan “America first”. That phrase has long been off-limits in US discourse, because it was the name of the movement – packed with nativists and antisemites, and personified by the celebrity aviator Charles Lindbergh – that sought to keep the US out of the war against Nazi Germany and to make an accommodation with Hitler. Bannon, who considers himself a student of history, will be fully aware of that 1930s association – but embraced it anyway.

That makes him an outlier in the US, but one with powerful allies beyond America’s shores. Timothy Snyder, professor of history at Yale and the author of On Tyranny: Twenty Lessons from the Twentieth Century, notes that European nationalists are also keen to overturn the previously consensual view of the 30s as a period of shame, never to be repeated. Snyder mentions Hungary’s prime minister, Viktor Orban, who avowedly seeks the creation of an “illiberal” state, and who, says Snyder, “looks fondly on that period as one of healthy national consciousness”.

The more arresting example is, perhaps inevitably, Vladimir Putin. Snyder notes Putin’s energetic rehabilitation of Ivan Ilyin, a philosopher of Russian fascism influential eight decades ago. Putin has exhumed Ilyin both metaphorically and literally, digging up and moving his remains from Switzerland to Russia.

Among other things, Ilyin wrote that individuality was evil; that the “variety of human beings” represented a failure of God to complete creation; that what mattered was not individual people but the “living totality” of the nation; that Hitler and Mussolini were exemplary leaders who were saving Europe by dissolving democracy; and that fascist holy Russia ought to be governed by a “national dictator”. Ilyin spent the 30s exiled from the Soviet Union, but Putin has brought him back, quoting him in his speeches and laying flowers on his grave.

Still, Putin, Orbán and Bannon apart, when most people compare the current situation to that of the 1930s, they don’t mean it as a compliment. And the parallel has felt irresistible, so that when Trump first imposed his travel ban, for example, the instant comparison was with the door being closed to refugees from Nazi Germany in the 30s. (Theresa May was on the receiving end of the same comparison when she quietly closed off the Dubs route to child refugees from Syria.)

When Trump attacked the media as purveyors of “fake news”, the ready parallel was Hitler’s slamming of the newspapers as the Lügenpresse, the lying press (a term used by today’s German far right). When the Daily Mail branded a panel of high court judges “enemies of the people”, for their ruling that parliament needed to be consulted on Brexit, those who were outraged by the phrase turned to their collected works of European history, looking for the chapters on the 1930s.

The Great Depression

So the reflex is well-honed. But is it sound? Does any comparison of today and the 1930s hold up?

The starting point is surely economic, not least because the one thing everyone knows about the 30s – and which is common to both the US and European experiences of that decade – is the Great Depression. The current convulsions can be traced back to the crash of 2008, but the impact of that event and the shock that defined the 30s are not an even match. When discussing our own time, Krugman speaks instead of the Great Recession: a huge and shaping event, but one whose impact – measured, for example, in terms of mass unemployment – is not on the same scale. US joblessness reached 25% in the 1930s; even in the depths of 2009 it never broke the 10% barrier.

The political sphere reveals another mismatch between then and now. The 30s were characterised by ultra-nationalist and fascist movements seizing power in leading nations: Germany, Italy and Spain most obviously. The world is waiting nervously for the result of France’s presidential election in May: victory for Marine Le Pen would be seized on as the clearest proof yet that the spirit of the 30s is resurgent.

There is similar apprehension that Geert Wilders, who speaks of ridding the country of “Moroccan scum”, has led the polls ahead of Holland’s general election on Wednesday. And plenty of liberals will be perfectly content for the Christian Democrat Angela Merkel to prevail over her Social Democratic rival, Martin Schulz, just so long as the far-right Alternative für Deutschland makes no ground. Still, so far and as things stand, in Europe only Hungary and Poland have governments that seem doctrinally akin to those that flourished in the 30s.

That leaves the US, which dodged the bullet of fascistic rule in the 30s – although at times the success of the America First movement, which at its peak could count on more than 800,000 paid-up members, suggested such an outcome was far from impossible. (Hence the intended irony in the title of Sinclair Lewis’s 1935 novel, It Can’t Happen Here.)

Donald Trump has certainly had Americans reaching for their history textbooks, fearful that his admiration for strongmen, his contempt for restraints on executive authority, and his demonisation of minorities and foreigners means he marches in step with the demagogues of the 30s.

But even those most anxious about Trump still focus on the form the new presidency could take rather than the one it is already taking. David Frum, a speechwriter to George W. Bush, wrote a much-noticed essay for the Atlantic titled “How to build an autocracy”. It was billed as setting out “the playbook Donald Trump could use to set the country down a path towards illiberalism”. He was not arguing that Trump had already embarked on that route, just that he could (so long as the media came to heel and the public grew weary and worn down, shrugging in the face of obvious lies and persuaded that greater security was worth the price of lost freedoms).

Similarly, Trump has unloaded rhetorically on the free press – castigating them, Mail-style, as “enemies of the people” – but he has not closed down any newspapers. He meted out the same treatment via Twitter to a court that blocked his travel ban, rounding on the “so-called judge” – but he did eventually succumb to the courts’ verdict and withdrew his original executive order. He did not have the dissenting judges sacked or imprisoned; he has not moved to register or intern every Muslim citizen in the US; he has not suggested they wear identifying symbols.

These are crumbs of comfort; they are not intended to minimise the real danger Trump represents to the fundamental norms that underpin liberal democracy. Rather, the point is that we have not reached the 1930s yet. Those sounding the alarm are suggesting only that we may be travelling in that direction – which is bad enough.

Two further contrasts between now and the 1930s, one from each end of the sociological spectrum, are instructive. First, and particularly relevant to the US, is to ask: who is on the streets? In the 30s, much of the conflict was played out at ground level, with marchers and quasi-military forces duelling for control. The clashes of the Brownshirts with communists and socialists played a crucial part in the rise of the Nazis. (A turning point in the defeat of Oswald Mosley, Britain’s own little Hitler, came with his humbling in London’s East End, at the 1936 battle of Cable Street.)

But those taking to the streets today – so far – have tended to be opponents of the lurch towards extreme nationalism. In the US, anti-Trump movements – styling themselves, in a conscious nod to the 1930s, as “the resistance” – have filled city squares and plazas. The Women’s March led the way on the first day of the Trump presidency; then those protesters and others flocked to airports in huge numbers a week later, to obstruct the refugee ban. Those demonstrations have continued, and they supply an important contrast with 80 years ago. Back then, it was the fascists who were out first – and in force.

Snyder notes another key difference. “In the 1930s, all the stylish people were fascists: the film critics, the poets and so on.” He is speaking chiefly about Germany and Italy, and doubtless exaggerates to make his point, but he is right that today “most cultural figures tend to be against”. There are exceptions – Le Pen has her celebrity admirers, but Snyder speaks accurately when he says that now, in contrast with the 30s, there are “few who see fascism as a creative cultural force”.

Fear and loathing

So much for where the lines between then and now diverge. Where do they run in parallel?

The exercise is made complicated by the fact that ultra-nationalists are, so far, largely out of power where they ruled in the 30s – namely, Europe – and in power in the place where they were shut out in that decade, namely the US. It means that Trump has to be compared either to US movements that were strong but ultimately defeated, such as the America First Committee, or to those US figures who never governed on the national stage.

In that category stands Huey Long, the Louisiana strongman, who ruled that state as a personal fiefdom (and who was widely seen as the inspiration for the White House dictator at the heart of the Lewis novel).

“He was immensely popular,” says Tony Badger, former professor of American history at the University of Cambridge. Long would engage in the personal abuse of his opponents, often deploying colourful language aimed at mocking their physical characteristics. The judges were a frequent Long target, to the extent that he hounded one out of office – with fateful consequences.

Long went over the heads of the hated press, communicating directly with the voters via a medium he could control completely. In Trump’s day, that is Twitter, but for Long it was the establishment of his own newspaper, the Louisiana Progress (later the American Progress) – which Long had delivered via the state’s highway patrol and which he commanded be printed on rough paper, so that, says Badger, “his constituents could use it in the toilet”.

All this was tolerated by Long’s devotees because they lapped up his message of economic populism, captured by the slogan: “Share Our Wealth”. Tellingly, that resonated not with the very poorest – who tended to vote for Roosevelt, just as those earning below $50,000 voted for Hillary Clinton in 2016 – but with “the men who had jobs or had just lost them, whose wages had eroded and who felt they had lost out and been left behind”. That description of Badger’s could apply just as well to the demographic that today sees Trump as its champion.

Long never made it to the White House. In 1935, one month after announcing his bid for the presidency, he was assassinated, shot by the son-in-law of the judge Long had sought to remove from the bench. It’s a useful reminder that, no matter how hate-filled and divided we consider US politics now, the 30s were full of their own fear and loathing.

“I welcome their hatred,” Roosevelt would say of his opponents on the right. Nativist xenophobia was intense, even if most immigration had come to a halt with legislation passed in the previous decade. Catholics from eastern Europe were the target of much of that suspicion, while Lindbergh and the America Firsters played on enduring antisemitism.

This, remember, was in the midst of the Great Depression, when one in four US workers was out of a job. And surely this is the crucial distinction between then and now, between the Long phenomenon and Trump. As Badger summarises: “There was a real crisis then, whereas Trump’s is manufactured.”

And yet, scholars of the period are still hearing the insistent beep of their early warning systems. An immediate point of connection is globalisation, which is less novel than we might think. For Snyder, the 30s marked the collapse of the first globalisation, defined as an era in which a nation’s wealth becomes ever more dependent on exports. That pattern had been growing steadily more entrenched since the 1870s (just as the second globalisation took wing in the 1970s). Then, as now, it had spawned a corresponding ideology – a faith in liberal free trade as a global panacea – with, perhaps, the English philosopher Herbert Spencer in the role of the End of History essayist Francis Fukuyama. By the 1930s, and thanks to the Depression, that faith in globalisation’s ability to spread the wealth evenly had shattered. This time around, disillusionment has come a decade or so ahead of schedule.

The second loud alarm is clearly heard in the hostility to those deemed outsiders. Of course, the designated alien changes from generation to generation, but the impulse is the same: to see the family next door not as neighbours but as agents of some heinous worldwide scheme, designed to deprive you of peace, prosperity or what is rightfully yours. In 30s Europe, that was Jews. In 30s America, it was eastern Europeans and Jews. In today’s Europe, it’s Muslims. In America, it’s Muslims and Mexicans (with a nod from the so-called alt-right towards Jews). Then and now, the pattern is the same: an attempt to refashion the pain inflicted by globalisation and its discontents as the wilful act of a hated group of individuals. No need to grasp difficult, abstract questions of economic policy. We just need to banish that lot, over there.

The third warning sign, and it’s a necessary companion of the second, is a growing impatience with the rule of law and with democracy. “In the 1930s, many, perhaps even most, educated people had reached the conclusion that democracy was a spent force,” says Snyder. There were plenty of socialist intellectuals ready to profess their admiration for the efficiency of Soviet industrialisation under Stalin, just as rightwing thinkers were impressed by Hitler’s capacity for state action. In our own time, that generational plunge in the numbers regarding democracy as “essential” suggests a troubling echo.

Today’s European nationalists exhibit a similar impatience, especially with the rule of law: think of the Brexiters’ insistence that nothing can be allowed to impede “the will of the people”. As for Trump, it’s striking how very rarely he mentions democracy, still less praises it. “I alone can fix it” is his doctrine – the creed of the autocrat.

The geopolitical equivalent is a departure from, or even contempt for, the international rules-based system that has held since 1945 – in which trade, borders and the seas are loosely and imperfectly policed by multilateral institutions such as the UN, the EU and the World Trade Organisation. Admittedly, the international system was weaker to start with in the 30s, but it lay in pieces by the decade’s end: both Hitler and Stalin decided that the global rules no longer applied to them, that they could break them with impunity and get on with the business of empire-building.

If there’s a common thread linking 21st-century European nationalists to each other and to Trump, it is a similar, shared contempt for the structures that have bound together, and restrained, the principal world powers since the last war. Naturally, Le Pen and Wilders want to follow the Brexit lead and leave, or else break up, the EU. And, no less naturally, Trump supports them – as well as regarding Nato as “obsolete” and the UN as an encumbrance to US power (even if his subordinates rush to foreign capitals to say the opposite).

For historians of the period, the 1930s are always worthy of study because the decade proves that systems – including democratic republics – which had seemed solid and robust can collapse. That fate is possible, even in advanced, sophisticated societies. The warning never gets old.

But when we contemplate our forebears from eight decades ago, we should recall one crucial advantage we have over them. We have what they lacked. We have the memory of the 1930s. We can learn the period’s lessons and avoid its mistakes. Of course, cheap comparisons coarsen our collective conversation. But having a keen ear tuned to the echoes of a past that brought such horror? That is not just our right. It is surely our duty.

The Guardian

How a Ruthless Network of Super-Rich Ideologues Killed Choice and Destroyed People’s Faith in Politics – George Monbiot. 

Neoliberalism: the deep story that lies beneath Donald Trump’s triumph.

The events that led to Donald Trump’s election started in England in 1975. At a meeting a few months after Margaret Thatcher became leader of the Conservative party, one of her colleagues, or so the story goes, was explaining what he saw as the core beliefs of conservatism. She snapped open her handbag, pulled out a dog-eared book, and slammed it on the table. “This is what we believe,” she said. A political revolution that would sweep the world had begun.

The book was The Constitution of Liberty by Friedrich Hayek. Its publication, in 1960, marked the transition from an honest, if extreme, philosophy to an outright racket. The philosophy was called neoliberalism. It saw competition as the defining characteristic of human relations. The market would discover a natural hierarchy of winners and losers, creating a more efficient system than could ever be devised through planning or by design. Anything that impeded this process, such as significant tax, regulation, trade union activity or state provision, was counter-productive. Unrestricted entrepreneurs would create the wealth that would trickle down to everyone.

This, at any rate, is how it was originally conceived. But by the time Hayek came to write The Constitution of Liberty, the network of lobbyists and thinkers he had founded was being lavishly funded by multimillionaires who saw the doctrine as a means of defending themselves against democracy. Not every aspect of the neoliberal programme advanced their interests. Hayek, it seems, set out to close the gap.

He begins the book by advancing the narrowest possible conception of liberty: an absence of coercion. He rejects such notions as political freedom, universal rights, human equality and the distribution of wealth, all of which, by restricting the behaviour of the wealthy and powerful, intrude on the absolute freedom from coercion he demands.

Democracy, by contrast, “is not an ultimate or absolute value”. In fact, liberty depends on preventing the majority from exercising choice over the direction that politics and society might take.

He justifies this position by creating a heroic narrative of extreme wealth. He conflates the economic elite, spending their money in new ways, with philosophical and scientific pioneers. Just as the political philosopher should be free to think the unthinkable, so the very rich should be free to do the undoable, without constraint by public interest or public opinion.

The ultra rich are “scouts”, “experimenting with new styles of living”, who blaze the trails that the rest of society will follow. The progress of society depends on the liberty of these “independents” to gain as much money as they want and spend it how they wish. All that is good and useful, therefore, arises from inequality. There should be no connection between merit and reward, no distinction made between earned and unearned income, and no limit to the rents they can charge.

Inherited wealth is more socially useful than earned wealth: “the idle rich”, who don’t have to work for their money, can devote themselves to influencing “fields of thought and opinion, of tastes and beliefs”. Even when they seem to be spending money on nothing but “aimless display”, they are in fact acting as society’s vanguard.

Hayek softened his opposition to monopolies and hardened his opposition to trade unions. He lambasted progressive taxation and attempts by the state to raise the general welfare of citizens. He insisted that there is “an overwhelming case against a free health service for all” and dismissed the conservation of natural resources. It should come as no surprise to those who follow such matters that he was awarded the Nobel prize for economics.

By the time Thatcher slammed his book on the table, a lively network of thinktanks, lobbyists and academics promoting Hayek’s doctrines had been established on both sides of the Atlantic, abundantly financed by some of the world’s richest people and businesses, including DuPont, General Electric, the Coors brewing company, Charles Koch, Richard Mellon Scaife, Lawrence Fertig, the William Volker Fund and the Earhart Foundation. Using psychology and linguistics to brilliant effect, the thinkers these people sponsored found the words and arguments required to turn Hayek’s anthem to the elite into a plausible political programme.

Thatcherism and Reaganism were not ideologies in their own right: they were just two faces of neoliberalism. Their massive tax cuts for the rich, crushing of trade unions, reduction in public housing, deregulation, privatisation, outsourcing and competition in public services were all proposed by Hayek and his disciples. But the real triumph of this network was not its capture of the right, but its colonisation of parties that once stood for everything Hayek detested.

Bill Clinton and Tony Blair did not possess a narrative of their own. Rather than develop a new political story, they thought it was sufficient to triangulate. In other words, they extracted a few elements of what their parties had once believed, mixed them with elements of what their opponents believed, and developed from this unlikely combination a “third way”.

It was inevitable that the blazing, insurrectionary confidence of neoliberalism would exert a stronger gravitational pull than the dying star of social democracy. Hayek’s triumph could be witnessed everywhere from Blair’s expansion of the private finance initiative to Clinton’s repeal of the Glass-Steagall Act, which had regulated the financial sector. For all his grace and touch, Barack Obama, who didn’t possess a narrative either (except “hope”), was slowly reeled in by those who owned the means of persuasion.

As I warned in April, the result is first disempowerment then disenfranchisement. If the dominant ideology stops governments from changing social outcomes, they can no longer respond to the needs of the electorate. Politics becomes irrelevant to people’s lives; debate is reduced to the jabber of a remote elite. The disenfranchised turn instead to a virulent anti-politics in which facts and arguments are replaced by slogans, symbols and sensation. The man who sank Hillary Clinton’s bid for the presidency was not Donald Trump. It was her husband.

The paradoxical result is that the backlash against neoliberalism’s crushing of political choice has elevated just the kind of man that Hayek worshipped. Trump, who has no coherent politics, is not a classic neoliberal. But he is the perfect representation of Hayek’s “independent”; the beneficiary of inherited wealth, unconstrained by common morality, whose gross predilections strike a new path that others may follow. The neoliberal thinktankers are now swarming round this hollow man, this empty vessel waiting to be filled by those who know what they want. The likely result is the demolition of our remaining decencies, beginning with the agreement to limit global warming.

Those who tell the stories run the world. Politics has failed through a lack of competing narratives. The key task now is to tell a new story of what it is to be a human in the 21st century. It must be as appealing to some who have voted for Trump and Ukip as it is to the supporters of Clinton, Bernie Sanders or Jeremy Corbyn.

A few of us have been working on this, and can discern what may be the beginning of a story. It’s too early to say much yet, but at its core is the recognition that – as modern psychology and neuroscience make abundantly clear – human beings, by comparison with any other animals, are both remarkably social and remarkably unselfish. The atomisation and self-interested behaviour neoliberalism promotes run counter to much of what comprises human nature.

Hayek told us who we are, and he was wrong. Our first step is to reclaim our humanity.

Evonomics.com

Preparing Our Economy And Society For Automation & AI – Robert Reich. 

Professor Reich comes to Google to discuss the impact of automation & artificial intelligence on our economy. He also provides a recommendation on how we can ensure future technologies benefit the entire economy, not just those at the top.

Social Europe

How To Win Back Obama, Sanders, and Trump Voters – Les Leopold. 

It is imperative we think big to destroy Neoliberalism’s grip. 

Hillary Clinton underperformed Barack Obama by 290,000 votes in Pennsylvania, 222,000 votes in Wisconsin and a whopping 500,000 votes in Michigan. We don’t know how many of these voters also supported Sanders along the way, but it is highly likely that millions took that journey. Winning them back is the key to the battle for economic and social justice.

The current resistance to Trump is truly remarkable. Not since the anti-Vietnam War and Civil Rights protests have we seen so many people in the streets ― women, Muslim ban protesters, scientists protesting on behalf of facts, people just protesting ― with more to come. Even three New England Patriots are refusing to attend their Super Bowl White House event.

As Trump’s lunacy and destructiveness grow day-by-day, defensive struggles are an absolute must. But defensiveness alone is not likely to win back Obama/Sanders/Trump voters.

It is possible that those voters will soon get buyer’s remorse and join our defensive struggles. Or maybe a dozen more Nordstrom-like ethics violations could lead to impeachment. But such hopes leave political agency in Trump’s hands rather than in our own.

Instead of waiting for Trump to implode, we should be engaging directly with the Obama/Sanders/Trump voters. But doing so requires an understanding of the economic forces that fueled both the Sanders and Trump revolts.

We live in an era of runaway inequality. In 1970 the gap between CEO pay and average worker pay was about 45 to 1. That’s a hefty gap. It means that if you could afford one house and one car, a top CEO could afford 45 homes and 45 cars. Today, the gap is an unfathomable 844 to 1 and rising. 844 houses to your one!

That money gushing to the top is the direct result of 40 years of neoliberalism, a philosophy that captured both political parties. It calls for:

– Tax cuts (especially for the wealthy)

– Government deregulation (especially for Wall Street)

– Cuts in social spending (especially for programs and infrastructure that benefit the rest of us)

– Free trade (which gives corporations the tools to destroy unions and hold down worker wages)

Supposedly, this plan would create a massive profit and investment boom, job creation and rising incomes for all. Of course, it failed miserably for the vast majority of us, while succeeding beyond belief for the super rich.

The failure, however, involved far more than rising income gaps. Financial deregulation unleashed Wall Street to financially strip-mine the wealth from our workplaces and our communities into the pockets of Wall Street and corporate elites. This outrageous process has nothing to do with talent or hard work. It also is not an economic act of God. Rather it is the direct result of weakening rules that protect us from the financial predators. This is why the richest country in the history of the world has a crumbling infrastructure, the largest prison population, the most costly health care system, the most student debt and the most income inequality.

Sanders and Trump led revolts against runaway inequality. Both claimed the established order had to be changed radically. Sanders nearly defeated the Clinton machine with a social democratic platform of free higher education, Medicare for All, turning the screws on Wall Street, and taking big money out of politics. Trump, like Sanders, attacked trade deals and claimed he would bring jobs back to America. But he also led the hard right’s racist, sexist and xenophobic calls for immigration restrictions, walls, climate change denial and the curtailment of women’s rights.

Resisting Trump alone will not stop the hard right. We need to continue the Sanders attack on the neoliberal order by offering a compelling vision for social and economic justice.

Right now Trump is a clear and present danger to us all. But an equally dangerous problem is that the regime of runaway inequality will grow worse as the hard right consolidates its political power. Long before Trump entered the political fray, the hard right was upending neoliberal Democrats. Since 2009, when Obama took office, the Democrats have lost 919 state legislative seats. The Republicans now control 68 percent of all state legislative chambers, and control both state chambers and the governorship in 24 states while the Democrats have such tri-partite control in only 6 states. The Democrats are losing, in large part, because they can’t untangle themselves from financial and corporate elites.

Here’s a terrifying thought: Once the Republicans capture 38 states they can amend the Constitution.

Building the Educational Infrastructure

During America’s first epic battle against Wall Street, the Populist movement of the late 19th century fielded 6,000 educators to help small farmers, black and white, learn how to reverse runaway inequality. Because of their efforts a powerful movement grew to take back our country from the moneyed interests — an effort that ultimately culminated in social security, the regulation of Wall Street and large corporations, and the protection of working people on the job.

Today, we need a vast army of educators to spread the word about how runaway inequality is linking all of us together. Toward that end, groups all over the country are conducting workshops that lead to the following takeaways:

– Runaway inequality will not cure itself. There is no hidden mechanism in the economy that will right the ship. Financial and corporate elites are gaining more and more at our expense.

– The financial strip-mining of our economy impacts all of us and all of our issues — from climate change, to mass incarceration, to job loss, to declining incomes, to labor rights, to student loans.

– It will take an organized mass movement to take back our country from the hard right. That means no matter what our individual identity (labor unionist, environmentalist, racial justice activist, feminist, etc.), we also need to take on the identity of movement builder. We all must come together or we all lose.

– We can start the building process right now by sharing educational information with our friends, colleagues and neighbors.

Many of the participants in these workshops are Obama/Sanders/Trump voters and they would tell you the educational process works. Good things happen when people come together in a safe space to discuss their common concerns. There’s something special about face-to-face discussions that social media alone cannot replace.

In order to shift the balance of power away from the hard right, we need more educators ― thousands more leading tens of thousands of discussions.

Armed with facts ― not alternative facts ― we can build the foundation for a new movement to take back our country both from Trump and from the financial strip-miners.

Sustainability

By taking a page from the Tea Party playbook, the Resist Trump efforts are aiming at the mid-term elections as well as at state and local contests. Let’s hope for success. Let’s also hope it leads to a broader movement to reverse runaway inequality. However, substantive change, rather than just returning to the pre-Trump era of massive inequality, requires a long and sustained movement the likes of which we haven’t seen in our lifetimes.

Sustained movement building on a large scale is foreign to us. For more than a generation we’ve grown accustomed to the neoliberal vision that has narrowed our sense of the possible. It taught us that it’s ok for students to go deeply in debt; that it’s natural to have the largest prison population in the world; that it’s inevitable to have a crumbling public sector; and that it’s an economic law for corporations to shift our jobs to low wage areas with poor environmental protections. Worst of all it conditioned us to think small about movement building ― to work in our own issue silos and not link up together.

Occupy Wall Street, Elizabeth Warren and Bernie Sanders woke us up from our stupor. Let’s not slide back by failing to engage, face to face, with the Obama/Sanders/Trump voters who are eager for real change.

By Les Leopold, director of the Labor Institute in New York

Common Dreams

When Shareholder Capitalism Came to Town – Steven Pearlstein.  

The rise in inequality can be blamed on the shift from managerial to shareholder capitalism.

It was only 20 years ago that the world was in the thrall of American-style capitalism. Not only had it vanquished communism, but it was widening its lead over Japan Inc. and European-style socialism. America’s companies were widely viewed as the most innovative and productive, its capital markets the most efficient, its labor markets the most flexible and meritocratic, its product markets the most open and competitive, its tax and regulatory regimes the most accommodating to economic growth.

Today, that sense of confidence and economic hegemony seems a distant memory. We have watched the bursting of two financial bubbles, struggled through two long recessions, and suffered a lost decade in terms of incomes of average American households.

We continue to rack up large trade deficits even as many of the country’s biggest corporations shift more of their activity and investment overseas. Economic growth has slowed, and the top 10 percent of households have captured whatever productivity gains there have been. Economic mobility has declined to the point that, by international comparison, it is only middling. A series of accounting and financial scandals, coupled with ever-escalating pay for chief executives and hedge-fund managers, has generated widespread cynicism about business. Other countries are beginning to turn to China, Germany, Sweden, and even Israel as models for their economies.

No wonder, then, that large numbers of Americans have begun to question the superiority of our brand of free-market capitalism. This disillusionment is reflected in the rise of the Tea Party and the Occupy Wall Street movements and the increasing polarization of our national politics. It is also reflected on the shelves of bookstores and on the screens of movie theaters.

Embedded in these critiques is not simply a collective disappointment in the inability of American capitalism to deliver on its economic promise of wealth and employment opportunity. Running through them is also a nagging question about the larger purpose of the market economy and how it serves society.

In the current, cramped model of American capitalism, with its focus on maximizing output growth and shareholder value, there is ample recognition of the importance of financial capital, human capital, and physical capital but no consideration of social capital.

Social capital is the trust we have in one another, and the sense of mutual responsibility for one another, that gives us the comfort to take risks, make long-term investments, and accept the inevitable dislocations caused by the economic gales of creative destruction. Social capital provides the necessary grease for the increasingly complex machinery of capitalism and for the increasingly contentious machinery of democracy. Without it, democratic capitalism cannot survive.

It is our social capital that is now badly depleted. This erosion manifests in the weakened norms of behavior that once restrained the most selfish impulses of economic actors and provided an ethical basis for modern capitalism.

A capitalism in which Wall Street bankers and traders think peddling dangerous loans or worthless securities to unsuspecting customers is just “part of the game.”

A capitalism in which top executives believe it is economically necessary that they earn 350 times what their front-line workers do. 

A capitalism that thinks of employees as expendable inputs. 

A capitalism in which corporations perceive it as both their fiduciary duty to evade taxes and their constitutional right to use unlimited amounts of corporate funds to purchase control of the political system. 

That is a capitalism whose trust deficit is every bit as corrosive as budget and trade deficits.

As economist Luigi Zingales of the University of Chicago concludes in his recent book, A Capitalism for the People, American capitalism has become a victim of its own success. In the years after the demise of communism, “the intellectual hegemony of capitalism, however, led to complacency and extremism: complacency through the degeneration of the system, extremism in the application of its ideological premises,” he writes. “‘Greed is good’ became the norm rather than the frowned-upon exception. Capitalism lost its moral higher ground.”

Pope Francis recently gave voice to this nagging sense that our free-market system had lost its moral bearings. “Some people continue to defend trickle-down theories, which assume that economic growth, encouraged by a free market, will inevitably succeed in bringing about greater justice and inclusiveness in the world,” wrote the new pope in an 84-page apostolic exhortation. “This opinion, which has never been confirmed by the facts, expresses a crude and naïve trust in the goodness of those wielding economic power and in the sacralized workings of the prevailing economic system.”

Our challenge now is to restore both the economic and moral legitimacy of American capitalism. And there is no better place to start than with a reconsideration of the purpose of the corporation.

“MAXIMIZING SHAREHOLDER VALUE”

In the recent history of bad ideas, few have had a more pernicious effect than the one that corporations should be managed to maximize “shareholder value.”

Indeed, much of what we perceive to be wrong with the American economy these days—the slowing growth and rising inequality, the recurring scandals and wild swings from boom to bust, the inadequate investment in research and development and worker training—has its roots in this misguided ideology.

It is an ideology, moreover, that has no basis in history, in law, or in logic. What began in the 1970s and 1980s as a useful corrective to self-satisfied managerial mediocrity has become a corrupting self-interested dogma peddled by finance professors, Wall Street money managers, and overcompensated corporate executives.

Let’s start with the history. The earliest corporations, in fact, were generally chartered not for private but for public purposes, such as building canals or transit systems. Well into the 1960s, corporations were broadly viewed as owing something in return to the community that provided them with special legal protections and the economic ecosystem in which they could grow and thrive.

Legally, no statutes require that companies be run to maximize profits or share prices. In most states, corporations can be formed for any lawful purpose. Lynn Stout, a Cornell law professor, has been looking for years for a corporate charter that even mentions maximizing profits or share price. So far, she hasn’t found one. Companies that put shareholders at the top of their hierarchy do so by choice, Stout writes, not by law.

Nor does the law require, as many believe, that executives and directors owe a special fiduciary duty to the shareholders who own the corporation. The director’s fiduciary duty, in fact, is owed simply to the corporation, which is owned by no one, just as you and I are owned by no one—we are all “persons” in the eyes of the law. Corporations own themselves.

What shareholders possess is a contractual claim to the “residual value” of the corporation once all its other obligations have been satisfied—and even then the directors are given wide latitude to make whatever use of that residual value they choose, just as long as they’re not stealing it for themselves.

It is true, of course, that only shareholders have the power to elect the corporate directors. But given that directors are almost always nominated by the management and current board and run unopposed, it requires the peculiar imagination of corporate counsel to leap from the shareholders’ power to “elect” directors to a sweeping mandate that directors and the executives must put the interests of shareholders above all others.

Given this lack of legal or historical support, it is curious how “maximizing shareholder value” has evolved into such a widely accepted norm of corporate behavior.

Milton Friedman, the University of Chicago free-market economist, is often credited with first articulating the idea in a 1970 New York Times Magazine essay in which he argued that “there is one and only one social responsibility of business—to use its resources and engage in activities designed to increase its profits.” Anything else, he argued, is “unadulterated socialism.”

A decade later, Friedman’s was still a minority view among corporate leaders. In 1981, as Ralph Gomory and Richard Sylla recount in a recent article in Daedalus, the Business Roundtable, representing the nation’s largest firms, issued a statement recognizing a broader purpose of the corporation: “Corporations have a responsibility, first of all, to make available to the public quality goods and services at fair prices, thereby earning a profit that attracts investment to continue and enhance the enterprise, provide jobs and build the economy.” The statement went on to talk about a “symbiotic relationship” between business and society not unlike that voiced nearly 30 years earlier by General Motors chief executive Charlie Wilson, when he reportedly told a Senate committee that “what is good for the country is good for General Motors, and vice versa.”

By 1997, however, the Business Roundtable was striking a tone that sounded a whole lot more like Professor Friedman than CEO Wilson. “The principal objective of a business enterprise is to generate economic returns to its owners,” it declared in its statement on corporate responsibility. “If the CEO and the directors are not focused on shareholder value, it may be less likely the corporation will realize that value.”

The most likely explanation for this transformation involves three broad structural changes that were going on in the U.S. economy—globalization, deregulation, and rapid technological change. Over a number of decades, these three forces have conspired to rob what were once the dominant American corporations of the competitive advantages they had during the “golden era” of the 1950s and 1960s in both U.S. and global markets. Those advantages—and the operating profits they generated—were so great that they could spread the benefits around to all corporate stakeholders. The postwar prosperity was so widely shared that it rarely occurred to stockholders, consumers, or communities to wonder if they were being shortchanged.

It was only when competition from foreign suppliers or recently deregulated upstarts began to squeeze out those profits—often with the help of new technologies—that these once-mighty corporations were forced to make difficult choices. In the early going, their executives found that it was easier to disappoint shareholders than customers, workers, or even their communities. The result, during the 1970s, was a lost decade for investors.

Beginning in the mid-1980s, however, a number of companies with lagging stock prices found themselves targets for hostile takeovers launched by rival companies or corporate raiders employing newfangled “junk bonds” to finance unsolicited bids. Disappointed shareholders were only too willing to sell out to the raiders. So it developed that the mere threat of a hostile takeover was sufficient to force executives and directors across the corporate landscape to embrace a focus on profits and share prices. Almost overnight they tossed aside their more complacent and paternalistic management style, and with it a host of inhibitions against laying off workers, cutting wages and benefits, closing plants, spinning off divisions, taking on debt, moving production overseas. Some even joined in waging hostile takeovers themselves.

Spurred on by this new “market for corporate control,” companies traded in their old managerial capitalism for a new shareholder capitalism, which continues to dominate the business sector to this day. Those high-yield bonds, once labeled as “junk” and peddled by upstart and ethically challenged investment banks, are now a large and profitable part of the business of every Wall Street firm. The unsavory raiders have now morphed into respected private-equity and hedge-fund managers, some of whom proudly call themselves “activist investors.” And corporate executives who once arrogantly ignored the demands of Wall Street now profess they have no choice but to dance to its tune.

THE INSTITUTIONS SUPPORTING SHAREHOLDER VALUE

An elaborate institutional infrastructure has developed to reinforce shareholder capitalism and its generally accepted corporate mandate to maximize short-term profits and share price. This infrastructure includes free-market-oriented think tanks and university faculties that continue to spin out elaborate theories about the efficiency of financial markets.

An earlier generation of economists had looked at the stock-market boom and bust that led to the Great Depression and concluded that share prices often reflected irrational herd behavior on the part of investors. But in the 1960s, a different theory began to take hold at intellectual strongholds such as the University of Chicago that quickly spread to other economics departments and business schools. The essence of the “efficient market” hypothesis, first articulated by Eugene Fama (a 2013 Nobel laureate), is that the current stock price reflects all the public and private information known about a company and therefore is a reliable gauge of the company’s true economic value. For a generation of finance professors, it was only a short logical leap from this hypothesis to a broader conclusion that the share price is therefore the best metric around which to organize a company’s strategy and measure its success.
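
A common way to state the hypothesis more formally (a standard textbook formalization, not one that appears in Pearlstein’s article) is that, given everything investors know at time t, the expected price next period is simply today’s price grown at the return investors require:

\[
\mathbb{E}\left[\, P_{t+1} \mid \mathcal{I}_t \,\right] = (1 + r)\, P_t
\]

Here \(P_t\) is the share price at time \(t\), \(\mathcal{I}_t\) is the information set available at \(t\), and \(r\) is the required rate of return. Under this assumption no trading strategy built on \(\mathcal{I}_t\) can systematically earn more than \(r\), which is the short step finance theorists took towards treating the share price as the best single gauge of a company’s value.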

With the rise of behavioral economics, and the onset of two stock-market bubbles, the efficient-market hypothesis has more recently come under serious criticism. Another of last year’s Nobel winners, Robert Shiller, demonstrated the various ways in which financial markets are predictably irrational. Curiously, however, the efficient-market hypothesis is still widely accepted by business schools—and, in particular, their finance departments—which continue to preach the shareholder-first ideology.

Surveys by the Aspen Institute’s Center for Business Education, for example, find that most MBA students believe that maximizing value for shareholders is the most important responsibility of a company and that this conviction strengthens as they proceed toward their degree, in many schools taking courses that teach techniques for manipulating short-term earnings and share prices. The assumption is so entrenched that even business-school deans who have publicly rejected the ideology acknowledge privately that they’ve given up trying to convince their faculties to take a more balanced approach.

Equally important in sustaining the shareholder focus are corporate lawyers, in-house as well as outside counsels, who now reflexively advise companies against actions that would predictably lower a company’s stock price.

For many years, much of the jurisprudence coming out of the Delaware courts—where most big corporations have their legal home—was based around the “business judgment” rule, which held that corporate directors have wide discretion in determining a firm’s goals and strategies, even if their decisions reduce profits or share prices. But in 1986, the Delaware Court of Chancery ruled that directors of the cosmetics company Revlon had to put the interests of shareholders first and accept the highest price offered for the company. As Lynn Stout has written, and the Delaware courts subsequently confirmed, the decision was a narrowly drawn exception to the business judgment rule that only applies once a company has decided to put itself up for sale. But it has been widely—and mistakenly—used ever since as a legal rationale for the primacy of shareholder interests and the legitimacy of share-price maximization.

Reinforcing this mistaken belief are the shareholder lawsuits now routinely filed against public companies by class-action lawyers any time the stock price takes a sudden dive. Most of these are frivolous and, particularly since passage of reform legislation in 1995, many are dismissed. But even those that are dismissed generate cost and hassle, while the few that go to trial risk exposing the company to significant embarrassment, damages, and legal fees.

The bigger damage from these lawsuits comes from the subtle way they affect corporate behavior. Corporate lawyers, like many of their clients, crave certainty when it comes to legal matters. So they’ve developed what might be described as a “safe harbor” mentality—an undue reliance on well-established bright lines in advising clients to shy away from actions that might cause the stock price to fall and open the company up to a shareholder lawsuit. Such actions include making costly long-term investments, or admitting mistakes, or failing to follow the same ruthless strategies as their competitors. One effect of this safe-harbor mentality is to reinforce the focus on short-term share price.

The most extensive infrastructure supporting the shareholder-value ideology is to be found on Wall Street, which remains thoroughly fixated on quarterly earnings and short-term trading. Companies that refuse to give quarterly-earnings guidance are systematically shunned by some money managers, while those that miss their earnings targets by even small amounts see their stock prices hammered.

Recent investigations into insider trading have revealed the elaborate strategies and tactics used by some hedge funds to get advance information about a quarterly earnings report in order to turn enormous profits by trading on it. And corporate executives continue to spend enormous amounts of time and attention on industry analysts whose forecasts and ratings have tremendous impact on share prices.

In a now-infamous press interview in the summer of 2007, former Citigroup chairman Charles Prince provided a window into the hold that Wall Street has over corporate behavior. At 
the time, Citi’s share price had lagged behind that of the other big banks, and there was speculation in the financial press that Prince would be fired if he didn’t quickly find a way to catch up. In the interview with the Financial Times, Prince seemed to confirm that speculation. When asked why he was continuing to make loans for high-priced corporate takeovers despite evidence that the takeover boom was losing steam, he basically said he had no choice—as long as other banks were making big profits from such loans, Wall Street would force him, or anyone else in his job, to make them as well. “As long as the music is playing,” Prince explained, “you’ve got to get up and dance.”

It isn’t simply the stick of losing their jobs, however, that causes corporate executives to focus on maximizing shareholder value. There are also plenty of carrots to be found in those generous—some would say gluttonous—pay packages, the value of which is closely tied to the short-term performance of company stock.

The idea of loading up executives with stock options also dates to the transition to shareholder capitalism. The academic critique of managerial capitalism was that the lagging performance of big corporations was a manifestation of what economists call a “principal-agent” problem. In this case, the “principals” were the shareholders and their directors, and the misbehaving “agents” were the executives who were spending too much of their time, and the shareholders’ money, worrying about employees, customers, and the community at large.

In what came to be one of the most widely cited academic papers of all time, business-school professors Michael Jensen of Harvard and William Meckling of the University of Rochester wrote in 1976 that the best way to align the interests of managers with those of the shareholders was to tie a substantial amount of the managers’ compensation to the share price. In a subsequent paper in 1989 written with Kevin Murphy, Jensen went even further, arguing
that the reason corporate executives acted more like “bureaucrats than value-maximizing entrepreneurs” was because they didn’t get to keep enough of the extra value they created.

With that academic foundation, and the enthusiastic support of executive-compensation specialists, stock-based compensation took off. Given the tens and, in more than a few cases, the hundreds of millions of dollars lavished on individual executives, the focus
on boosting share price is hardly surprising. The ultimate irony, of course, is that the result
 of this lavish campaign to more closely align incentives and interests is that the “agents” have done considerably better than the “principals.”

Roger Martin, the former dean of the Rotman School of Management at the University of Toronto, calculates that from 1933 until 1976—roughly speaking, the era of “managerial capitalism” in which managers sought to balance the interests of shareholders with those of employees, customers, and society at large—the total real compound annual return on the stocks of the S&P 500 was 7.5 percent. From 1976 until 2011—roughly the period of “shareholder capitalism”—the comparable return has been 6.5 percent. Meanwhile, according to Martin’s calculation, the ratio of chief-executive compensation to corporate profits increased eightfold between 1980 and 2000, almost all of it coming in the form of stock-based compensation.
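A note on the arithmetic: the “total real compound annual return” Martin refers to is simply the annualized, inflation-adjusted growth rate of a total-return index over a period. A minimal Python sketch, using invented index values rather than Martin’s underlying data, shows how such a rate is computed and how even a one-point gap compounds over decades:

def real_cagr(start_value, end_value, years):
    # Inflation-adjusted compound annual growth rate of a total-return index.
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Hypothetical, illustrative figures only -- not Martin's underlying data.
# 7.5% a year over the 43 years from 1933 to 1976 versus 6.5% a year
# over the 35 years from 1976 to 2011:
print(f"1933-1976 growth multiple: {(1 + 0.075) ** 43:.1f}x")  # about 22x
print(f"1976-2011 growth multiple: {(1 + 0.065) ** 35:.1f}x")  # about 9x

# Recovering an annual rate from the endpoint values of an index:
print(f"Implied real CAGR: {real_cagr(100, 2240, 43):.1%}")    # about 7.5%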

HOW SHAREHOLDER PRIMACY HAS RESHAPED CORPORATE BEHAVIOR

All of this reinforcing infrastructure—the academic underpinning, the business-school indoctrination, the threat of shareholder lawsuits, the Wall Street quarterly earnings machine, the executive compensation—has now succeeded in hardwiring the shareholder-value ideology into the economy and business culture. It has also set in motion a dynamic in which corporate and investor time horizons have become shorter and shorter. The average holding period for corporate stocks, which for decades was six years, is now down to less than six months. The average tenure of a Fortune 500 chief executive is now down to less than six years. Given those realities, it should be no surprise that the willingness of corporate executives to sacrifice short-term profits to make long-term investments is rapidly disappearing.

A recent study by McKinsey & Company, the blue-chip consulting firm, and Canada’s public pension board found alarming levels of short-termism in the corporate executive suite. According
to the study, nearly 80 percent of top executives and directors reported feeling the most pressure to demonstrate a strong financial performance over a period of two years or less, with only 7 percent feeling considerable pressure to deliver strong performance over a period of five years or more. It also found that 55 percent of chief financial officers would forgo an attractive investment project today if it would cause the company to even marginally miss its quarterly-earnings target.

The shift on Wall Street from long-term investing to short-term trading presents a dilemma for those directing a company solely for shareholders: which shareholders’ interests is the corporation supposed to optimize? Should it be the hedge funds that are buying and selling millions of shares in a matter of seconds to earn hedge fund–like returns? Or the “activist investors” who have just bought a third of the shares? Or should it be the retired teacher in Dubuque who has held the stock for decades as part of her retirement savings and wants a decent return with minimal downside risk?

One way to deal with this quandary would be for corporations to give shareholders a
bigger voice in corporate decision-making. But it turns out that even as they proclaim
their dedication to shareholder value, executives and directors have been doing everything possible to minimize shareholder involvement and influence in corporate governance. This curious hypocrisy is most recently revealed in the all-out effort by the business lobby to limit shareholder “say on pay” or the right to nominate a competing slate of directors.

For too many corporations, “maximizing shareholder value” has also provided justification
for bamboozling customers, squeezing employees, avoiding taxes, and leaving communities in the lurch. For any one profit-maximizing company, such ruthless behavior may be perfectly rational. But when competition forces all companies to behave in this fashion, neither they nor we wind up better off.

Take the simple example of outsourcing production to lower-cost countries overseas. Certainly it makes sense for any one company to aggressively pursue such a strategy. But
if every company does it, these companies may eventually find that so many American consumers have suffered job loss and wage cuts that they can no longer buy the goods they are producing, even at the cheaper prices. The companies may also find that government no longer has sufficient revenue to educate their remaining American workers or dredge the ports through which their imported goods are delivered to market.

Economists have a name for such unintended spillover effects—negative externalities—and normally the most effective response is some form of government action, such as regulation, taxes, or income transfers. But one of the hallmarks of the current political environment is that every tax, every regulation, and every new safety-net program is bitterly opposed by the corporate lobby as an assault on profits and job creation. Not only must the corporation commit to putting shareholders first—as they see it, society must as well. And with the Supreme Court’s decision in Citizens United, corporations are now free to spend unlimited sums of money on political campaigns to elect politicians sympathetic to this view.

Perhaps the most ridiculous aspect of shareholder-über-alles is how at odds it is with every modern theory about managing people. David Langstaff, then–chief executive of TASC, a Virginia-based government-contracting firm, put it this way in a recent speech at a conference hosted by the Aspen Institute and the business school at Northwestern University: “If you are the sole proprietor of a business, do you think that you can motivate your employees for maximum performance by encouraging them simply to make more money for you?” Langstaff asked rhetorically. “That is effectively what an enterprise is saying when it states that its purpose is to maximize profit for its investors.”

Indeed, a number of economists have been trying to figure out the cause of the recent slowdown in both the pace of innovation and the growth in worker productivity. There are lots of possible culprits, but surely one candidate is that American workers have come to understand that whatever financial benefit may result from their ingenuity or increased efficiency is almost certain to be captured by shareholders and top executives.

The new focus on shareholders also hasn’t been a big winner with the public. Gallup polls show that people’s trust in and respect for big corporations have been on a long, slow decline in recent decades—at the moment, only Congress and health-maintenance organizations rank lower. When was the last time you saw a corporate chief executive lionized on the cover of a newsweekly? Odds are it was the late Steve Jobs of Apple, who wound up creating more wealth for more shareholders than anyone on the planet by putting shareholders near the bottom of his priority list.

RISING DOUBTS ABOUT SHAREHOLDER PRIMACY

The usual defense you hear of “maximizing shareholder value” from corporate chief executives is that at many firms—not theirs!—it has been poorly understood and badly executed. These executives make clear they don’t confuse today’s stock price or this quarter’s earnings with shareholder value, which they understand to be profitability and stock appreciation over the long term. They are also quick to acknowledge that no enterprise can maximize long-term value for its shareholders without attracting great employees, providing great products and services to customers, and helping to support efficient governments and healthy communities.

Even Michael Jensen has felt the need to reformulate his thinking. In a 2001 paper, he wrote, “A firm cannot maximize value if it ignores the interest of its stakeholders.” He offered a proposal he called “enlightened stakeholder theory,” one that “accepts maximization of the long run value of the firm as the criterion for making the requisite tradeoffs among its stakeholders.”

But if optimizing shareholder value implicitly requires firms to take good care of customers, employees, and communities, then by the same logic you could argue that optimizing customer satisfaction would require firms to take good care of employees, communities, and shareholders. More broadly, optimizing any function inevitably requires the same tradeoffs or messy balancing of interests that executives of an earlier era claimed to have done.

The late, great management guru Peter Drucker long argued that if one stakeholder group should be first among equals, surely it should be the customer. “The purpose of business is to create and keep a customer,” he famously wrote.

Roger Martin picked up on Drucker’s theme in “Fixing the Game,” his book-length critique of shareholder value. Martin cites the experience of companies such as Apple, Johnson & Johnson, and Procter & Gamble, companies that put customers first, and whose long-term shareholders have consistently done better than those of companies that claim to put shareholders first. The reason, Martin says, is that customer focus minimizes undue risk taking, maximizes reinvestment, and creates, over the long run, a larger pie.

Having spoken with more than a few top executives over the years, I can tell you that many would be thrilled if they could focus on customers rather than shareholders. In private, they chafe under the quarterly earnings regime forced on them by asset managers and the financial press. They fear and loathe “activist” investors. They are disheartened by their low public esteem. Few, however, have dared to challenge the shareholder-first ideology in public.

But recently, some cracks have appeared.

In 2006, Ian Davis, then–managing director of McKinsey, gave a lecture at the University of Pennsylvania’s Wharton School in which he declared, “Maximization of shareholder value is in danger of becoming irrelevant.”

Davis’s point was that global corporations have to operate not just in the United States but in the rest of the world where people either don’t understand the concept of putting shareholders first or explicitly reject it—and companies that trumpet it will almost surely draw the attention of hostile regulators and politicians.

“Big businesses have to be forthright in saying what their role is in society, and they will never do it by saying, ‘We maximize shareholder value.’”

A few years later, Jack Welch, the former chief executive of General Electric, made headlines when he told the Financial Times, “On the face of it, shareholder value is the dumbest idea in the world.” What he meant, he scrambled to explain a few days later, is that shareholder value is an outcome, not a strategy. But coming from the corporate executive (“Neutron Jack”) who had embodied ruthlessness in the pursuit of competitive dominance, his comment was viewed as a recognition that the single-minded pursuit of shareholder value had gone too far. “That’s not a strategy that helps you know what to do when you come to work every day,” Welch told Bloomberg Businessweek. “It doesn’t energize or motivate anyone. So basically my point is, increasing the value of your company in both the short and long term is an outcome of the implementation of successful strategies.”

Tom Rollins, the founder of the Teaching Company, offers as an alternative what he calls the “CEO” strategy, standing for customers, employees, and owners. Rollins starts by noting that at the foundation of all microeconomics are voluntary trades or exchanges that create a “surplus” for both buyer and seller, one that in most cases exceeds their minimum expectations. The same logic, he argues, ought to apply to the transactions between a company and its employees, customers, and owners/shareholders.

The problem with a shareholder-first strategy, Rollins argues, is that it ignores this basic tenet of economics. It views any surplus earned by employees and customers as both unnecessary and costly. After all, if the market would allow the firm to hire employees for 10 percent less, or charge customers 10 percent more, then by not driving the hardest possible bargain with employees and customers, shareholder profit is not maximized.

But behavioral research into the importance of “reciprocity” in social relationships strongly suggests that if employees and customers believe they are not getting any surplus from a transaction, they are unlikely to want to continue to engage in additional transactions with the firm. Other studies show that having highly satisfied customers and highly engaged employees leads directly to higher profits. As Rollins sees it, if firms provide above-market returns—surplus—to customers and employees, then customers and employees are likely to reciprocate and provide surplus value to firms and their owners.

Harvard Business School professor Michael Porter and Kennedy School senior fellow Mark Kramer have also rejected the false choice between a company’s social and value-maximizing responsibilities that is implicit in the shareholder-value model. “The solution lies in the principle of shared value, which involves creating economic value
in a way that also creates value for society by addressing its needs and challenges,” they wrote in the Harvard Business Review in 2011. In the past, economists have theorized that
for profit-maximizing companies to provide societal benefits, they had to sacrifice economic success by adding to their costs or forgoing revenue. What they overlooked, Porter and Kramer wrote, was that by ignoring social goals—safe workplaces, clean environments, effective school systems, adequate infrastructure—companies wound up adding to their overall costs while failing to exploit profitable business opportunities. “Businesses must reconnect company success with social progress,” Porter and Kramer wrote. “Shared value is not social responsibility, philanthropy or even sustainability, but a new way to achieve economic success. It is not on the margin of what companies do, but at the center.”

SMALL STEPS TOWARD A MORE BALANCED CAPITALISM

If it were simply the law that was responsible for the undue focus on shareholder value, it would be relatively easy to alter it. Changing a behavioral norm, however—particularly one so accepted and reinforced by so much supporting infrastructure—is a tougher challenge. The process will, of necessity, be gradual, requiring carrots as well as sticks. The goal should not be to impose a different focus for corporate decision-making as inflexible as maximizing shareholder value has become but rather to make it acceptable for executives and directors to experiment with and adopt a variety of goals and purposes.

Companies would surely be responsive if investors and money managers made clear that they have a longer time horizon or are looking for more than purely bottom-line results. There has long been a small universe of “socially responsible” investing made up of mutual funds, public and union pension funds, and research organizations that monitor corporate behavior and publish scorecards based on an assessment of how companies treat customers, workers, the environment, and their communities. While some socially responsible funds, asset managers, and investors have consistently achieved returns comparable to, or even slightly superior to, those of competitors focused strictly on financial returns, there is no evidence of any systematic advantage. Nor has there been a large hedge fund or private-equity fund that made it to the top with a socially responsible investment strategy. You can do well by doing good, but it’s no sure thing that you’ll do better.

Nineteen states—the latest is Delaware, where a million businesses are legally registered—have recently established a new kind of corporate charter, the “benefit corporation,” that explicitly commits companies to be managed for the benefit of all stakeholders. About 550 companies, including Patagonia and Seventh Generation, now have B charters, while 960 have been certified as meeting the standards set out by the nonprofit B Lab. Although almost all of today’s B corps are privately held, supporters of the concept hope that a number of sizable firms will become B corps and that their stocks will then be traded on a separate exchange.

One big challenge facing B corps and the socially responsible investment community is
that the criteria they use to assess corporate behavior exhibit an unmistakable liberal bias that makes it easy for many investors, money managers, and executives to dismiss them
as ideological and naïve. Even a company run for the benefit of multiple stakeholders will at various points be forced to make tough choices, such as reducing payroll, trimming costs, closing facilities, switching suppliers, or doing business in places where corruption is rampant or environmental regulations are weak. As chief executives are quick to point out, companies that ignore short-term profitability run the risk of never making it to the long term.

Among the growing chorus of critics of “shareholder value,” a consensus is emerging around a number of relatively modest changes in tax and corporate governance laws that, at a minimum, could help lengthen time horizons of corporate decision-making. A group of business leaders assembled by the Aspen Institute to address the problem of “short-termism” recommended a recalibration of the capital-gains tax to provide investors with lower tax rates for longer-term investments. A small transaction tax, such as the one proposed by the European Union, could also be used to dampen the volume and importance of short-term trading.

The financial-services industry and some academics have argued that such measures, by reducing market liquidity, will inevitably increase the cost of capital and result in markets that are more volatile, not less. A lower tax rate for long-term investing has also been shown to have a “lock-in” effect that discourages investors from moving capital to companies offering the prospect of the highest return. But such conclusions are implicitly based on the questionable assumption that markets without such tax incentives are otherwise rational and operate with perfect efficiency. They also beg fundamental questions about the role played by financial markets in the broader economy. Once you assume, as they do, that the sole purpose of financial markets is to channel capital into investments that earn the highest financial return to private investors, then maximizing shareholder value becomes the only logical corporate strategy.

There is also a lively debate on the question of whether companies should offer earnings guidance to investors and analysts—estimates of what earnings per share will be for the coming quarter. The argument against such guidance is that it reinforces the undue focus of both executives and investors on short-term earnings results, discouraging long-term investment and incentivizing earnings manipulation. The counterargument is that even in the absence of company guidance, investors and executives inevitably play the same game by fixating on the “consensus” earnings estimates of Wall Street analysts. Given that reality, they argue, isn’t it better that those analyst estimates are informed as much as possible by information provided by the companies themselves?

In weighing these conflicting arguments, the Aspen group concluded that investors and analysts would be better served if companies provided information on a wider range of metrics with which to assess and predict business performance over a longer time horizon than the next quarter. While it might take Wall Street and its analysts some time to adjust to this richer and more nuanced form of communication, it would give the markets a better understanding of what drives each business while taking some of the focus off the quarterly numbers game.

In addressing the question of which shareholders should have the most say over company strategies and objectives, there have been suggestions for giving long-term investors greater power in selecting directors, approving mergers and asset sales, and setting executive compensation. The idea has been championed by McKinsey & Company managing director Dominic Barton and John Bogle, the former chief executive of the Vanguard Group, and is under active consideration by European securities regulators. Such enhanced voting rights, however, would have to be carefully structured so that they encourage a sense of stewardship on the part of long-term investors without giving company insiders or a few large shareholders the opportunity to run roughshod over other shareholders.

The short-term focus of corporate executives and directors is heavily reinforced by the demands of asset managers at mutual funds, pension funds, hedge funds, and endowments, who are evaluated and compensated on the basis of the returns they generated over the last year and the last quarter. Even while most big companies have now taken steps to stretch out over several years the incentive pay plans of top corporate executives to encourage those executives to take a longer-term perspective, the outsize quarterly and annual bonuses on Wall Street keep the economy’s time horizons fixated on the short term. At a minimum, federal regulators could require asset managers to disclose how their compensation is determined. They might also require funds to justify, on the basis of actual performance, the use of short-term metrics when managing long-term money such as pensions and college endowments.

The Securities and Exchange Commission also could nudge companies to put greater emphasis on long-term strategy and performance in their communications with shareholders. For starters, companies could be required to state explicitly in their annual reports whether their priority is to maximize shareholder value or to balance shareholder interests with other interests in some fashion—certainly shareholders deserve to know that in advance. The commission might require companies to annually disclose the size of their workforce in each country and information on the pay and working conditions of the company’s employees and those of its major contractors. Disclosure of any major shifts in where work is done could also be required, along with the rationale. There could be a requirement for companies to perform and disclose regular environmental audits and to acknowledge other potential threats to their reputation and brand equity. In proxy statements, public companies could be required to explain the ways in which executive compensation is aligned with long-term strategy and performance.

If I had to guess, however, my hunch would be that employees, not outside investors and regulators, will finally free the corporate sector from the straitjacket
of shareholder value. Today, young people—particularly those with high-demand skills—are drawn to work that doesn’t simply pay well but also has meaning and social value. You can already see that reflected in what students are choosing to study and where highly sought graduates are choosing to work. As the economy improves and the baby-boom generation retires, companies that have reputations as ruthless maximizers of short-term profits will find themselves on the losing end of the global competition for talent.

In an era of plentiful capital, it will be skills, knowledge, and creativity that will be in short supply, with those who have them calling the corporate tune. Who knows? In the future, there might even be conferences at which hedge-fund managers and chief executives get together to gripe about the tyranny of “maximizing employee satisfaction” and vow to do something about it.

The American Prospect

Steven Pearlstein is a Pulitzer Prize-winning columnist for The Washington Post and a professor of public affairs at George Mason University. 

Has Income Inequality Finally Got To The Top Of The IMF Agenda? – Christian Proaño.

Outgoing US President Barack Obama has named the reduction of economic inequality as the “defining challenge of our time”. This is true not only for the United States – the richest country and, at the same time, the one with the highest wealth inequality – but also for a large number of countries around the world, independent of their level of economic development. The average Gini coefficient of disposable household income across OECD countries, for instance, rose from 0.315 in 2010 to 0.318 in 2014, its highest level since the mid-1980s (OECD 2016).
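For readers unfamiliar with the measure, the Gini coefficient summarizes how unevenly income is spread on a scale from 0 (everyone has the same income) to 1 (one household has everything). Below is a minimal Python sketch of one standard way to compute it, using invented household incomes rather than OECD data:

def gini(incomes):
    # Gini coefficient via the mean absolute difference:
    # G = sum over all pairs of |x_i - x_j| / (2 * n^2 * mean)
    n = len(incomes)
    mean = sum(incomes) / n
    total_abs_diff = sum(abs(a - b) for a in incomes for b in incomes)
    return total_abs_diff / (2 * n * n * mean)

# Invented disposable household incomes, for illustration only (not OECD data).
print(round(gini([30_000, 45_000, 52_000, 80_000, 250_000]), 3))  # about 0.42
print(round(gini([50_000] * 5), 3))                               # perfect equality -> 0.0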

Extreme economic inequality is undesirable for many reasons.

First and foremost, extreme income and wealth inequality is likely to jeopardize moral equality (“all people are created equal”), undermining the very basis of democratic societies. When a large share of wealth is in the hands of a few privileged people, equal access to nominally public goods such as education or an independent judicial system may not be guaranteed. As economic inequality may exacerbate inequality of opportunity, it is likely to solidify the social stratification and divisions in a country, making its society more prone to extremist political movements, as the recent US elections and other global political developments have shown.

And second, pronounced economic inequality may contribute to the instability of the global macroeconomic system through the buildup of large imbalances either through excessively credit-financed consumption, as in the US, or through deficient domestic aggregate demand and oversized net exports, as in China and Germany.

For a long period, however, and partly due to the emergence of a large affluent middle class in the US and most industrialized countries, economic inequality was a second-order issue in those countries. The emergence of the neoliberal era under Ronald Reagan and Margaret Thatcher not only eroded many of the social institutions, such as trade unions, that had demanded a more equal distribution of income in those countries, but also significantly influenced the IMF’s own perspective from then on.

The poor performance of the IMF’s Washington Consensus policy prescriptions during that era is nowadays widely acknowledged.

Social Europe

The Ghost of Poverty This Christmas – Bryan Bruce. 

In 1843 Charles Dickens released his classic tale A Christmas Carol.

Creatives are like sponges. They soak up what’s happening in society and squeeze the gathered material into their work. Dickens was a master of it.

A year earlier he’d read a British parliamentary report on the condition of children working in mines for 10 hours a day – naked, starving and sick. The cause of this misery, he recognised, was greed – a few people getting very rich at the expense of the many. (Sound familiar?)

So, in that magical way it takes a genius to do, Dickens poured all of Victorian Britain’s mean-spiritedness into his fictional character Ebenezer Scrooge, the miserly old man who hates Christmas.
Until, that is, he is visited on Christmas Eve by three Ghosts (Of Christmas Past and Present and Yet To Come) who reveal to him how giving can be much more rewarding than taking.

173 years on, a lot of Kiwis have got that message. They help their friends and neighbours whenever they can, they run food banks and free used clothing and furniture outlets, and they open their marae to the homeless.

But none of these things would be necessary if the meanness of Scrooge had not become institutionalised into the Neoliberal economic policies successive New Zealand governments have promoted over the last 30 years.
Yes, it’s true that children no longer work in factories or down mines – but that’s simply proof (if proof be needed) that things can change if we vote to alter them.

What I suspect is that if Dickens could return like one of his ghosts to visit us today, he’d look in dismay at the long lines of the poor outside the City Missions this Christmas and tell us that we are going backwards towards the selfish society he railed against – where the poor were dependent on the goodwill of strangers for food and the essentials of life.

That we have lost sight of what is really important is clear:
• 85,000 of our children are living in severe hardship.
• 14% of our kids (155,000) are experiencing material hardship, which means they are living without seven or more items necessary for their wellbeing.
• 28% of our children (295,000) are living in low-income homes and experiencing material hardship as a result.

So thank you to all of the good people throughout our country who know this widening gap between the haves and the have-nots isn’t right and do so much to help those less fortunate than themselves.
But let’s also make a new year’s resolution – to encourage our friends and families and everyone we know to vote for a better deal for all our children next year.
10% of New Zealanders now own 60% of the wealth of our country while the bottom 20% own nothing of worth at all.
Let’s make the scrooges of New Zealand pay their fair share.

My very best wishes to all of you this Christmas Eve.
Take care.
Bryan Bruce. 

Brexit Britain turns against globalisation and modern technology, blaming it for low UK wages and inequality – The Independent. 

Post-Brexit Britain is in the throes of a major backlash against globalisation, blaming dwindling wages and rife inequality on the opening of the world’s economy, an exclusive poll for The Independent has revealed.

The survey by ComRes even exposes a new backward-looking dislike of modern technology in the UK, with the public blaming advances for a widening gap between the rich and poor.

People believe the gap has also been widened by the low interest rates employed by governments in many countries now suffering resurgent populist movements.

The Independent

This is the most dangerous time for our planet – Stephen Hawking. 

We can’t go on ignoring inequality, because we have the means to destroy our world but not to escape it.

What matters now, far more than the choices made by these two electorates, is how the elites react. Should we, in turn, reject these votes as outpourings of crude populism that fail to take account of the facts, and attempt to circumvent or circumscribe the choices that they represent? I would argue that this would be a terrible mistake.

The concerns underlying these votes about the economic consequences of globalisation and accelerating technological change are absolutely understandable. The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.

This in turn will accelerate the already widening economic inequality around the world. The internet and the platforms that it makes possible allow very small groups of individuals to make enormous profits while employing very few people. This is inevitable, it is progress, but it is also socially destructive.

We need to put this alongside the financial crash, which brought home to people that a very few individuals working in the financial sector can accrue huge rewards and that the rest of us underwrite that success and pick up the bill when their greed leads us astray. So taken together we are living in a world of widening, not diminishing, financial inequality, in which many people can see not just their standard of living, but their ability to earn a living at all, disappearing. It is no wonder then that they are searching for a new deal, which Trump and Brexit might have appeared to represent.

The Guardian 

Global Wealth Inequality – What you never knew you never knew – YouTube. 

Let’s Change The Rules! 

Rich countries give $130 billion annually in development aid to poor countries while sucking $2 Trillion out by various means. Every Year!

YouTube 

It’s not just the Pharmaceuticals screwing you Americans! Your doctors are in on the scam too. – Robert Reich. 

The real threat to the public’s health is drugs priced so high that an estimated fifty million Americans—more than a quarter of them with chronic health conditions—did not fill their prescriptions in 2012, according to the National Consumers League. The law allows pharmaceutical companies to pay doctors for prescribing their drugs. Over a five-month period in 2013, doctors received some $380 million in speaking and consulting fees from drug companies and device makers. Some doctors pocketed over half a million dollars each, and others received millions of dollars in royalties from products they had a hand in developing. Doctors claim these payments have no effect on what they prescribe. But why would pharmaceutical companies shell out all this money if it did not provide them a healthy return on their investment?

Drug companies pay the makers of generic drugs to delay their cheaper versions. These so-called pay-for-delay agreements, perfectly legal, generate huge profits both for the original manufacturers and for the generics—profits that come from consumers, from health insurers, and from government agencies paying higher prices than would otherwise be the case. The tactic costs Americans an estimated $3.5 billion a year. Europe doesn’t allow these sorts of payoffs. The major American drugmakers and generics have fought off any attempts to stop them. The drug companies claim they need these additional profits to pay for researching and developing new drugs. Perhaps this is so. But that argument neglects the billions of dollars drug companies spend annually for advertising and marketing—often tens of millions of dollars to promote a single drug. They also spend hundreds of millions every year lobbying. In 2013, their lobbying tab came to $225 million, which was more than the lobbying expenditures of America’s military contractors. In addition, Big Pharma spends heavily on political campaigns. In 2012 it shelled out more than $36 million, making it one of the biggest political contributors of all American industries.

The average American is unaware of this system—the patenting of drugs from nature, the renewal of patents based on insignificant changes, the aggressive marketing of prescription drugs, bans on purchases from foreign pharmacies, payments to doctors to prescribe specific drugs, and pay-for-delay—as well as the laws and administrative decisions that undergird all of it. Yet, as I said, because of this system, Americans pay more for drugs, per person, than citizens of any other nation on earth. The critical question is not whether government should play a role. Without government, patents would not exist, and pharmaceutical companies would have no incentive to produce new drugs. The issue is how government organizes the market. So long as big drugmakers have a disproportionate say in those decisions, the rest of us pay through the nose.
Robert Reich, from his book ‘Saving Capitalism’

Those who claim to be on the side of freedom while ignoring the growing imbalance of economic and political power in America and other advanced economies are not in fact on the side of freedom. They are on the side of those with the power. – Robert Reich

As economic and political power have once again moved into the hands of a relative few large corporations and wealthy individuals, “freedom” is again being used to justify the multitude of ways they entrench and enlarge that power by influencing the rules of the game.

These include escalating campaign contributions, as well as burgeoning “independent” campaign expenditures, often in the form of negative advertising targeting candidates whom they oppose; growing lobbying prowess, both in Washington and in state capitals; platoons of lawyers and paid experts to defend against or mount lawsuits, so that courts interpret the laws in ways that favor them; additional lawyers and experts to push their agendas in agency rule-making proceedings; the prospect of (or outright offers of) lucrative private-sector jobs for public officials who define or enforce the rules in ways that benefit them; public relations campaigns designed to convince the public of the truth and wisdom of policies they support and the falsity and deficiency of policies they don’t; think tanks and sponsored research that confirm their positions; and ownership of, or economic influence over, media outlets that further promote their goals.

Robert Reich, from his book ‘Saving Capitalism’

Investment in the Early Years Prevents Crime. 

Instead of competing in some sort of bizarre race to the bottom with the United States for the highest number of people incarcerated per head of population, we need to attend to what works to prevent crime.

While the idea of scaring young people out of crime (boot camps, prison visits etc.) might have popular appeal, it is a total failure, as it has been found to increase the probability of crime. What does work is reducing the stress on low-income families and providing them with sufficient material resources to ensure their kids have opportunities to thrive.

Increasing the incomes of low-income, low-opportunity parents using unconditional cash assistance reduces children’s likelihood of engaging in criminal activity. Yes, giving parents more money, and trusting them to identify where the pressure points in their family life are, both improves the economic position of a family and reduces stress, the key pathway between poverty and poor outcomes for children.

A substantial body of research supports unconditional cash assistance in countries similar to New Zealand.

Jess Berentson-Shaw. Morgan Foundation 

The Police (and pretty much everyone else) Know how to Prevent Crime, Why Don’t the Politicians?

Jess Berentson-Shaw – The Morgan Foundation 

It was nicely done really; Judith Collins neatly deflected the question posed by a delegate at a Police Association conference about when the government was going to start addressing the causes of crime, notably child poverty. She repelled that pretty brave question by simply saying child poverty was not the Government’s problem to fix.

The following week the Government announced it would be spending $1 billion to provide 1,800 additional prison beds. While that is a one-off cost, it also costs $92,000 to house each prisoner per year, so the ongoing cost will be over $150m a year (1,800 beds at $92,000 each comes to roughly $166m). Labour also announced it would fund 1,000 more frontline police at a cost of $180m per year.

This is all ambulance-at-the-bottom-of-the-cliff stuff. No-one has attended to what that member of the Police force was saying: deal with low incomes and low opportunities in childhood and you don’t need to spend billions on new prisons, or constantly have to fund more frontline police.

It is unlikely that merely lacking money directly leads to a child committing crime. Rather, a child who grows up poor is more likely to be a low achiever in their education, with more behavioural and/or mental health issues, and it is these factors that influence their likelihood of running up against the criminal justice system.

While the absolute numbers of appearances for all ethnicities have gone down, both apprehensions and appearances in court show a serious overrepresentation of Māori young people. Low income is noted to be a key factor in the criminal offending of young people. 

Morgan Foundation 

Inequality As Policy: Selective Trade Protectionism Favors Higher Earners. – Dean Baker. 

Offshoring manufacturing may have hurt many working people in America, but professionals and intellectual property have been robustly protected. 

Globalization and technology are routinely cited as drivers of inequality over the last four decades. While the relative importance of these causes is disputed, both are often viewed as natural and inevitable products of the working of the economy, rather than as the outcomes of deliberate policy. 

In fact, both the course of globalization and the distribution of rewards from technological innovation are very much the result of policy. Insofar as they have led to greater inequality, this has been the result of conscious policy choices.

Starting with globalization, there was nothing pre-determined about a pattern of trade liberalization that put U.S. manufacturing workers in direct competition with their much lower paid counterparts in the developing world. Instead, that competition was the result of trade pacts written to make it as easy as possible for U.S. corporations to invest in the developing world to take advantage of lower labor costs, and then ship their products back to the United States. The predicted and actual result of this pattern of trade has been to lower wages for manufacturing workers and non-college educated workers more generally, as displaced manufacturing workers crowd into other sectors of the economy.

New Economic Thinking 

“I’ve seen a lot of death, but not this thing. This is shocking and this is what makes you feel you are not living in a civilized world”

These horrifying pictures are the product of global economic inequality – victims of a world where 71% of the world owns only 3% of global wealth. People come from all over Africa. Some are fleeing extremist violence from groups like Nigeria’s Boko Haram or Somalia’s al-Shabaab. Others are simply people without opportunity or any hope of bettering their lives and those of their families, in home countries where jobs are nonexistent and money is funneled to the ruling elites, who guard their wealth jealously.

Occupy Democrats

The reason is “a sense of hopelessness and helplessness of the young people. They believe the sun isn’t going to come up any more.”

Paul Little: What the Dickens is happening?

Dickens was a campaigner – he campaigned against, among other things, pollution, child poverty, poor public health, homelessness and the exploitation of women and children. He died nearly 150 years ago.

Great writer, but he doesn’t seem to have made much of a difference.

NZ Herald

Political economy, social policy, the coming end of Neoliberalism, Trump . . . and other facts of our times.
