Category Archives: Poverty & Inequality

How to Use Fiscal and Monetary Policy to Make Us Rich Again – Tom Streithorst. 

The easiest way to return to Golden Age tranquility and equality is to empower fiscal policy.

During the postwar Golden Age, from 1950 to 1973, US median real wages more than doubled. Today, they are lower than they were when Jimmy Carter was president. If you want an explanation of why Americans are pessimistic about their future, that is as good a reason as any. In a recent article, Noah Smith examines the various causes of the slide in labor’s share of national income and finds most explanations wanting. With a blind spot common amongst economists, he doesn’t even investigate the most obvious: politics.

Take a look at this chart. From the end of World War II, productivity rose steadily. Until the recession of the early 1970s, wages went up alongside it. Both dipped, both recovered, and then, right around the time Ronald Reagan became President, productivity continued its upward trajectory but wages stopped following. If wages had continued to track productivity increases, the average American would earn twice as much as he does today, and America would undoubtedly be a calmer and happier nation.
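The compounding behind that "twice as much" claim can be sketched in a few lines. This is illustrative only: the growth rates below are assumptions chosen to match the article's framing, not actual BLS productivity or wage series.

```python
# Illustrative compounding only: the growth rates are assumptions,
# not actual BLS productivity or wage series.
def compound(rate, years):
    """Cumulative growth factor after `years` of annual growth at `rate`."""
    return (1 + rate) ** years

productivity_factor = compound(0.018, 40)  # assume ~1.8%/yr productivity growth
wage_factor = compound(0.0, 40)            # assume flat real wages

# Ratio: how much higher wages would be had they tracked productivity.
gap = productivity_factor / wage_factor
print(f"Productivity factor over 40 years: {productivity_factor:.2f}x")
print(f"Implied wage shortfall factor: {gap:.2f}")
```

A persistent gap of under two percentage points a year, left to compound for four decades, is enough to produce the factor-of-two divergence the chart shows.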

Collectively we are richer than we were 40 years ago, as we should be, considering the incredible advances in technology since then, but today the benefits of productivity increases no longer go to workers but rather to owners of stocks, bonds, and real estate. Wages don’t go up, but asset prices do. Rising productivity, that is to say the ability to make more goods and services with fewer inputs of labor and capital, should make us all more prosperous. That it hasn’t can only be a distributional issue.

The timing suggests Ronald Reagan had something to do with stagnating wages. That makes sense. Reagan cut taxes on the rich, deregulated the economy, eviscerated the labor unions and created the neoliberal order that still rules today. But perhaps an even more significant change is the tiny, technical and tedious shift from fiscal to monetary policy.

Government has two ways of affecting the economy: monetary and fiscal policy. The first involves the setting of interest rates, the other government tax and spending policy. Both work by putting money in people’s pockets so they will spend and thereby stimulate the economy, but fiscal policy focuses on workers while monetary policy mostly benefits the already rich. Since Ronald Reagan, even under Democratic presidents, monetary has been the policy of choice. No wonder wages stopped going up while real estate, stock and bond prices have gone through the roof. During the Golden Age we shared the benefits of technological progress through wage gains. Since Reagan, we have allocated them through asset price inflation.

Fiscal policy, by increasing government spending, creates jobs and so raises wages even in the private sector. Monetary policy works mostly through the wealth effect: lower interest rates almost automatically raise the value of stocks, bonds, and other real assets. Fiscal policy makes workers richer; monetary policy makes rich people richer. This, I suspect, explains better than anything else why monetary policy, even extreme monetary policy, remains more respectable than even conventional fiscal policy.

During the Golden Age, fiscal was king. Wages rose steadily and everybody was richer than their parents. Recessions were short and shallow. Economic policy makers’ primary task was ensuring full employment. Anytime unemployment rose over a certain level, a government spending boost or tax cut would get the economy going again. And since firms were confident the government would never allow a steep downturn, they were ready and willing to invest in new technology and increased productive capacity. The economy grew faster (and more equitably) than it ever has before or since.

During the 1960s, Keynesian economists thought they could “fine tune” the economy, using Phillips curve trade-offs between inflation and unemployment. Stagflation in the 1970s shattered that optimism. Inflation went up, but so did unemployment. New Classical economists decided that, in the long run, Keynesian stimulus couldn’t increase GDP; it could only accelerate inflation. Keynesianism stopped being cool. According to Robert Lucas, graduate students would “snicker” whenever Keynesian concepts were mentioned.

In policy circles, Keynesians were replaced by monetarists, acolytes of Milton “Inflation is always and everywhere a monetary phenomenon” Friedman. Volcker in America and Thatcher in Britain decided the only way to stomp out inflationary expectations was to cut the money supply. This, despite their best efforts, they were unable to do. Controlling the money supply proved almost impossible but monetarism gave Volcker and Thatcher the cover to manufacture the deepest recession since the Great Depression.

By raising interest rates until the economy screamed, Volcker and Thatcher crushed investment and allowed unemployment to rise to levels unthinkable just a few years before. Businessmen, union leaders, and politicians pleaded for a rate cut, but the central bankers were implacable. Ending inflationary expectations was worth the cost, they insisted. Volcker and Thatcher succeeded in crushing inflation, not by cutting the money supply, but rather with an old-fashioned Phillips curve trade-off. Workers who fear for their jobs don’t ask for cost of living increases. Inflation was history.

The Federal Funds Rate hit 20% in 1980. Now, even after a few hikes, it is barely over 1%. The story of the past 30 years is of the most stimulative monetary policy in history. Anytime the economy stumbled, interest rate cuts were the automatic response. Other than military Keynesianism and tax cuts, fiscal policy was relegated to the ash heap of history. Reagan, of course, combined tax cuts with increased military spending, but traditional peacetime infrastructure stimulus was tainted by the 1970s stagflation and for policymakers remained beyond the pale.

Fiscal stimulus came back, momentarily, at the peak of the financial crisis. China’s investment binge combined with Obama’s stimulus package probably stopped the Great Recession from being as catastrophic as the Great Depression, but by 2010 fiscal stimulus was replaced by its opposite, austerity. According to elementary macroeconomics, when the private sector is cutting back its spending, as it was still doing in the wake of the financial crisis, government should increase its spending to take up the slack. But Obama in America, Cameron in Britain and Merkel in the EU insisted that government cut spending, even as the private sector continued to retrench.

It is rather shocking, for anyone who has taken Econ 101, that in 2010, when the global economy had barely recovered from the worst recession since the Great Depression, politicians and pundits were calling for lower deficits, higher taxes and less government spending even as monetary policy was maxed out. Rates were already close to zero, so central banks had no more room to cut.

So, instead of going to the toolbox and taking out their tried and tested fiscal kit, which would have created jobs and had the added benefit of improving infrastructure, policymakers invented Quantitative Easing, which in essence is monetary policy on steroids. Central banks promised to buy bonds from the private sector, increasing their price and thereby shoveling money towards bond owners. The idea was that by buying safe assets they would push the private sector to buy riskier assets, and that by increasing bank reserves they would stimulate lending. The consequence of all the Quantitative Easings, however, is that all of the benefits of growth since the financial crisis have gone to the top 5%, and most of that to the top 0.1%.

A feature or a bug? The men who rule the planet are happy that most of us think economics is boring, that we would much rather read about R Kelly’s sexual predilections than about the difference between fiscal and monetary policy. But were we to remember that spending money on infrastructure, health care or education would create jobs, raise wages, and create the demand the economy craves, we would have a much more equitable world.

One cogent objection to stimulative fiscal policy is that it has the potential to be inflationary. Indeed, the fundamental goal of macroeconomic policy is to match the economy’s demand to its ability to supply. If fiscal policy gets out of hand (as arguably it did in the 1960s, when Lyndon Johnson tried to fund both his Great Society and the Vietnam War without raising taxes), demand could outstrip supply, creating inflation. But should that happen, we have the monetary tools to cure any inflationary pressure. Rates today are still barely above zero. Should inflation threaten, central banks can raise interest rates and nip it in the bud.

Fiscal and monetary policy both have a place in policymakers’ toolkits. Perhaps the ideal combination would be to use fiscal to stimulate the economy and monetary to cool it down. Both Brexit and Trump should have told elites that unless they share the benefits of growth, a populist onslaught could threaten all our prosperity. The easiest way to return to Golden Age tranquility and equality is to empower fiscal policy to invest in our future and create jobs today.

2017 August 6

Evonomics.com

Poverty-traps and pay-gaps: Why single mothers need basic income – Dr Petra Bueskens. 

Harper discovered she wasn’t alone when she packed up her house, stopped paying rent and took her four-year-old son, Finn, on a six-month “holiday” up north to warmer climes.

“I found in every camp site, especially the show grounds as they’re the cheapest ones that still have facilities, there were a couple of other single mums and their kids. I was also travelling with a friend and her son, so there were often five or six of us and a bunch of kids at each campsite. Up north there’s even more. Over time we became familiar with each other.”

Harper gave up her home because she couldn’t afford the rent and have any quality of life. Paid work put her in a double bind: if she worked, she lost most of her Centrelink payments; if she didn’t work there wasn’t quite enough to make ends meet. So, she worked and stayed poor. These are the poverty-traps that keep many single mothers working-poor and unable to dig out.

In Australia now, there is a clandestine group of mobile single parents, mostly mothers, who have found they cannot, on Centrelink benefits and low-paid casual work, meet the cost of living. They have chosen instead to travel and live with their children in camping grounds and caravan parks around Australia, particularly in Northern NSW and Queensland, where living outdoors is relatively easy. For as little as $10 a night at national parks and showgrounds and up to $25 at caravan parks that have showers, washing machines and other facilities, they live on the move. 

continued … Basic Income.org

***

Dr Petra Bueskens is an Honorary Fellow in Social and Political Sciences at the University of Melbourne, a psychotherapist in private practice at PPMD Therapy and a columnist at news media site New Matilda. She is the author of Mothering and Psychoanalysis: Clinical, Sociological and Feminist Perspectives. 



New Zealand’s political leadership has failed for decades on housing policy – Shamubeel Eaqub. 

New Zealand’s political leadership has failed for decades on housing policy, leading to the rise of a Victorian-style landed gentry, social cohesion coming under immense pressure and a cumulative undersupply of half a million houses over the last 30 years.

House prices are at the highest level they have ever been. And they have risen really, really fast since the 90s, but more so since the early 2000s and have far outstripped every fundamental that we can think of.

After nearly a century of rising home ownership in New Zealand, since 1991 home ownership has been falling. In the last census, the home ownership rate was the lowest level since 1956. And for my estimate for the end of 2016, it’s the lowest level since 1946.

We’ve gone back a long way in terms of the promise and the social pact in New Zealand that home ownership is good, and if you work hard you’re going to be able to afford a house.

The reality is that that social pact, that rite of passage, has not been true for many, many decades. The solutions are going to be difficult and they are going to take time.

Before you come and tell me that you paid 20% interest rates: yes, interest rates are much lower now. But the really big problem is that house prices have risen so much it is almost impossible to save for the deposit. People could have saved a deposit and paid it off in about 20-30 years in the early 1990s. Fast forward to today, and that’s more like 50 years. How long do you want to work to pay off your mortgage?
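The deposit side of that arithmetic can be sketched with a toy calculation. The price-to-income multiples and the savings rate below are illustrative assumptions, not Eaqub's own figures:

```python
# Toy deposit arithmetic; the price-to-income multiples and savings rate
# are illustrative assumptions, not Eaqub's own figures.
def years_to_save_deposit(price_to_income, deposit_rate=0.20, savings_rate=0.10):
    """Years of saving `savings_rate` of income to amass a 20% deposit."""
    deposit_in_years_of_income = price_to_income * deposit_rate
    return deposit_in_years_of_income / savings_rate

early_90s = years_to_save_deposit(price_to_income=3)  # assume ~3x income then
today = years_to_save_deposit(price_to_income=9)      # assume ~9x income now
print(f"Early 1990s: ~{early_90s:.0f} years to save a 20% deposit")
print(f"Today:       ~{today:.0f} years")
```

The point is structural: when prices triple relative to incomes, the time needed to save a deposit triples with them, before the mortgage itself even begins.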

What we’re talking about is the rise of Generation Rent. Those who manage to buy houses are in mortgage slavery for a long period of time.

There is a widening societal gap. If younger generations want to access housing, it’s not enough to have a job, nor enough to have a good job. You must now have parents that are wealthy, and home-owners too. The idea of New Zealand being an egalitarian country is no longer true. The kind of societal divide we’re talking about is very Victorian. We’re in fact talking about the rise of a landed gentry.

For those born after the 1980s, the chance of doing better than your parents is less than 50%.

What we’re creating is a country where opportunities, when it comes to things like housing, are going to be more limited for our children than they were for us. I worry that what we’re creating in New Zealand is a social divide that is only going to keep growing. Housing is only one manifestation of this divide.

There has been a change in philosophy in what underpins the housing market. One very good example is what we have done with our social housing sector.

Housing NZ started building social housing in the late 1930s and stock accumulated over the next 50-60 years to a peak in 1991.

Since then we have not added more social housing. On a per capita basis, we have the lowest amount of social housing in New Zealand since the 1940s.

This is an ideological position where we do not want to create housing supply for the poor. We don’t want to. This is not about politicians. This is a reflection on us. It is our ideology, it is our politics. Our politicians are doing our bidding. The society that we’re living in today does not want to invest in the bottom half of our society.

The really big kicker has been credit. Significant reductions in mortgage rates over time have driven demand for housing. But we have misallocated our credit. We’re creating more and more debt, but most of that debt is chasing the existing houses. We’re buying and selling from each other rather than creating something new. The housing boom could not have happened on its own. The banking sector facilitated it. We have seen more and more credit being created and more of that credit is now more likely to go towards buying and selling houses from each other rather than funding businesses or building houses.

One of the saddest stories at the moment is that, even though we have an acute housing shortage in Auckland, the hardest thing to find funding for now is new developments. When the banks pull away credit, the first thing that goes is the riskiest element of the market.

Seasonally adjusted house sales in Auckland are at the lowest level since 2011. This is worrying because what happens in the property market expands to the economy, consents and the construction sector.

I fully expect a construction bust next year. We are going to have a construction bust before we have a housing bust. We haven’t built enough houses for a very long period of time. And if we’re going to keep not building enough houses, I’m not confident that whatever correction we have in the housing market is going to last.

New money created in the economy is largely chasing the property market. Household debt to GDP has been rising steadily since the 1990s. People are taking on more debt, but banks have started to cut back on the amount of credit available overall.

For every unit of economic growth over the course of the last 10, 20 years, we needed more and more debt to create that growth. We are more and more addicted to debt to create our economic growth.

Credit is now going backwards. If credit is not going to be available in aggregate, we know the biggest losers are in fact going to be businesses and property development.

It means we are not going to be building a lot of the projects that have been consented, and we know the construction cycle is going to come down. I despair.

I despair that we still talk so much more about buying and selling houses than about actually starting businesses. The cultural sclerosis we see in New Zealand has as much to do with the problem of the housing market as our rules around the Resource Management Act or our banking sector do.

On demand, we know there’s been significant growth in New Zealand’s population. Even though it feels like all of that population growth has come from net migration, the reality is that it’s actually natural population growth that’s created the bulk of the demand.

But net migration has created a volatility that we can’t deal with. A lot of the cyclicality in New Zealand’s housing market and demand, comes from net migration and we simply cannot respond.

We do know that there is money that’s global that is looking for a safe haven, and New Zealand is part of that story. We don’t have very good data in New Zealand because we refuse to collect it. There is a lack of leadership regarding our approach to foreign investment in our housing market.

Looking at what’s happening in Canada and Australia would indicate roughly 10% of house sales in Auckland are to foreign buyers. Yes it matters, but when 90% of your sales are going to locals, I think it’s a bit of a red herring.

The historical context of where demand for housing comes from shows the biggest chunk is natural population growth. The second biggest was changes in household size as families got smaller; more recently that has stopped, ie kids are refusing to leave home.

There has been a massive variation in what happens with net migration.

New Zealand needs about 21,000 houses a year to keep up with population growth and changes that are taking place. But over the course of the last four years, we’ve needed more like 26,000. We’re nowhere near building those kinds of houses.
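The shortfall figures quoted here and elsewhere in the talk fit together with simple back-of-the-envelope arithmetic, sketched below using only the numbers in the text (21,000 and 26,000 houses a year, and the half-million cumulative gap over roughly 30 years):

```python
# Back-of-the-envelope shortfall arithmetic using the figures quoted in
# the text: 21,000 and 26,000 houses/year, 500,000 over ~30 years.
baseline_need = 21_000    # houses per year to keep up with population change
recent_need = 26_000      # houses per year needed over the last four years
cumulative_gap = 500_000  # estimated cumulative undersupply
years = 30

average_annual_shortfall = cumulative_gap / years
extra_recent = recent_need - baseline_need
print(f"Average shortfall: ~{average_annual_shortfall:,.0f} houses/year")
print(f"Recent extra need over baseline: {extra_recent:,} houses/year")
```

A half-million-house gap over 30 years implies missing roughly 16,000-17,000 houses every year on average, which is why a few years of catch-up building cannot close it.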

This means we need to think about demand management from a policy perspective. It’s more about cyclical management rather than structural management.

Population growth has always been there. Whether it’s from migration or not doesn’t matter. The problem is our housing market, our land supply, our infrastructure supply, can’t keep up with any of it.

While immigration was a side problem, it nevertheless was an important conversation to have because of the volatility it can create. I struggle with the fact that we have no articulated population strategy in New Zealand. We have immigration because we have immigration. That’s not a very good reason.

Why do we want immigration, how big do we want to be, do you want 15 million people or do you want five?

What sort of people do we want? Are we just using immigration as shorthand for not educating our kids because we can’t fill the skills shortages that we have in our industries?

Let’s not pretend that it’s all about people wanting to live in houses.

You’d be very hard pressed to argue that people want to buy houses in Epsom at a 3% rental yield for investment purposes. They want to buy houses in Epsom at 3% rental yield because they want to speculate on the capital gains. Let’s be honest with ourselves.

If your floating mortgage rate is 5.5% and you’re getting 3% from your rent, what does that tell you about your investment? It tells you that you’re not really doing it for cash-flow purposes. You’re doing it because you expect capital gains, and you expect those capital gains to compensate you.
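The arithmetic behind that point can be made concrete. This is a minimal sketch using the quoted 3% yield and 5.5% rate; the $1m house price is a hypothetical round number, and the loan is assumed fully leveraged and interest-only:

```python
# Yield arithmetic using the quoted 3% rental yield and 5.5% floating
# rate; the house price is hypothetical, the loan assumed fully
# leveraged and interest-only for simplicity.
house_price = 1_000_000
rental_yield = 0.03
mortgage_rate = 0.055

annual_rent = house_price * rental_yield
annual_interest = house_price * mortgage_rate
cash_flow = annual_rent - annual_interest  # negative: rent < interest

# Capital gain needed just to break even on the year's shortfall.
breakeven_gain = -cash_flow / house_price
print(f"Annual cash shortfall: ${-cash_flow:,.0f}")
print(f"Break-even price growth: {breakeven_gain:.1%}")
```

Under these assumptions the owner loses cash every year and needs price growth of at least the yield gap (here 2.5 percentage points) just to break even, which is what marks the purchase as speculation on capital gains rather than an income investment.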

The real story in Auckland is that a lot of additional demand is coming from investment.

Land supply in New Zealand is slow, particularly in places like Auckland. But it’s not just in terms of sections, it’s also about density. The Unitary Plan was a win for Auckland. The reality is that if we only do greenfields, we will just see more people sitting out in traffic at the end of Drury.

The majority of new housing supply is large houses, while the majority of new households being formed are 1-2 person households.

Between the last two censuses, most of the housing stock built in New Zealand had four bedrooms or more. In contrast, the majority of households that were created were singles or couples. We have ageing populations, we have the empty nesters, we have young people who are having kids later…and we’re building stand-alone houses with four bedrooms.

We have to think very hard about how to create supply not just for the top end. Even though we know that, in theory, building enough houses is good for everybody, when you’re starting from a point of not enough houses the bottom end gets screwed for longer. We have to think very hard about whether we want to use things like inclusionary zoning, and about what we want to do with social housing.

Right now we’re not building houses for everybody in our community. We are failing by building the wrong sorts of houses in our communities.

Right at the top is land costs. If we think about what has been driving up the cost of housing, the biggest one is the value of land. It’s true that we should also look at what’s happening in the rental market and what was happening with the costs of construction. But those are not the things that have been the majority driver of the very unaffordable house prices that we see in New Zealand today.

The biggest constraint is in land, and that is where the speculation is taking place.

We know we’re not building enough. In the 1930s to 1940s we had very different types of governments and ideology. We actually built more houses per capita back then than we have in the last 30 years.

In the late 40s to early 70s, with the rise of the welfare state and the build-up of infrastructure, we built massive numbers of houses on a per capita basis.

But since the oil shock and the 1980s reforms, we have never structurally managed to build as many houses as we did pre-1980. That cumulative gap between the trend that we have seen in the last 30 years, versus what we had seen in the 40s, 50s and 60s, is around half a million houses.

So there is something that is fundamentally and structurally different in what we have done in terms of housing supply in New Zealand over a very long period of time.

The way we do our planning rules, the advent of the RMA, the way we fund and govern our local government: all these things have changed. So the nature of the provision of infrastructure, the provision of land, the provision of consents, all of these things have changed massively. But the net result is we’re not building as many houses, and that is a fundamental problem.

In Auckland there is a massive gap between targets set by government for house building over the past three years and the amount of consents issued. On top of this, the targets themselves were still not high enough.

Somehow we’re still not able to respond to the growth that Auckland is facing. Consistently we have underestimated how many people want to live in a place like Auckland.

But it’s not just Auckland. Carterton surprises every year, and it shouldn’t: it has a fantastic train line, and people want to live there.

But we are failing. We have been failing and we continue to fail. We have to be far more responsive and we have to have a much longer time horizon to have the provision for housing that’s needed.

There is in fact no real plan. The Unitary Plan is fantastic in that it actually plans for just enough houses for the projections for population. We can confidently say that projection is going to be pessimistic, we’re going to have way more people in Auckland.

Trump and Brexit have marked a shift in politics and a polarisation in the public’s view of politics. In New Zealand I think one of the catalysts could be Generation Rent. In the last census, 51% of adults (those over 15) rented. It is no longer a minority that rents, but a majority.

I’m not saying we’ll see the same kind of uprising in New Zealand, but what we saw in Brexit was that discontent was the majority of voters. If young people had actually turned up to vote, Brexit wouldn’t have happened. The same is true for New Zealand.

It is strange that there is no sense of crisis or urgency. For a lot of voters, things are just fine. The people for whom it’s not fine are not voting, and they feel disengaged.

The kind of politics that we will start to see in the next 10 years is something much more activist, the ‘urgency of now’.

The promise of democracy is to create an economy that is fit for everyone. It is about creating opportunities for everyone. Right now, particularly when it comes to housing, we are failing. We are not creating a democratic community when it comes to our housing supply because young people are locked out, because young people are going to suffer, and we know there are some big differences across the different parts of New Zealand.

It’s not going to be enough, when we’re starting from a position of crisis, to simply create more housing and hope that appeases the public. We have to be far more activist in making sure that we’re creating housing that is fit for purpose, not just for the general populace, but for the bottom half who are clearly losing out from what is going on.

We know what the causes are. I’m sick of arguing why we’re here. We know why we’re here, because we haven’t ensured enough political leadership to deal with the problems that are there.

We can’t implement the solutions unless we have political leadership, political cohesion, and endurance over the political cycle. This is a big challenge, but a big opportunity.

Shamubeel Eaqub

***

  • There has been a cumulative 500,000 gap in housing supply over the last 30 years.
  • Eaqub predicted a construction bust next year, led by banks tightening lending.
  • It’s remarkable that NZ authorities do not have proper data on foreign buyers. While he estimates 10% of purchases in Auckland are made by foreign investors, he said the main focus should be on the other 90% made by locals.
  • However, migration creates cyclical volatility that we can’t deal with; it is unbelievable that New Zealand doesn’t have a stated population policy.
  • New Zealand is still not building the right sized houses – the majority of properties being built in recent years have had four-plus bedrooms, while household sizes have grown smaller
  • The majority of New Zealand’s adult population is now renting. This could be the catalyst for a Brexit/Trump-style rising up of formerly disengaged voters – young people in our case – to engage at this year’s election.
  • New Zealand’s home ownership level is now at its lowest point since 1946.
  • We have a cultural sclerosis of buying and selling existing houses to one another.

interest.co.nz

Undoing poverty’s negative effect on brain development with cash transfers – Cameron McLeod. 

An upcoming experiment into brain development and poverty by Kimberly G Noble, associate professor of neuroscience and education at Columbia University’s Teachers College, asks whether poverty may affect the development, “the size, shape, and functioning,” of a child’s brain, and whether “a cash stipend to parents” would prevent this kind of damage.

Noble writes that “poverty places the young child’s brain at much greater risk of not going through the paces of normal development.” Children raised in poverty perform less well in school, are less likely to graduate from high school, and are less likely to continue on to college. Children raised in poverty are also more likely to be underemployed as adults. Research in sociology and neuroscience has shown that a childhood spent in poverty can result in “significant differences in the size, shape and functioning” of the brain. Can the damage done to children’s brains be negated by the intervention of a subsidy for brain health?

This most recent study’s fundamental difference from past efforts is that it explores what kind of effect “directly supplementing” the incomes of families will have on brain development. “Cash transfers, as opposed to counseling, child care and other services, have the potential to empower families to make the financial decisions they deem best for themselves and their children.” Noble’s hypothesis is that a “cascade of positive effects” will follow from the cash transfers, and that if proved correct, this has implications for public policy and “the potential to…affect the lives of millions of disadvantaged families with young children.”

Brain Trust, Kimberly G. Noble

  • Children who live in poverty tend to perform worse than peers in school on a bevy of different tests. They are less likely to graduate from high school and then continue on to college and are more apt to be underemployed once they enter the workforce.
  • Research that crosses neuroscience with sociology has begun to show that educational and occupational disadvantages that result from growing up poor can lead to significant differences in the size, shape and functioning of children’s brains.
  • Poverty’s potential to hijack normal brain development has led to plans for studying whether a simple intervention might reverse these injurious effects. A study now in the planning stages will explore if a modest subsidy can enhance brain health.

BasicIncome.org

***

The goal of Dr. Noble’s research is to better characterize socioeconomic disparities in children’s cognitive and brain development. Ongoing studies in her lab address the timing of neurocognitive disparities in infancy and early childhood, as well as the particular exposures and experiences that account for these disparities, including access to material resources, richness of language exposure, parenting style and exposure to stress. Finally, she is interested in applying this work to the design of interventions that aim to target gaps in school readiness, including early literacy, math, and self-regulation skills. She is honored to be part of a national team of social scientists and neuroscientists planning the first clinical trial of poverty reduction, which aims to estimate the causal impact of income supplementation on children’s cognitive, emotional and brain development in the first three years of life.

Columbia University

***

A short review on the link between poverty, children’s cognition and brain development, 13th March 2017

In the latest issue of the Scientific American, Kimberly Noble, associate professor in neuroscience and education, reviews her work and introduces an ambitious research project that may help understand the cause-and-effect connection between poverty and children’s brain development.

For the past 15 years, Noble and her colleagues have gathered evidence to explain how socioeconomic disparities may underlie differences in children’s cognition and brain development. In the course of their research they have found, for example, that children living in poverty tend to have reduced cognitive skills – including language, memory skills and cognitive control (Figure 1).

Figure 1. Wealth effect

More recently, they published evidence showing that the socio-economic status of parents (as assessed using parental education, income and occupation) can also predict children’s brain structure.

By measuring the cortical surface area of children’s brains (ie the area of the surface of the cortex, the outer layer of the brain which contains all the neurons), they found that lower family income was linked to smaller cortical surface area, especially in brain regions involved in language and cognitive control abilities (Figure 2 – in magenta).

Figure 2. A Brain on Poverty

In the same research, they also found that longer parental education was linked to increased hippocampus volume in children, a brain structure essential for memory processes.

Overall, Noble’s work adds to a growing body of research showing the negative relation between poverty and brain development and these findings may explain (at least in part) why children from poor families are less likely to obtain good grades at school, graduate from high-school or attend college.

What is less known, however, is the causal mechanism underlying this relationship. As Noble describes, differences in school and neighbourhood quality, chronic stress in the family home, less nurturing parenting styles or a combination of all these factors might explain the impact of poverty on brain development and cognition.

To better understand the causal effect of poverty, Noble has teamed up with economists and developmental psychologists and together, they will soon launch a large-scale experiment or “randomised controlled trial”. As part of this experiment, 1000 US women from low-income backgrounds will be recruited soon after giving birth and will be followed over a three-year period. Half of the women will receive $333 per month (if they are part of the “experimental” group) and the other half will receive $20 per month (if they are part of the “control” group). Mothers and children will be monitored throughout the study, and mothers will be able to spend the money as they wish, without any constraints.

By comparing children belonging to the experimental group to those in the control group, researchers will be able to observe how increases in family income may directly benefit cognition and brain development. They will also be able to test whether the way mothers use the extra income is a relevant factor to explain these benefits.

Noble concludes that “although income may not be the only factor that determines a child’s developmental trajectory, it may be the easiest one to alter” through social policy. And given that 25% of American children and 12% of British children are affected by poverty (as reported by UNICEF in 2012), policies designed to alleviate poverty may have the capacity to reach and improve the life chances of millions of children.

NGN looks forward to seeing the results of this large-scale experiment. We expect that this project, in association with other research studies, will improve our understanding of the link between poverty and child development, and will help design better interventions to support disadvantaged children.

Nature Groups

***

Socioeconomic inequality and children’s brain development. 

Research addresses issues at the intersection of psychology, neuroscience and public policy.

By Kimberly G. Noble, MD, PhD

Kimberly Noble, MD, PhD, is an associate professor of neuroscience and education at Teachers College, Columbia University. She received her undergraduate, graduate and medical degrees at the University of Pennsylvania. As a neuroscientist and board-certified pediatrician, she studies how inequality relates to children’s cognitive and brain development. Noble’s work has been supported by several federal and foundation grants, and she was named a “Rising Star” by the Association for Psychological Science. Together with a team of social scientists and neuroscientists from around the United States, she is planning the first clinical trial of poverty reduction to assess the causal impact of income on cognitive and brain development in early childhood.

Kimberly Noble website.

What can neuroscience tell us about why disadvantaged children are at risk for low achievement and poor mental health? How early in infancy does socioeconomic disadvantage leave an imprint on the developing brain, and what factors explain these links? How can we best apply this work to inform interventions? These and other questions are the focus of the research my colleagues and I have been addressing for the last several years.

What is socioeconomic status and why is it of interest to neuroscientists?

The developing human brain is remarkably malleable to experience. Of course, a child’s experience varies tremendously based on his or her family’s circumstances (McLoyd, 1998). And so, as neuroscientists, we can use family circumstance as a lens through which to better understand how experience relates to brain development.

Family socioeconomic status, or SES, is typically considered to include parental educational attainment, occupational prestige and income (McLoyd, 1998); subjective social status, or where one sees oneself on the social hierarchy, may also be taken into account (Adler, Epel, Castellazzo & Ickovics, 2000). A large literature has established that disparities in income and human capital are associated with substantial differences in children’s learning and school performance. For example, socioeconomic differences are observed across a range of important cognitive and achievement measures for children and adolescents, including IQ, literacy, achievement test scores and high school graduation rates (Brooks-Gunn & Duncan, 1997). These differences in achievement in turn result in dramatic differences in adult economic well-being and labor market success.

However, although outcomes such as school success are clearly critical for understanding disparities in development and cognition, they tell us little about the underlying neural mechanisms that lead to these differences. Distinct brain circuits support discrete cognitive skills, and differentiating between underlying neural substrates may point to different causal pathways and approaches for intervention (Farah et al., 2006; Hackman & Farah, 2009; Noble, McCandliss, & Farah, 2007; Raizada & Kishiyama, 2010). Studies that have used a neurocognitive framework to investigate disparities have documented that children and adolescents from socioeconomically disadvantaged backgrounds tend to perform worse than their more advantaged peers on several domains, most notably in language, memory, self-regulation and socio-emotional processing (Hackman & Farah, 2009; Hackman, Farah, & Meaney, 2010; Noble et al., 2007; Noble, Norman, & Farah, 2005; Raizada & Kishiyama, 2010).

Family socioeconomic circumstance and children’s brain structure

More recently, we and other neuroscientists have extended this line of research to examine how family socioeconomic circumstances relate to differences in the structure of the brain itself. For example, in the largest study of its kind to date, we analyzed the brain structure of 1099 children and adolescents recruited from socioeconomically diverse homes from ten sites across the United States (Noble, Houston et al., 2015). We were specifically interested in the structure of the cerebral cortex, or the outer layer of brain cells that does most of the cognitive “heavy lifting.” We found that both parental educational attainment and family income accounted for differences in the surface area, or size of the “nooks and crannies” of the cerebral cortex. These associations were found across much of the brain, but were particularly pronounced in areas that support language and self-regulation — two of the very skills that have been repeatedly documented to show large differences along socioeconomic lines.

Several points about these findings are worth noting. First, genetic ancestry, or the proportion of ancestral descent for each of six major continental populations, was held constant in the analyses. Thus, although race and SES tend to be confounded in the U.S., we can say that the socioeconomic disparities in brain structure that we observed were independent of genetically-defined race. Second, we observed dramatic individual differences, or variation from person to person. That is, there were many children and adolescents from disadvantaged homes who had larger cortical surface areas, and many children from more advantaged homes who had smaller surface areas. This means that our research team could in no way accurately predict a child’s brain size simply by knowing his or her family income alone. Finally, the relationship between family income and surface area was nonlinear, such that the steepest gradient was seen at the lowest end of the income spectrum. That is, dollar for dollar, differences in family income were associated with proportionately greater differences in brain structure among the most disadvantaged families.

More recently, we also examined the thickness of the cerebral cortex in the same sample (Piccolo, et al., 2016). In general, as we get older, our cortices tend to get thinner. Specifically, cortical thickness decreases rapidly in childhood and early adolescence, followed by a more gradual thinning, and ultimately plateauing in early- to mid-adulthood (Raznahan et al., 2011; Schnack et al., 2014; Sowell et al., 2003). Our work suggests that family socioeconomic circumstance may moderate this trajectory. 

Specifically, at lower levels of family SES, we observed relatively steep age-related decreases in cortical thickness earlier in childhood, and subsequent leveling off during adolescence. In contrast, at higher levels of family SES, we observed more gradual age-related reductions in cortical thickness through at least late adolescence. We speculated that these findings may reflect an abbreviated period of cortical thinning in lower SES environments, relative to a more prolonged period of cortical thinning in higher SES environments. It is possible that socioeconomic disadvantage is a proxy for experiences that narrow the sensitive period, or time window for certain aspects of brain development that are malleable to environmental influences, thereby accelerating maturation (Tottenham, 2015).

Are these socioeconomic differences in brain structure clinically meaningful? Early work would suggest so. In our work, we have found that differences in cortical surface area partially accounted for links between family income and children’s executive function skills (Noble, Houston et al., 2015). Independent work in other labs has suggested that differences in brain structure may account for between 15 and 44 percent of the family income-related achievement gap in adolescence (Hair, Hanson, Wolfe & Pollak, 2015; Mackey et al., 2015). This line of research is still in its infancy, however, and several outstanding questions remain to be addressed.

How early are socioeconomic disparities in brain development detectable?

By the start of school, dramatic socioeconomic disparities in children’s cognitive functioning are already evident, and indeed, several studies have found that socioeconomic disparities in language (Fernald, Marchman & Weisleder, 2013; Noble, Engelhardt et al., 2015; Rowe & Goldin-Meadow, 2009) and memory (Noble, Engelhardt et al., 2015) are already present by the second year of life. But methodologies that assess brain function or structure may be more sensitive to differences than are tests of behavior. This raises the question of just how early we can detect socioeconomic disparities in the structure or function of children’s brains.

 One group reported socioeconomic differences in resting electroencephalogram (EEG) activity — which indexes electrical activity of the brain as measured at the scalp — as early as 6–9 months of age (Tomalski et al., 2013). Recent work by our group, however, found no correlation between SES and the same EEG measures within the first four days following birth (Brito, Fifer, Myers, Elliott & Noble, 2016), raising the possibility that some of these differences in brain function may emerge in part as a result of early differences in postnatal experience. Of course, a longitudinal study assessing both the prenatal and postnatal environments would be necessary to formally test this hypothesis. Furthermore, another group recently reported that, among a group of African-American, female infants imaged at 5 weeks of age, socioeconomic disadvantage was associated with smaller cortical and deep gray matter volumes (Betancourt et al., 2015). It is thus also likely that at least some socioeconomic differences in brain development are the result of socioeconomic differences in the prenatal environment (e.g., maternal diet, stress) and/or genetic differences.

Disentangling links among socioeconomic disparities, modifiable experiences and brain development represents a clear priority for future research. Are the associations between SES and brain development the result of differences in experiences that can serve as the targets of intervention, such as differences in nutrition, housing and neighborhood quality, parenting style, family stress and/or education? Certainly, the preponderance of social science evidence would suggest that such differences in experience are likely to account at least in part for differences in child and adolescent development (Duncan & Magnuson, 2012). However, few studies have directly examined links among SES, experience and the brain (Luby et al., 2013). In my lab, we are actively focusing on these issues, with specific interest in how chronic stress and the home language environment may, in part, explain our findings.

How can this work inform interventions?

Quite a few interventions aim to reduce socioeconomic disparities in children’s achievement. Whether school-based or home-based, many are quite effective, though they frequently face challenges: high-quality interventions are expensive, difficult to scale up and often suffer from “fadeout,” the phenomenon whereby the positive effects of the intervention dwindle with time once children are no longer receiving services.

What about the effects of directly supplementing family income? Rather than providing services, such “cash transfer” interventions have the potential to empower families to make the financial decisions they deem best for themselves and their children. Experimental and quasi-experimental studies in the social sciences, both domestically and in the developing world, have suggested the promise of direct income supplementation (Duncan & Magnuson, 2012).

To date, linkages between poverty and brain development have been entirely correlational in nature; the field of neuroscience is silent on the causal connections between poverty and brain development. As such, I am pleased to be part of a team of social scientists and neuroscientists who are currently planning and raising funds to launch the first-ever randomized experiment testing the causal connections between poverty reduction and brain development.

The ambition of this study is large, though the premise is simple. We plan to recruit 1,000 low-income U.S. mothers at the time of their child’s birth. Mothers will be randomized to receive a large monthly income supplement or a nominal monthly income supplement. Families will be tracked longitudinally to definitively assess the causal impact of this unconditional cash transfer on cognitive and brain development in the first three years following birth, when we believe the developing brain is most malleable to experience.

We hypothesize that increased family income will trigger a cascade of positive effects throughout the family system. As a result, across development, children will be better positioned to learn foundational skills. If our hypotheses are borne out, this proposed randomized trial has the potential to inform social policies that affect the lives of millions of disadvantaged families with young children. While income may not be the only or even the most important factor in determining children’s developmental trajectories, it may be the most manipulable from a policy perspective.

American Psychological Association

Getting Basic Income Right – Kemal Dervis.  

Universal basic income (UBI) schemes are getting a lot of attention these days. Of course, the idea – to provide all legal residents of a country a standard sum of cash unconnected to work – is not new. The philosopher Thomas More advocated it back in the sixteenth century, and many others, including Milton Friedman on the right and John Kenneth Galbraith on the left, have promoted variants of it over the years. But the idea has lately been gaining much more traction, with some regarding it as a solution to today’s technology-driven economic disruptions. Can it work?

The appeal of a UBI is rooted in three key features: it provides a basic social “floor” to all citizens; it lets people choose how to use that support; and it could help to streamline the bureaucracy on which many social-support programs depend. A UBI would also be totally “portable,” thereby helping citizens who change jobs frequently, cannot depend on a long-term employer for social insurance, or are self-employed.

Viewing a UBI as a straightforward means to limit poverty, many on the left have made it part of their program. Many libertarians like the concept, because it enables – indeed, requires – recipients to choose freely how to spend the money. Even very wealthy people sometimes support it, because it would enable them to go to bed knowing that their taxes had finally and efficiently eradicated extreme poverty.

The UBI concept also appeals to those who focus on how economic development can replace at least some of the in-kind aid that is now given to the poor. Already, various local social programs in Latin America contain elements of the UBI idea, though they are targeted at the poor and usually conditional on certain behavior, such as having children regularly attend school.

But implementing a full-blown UBI would be difficult, not least because it would require answering a number of complex questions about goals and priorities. Perhaps the most obvious balancing act relates to how much money is actually delivered to each citizen (or legal resident).

In the United States and Europe, a UBI of, say, $2,000 per year would not do much, except perhaps alleviate the most extreme poverty, even if it were added to existing social-welfare programs. A UBI of $10,000 would make a real difference; but, depending on how many people qualify, that could cost as much as 10% or 15% of GDP – a huge fiscal outlay, particularly if it came on top of existing social programs.
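The back-of-envelope arithmetic behind those percentages is easy to check. A minimal sketch, assuming round figures that are not from the article (a US population of roughly 330 million and GDP of roughly $19 trillion):

```python
# Rough cost of a universal basic income as a share of GDP.
# Assumed figures (not from the article): US population ~330 million,
# US GDP ~$19 trillion.
POPULATION = 330_000_000
GDP = 19_000_000_000_000  # dollars per year

def ubi_cost_share(annual_payment: float) -> float:
    """Total annual cost of a fully universal payment, as a fraction of GDP."""
    return POPULATION * annual_payment / GDP

for payment in (2_000, 10_000):
    print(f"${payment:,}/year -> {ubi_cost_share(payment):.1%} of GDP")
```

A fully universal $10,000 payment lands near the top of the article’s 10–15% range; restricting eligibility (for example, to adults only) pulls the figure down, which is exactly the “depending on how many people qualify” caveat.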

Even with a significant increase in tax revenue, such a high basic income would have to be packaged with gradual reductions in some existing public spending – for example, on unemployment benefits, education, health, transportation, and housing – to be fiscally feasible. The system that would ultimately take shape would depend on how these components were balanced.

In today’s labor market, which is being transformed by digital technologies, one of the most important features of a UBI is portability. Indeed, to insist on greater labor-market flexibility, without ensuring that workers, who face a constant need to adapt to technological disruptions, can rely on continuous social-safety nets, is to advocate a lopsided world in which employers have all the flexibility and employees have very little.

Making modern labor markets flexible for employers and employees alike would require a UBI’s essential features, like portability and free choice. But only the most extreme libertarian would argue that the money should be handed out without any policy guidance. It would be more advisable to create a complementary active social policy that guides, to some extent, the use of the benefits.

Here, a proposal that has emerged in France is a step in the right direction. The idea is to endow each citizen with a personal social account containing partly redeemable “points.” Such accounts would work something like a savings account, with their owners augmenting a substantial public contribution to them by working, studying, or performing certain types of national service. The accounts could be drawn upon in times of need, particularly for training and re-skilling, though the amount that could be withdrawn would be guided by predetermined “prices” and limited to a certain amount in a given period of time.
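The mechanics of such a points account can be sketched in a few lines. Everything here is hypothetical (class name, point values, the yearly cap), since the French proposal is only described in outline; the sketch just shows the three rules named above: a public contribution topped up by work or study, withdrawals at predetermined “prices”, and a limit per period.

```python
# Hypothetical sketch of a personal social account with redeemable points:
# a public contribution is augmented by working, studying or national
# service, and withdrawals are priced per use and capped per year.
class SocialAccount:
    def __init__(self, public_contribution: int, yearly_withdrawal_cap: int):
        self.points = public_contribution
        self.cap = yearly_withdrawal_cap
        self.withdrawn_this_year = 0

    def earn(self, points: int) -> None:
        """Augment the account through work, study or national service."""
        self.points += points

    def redeem(self, activity_price: int) -> bool:
        """Draw on the account at a predetermined 'price', within the yearly cap."""
        if activity_price > self.points:
            return False  # insufficient balance
        if self.withdrawn_this_year + activity_price > self.cap:
            return False  # yearly withdrawal limit reached
        self.points -= activity_price
        self.withdrawn_this_year += activity_price
        return True

    def start_new_year(self) -> None:
        self.withdrawn_this_year = 0

# Example: 100 points of public contribution, capped at 60 points/year.
acct = SocialAccount(public_contribution=100, yearly_withdrawal_cap=60)
acct.earn(20)           # points earned through part-time study
print(acct.redeem(50))  # a training course within the cap -> True
print(acct.redeem(50))  # would exceed this year's cap -> False
```

The cap-and-price structure is what distinguishes this from a pure cash transfer: the balance is portable and freely timed, but its uses are steered by policy.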

The approach seems like a good compromise between portability and personal choice, on the one hand, and sufficient social-policy guidance, on the other. It contains elements of both US social security and individual retirement accounts, while reflecting a commitment to training and reskilling. Such a program could be combined with a more flexible retirement system, and thus developed into a modern and comprehensive social-solidarity system.

The challenge now – for the developed economies, at least – is to develop stronger and more streamlined social-solidarity systems, create room for more individual choice in the use of benefits, and make benefits portable. Only by striking the right balance between individual choice and social-policy guidance can modern economies build the social-safety programs they need.

Social Europe

Abuse breeds child abusers – Jarrod Gilbert. 

Often when I’m doing research I dance a silly jig when I gleefully unearth a gem of information hitherto unknown or long forgotten. In studying the violent deaths of kids that doesn’t happen.

There was no dance of joy when I discovered New Zealanders are more likely to be homicide victims in their first tender years than at any other time in their lives. But nothing numbs you like the photographs of dead children.

Little bodies lying there limp, with little hands and little fingers, covered in scratches and an array of bruises, some dark black and some fading, looking as vulnerable dead as they were when they were alive.

James Whakaruru’s misery ended when he was killed in 1999. He had endured four years of life and that was all he could take. He was hit with a small hammer, a jug cord and a vacuum cleaner hose. During one beating his mind was so confused he stared blankly ahead. His tormentor responded by poking him in the eyes. It was a stomping that eventually switched out his little light. It was a case that even the Mongrel Mob condemned, calling the cruelty “amongst the lowest of any act”.

An inquiry by the Commissioner for Children found a number of failings by state agencies, which were all too aware of the boy’s troubled existence. The Commissioner said James became a hero because changes made to Government agencies would save lives in the future. Yet such horrors have continued. My colleague Greg Newbold has found that on average nine children (under 15) have been killed as a result of maltreatment since 1992 and the rate has not abated in recent years. In 2015, there were 14 such deaths, one of which was three-year-old Moko Rangitoheriri, or baby Moko as we knew him when he gained posthumous celebrity.

Moko’s life was the same as James’s, and he too died in agony; he endured weeks of being beaten, kicked, and smeared with faeces. That was the short life he knew. Most of us will struggle to comprehend these acts but we are desperate to stop them. Desperate to ensure state agencies are capable of intervening to protect those who cannot protect themselves and, through no fault of their own, are subjected to cruelty by those who are meant to protect them.

The reasons for intervening don’t stop with the imperative to save young lives. For every child killed there are dozens who live wretched existences and from this cohort of unfortunates will come the next generation of abusers.  Solving the problems of today, then, is not just a moral imperative but is also about producing a positive ripple effect.

And this is why, in the cases of James Whakaruru and baby Moko, the best and most efficient time for intervention was not in the period leading up to their abuse, but many years before they were born. The men involved in each of those killings came from the same family. And it seems their lives were transient and tragic: one spent time in the now infamous Epuni Boys home, which is ground zero for calls for an inquiry into state care abuse (and incidentally the birthplace of the Mongrel Mob).

Once young victims themselves, those boys crawled into adulthood and became violent men capable of imparting cruelty onto kids in their care.

This cycle of abuse is well known, yet state spending on the problem is poorly aligned to it, and our targeting of the problem is reactionary and punitive rather than proactive and preventative.

Of the $1.4 billion we spend on family and sexual violence annually, less than 10 per cent is spent on interventions, of which just 1.5 per cent is spent on primary prevention. The morality of that is questionable, the economics even more so.

Not only must things be approached differently but there needs to be greater urgency in our thinking. It’s perhaps trite to say, but if nine New Zealanders were killed every year in acts of terrorism politicians would never stop talking about it and it would be priority number one.

In an election year, that’s exactly where this issue should be. If the kids in violent homes had a voice, that’s what they’d be saying.

But if the details of such deaths don’t move our political leaders to urgent action, I rather fear nothing will. Maybe they should be made to look at the photographs.

• Dr Jarrod Gilbert is a sociologist at the University of Canterbury and the lead researcher at Independent Research Solutions.

The 1930s were humanity’s darkest, bloodiest hour. Are you paying attention? – Jonathan Freedland. 

Even to mention the 1930s is to evoke the period when human civilisation entered its darkest, bloodiest chapter. No case needs to be argued; just to name the decade is enough. It is a byword for mass poverty, violent extremism and the gathering storm of world war. “The 1930s” is not so much a label for a period of time as it is rhetorical shorthand – a two-word warning from history.

Witness the impact of an otherwise boilerplate broadcast by the Prince of Wales last December that made headlines. “Prince Charles warns of return to the ‘dark days of the 1930s’ in Thought for the Day message.” Or consider the reflex response to reports that Donald Trump was to maintain his own private security force even once he had reached the White House. The Nobel prize-winning economist Paul Krugman’s tweet was typical: “That 1930s show returns.”

Because that decade was scarred by multiple evils, the phrase can be used to conjure up serial spectres. It has an international meaning, with a vocabulary that centres on Hitler and Nazism and the failure to resist them: from brownshirts and Goebbels to appeasement, Munich and Chamberlain. And it has a domestic meaning, with a lexicon and imagery that refers to the Great Depression: the dust bowl, soup kitchens, the dole queue and Jarrow. It was this second association that gave such power to a statement from the usually dry Office for Budget Responsibility, following then-chancellor George Osborne’s autumn statement in 2014. The OBR warned that public spending would be at its lowest level since the 1930s; the political damage was enormous and instant.

In recent months, the 1930s have been invoked more than ever, not to describe some faraway menace but to warn of shifts under way in both Europe and the United States. The surge of populist, nationalist movements in Europe, and their apparent counterpart in the US, has stirred unhappy memories and has, perhaps inevitably, had commentators and others reaching for the historical yardstick to see if today measures up to 80 years ago.

Why is it the 1930s to which we return, again and again? For some sceptics, the answer is obvious: it’s the only history anybody knows. According to this jaundiced view of the British school curriculum, Hitler and Nazis long ago displaced Tudors and Stuarts as the core, compulsory subjects of the past. When we fumble in the dark for a historical precedent, our hands keep reaching for the 30s because they at least come with a little light.

The more generous explanation centres on the fact that that period, taken together with the first half of the 1940s, represents a kind of nadir in human affairs. The Depression was, as Larry Elliott wrote last week, “the biggest setback to the global economy since the dawn of the modern industrial age”, leaving 34 million Americans with no income. The hyperinflation experienced in Germany – when a thief would steal a laundry-basket full of cash, chucking away the money in order to keep the more valuable basket – is the stuff of legend. And the Depression paved the way for history’s bloodiest conflict, the second world war, which left, by some estimates, a mind-numbing 60 million people dead. At its centre was the Holocaust, the industrialised slaughter of 6 million Jews by the Nazis: an attempt at the annihilation of an entire people.

In these multiple ways, then, the 1930s function as a historical rock bottom, a demonstration of how low humanity can descend. The decade’s illustrative power as a moral ultimate accounts for why it is deployed so fervently and so often.

Less abstractly, if we keep returning to that period, it’s partly because it can justifiably claim to be the foundation stone of our modern world. The international and economic architecture that still stands today – even if it currently looks shaky and threatened – was built in reaction to the havoc wreaked in the 30s and immediately afterwards. The United Nations, the European Union, the International Monetary Fund, Bretton Woods: these were all born of a resolve not to repeat the mistakes of the 30s, whether those mistakes be rampant nationalism or beggar-my-neighbour protectionism. The world of 2017 is shaped by the trauma of the 1930s.

One telling, human illustration came in recent global polling for the Journal of Democracy, which showed an alarming decline in the number of people who believed it was “essential” to live in a democracy. From Sweden to the US, from Britain to Australia, only one in four of those born in the 1980s regarded democracy as essential. Among those born in the 1930s, the figure was at or above 75%. Put another way, those who were born into the hurricane have no desire to feel its wrath again.

Most of these dynamics are long established, but now there is another element at work. As the 30s move from living memory into history, as the hurricane moves further away, so what had once seemed solid and fixed – specifically, the view that that was an era of great suffering and pain, whose enduring value is as an eternal warning – becomes contested, even upended.

Witness the remarks of Steve Bannon, chief strategist in Donald Trump’s White House and the former chairman of the far-right Breitbart website. In an interview with the Hollywood Reporter, Bannon promised that the Trump era would be “as exciting as the 1930s”. (In the same interview, he said “Darkness is good” – citing Satan, Darth Vader and Dick Cheney as examples.)

“Exciting” is not how the 1930s are usually remembered, but Bannon did not choose his words by accident. He is widely credited with the authorship of Trump’s inaugural address, which twice used the slogan “America first”. That phrase has long been off-limits in US discourse, because it was the name of the movement – packed with nativists and antisemites, and personified by the celebrity aviator Charles Lindbergh – that sought to keep the US out of the war against Nazi Germany and to make an accommodation with Hitler. Bannon, who considers himself a student of history, will be fully aware of that 1930s association – but embraced it anyway.

That makes him an outlier in the US, but one with powerful allies beyond America’s shores. Timothy Snyder, professor of history at Yale and the author of On Tyranny: Twenty Lessons from the Twentieth Century, notes that European nationalists are also keen to overturn the previously consensual view of the 30s as a period of shame, never to be repeated. Snyder mentions Hungary’s prime minister, Viktor Orbán, who avowedly seeks the creation of an “illiberal” state, and who, says Snyder, “looks fondly on that period as one of healthy national consciousness”.

The more arresting example is, perhaps inevitably, Vladimir Putin. Snyder notes Putin’s energetic rehabilitation of Ivan Ilyin, a philosopher of Russian fascism influential eight decades ago. Putin has exhumed Ilyin both metaphorically and literally, digging up and moving his remains from Switzerland to Russia.

Among other things, Ilyin wrote that individuality was evil; that the “variety of human beings” represented a failure of God to complete creation; that what mattered was not individual people but the “living totality” of the nation; that Hitler and Mussolini were exemplary leaders who were saving Europe by dissolving democracy; and that fascist holy Russia ought to be governed by a “national dictator”. Ilyin spent the 30s exiled from the Soviet Union, but Putin has brought him back, quoting him in his speeches and laying flowers on his grave.
Still, Putin, Orbán and Bannon apart, when most people compare the current situation to that of the 1930s, they don’t mean it as a compliment. And the parallel has felt irresistible, so that when Trump first imposed his travel ban, for example, the instant comparison was with the door being closed to refugees from Nazi Germany in the 30s. (Theresa May was on the receiving end of the same comparison when she quietly closed off the Dubs route to child refugees from Syria.)

When Trump attacked the media as purveyors of “fake news”, the ready parallel was Hitler’s slamming of the newspapers as the Lügenpresse, the lying press (a term used by today’s German far right). When the Daily Mail branded a panel of high court judges “enemies of the people”, for their ruling that parliament needed to be consulted on Brexit, those who were outraged by the phrase turned to their collected works of European history, looking for the chapters on the 1930s.

The Great Depression

So the reflex is well-honed. But is it sound? Does any comparison of today and the 1930s hold up?

The starting point is surely economic, not least because the one thing everyone knows about the 30s – and which is common to both the US and European experiences of that decade – is the Great Depression. The current convulsions can be traced back to the crash of 2008, but the impact of that event and the shock that defined the 30s are not an even match. When discussing our own time, Krugman speaks instead of the Great Recession: a huge and shaping event, but one whose impact – measured, for example, in terms of mass unemployment – is not on the same scale. US joblessness reached 25% in the 1930s; even in the depths of 2009 it never broke the 10% barrier.

The political sphere reveals another mismatch between then and now. The 30s were characterised by ultra-nationalist and fascist movements seizing power in leading nations: Germany, Italy and Spain most obviously. The world is waiting nervously for the result of France’s presidential election in May: victory for Marine Le Pen would be seized on as the clearest proof yet that the spirit of the 30s is resurgent.

There is similar apprehension that Geert Wilders, who speaks of ridding the country of “Moroccan scum”, has led the polls ahead of Holland’s general election on Wednesday. And plenty of liberals will be perfectly content for the Christian Democrat Angela Merkel to prevail over her Social Democratic rival, Martin Schulz, just so long as the far-right Alternative für Deutschland makes no ground. Still, so far and as things stand, in Europe only Hungary and Poland have governments that seem doctrinally akin to those that flourished in the 30s.

That leaves the US, which dodged the bullet of fascistic rule in the 30s – although at times the success of the America First movement, which at its peak could count on more than 800,000 paid-up members, suggested such an outcome was far from impossible. (Hence the intended irony in the title of Sinclair Lewis’s 1935 novel, It Can’t Happen Here.)

Donald Trump has certainly had Americans reaching for their history textbooks, fearful that his admiration for strongmen, his contempt for restraints on executive authority, and his demonisation of minorities and foreigners means he marches in step with the demagogues of the 30s.

But even those most anxious about Trump still focus on the form the new presidency could take rather than the one it is already taking. David Frum, a speechwriter to George W. Bush, wrote a much-noticed essay for the Atlantic titled “How to Build an Autocracy”. It was billed as setting out “the playbook Donald Trump could use to set the country down a path towards illiberalism”. He was not arguing that Trump had already embarked on that route, just that he could (so long as the media came to heel and the public grew weary and worn down, shrugging in the face of obvious lies and persuaded that greater security was worth the price of lost freedoms).

Similarly, Trump has unloaded rhetorically on the free press – castigating them, Mail-style, as “enemies of the people” – but he has not closed down any newspapers. He meted out the same treatment via Twitter to a court that blocked his travel ban, rounding on the “so-called judge” – but he did eventually succumb to the courts’ verdict and withdrew his original executive order. He did not have the dissenting judges sacked or imprisoned; he has not moved to register or intern every Muslim citizen in the US; he has not suggested they wear identifying symbols.

These are crumbs of comfort; they are not intended to minimise the real danger Trump represents to the fundamental norms that underpin liberal democracy. Rather, the point is that we have not reached the 1930s yet. Those sounding the alarm are suggesting only that we may be travelling in that direction – which is bad enough.

Two further contrasts between now and the 1930s, one from each end of the sociological spectrum, are instructive. First, and particularly relevant to the US, is to ask: who is on the streets? In the 30s, much of the conflict was played out at ground level, with marchers and quasi-military forces duelling for control. The clashes of the Brownshirts with communists and socialists played a crucial part in the rise of the Nazis. (A turning point in the defeat of Oswald Mosley, Britain’s own little Hitler, came with his humbling in London’s East End, at the 1936 battle of Cable Street.)

But those taking to the streets today – so far – have tended to be opponents of the lurch towards extreme nationalism. In the US, anti-Trump movements – styling themselves, in a conscious nod to the 1930s, as “the resistance” – have filled city squares and plazas. The Women’s March led the way on the first day of the Trump presidency; then those protesters and others flocked to airports in huge numbers a week later, to obstruct the refugee ban. Those demonstrations have continued, and they supply an important contrast with 80 years ago. Back then, it was the fascists who were out first – and in force.

Snyder notes another key difference. “In the 1930s, all the stylish people were fascists: the film critics, the poets and so on.” He is speaking chiefly about Germany and Italy, and doubtless exaggerates to make his point, but he is right that today “most cultural figures tend to be against”. There are exceptions – Le Pen has her celebrity admirers – but Snyder speaks accurately when he says that now, in contrast with the 30s, there are “few who see fascism as a creative cultural force”.

Fear and loathing

So much for where the lines between then and now diverge. Where do they run in parallel?

The exercise is made complicated by the fact that ultra-nationalists are, so far, largely out of power where they ruled in the 30s – namely, Europe – and in power in the place where they were shut out in that decade, namely the US. It means that Trump has to be compared either to US movements that were strong but ultimately defeated, such as the America First Committee, or to those US figures who never governed on the national stage.

In that category stands Huey Long, the Louisiana strongman, who ruled that state as a personal fiefdom (and who was widely seen as the inspiration for the White House dictator at the heart of the Lewis novel).

“He was immensely popular,” says Tony Badger, former professor of American history at the University of Cambridge. Long would engage in the personal abuse of his opponents, often deploying colourful language aimed at mocking their physical characteristics. The judges were a frequent Long target, to the extent that he hounded one out of office – with fateful consequences.

Long went over the heads of the hated press, communicating directly with the voters via a medium he could control completely. In Trump’s day, that is Twitter, but for Long it was the establishment of his own newspaper, the Louisiana Progress (later the American Progress) – which Long had delivered via the state’s highway patrol and which he commanded be printed on rough paper, so that, says Badger, “his constituents could use it in the toilet”.

All this was tolerated by Long’s devotees because they lapped up his message of economic populism, captured by the slogan: “Share Our Wealth”. Tellingly, that resonated not with the very poorest – who tended to vote for Roosevelt, just as those earning below $50,000 voted for Hillary Clinton in 2016 – but with “the men who had jobs or had just lost them, whose wages had eroded and who felt they had lost out and been left behind”. That description of Badger’s could apply just as well to the demographic that today sees Trump as its champion.

Long never made it to the White House. In 1935, one month after announcing his bid for the presidency, he was assassinated, shot by the son-in-law of the judge Long had sought to remove from the bench. It’s a useful reminder that, no matter how hate-filled and divided we consider US politics now, the 30s were full of their own fear and loathing.

“I welcome their hatred,” Roosevelt would say of his opponents on the right. Nativist xenophobia was intense, even if most immigration had come to a halt with legislation passed in the previous decade. Catholics from eastern Europe were the target of much of that suspicion, while Lindbergh and the America Firsters played on enduring antisemitism.

This, remember, was in the midst of the Great Depression, when one in four US workers was out of a job. And surely this is the crucial distinction between then and now, between the Long phenomenon and Trump. As Badger summarises: “There was a real crisis then, whereas Trump’s is manufactured.”

And yet, scholars of the period are still hearing the insistent beep of their early warning systems. An immediate point of connection is globalisation, which is less novel than we might think. For Snyder, the 30s marked the collapse of the first globalisation, defined as an era in which a nation’s wealth becomes ever more dependent on exports. That pattern had been growing steadily more entrenched since the 1870s (just as the second globalisation took wing in the 1970s). Then, as now, it had spawned a corresponding ideology – a faith in liberal free trade as a global panacea – with, perhaps, the English philosopher Herbert Spencer in the role of the End of History essayist Francis Fukuyama. By the 1930s, and thanks to the Depression, that faith in globalisation’s ability to spread the wealth evenly had shattered. This time around, disillusionment has come a decade or so ahead of schedule.

The second loud alarm is clearly heard in the hostility to those deemed outsiders. Of course, the designated alien changes from generation to generation, but the impulse is the same: to see the family next door not as neighbours but as agents of some heinous worldwide scheme, designed to deprive you of peace, prosperity or what is rightfully yours. In 30s Europe, that was Jews. In 30s America, it was eastern Europeans and Jews. In today’s Europe, it’s Muslims. In America, it’s Muslims and Mexicans (with a nod from the so-called alt-right towards Jews). Then and now, the pattern is the same: an attempt to refashion the pain inflicted by globalisation and its discontents as the wilful act of a hated group of individuals. No need to grasp difficult, abstract questions of economic policy. We just need to banish that lot, over there.

The third warning sign, and it’s a necessary companion of the second, is a growing impatience with the rule of law and with democracy. “In the 1930s, many, perhaps even most, educated people had reached the conclusion that democracy was a spent force,” says Snyder. There were plenty of socialist intellectuals ready to profess their admiration for the efficiency of Soviet industrialisation under Stalin, just as rightwing thinkers were impressed by Hitler’s capacity for state action. In our own time, that generational plunge in the numbers regarding democracy as “essential” suggests a troubling echo.

Today’s European nationalists exhibit a similar impatience, especially with the rule of law: think of the Brexiters’ insistence that nothing can be allowed to impede “the will of the people”. As for Trump, it’s striking how very rarely he mentions democracy, still less praises it. “I alone can fix it” is his doctrine – the creed of the autocrat.

The geopolitical equivalent is a departure from, or even contempt for, the international rules-based system that has held since 1945 – in which trade, borders and the seas are loosely and imperfectly policed by multilateral institutions such as the UN, the EU and the World Trade Organisation. Admittedly, the international system was weaker to start with in the 30s, but it lay in pieces by the decade’s end: both Hitler and Stalin decided that the global rules no longer applied to them, that they could break them with impunity and get on with the business of empire-building.

If there’s a common thread linking 21st-century European nationalists to each other and to Trump, it is a similar, shared contempt for the structures that have bound together, and restrained, the principal world powers since the last war. Naturally, Le Pen and Wilders want to follow the Brexit lead and leave, or else break up, the EU. And, no less naturally, Trump supports them – as well as regarding Nato as “obsolete” and the UN as an encumbrance to US power (even if his subordinates rush to foreign capitals to say the opposite).

For historians of the period, the 1930s are always worthy of study because the decade proves that systems – including democratic republics – which had seemed solid and robust can collapse. That fate is possible, even in advanced, sophisticated societies. The warning never gets old.

But when we contemplate our forebears from eight decades ago, we should recall one crucial advantage we have over them. We have what they lacked. We have the memory of the 1930s. We can learn the period’s lessons and avoid its mistakes. Of course, cheap comparisons coarsen our collective conversation. But having a keen ear tuned to the echoes of a past that brought such horror? That is not just our right. It is surely our duty.

The Guardian