Category Archives: Poverty & Inequality

New Zealand’s political leadership has failed for decades on housing policy – Shamubeel Eaqub. 

New Zealand’s political leadership has failed for decades on housing policy, leading to the rise of a Victorian-style landed gentry, social cohesion coming under immense pressure and a cumulative undersupply of half a million houses over the last 30 years.

House prices are at the highest level they have ever been. And they have risen really, really fast since the 90s, but more so since the early 2000s and have far outstripped every fundamental that we can think of.

After nearly a century of rising home ownership in New Zealand, since 1991 home ownership has been falling. In the last census, the home ownership rate was at its lowest level since 1956. And by my estimate for the end of 2016, it’s at the lowest level since 1946.

We’ve gone back a long way in terms of the promise and the social pact in New Zealand that home ownership is good, and if you work hard you’re going to be able to afford a house.

The reality is that that social pact, that rite of passage, has not been true for many, many decades. The solutions are going to be difficult and they are going to take time.

Before you come and tell me that you paid 20% interest rates, the reality is that, yes, interest rates are much lower. But the really big problem is that house prices have risen so much that it is now almost impossible to save for the deposit. In the early 1990s, people could save a deposit and pay off the mortgage in about 20-30 years. Fast forward to today, and that’s more like 50 years. How long do you want to work to pay off your mortgage?
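
To make the deposit half of that arithmetic concrete, here is a minimal back-of-the-envelope sketch. It is not Eaqub’s own model: the price-to-income ratios and the savings rate are hypothetical round numbers chosen only to show how the years needed to save a 20% deposit stretch as prices outrun incomes (the repayment term stretches in the same way once the mortgage is taken on).

```python
# Hypothetical illustration only: years needed to save a 20% deposit if a
# household can put aside 10% of its gross income each year.
# Income is normalised to 1, so prices are expressed as multiples of income.

def years_to_save_deposit(price_to_income, deposit_share=0.20, savings_rate=0.10):
    deposit = price_to_income * deposit_share   # deposit, in years of income
    return deposit / savings_rate               # years of saving required

print(years_to_save_deposit(3))   # a 1990s-style price-to-income ratio -> 6.0 years
print(years_to_save_deposit(9))   # an Auckland-style ratio today       -> 18.0 years
```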

What we’re talking about is the rise of Generation Rent. Those who manage to buy houses are in mortgage slavery for a long period of time.

There is a widening societal gap. If younger generations want to access housing, it’s not enough to have a job, nor enough to have a good job. You must now have parents that are wealthy, and home-owners too. The idea of New Zealand being an egalitarian country is no longer true. The kind of societal divide we’re talking about is very Victorian. We’re in fact talking about the rise of a landed gentry.

For those born after the 1980s, the chance of doing better than your parents is less than 50%.

What we’re creating is a country where opportunities, when it comes to things like housing, are going to be more limited for our children than they were for us. I worry that what we’re creating in New Zealand is a social divide that is only going to keep growing. Housing is only one manifestation of that divide.

There has been a change in philosophy in what underpins the housing market. One very good example is what we have done with our social housing sector.

Housing NZ started building social housing in the late 1930s and stock accumulated over the next 50-60 years to a peak in 1991.

Since then we have not added more social housing. On a per capita basis we have the least social housing in New Zealand since the 1940s.

This is an ideological position where we do not want to create housing supply for the poor. We don’t want to. This is not about politicians. This is a reflection on us. It is our ideology, it is our politics. Our politicians are doing our bidding. The society that we’re living in today does not want to invest in the bottom half of our society.

The really big kicker has been credit. Significant reductions in mortgage rates over time have driven demand for housing. But we have misallocated our credit. We’re creating more and more debt, but most of that debt is chasing the existing houses. We’re buying and selling from each other rather than creating something new. The housing boom could not have happened on its own. The banking sector facilitated it. We have seen more and more credit being created and more of that credit is now more likely to go towards buying and selling houses from each other rather than funding businesses or building houses.

One of the saddest stories at the moment is that, even though we have an acute housing shortage in Auckland, the hardest thing to find funding for is new development. When the banks pull back credit, the first things to go are the riskiest parts of the market.

Seasonally adjusted house sales in Auckland are at the lowest level since 2011. This is worrying because what happens in the property market expands to the economy, consents and the construction sector.

I fully expect a construction bust next year. We are going to have a construction bust before we have a housing bust. We haven’t built enough houses for a very long period of time. And if we’re going to keep not building enough houses, I’m not confident that whatever correction we have in the housing market is going to last.

New money created in the economy is largely chasing the property market. Household debt to GDP has been rising steadily since the 1990s. People are taking on more debt, but banks have started to cut back on the amount of credit available overall.

For every unit of economic growth over the course of the last 10, 20 years, we needed more and more debt to create that growth. We are more and more addicted to debt to create our economic growth.

Credit is now going backwards. If credit is not going to be available in aggregate, we know the biggest losers are in fact going to be businesses and property development.

It means we are not going to be building a lot of the projects that have been consented, and we know the construction cycle is going to come down. I despair.

I despair that we still talk so much more about buying and selling houses than actually starting businesses. The cultural sclerosis that we see in New Zealand has as much to do with the problems in the housing market as our rules around the Resource Management Act or our banking sector do.

On demand, we know there’s been significant growth in New Zealand’s population. Even though it feels like all of that population growth has come from net migration, the reality is that it’s actually natural population growth that’s created the bulk of the demand.

But net migration has created a volatility that we can’t deal with. A lot of the cyclicality in New Zealand’s housing market and demand comes from net migration, and we simply cannot respond.

We do know that there is money that’s global that is looking for a safe haven, and New Zealand is part of that story. We don’t have very good data in New Zealand because we refuse to collect it. There is a lack of leadership regarding our approach to foreign investment in our housing market.

Looking at what’s happening in Canada and Australia would indicate roughly 10% of house sales in Auckland are to foreign buyers. Yes it matters, but when 90% of your sales are going to locals, I think it’s a bit of a red herring.

The historical context of where demand for housing comes from shows that the biggest chunk is natural population growth. The second biggest was changes in household size as families got smaller – more recently that has stopped, i.e. kids are refusing to leave home.

There has been a massive variation in what happens with net migration.

New Zealand needs about 21,000 houses a year to keep up with population growth and the changes that are taking place. But over the course of the last four years, we’ve needed more like 26,000. We’re nowhere near building that many.

This means we need to think about demand management from a policy perspective. It’s more about cyclical management rather than structural management.

Population growth has always been there. Whether it’s from migration or not doesn’t matter. The problem is our housing market, our land supply, our infrastructure supply, can’t keep up with any of it.

While immigration is a side issue, it is nevertheless an important conversation to have because of the volatility it can create. I struggle with the fact that we have no articulated population strategy in New Zealand. We have immigration because we have immigration. That’s not a very good reason.

Why do we want immigration? How big do we want to be? Do we want 15 million people, or five?

What sort of people do we want? Are we just using immigration as shorthand for not educating our kids because we can’t fill the skills shortages that we have in our industries?

Let’s not pretend that it’s all about people wanting to live in houses.

You’d be very hard pressed to argue that people want to buy houses in Epsom at a 3% rental yield for investment purposes. They want to buy houses in Epsom at 3% rental yield because they want to speculate on the capital gains. Let’s be honest with ourselves.

If your floating mortgage rate is 5.5% and you’re getting 3% from your rent, what does that tell you about your investment? It tells you that you’re not really doing it for cash-flow purposes. You’re doing it because you expect capital gains, and you expect those capital gains to compensate you.
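
As a rough sketch of that cash-flow logic, here are the two rates quoted above applied to a hypothetical, fully mortgaged $1m property (the price and the 100% gearing are assumptions for illustration only):

```python
# Stylised example: gross rental yield versus interest cost on a rental property.
price = 1_000_000        # hypothetical purchase price, fully debt-funded
mortgage_rate = 0.055    # floating mortgage rate quoted in the text
rental_yield = 0.03      # gross rental yield quoted in the text

interest_cost = price * mortgage_rate       # $55,000 a year
rent_income = price * rental_yield          # $30,000 a year
shortfall = interest_cost - rent_income     # $25,000 a year, before rates,
                                            # insurance and maintenance

# The purchase only stacks up if expected capital gains at least cover the gap,
# i.e. roughly 2.5% price growth a year in this stylised case.
print(shortfall, shortfall / price)         # -> 25000.0 0.025
```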

The real story in Auckland is that a lot of additional demand is coming from investment.

Land supply in New Zealand is slow, particularly in places like Auckland. But it’s not just about sections, it’s also about density. The Unitary Plan was a win for Auckland. The reality is that if we only do greenfields, we will just see more people sitting in traffic at the end of Drury.

The majority of new housing supply is large houses, while the majority of new households being formed are one- or two-person households.

Between the last two censuses, most of the housing stock built in New Zealand had four bedrooms or more. In contrast, the majority of households that were created were singles or couples. We have ageing populations, we have the empty nesters, we have young people who are having kids later…and we’re building stand-alone houses with four bedrooms.

We have to think very hard about how to create supply not just for the top end. Even though we know that in theory building enough houses is good for everybody, when you’re starting from a point of not enough houses, the bottom end gets screwed for longer. We have to think very hard about whether we want to use things like inclusionary zoning; we have to think very hard about what we want to do with social housing.

Right now we’re not building houses for everybody in our community. We are failing by building the wrong sorts of houses in our communities.

Right at the top are land costs. If we think about what has been driving up the cost of housing, the biggest factor is the value of land. It’s true that we should also look at what’s happening in the rental market and with the costs of construction. But those are not the things that have been the main drivers of the very unaffordable house prices that we see in New Zealand today.

The biggest constraint is in land, and that is where the speculation is taking place.

We know we’re not building enough. In the 1930s to 1940s we had very different types of governments and ideology. We actually built more houses per capita back then than we have in the last 30 years.

From the late 40s to the early 70s, with the rise of the welfare state and the build-up of infrastructure, we built massive numbers of houses on a per capita basis.

But since the oil shock and the 1980s reforms, we have never structurally managed to build as many houses as we did pre-1980. That cumulative gap between the trend that we have seen in the last 30 years, versus what we had seen in the 40s, 50s and 60s, is around half a million houses.

So there is something that is fundamentally and structurally different in what we have done in terms of housing supply in New Zealand over a very long period of time.

The way that we do our planning rules, the advent of the RMA, the way that we fund and govern our local government – all of these things have changed. So the nature of the provision of infrastructure, the provision of land and the provision of consents has changed massively. But the net result is we’re not building as many houses, and that is a fundamental problem.

In Auckland there is a massive gap between targets set by government for house building over the past three years and the amount of consents issued. On top of this, the targets themselves were still not high enough.

Somehow we’re still not able to respond to the growth that Auckland is facing. Consistently we have underestimated how many people want to live in a place like Auckland.

But it’s not just Auckland. Carterton surprises every year, but it shouldn’t: it has a fantastic train line and people want to live there.

But we are failing. We have been failing and we continue to fail. We have to be far more responsive and we have to have a much longer time horizon to have the provision for housing that’s needed.

There is in fact no real plan. The Unitary Plan is fantastic in that it actually plans for just enough houses to meet the population projections. But we can confidently say those projections are going to prove pessimistic; we’re going to have way more people in Auckland.

Trump and Brexit have marked a shift in politics and a polarisation in the public’s view of politics. In New Zealand I think one of the catalysts could be Generation Rent. In the last census, 51% of adults (those aged over 15) rented. It is no longer a minority of individuals who rent, but a majority.

I’m not saying we’ll see the same kind of uprising in New Zealand, but what we saw in Brexit was that discontent was the majority of voters. If young people had actually turned up to vote, Brexit wouldn’t have happened. The same is true for New Zealand.

It is strange that there is no sense of crisis or urgency. For a lot of voters, things are just fine. The people for whom it’s not fine are not voting, and they feel disengaged.

The kind of politics that we will start to see in the next 10 years is something much more activist, the ‘urgency of now’.

The promise of democracy is to create an economy that is fit for everyone. It is about creating opportunities for everyone. Right now, particularly when it comes to housing, we are failing. We are not creating a democratic community when it comes to our housing supply because young people are locked out, because young people are going to suffer, and we know there are some big differences across the different parts of New Zealand.

It’s not going to be enough, when we’re starting from a position of crisis, simply to create more housing and hope that appeases the public. We have to be far more activist in making sure that we’re creating housing that is fit for purpose, not just for the general populace, but for the bottom half who are clearly losing out from what is going on.

We know what the causes are. I’m sick of arguing about why we’re here. We know why we’re here: we haven’t had the political leadership to deal with the problems that are there.

We can’t implement the solutions unless we have political leadership, political cohesion, and endurance over the political cycle. This is a big challenge, but a big opportunity.

Shamubeel Eaqub

***

  • There has been a cumulative 500,000 gap in housing supply over the last 30 years.
  • Eaqub predicted a construction bust next year, led by banks tightening lending.
  • It’s remarkable NZ authorities do not have proper data on foreign buyers. While he estimates 10% of purchases in Auckland are made by foreign investors, he said the main focus should be on the other 90% made by locals.
  • However, migration creates cyclical volatility that we can’t deal with; it is unbelievable that New Zealand doesn’t have a stated population policy.
  • New Zealand is still not building the right-sized houses – the majority of properties being built in recent years have had four-plus bedrooms, while household sizes have grown smaller.
  • The majority of New Zealand’s adult population is now renting. This could be the catalyst for a Brexit/Trump-style rising up of formerly disengaged voters – young people in our case – to engage at this year’s election.
  • New Zealand’s home ownership level is now at its lowest point since 1946.
  • We have a cultural sclerosis of buying and selling existing houses to one another.

interest.co.nz

Undoing poverty’s negative effect on brain development with cash transfers – Cameron McLeod. 

An upcoming experiment into brain development and poverty by Kimberly G Noble, associate professor of neuroscience and education at Columbia University’s Teachers College, asks whether poverty may affect the development, “the size, shape, and functioning,” of a child’s brain, and whether “a cash stipend to parents” would prevent this kind of damage.

Noble writes that “poverty places the young child’s brain at much greater risk of not going through the paces of normal development.” Children raised in poverty perform less well in school, are less likely to graduate from high school, and are less likely to continue on to college. Children raised in poverty are also more likely to be underemployed as adults. Sociological and neuroscientific research has shown that a childhood spent in poverty can result in “significant differences in the size, shape and functioning” of the brain. Can the damage done to children’s brains be negated by the intervention of a subsidy for brain health?

This most recent study’s fundamental difference from past efforts is that it explores what kind of effect “directly supplementing” the incomes of families will have on brain development. “Cash transfers, as opposed to counseling, child care and other services, have the potential to empower families to make the financial decisions they deem best for themselves and their children.” Noble’s hypothesis is that a “cascade of positive effects” will follow from the cash transfers, and that if proved correct, this has implications for public policy and “the potential to…affect the lives of millions of disadvantaged families with young children.”

Brain Trust, Kimberly G. Noble

  • Children who live in poverty tend to perform worse than peers in school on a bevy of different tests. They are less likely to graduate from high school and then continue on to college and are more apt to be underemployed once they enter the workforce.
  • Research that crosses neuroscience with sociology has begun to show that educational and occupational disadvantages that result from growing up poor can lead to significant differences in the size, shape and functioning of children’s brains.
  • Poverty’s potential to hijack normal brain development has led to plans for studying whether a simple intervention might reverse these injurious effects. A study now in the planning stages will explore if a modest subsidy can enhance brain health.

BasicIncome.org

***

The goal of Dr. Noble’s research is to better characterize socioeconomic disparities in children’s cognitive and brain development. Ongoing studies in her lab address the timing of neurocognitive disparities in infancy and early childhood, as well as the particular exposures and experiences that account for these disparities, including access to material resources, richness of language exposure, parenting style and exposure to stress. Finally, she is interested in applying this work to the design of interventions that aim to target gaps in school readiness, including early literacy, math, and self-regulation skills. She is honored to be part of a national team of social scientists and neuroscientists planning the first clinical trial of poverty reduction, which aims to estimate the causal impact of income supplementation on children’s cognitive, emotional and brain development in the first three years of life.

Columbia University

***

A short review on the link between poverty, children’s cognition and brain development, 13th March 2017

In the latest issue of Scientific American, Kimberly Noble, associate professor in neuroscience and education, reviews her work and introduces an ambitious research project that may help us understand the cause-and-effect connection between poverty and children’s brain development.

For the past 15 years, Noble and her colleagues have gathered evidence to explain how socioeconomic disparities may underlie differences in children’s cognition and brain development. In the course of their research they have found for example that children living in poverty tend to have reduced cognitive skills – including language, memory skills and cognitive control (Figure 1).

Figure 1. Wealth effect

More recently, they published evidence showing that the socio-economic status of parents (as assessed using parental education, income and occupation) can also predict children’s brain structure.

By measuring the cortical surface area of children’s brains (i.e. the area of the surface of the cortex, the outer layer of the brain that is densely packed with neurons), they found that lower family income was linked to smaller cortical surface area, especially in brain regions involved in language and cognitive control abilities (Figure 2 – in magenta).

Figure 2. A Brain on Poverty

In the same research, they also found that longer parental education was linked to increased hippocampus volume in children, a brain structure essential for memory processes.

Overall, Noble’s work adds to a growing body of research showing the negative relation between poverty and brain development, and these findings may explain (at least in part) why children from poor families are less likely to obtain good grades at school, graduate from high school or attend college.

What is less known however, is the causal mechanism underlying this relationship. As Noble describes, differences in school and neighbourhood quality, chronic stress in the family home, less nurturing parenting styles or a combination of all these factors might explain the impact of poverty on brain development and cognition.

To better understand the causal effect of poverty, Noble has teamed up with economists and developmental psychologists and together, they will soon launch a large-scale experiment, or “randomised controlled trial”. As part of this experiment, 1,000 US women from low-income backgrounds will be recruited soon after giving birth and will be followed over a three-year period. Half of the women will receive $333 per month (the “experimental” group) and the other half will receive $20 per month (the “control” group). Mothers and children will be monitored throughout the study, and mothers will be able to spend the money as they wish, without any constraints.

By comparing children belonging to the experimental group to those in the control group, researchers will be able to observe how increases in family income may directly benefit cognition and brain development. They will also be able to test whether the way mothers use the extra income is a relevant factor to explain these benefits.

Noble concludes that “although income may not be the only factor that determines a child’s developmental trajectory, it may be the easiest one to alter” through social policy. And given that 25% of American children and 12% of British children are affected by poverty (as reported by UNICEF in 2012), policies designed to alleviate poverty may have the capacity to reach and improve the life chances of millions of children.

NGN is looking forward to seeing the results of this large-scale experiment. We expect that this project, together with other research studies, will improve our understanding of the link between poverty and child development, and will help design better interventions to support disadvantaged children.

Nature Groups

***

Socioeconomic inequality and children’s brain development. 

Research addresses issues at the intersection of psychology, neuroscience and public policy.

By Kimberly G. Noble, MD, PhD

Kimberly Noble, MD, PhD, is an associate professor of neuroscience and education at Teachers College, Columbia University. She received her undergraduate, graduate and medical degrees at the University of Pennsylvania. As a neuroscientist and board-certified pediatrician, she studies how inequality relates to children’s cognitive and brain development. Noble’s work has been supported by several federal and foundation grants, and she was named a “Rising Star” by the Association for Psychological Science. Together with a team of social scientists and neuroscientists from around the United States, she is planning the first clinical trial of poverty reduction to assess the causal impact of income on cognitive and brain development in early childhood.

Kimberly Noble website.

What can neuroscience tell us about why disadvantaged children are at risk for low achievement and poor mental health? How early in infancy does socioeconomic disadvantage leave an imprint on the developing brain, and what factors explain these links? How can we best apply this work to inform interventions? These and other questions are the focus of the research my colleagues and I have been addressing for the last several years.

What is socioeconomic status and why is it of interest to neuroscientists?

The developing human brain is remarkably malleable to experience. Of course, a child’s experience varies tremendously based on his or her family’s circumstances (McLoyd, 1998). And so, as neuroscientists, we can use family circumstance as a lens through which to better understand how experience relates to brain development.

Family socioeconomic status, or SES, is typically considered to include parental educational attainment, occupational prestige and income (McLoyd, 1998); subjective social status, or where one sees oneself on the social hierarchy, may also be taken into account (Adler, Epel, Castellazzo & Ickovics, 2000). A large literature has established that disparities in income and human capital are associated with substantial differences in children’s learning and school performance. For example, socioeconomic differences are observed across a range of important cognitive and achievement measures for children and adolescents, including IQ, literacy, achievement test scores and high school graduation rates (Brooks-Gunn & Duncan, 1997). These differences in achievement in turn result in dramatic differences in adult economic well-being and labor market success.

However, although outcomes such as school success are clearly critical for understanding disparities in development and cognition, they tell us little about the underlying neural mechanisms that lead to these differences. Distinct brain circuits support discrete cognitive skills, and differentiating between underlying neural substrates may point to different causal pathways and approaches for intervention (Farah et al., 2006; Hackman & Farah, 2009; Noble, McCandliss, & Farah, 2007; Raizada & Kishiyama, 2010). Studies that have used a neurocognitive framework to investigate disparities have documented that children and adolescents from socioeconomically disadvantaged backgrounds tend to perform worse than their more advantaged peers on several domains, most notably in language, memory, self-regulation and socio-emotional processing (Hackman & Farah, 2009; Hackman, Farah, & Meaney, 2010; Noble et al., 2007; Noble, Norman, & Farah, 2005; Raizada & Kishiyama, 2010).

Family socioeconomic circumstance and children’s brain structure

More recently, we and other neuroscientists have extended this line of research to examine how family socioeconomic circumstances relate to differences in the structure of the brain itself. For example, in the largest study of its kind to date, we analyzed the brain structure of 1099 children and adolescents recruited from socioeconomically diverse homes from ten sites across the United States (Noble, Houston et al., 2015). We were specifically interested in the structure of the cerebral cortex, or the outer layer of brain cells that does most of the cognitive “heavy lifting.” We found that both parental educational attainment and family income accounted for differences in the surface area, or size of the “nooks and crannies” of the cerebral cortex. These associations were found across much of the brain, but were particularly pronounced in areas that support language and self-regulation — two of the very skills that have been repeatedly documented to show large differences along socioeconomic lines.

Several points about these findings are worth noting. First, genetic ancestry, or the proportion of ancestral descent for each of six major continental populations, was held constant in the analyses. Thus, although race and SES tend to be confounded in the U.S., we can say that the socioeconomic disparities in brain structure that we observed were independent of genetically-defined race. Second, we observed dramatic individual differences, or variation from person to person. That is, there were many children and adolescents from disadvantaged homes who had larger cortical surface areas, and many children from more advantaged homes who had smaller surface areas. This means that our research team could in no way accurately predict a child’s brain size simply by knowing his or her family income alone. Finally, the relationship between family income and surface area was nonlinear, such that the steepest gradient was seen at the lowest end of the income spectrum. That is, dollar for dollar, differences in family income were associated with proportionately greater differences in brain structure among the most disadvantaged families.
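
To illustrate what “nonlinear, with the steepest gradient at the low end” means in practice, assume for the sake of illustration that surface area rises with the logarithm of family income, a functional form consistent with the pattern described above (the dollar figures below are invented, not the study’s fitted coefficients):

```python
# Toy calculation: under a log-income model, the same extra $10,000 shifts
# log-income far more for a low-income family than for a well-off one.
import math

def log_income_shift(income, delta=10_000):
    return math.log(income + delta) - math.log(income)

print(round(log_income_shift(20_000), 3))    # 0.405 for a $20k household
print(round(log_income_shift(150_000), 3))   # 0.065 for a $150k household
```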

More recently, we also examined the thickness of the cerebral cortex in the same sample (Piccolo, et al., 2016). In general, as we get older, our cortices tend to get thinner. Specifically, cortical thickness decreases rapidly in childhood and early adolescence, followed by a more gradual thinning, and ultimately plateauing in early- to mid-adulthood (Raznahan et al., 2011; Schnack et al., 2014; Sowell et al., 2003). Our work suggests that family socioeconomic circumstance may moderate this trajectory. 

Specifically, at lower levels of family SES, we observed relatively steep age-related decreases in cortical thickness earlier in childhood, and subsequent leveling off during adolescence. In contrast, at higher levels of family SES, we observed more gradual age-related reductions in cortical thickness through at least late adolescence. We speculated that these findings may reflect an abbreviated period of cortical thinning in lower SES environments, relative to a more prolonged period of cortical thinning in higher SES environments. It is possible that socioeconomic disadvantage is a proxy for experiences that narrow the sensitive period, or time window for certain aspects of brain development that are malleable to environmental influences, thereby accelerating maturation (Tottenham, 2015).

Are these socioeconomic differences in brain structure clinically meaningful? Early work would suggest so. In our work, we have found that differences in cortical surface area partially accounted for links between family income and children’s executive function skills (Noble, Houston et al., 2015). Independent work in other labs has suggested that differences in brain structure may account for between 15 and 44 percent of the family income-related achievement gap in adolescence (Hair, Hanson, Wolfe & Pollak, 2015; Mackey et al., 2015). This line of research is still in its infancy, however, and several outstanding questions remain to be addressed.

How early are socioeconomic disparities in brain development detectable?

By the start of school, it is apparent that dramatic socioeconomic disparities in children’s cognitive functioning are already evident, and indeed, several studies have found that socioeconomic disparities in language (Fernald, Marchman & Weisleder, 2013; Noble, Engelhardt et al., 2015; Rowe & Goldin-Meadow, 2009) and memory (Noble, Engelhardt et al., 2015) are already present by the second year of life. But methodologies that assess brain function or structure may be more sensitive to differences than are tests of behavior. This raises the question of just how early we can detect socioeconomic disparities in the structure or function of children’s brains.

 One group reported socioeconomic differences in resting electroencephalogram (EEG) activity — which indexes electrical activity of the brain as measured at the scalp — as early as 6–9 months of age (Tomalski et al., 2013). Recent work by our group, however, found no correlation between SES and the same EEG measures within the first four days following birth (Brito, Fifer, Myers, Elliott & Noble, 2016), raising the possibility that some of these differences in brain function may emerge in part as a result of early differences in postnatal experience. Of course, a longitudinal study assessing both the prenatal and postnatal environments would be necessary to formally test this hypothesis. Furthermore, another group recently reported that, among a group of African-American, female infants imaged at 5 weeks of age, socioeconomic disadvantage was associated with smaller cortical and deep gray matter volumes (Betancourt et al., 2015). It is thus also likely that at least some socioeconomic differences in brain development are the result of socioeconomic differences in the prenatal environment (e.g., maternal diet, stress) and/or genetic differences.

Disentangling links among socioeconomic disparities, modifiable experiences and brain development represents a clear priority for future research. Are the associations between SES and brain development the result of differences in experiences that can serve as the targets of intervention, such as differences in nutrition, housing and neighborhood quality, parenting style, family stress and/or education? Certainly, the preponderance of social science evidence would suggest that such differences in experience are likely to account at least in part for differences in child and adolescent development (Duncan & Magnuson, 2012). However, few studies have directly examined links among SES, experience and the brain (Luby et al., 2013). In my lab, we are actively focusing on these issues, with specific interest in how chronic stress and the home language environment may, in part, explain our findings.

How can this work inform interventions?

Quite a few interventions aim to reduce socioeconomic disparities in children’s achievement. Whether school-based or home-based, many are quite effective, though frequently face challenges: High-quality interventions are expensive, difficult to scale up and often suffer from “fadeout,” or the phenomenon whereby the positive effects of the intervention dwindle with time once children are no longer receiving services.

What about the effects of directly supplementing family income? Rather than providing services, such “cash transfer” interventions have the potential to empower families to make the financial decisions they deem best for themselves and their children. Experimental and quasi-experimental studies in the social sciences, both domestically and in the developing world, have suggested the promise of direct income supplementation (Duncan & Magnuson, 2012).

To date, linkages between poverty and brain development have been entirely correlational in nature; the field of neuroscience is silent on the causal connections between poverty and brain development. As such, I am pleased to be part of a team of social scientists and neuroscientists who are currently planning and raising funds to launch the first-ever randomized experiment testing the causal connections between poverty reduction and brain development.

The ambition of this study is large, though the premise is simple. We plan to recruit 1,000 low-income U.S. mothers at the time of their child’s birth. Mothers will be randomized to receive a large monthly income supplement or a nominal monthly income supplement. Families will be tracked longitudinally to definitively assess the causal impact of this unconditional cash transfer on cognitive and brain development in the first three years following birth, when we believe the developing brain is most malleable to experience.

We hypothesize that increased family income will trigger a cascade of positive effects throughout the family system. As a result, across development, children will be better positioned to learn foundational skills. If our hypotheses are borne out, this proposed randomized trial has the potential to inform social policies that affect the lives of millions of disadvantaged families with young children. While income may not be the only or even the most important factor in determining children’s developmental trajectories, it may be the most manipulable from a policy perspective.

American Psychological Association

Getting Basic Income Right – Kemal Dervis.  

Universal basic income (UBI) schemes are getting a lot of attention these days. Of course, the idea – to provide all legal residents of a country a standard sum of cash unconnected to work – is not new. The philosopher Thomas More advocated it back in the sixteenth century, and many others, including Milton Friedman on the right and John Kenneth Galbraith on the left, have promoted variants of it over the years. But the idea has lately been gaining much more traction, with some regarding it as a solution to today’s technology-driven economic disruptions. Can it work?

The appeal of a UBI is rooted in three key features: it provides a basic social “floor” to all citizens; it lets people choose how to use that support; and it could help to streamline the bureaucracy on which many social-support programs depend. A UBI would also be totally “portable,” thereby helping citizens who change jobs frequently, cannot depend on a long-term employer for social insurance, or are self-employed.

Viewing a UBI as a straightforward means to limit poverty, many on the left have made it part of their program. Many libertarians like the concept, because it enables – indeed, requires – recipients to choose freely how to spend the money. Even very wealthy people sometimes support it, because it would enable them to go to bed knowing that their taxes had finally and efficiently eradicated extreme poverty.

The UBI concept also appeals to those who focus on how economic development can replace at least some of the in-kind aid that is now given to the poor. Already, various local social programs in Latin America contain elements of the UBI idea, though they are targeted at the poor and usually conditional on certain behavior, such as having children regularly attend school.

But implementing a full-blown UBI would be difficult, not least because it would require answering a number of complex questions about goals and priorities. Perhaps the most obvious balancing act relates to how much money is actually delivered to each citizen (or legal resident).

In the United States and Europe, a UBI of, say, $2,000 per year would not do much, except perhaps alleviate the most extreme poverty, even if it were added to existing social-welfare programs. A UBI of $10,000 would make a real difference; but, depending on how many people qualify, that could cost as much as 10% or 15% of GDP – a huge fiscal outlay, particularly if it came on top of existing social programs.
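
The order of magnitude is easy to check with rough 2017 US figures (a population of about 325 million, roughly 250 million adults, and GDP of about $19 trillion; these are approximations for illustration, not the author’s own numbers):

```python
# Back-of-the-envelope cost of a $10,000-a-year UBI, using approximate 2017 US figures.
gdp = 19e12      # ~US GDP in dollars
ubi = 10_000     # annual payment per recipient

for label, recipients in [("all residents", 325e6), ("adults only", 250e6)]:
    cost = ubi * recipients
    print(f"{label}: ${cost / 1e12:.2f} trillion, {cost / gdp:.0%} of GDP")

# all residents: $3.25 trillion, 17% of GDP
# adults only:   $2.50 trillion, 13% of GDP
```

Depending on who qualifies, that lands in roughly the 10-15% of GDP range cited above.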

Even with a significant increase in tax revenue, such a high basic income would have to be packaged with gradual reductions in some existing public spending – for example, on unemployment benefits, education, health, transportation, and housing – to be fiscally feasible. The system that would ultimately take shape would depend on how these components were balanced.

In today’s labor market, which is being transformed by digital technologies, one of the most important features of a UBI is portability. Indeed, to insist on greater labor-market flexibility, without ensuring that workers, who face a constant need to adapt to technological disruptions, can rely on continuous social-safety nets, is to advocate a lopsided world in which employers have all the flexibility and employees have very little.

Making modern labor markets flexible for employers and employees alike would require a UBI’s essential features, like portability and free choice. But only the most extreme libertarian would argue that the money should be handed out without any policy guidance. It would be more advisable to create a complementary active social policy that guides, to some extent, the use of the benefits.

Here, a proposal that has emerged in France is a step in the right direction. The idea is to endow each citizen with a personal social account containing partly redeemable “points.” Such accounts would work something like a savings account, with their owners augmenting a substantial public contribution to them by working, studying, or performing certain types of national service. The accounts could be drawn upon in times of need, particularly for training and re-skilling, though the amount that could be withdrawn would be guided by predetermined “prices” and limited to a certain amount in a given period of time.
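
A minimal sketch of how such an account might be wired up is below. Every number in it, the point values, the “prices” and the annual withdrawal cap, is invented purely to illustrate the mechanism; none of it comes from the French proposal itself.

```python
# Hypothetical "points" account: seeded by a public contribution, earned through
# activity, and drawn down at fixed prices subject to an annual cap.
class SocialAccount:
    WITHDRAWAL_CAP_PER_YEAR = 100                                   # points per year
    EARN_RATES = {"work": 10, "study": 8, "national_service": 12}   # points per month
    PRICES = {"retraining_course": 60, "career_counselling": 20}    # points per use

    def __init__(self, public_contribution=200):
        self.balance = public_contribution      # public contribution seeds the account
        self.drawn_this_year = 0

    def earn(self, activity, months):
        self.balance += self.EARN_RATES[activity] * months

    def redeem(self, benefit):
        cost = self.PRICES[benefit]
        if cost > self.balance:
            raise ValueError("insufficient points")
        if self.drawn_this_year + cost > self.WITHDRAWAL_CAP_PER_YEAR:
            raise ValueError("annual withdrawal limit reached")
        self.balance -= cost
        self.drawn_this_year += cost

acct = SocialAccount()
acct.earn("study", 6)               # six months of study earns 48 points
acct.redeem("retraining_course")    # spend 60 points on retraining
print(acct.balance)                 # 200 + 48 - 60 = 188
```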

The approach seems like a good compromise between portability and personal choice, on the one hand, and sufficient social-policy guidance, on the other. It contains elements of both US social security and individual retirement accounts, while reflecting a commitment to training and reskilling. Such a program could be combined with a more flexible retirement system, and thus developed into a modern and comprehensive social-solidarity system.

The challenge now – for the developed economies, at least – is to develop stronger and more streamlined social-solidarity systems, create room for more individual choice in the use of benefits, and make benefits portable. Only by striking the right balance between individual choice and social-policy guidance can modern economies build the social-safety programs they need.

Social Europe

Abuse breeds child abusers – Jarrod Gilbert. 

Often when I’m doing research I dance a silly jig when I gleefully unearth a gem of information hitherto unknown or long forgotten. In studying the violent deaths of kids that doesn’t happen.

There was no dance of joy when I discovered New Zealanders are more likely to be homicide victims in their first tender years than at any other time in their lives. But nothing numbs you like the photographs of dead children.

Little bodies lying there limp, with little hands and little fingers, covered in scratches and an array of bruises, some dark black and some fading, looking as vulnerable dead as they were when they were alive.

James Whakaruru’s misery ended when he was killed in 1999. He had endured four years of life and that was all he could take. He was hit with a small hammer, a jug cord and a vacuum cleaner hose. During one beating his mind was so confused he stared blankly ahead. His tormentor responded by poking him in the eyes. It was a stomping that eventually switched out his little light. It was a case that even the Mongrel Mob condemned, calling the cruelty “amongst the lowest of any act”.

An inquiry by the Commissioner for Children found a number of failings by state agencies, which were all too aware of the boy’s troubled existence. The Commissioner said James became a hero because changes made to Government agencies would save lives in the future. Yet such horrors have continued. My colleague Greg Newbold has found that on average nine children (under 15) have been killed as a result of maltreatment since 1992 and the rate has not abated in recent years. In 2015, there were 14 such deaths, one of which was three-year-old Moko Rangitoheriri, or baby Moko as we knew him when he gained posthumous celebrity.

Moko’s life was the same as James’s, and he too died in agony; he endured weeks of being beaten, kicked, and smeared with faeces. That was the short life he knew. Most of us will struggle to comprehend these acts but we are desperate to stop them. Desperate to ensure state agencies are capable of intervening to protect those who cannot protect themselves and who, through no fault of their own, are subjected to cruelty by those who are meant to protect them.

The reasons for intervening don’t stop with the imperative to save young lives. For every child killed there are dozens who live wretched existences, and from this cohort of unfortunates will come the next generation of abusers. Solving the problems of today, then, is not just a moral imperative but is also about producing a positive ripple effect.

And this is why, in the cases of James Whakaruru and baby Moko, the best and most efficient time for intervention was not the period leading up to their abuse, but many years before they were born. The men involved in each of those killings came from the same family. And it seems their lives were transient and tragic: one spent time in the now infamous Epuni Boys’ Home, which is ground zero for calls for an inquiry into state care abuse (and, incidentally, the birthplace of the Mongrel Mob).

Once young victims themselves, those boys crawled into adulthood and became violent men capable of imparting cruelty onto kids in their care.

This cycle of abuse is well known, yet state spending on the problem is poorly aligned to it, and our targeting of the problem is reactionary and punitive rather than proactive and preventative.

Of the $1.4 billion we spend on family and sexual violence annually, less than 10 per cent is spent on interventions, of which just 1.5 per cent is spent on primary prevention. The morality of that is questionable, the economics even more so.

Not only must things be approached differently but there needs to be greater urgency in our thinking. It’s perhaps trite to say, but if nine New Zealanders were killed every year in acts of terrorism politicians would never stop talking about it and it would be priority number one.

In an election year, that’s exactly where this issue should be. If the kids in violent homes had a voice, that’s what they’d be saying.

But if the details of such deaths don’t move our political leaders to urgent action, I rather fear nothing will. Maybe they should be made to look at the photographs.

• Dr Jarrod Gilbert is a sociologist at the University of Canterbury and the lead researcher at Independent Research Solutions.

The 1930s were humanity’s darkest, bloodiest hour. Are you paying attention? – Jonathan Freedland. 

Even to mention the 1930s is to evoke the period when human civilisation entered its darkest, bloodiest chapter. No case needs to be argued; just to name the decade is enough. It is a byword for mass poverty, violent extremism and the gathering storm of world war. “The 1930s” is not so much a label for a period of time as it is rhetorical shorthand – a two-word warning from history.

Witness the impact of an otherwise boilerplate broadcast by the Prince of Wales last December that made headlines. “Prince Charles warns of return to the ‘dark days of the 1930s’ in Thought for the Day message.” Or consider the reflex response to reports that Donald Trump was to maintain his own private security force even once he had reached the White House. The Nobel prize-winning economist Paul Krugman’s tweet was typical: “That 1930s show returns.”

Because that decade was scarred by multiple evils, the phrase can be used to conjure up serial spectres. It has an international meaning, with a vocabulary that centres on Hitler and Nazism and the failure to resist them: from brownshirts and Goebbels to appeasement, Munich and Chamberlain. And it has a domestic meaning, with a lexicon and imagery that refers to the Great Depression: the dust bowl, soup kitchens, the dole queue and Jarrow. It was this second association that gave such power to a statement from the usually dry Office for Budget Responsibility, following then-chancellor George Osborne’s autumn statement in 2014. The OBR warned that public spending would be at its lowest level since the 1930s; the political damage was enormous and instant.

In recent months, the 1930s have been invoked more than ever, not to describe some faraway menace but to warn of shifts under way in both Europe and the United States. The surge of populist, nationalist movements in Europe, and their apparent counterpart in the US, has stirred unhappy memories and has, perhaps inevitably, had commentators and others reaching for the historical yardstick to see if today measures up to 80 years ago.

Why is it the 1930s to which we return, again and again? For some sceptics, the answer is obvious: it’s the only history anybody knows. According to this jaundiced view of the British school curriculum, Hitler and Nazis long ago displaced Tudors and Stuarts as the core, compulsory subjects of the past. When we fumble in the dark for a historical precedent, our hands keep reaching for the 30s because they at least come with a little light.

The more generous explanation centres on the fact that that period, taken together with the first half of the 1940s, represents a kind of nadir in human affairs. The Depression was, as Larry Elliott wrote last week, “the biggest setback to the global economy since the dawn of the modern industrial age”, leaving 34 million Americans with no income. The hyperinflation experienced in Germany – when a thief would steal a laundry-basket full of cash, chucking away the money in order to keep the more valuable basket – is the stuff of legend. And the Depression paved the way for history’s bloodiest conflict, the second world war which left, by some estimates, a mind-numbing 60 million people dead. At its centre was the Holocaust, the industrialised slaughter of 6 million Jews by the Nazis: an attempt at the annihilation of an entire people.

In these multiple ways, then, the 1930s function as a historical rock bottom, a demonstration of how low humanity can descend. The decade’s illustrative power as a moral ultimate accounts for why it is deployed so fervently and so often.

Less abstractly, if we keep returning to that period, it’s partly because it can justifiably claim to be the foundation stone of our modern world. The international and economic architecture that still stands today – even if it currently looks shaky and threatened – was built in reaction to the havoc wreaked in the 30s and immediately afterwards. The United Nations, the European Union, the International Monetary Fund, Bretton Woods: these were all born of a resolve not to repeat the mistakes of the 30s, whether those mistakes be rampant nationalism or beggar-my-neighbour protectionism. The world of 2017 is shaped by the trauma of the 1930s.

One telling, human illustration came in recent global polling for the Journal of Democracy, which showed an alarming decline in the number of people who believed it was “essential” to live in a democracy. From Sweden to the US, from Britain to Australia, only one in four of those born in the 1980s regarded democracy as essential. Among those born in the 1930s, the figure was at or above 75%. Put another way, those who were born into the hurricane have no desire to feel its wrath again.

Most of these dynamics are long established, but now there is another element at work. As the 30s move from living memory into history, as the hurricane moves further away, so what had once seemed solid and fixed – specifically, the view that that was an era of great suffering and pain, whose enduring value is as an eternal warning – becomes contested, even upended.

Witness the remarks of Steve Bannon, chief strategist in Donald Trump’s White House and the former chairman of the far-right Breitbart website. In an interview with the Hollywood Reporter, Bannon promised that the Trump era would be “as exciting as the 1930s”. (In the same interview, he said “Darkness is good” – citing Satan, Darth Vader and Dick Cheney as examples.)

“Exciting” is not how the 1930s are usually remembered, but Bannon did not choose his words by accident. He is widely credited with the authorship of Trump’s inaugural address, which twice used the slogan “America first”. That phrase has long been off-limits in US discourse, because it was the name of the movement – packed with nativists and antisemites, and personified by the celebrity aviator Charles Lindbergh – that sought to keep the US out of the war against Nazi Germany and to make an accommodation with Hitler. Bannon, who considers himself a student of history, will be fully aware of that 1930s association – but embraced it anyway.

That makes him an outlier in the US, but one with powerful allies beyond America’s shores. Timothy Snyder, professor of history at Yale and the author of On Tyranny: Twenty Lessons from the Twentieth Century, notes that European nationalists are also keen to overturn the previously consensual view of the 30s as a period of shame, never to be repeated. Snyder mentions Hungary’s prime minister, Viktor Orban, who avowedly seeks the creation of an “illiberal” state, and who, says Snyder, “looks fondly on that period as one of healthy national consciousness”.

The more arresting example is, perhaps inevitably, Vladimir Putin. Snyder notes Putin’s energetic rehabilitation of Ivan Ilyin, a philosopher of Russian fascism influential eight decades ago. Putin has exhumed Ilyin both metaphorically and literally, digging up and moving his remains from Switzerland to Russia.

Among other things, Ilyin wrote that individuality was evil; that the “variety of human beings” represented a failure of God to complete creation; that what mattered was not individual people but the “living totality” of the nation; that Hitler and Mussolini were exemplary leaders who were saving Europe by dissolving democracy; and that fascist holy Russia ought to be governed by a “national dictator”. Ilyin spent the 30s exiled from the Soviet Union, but Putin has brought him back, quoting him in his speeches and laying flowers on his grave.

Still, Putin, Orbán and Bannon apart, when most people compare the current situation to that of the 1930s, they don’t mean it as a compliment. And the parallel has felt irresistible, so that when Trump first imposed his travel ban, for example, the instant comparison was with the door being closed to refugees from Nazi Germany in the 30s. (Theresa May was on the receiving end of the same comparison when she quietly closed off the Dubs route to child refugees from Syria.)

When Trump attacked the media as purveyors of “fake news”, the ready parallel was Hitler’s slamming of the newspapers as the Lügenpresse, the lying press (a term used by today’s German far right). When the Daily Mail branded a panel of high court judges “enemies of the people”, for their ruling that parliament needed to be consulted on Brexit, those who were outraged by the phrase turned to their collected works of European history, looking for the chapters on the 1930s.

The Great Depression

So the reflex is well-honed. But is it sound? Does any comparison of today and the 1930s hold up?

The starting point is surely economic, not least because the one thing everyone knows about the 30s – and which is common to both the US and European experiences of that decade – is the Great Depression. The current convulsions can be traced back to the crash of 2008, but the impact of that event and the shock that defined the 30s are not an even match. When discussing our own time, Krugman speaks instead of the Great Recession: a huge and shaping event, but one whose impact – measured, for example, in terms of mass unemployment – is not on the same scale. US joblessness reached 25% in the 1930s; even in the depths of 2009 it never broke the 10% barrier.

The political sphere reveals another mismatch between then and now. The 30s were characterised by ultra-nationalist and fascist movements seizing power in leading nations: Germany, Italy and Spain most obviously. The world is waiting nervously for the result of France’s presidential election in May: victory for Marine Le Pen would be seized on as the clearest proof yet that the spirit of the 30s is resurgent.

There is similar apprehension that Geert Wilders, who speaks of ridding the country of “Moroccan scum”, has led the polls ahead of Holland’s general election on Wednesday. And plenty of liberals will be perfectly content for the Christian Democrat Angela Merkel to prevail over her Social Democratic rival, Martin Schulz, just so long as the far-right Alternative für Deutschland makes no ground. Still, so far and as things stand, in Europe only Hungary and Poland have governments that seem doctrinally akin to those that flourished in the 30s.

That leaves the US, which dodged the bullet of fascistic rule in the 30s – although at times the success of the America First movement, which at its peak could count on more than 800,000 paid-up members, suggested such an outcome was far from impossible. (Hence the intended irony in the title of Sinclair Lewis’s 1935 novel, It Can’t Happen Here.)

Donald Trump has certainly had Americans reaching for their history textbooks, fearful that his admiration for strongmen, his contempt for restraints on executive authority, and his demonisation of minorities and foreigners means he marches in step with the demagogues of the 30s.

But even those most anxious about Trump still focus on the form the new presidency could take rather than the one it is already taking. David Frum, a speechwriter to George W. Bush, wrote a much-noticed essay for the Atlantic titled “How to build an autocracy”. It was billed as setting out “the playbook Donald Trump could use to set the country down a path towards illiberalism”. He was not arguing that Trump had already embarked on that route, just that he could (so long as the media came to heel and the public grew weary and worn down, shrugging in the face of obvious lies and persuaded that greater security was worth the price of lost freedoms).

Similarly, Trump has unloaded rhetorically on the free press – castigating them, Mail-style, as “enemies of the people” – but he has not closed down any newspapers. He meted out the same treatment via Twitter to a court that blocked his travel ban, rounding on the “so-called judge” – but he did eventually succumb to the courts’ verdict and withdrew his original executive order. He did not have the dissenting judges sacked or imprisoned; he has not moved to register or intern every Muslim citizen in the US; he has not suggested they wear identifying symbols.

These are crumbs of comfort; they are not intended to minimise the real danger Trump represents to the fundamental norms that underpin liberal democracy. Rather, the point is that we have not reached the 1930s yet. Those sounding the alarm are suggesting only that we may be travelling in that direction – which is bad enough.

Two further contrasts between now and the 1930s, one from each end of the sociological spectrum, are instructive. First, and particularly relevant to the US, is to ask: who is on the streets? In the 30s, much of the conflict was played out at ground level, with marchers and quasi-military forces duelling for control. The clashes of the Brownshirts with communists and socialists played a crucial part in the rise of the Nazis. (A turning point in the defeat of Oswald Mosley, Britain’s own little Hitler, came with his humbling in London’s East End, at the 1936 battle of Cable Street.)

But those taking to the streets today – so far – have tended to be opponents of the lurch towards extreme nationalism. In the US, anti-Trump movements – styling themselves, in a conscious nod to the 1930s, as “the resistance” – have filled city squares and plazas. The Women’s March led the way on the first day of the Trump presidency; then those protesters and others flocked to airports in huge numbers a week later, to obstruct the refugee ban. Those demonstrations have continued, and they supply an important contrast with 80 years ago. Back then, it was the fascists who were out first – and in force.

Snyder notes another key difference. “In the 1930s, all the stylish people were fascists: the film critics, the poets and so on.” He is speaking chiefly about Germany and Italy, and doubtless exaggerates to make his point, but he is right that today “most cultural figures tend to be against”. There are exceptions – Le Pen has her celebrity admirers – but Snyder speaks accurately when he says that now, in contrast with the 30s, there are “few who see fascism as a creative cultural force”.

Fear and loathing

So much for where the lines between then and now diverge. Where do they run in parallel?

The exercise is made complicated by the fact that ultra-nationalists are, so far, largely out of power where they ruled in the 30s – namely, Europe – and in power in the place where they were shut out in that decade, namely the US. It means that Trump has to be compared either to US movements that were strong but ultimately defeated, such as the America First Committee, or to those US figures who never governed on the national stage.

In that category stands Huey Long, the Louisiana strongman, who ruled that state as a personal fiefdom (and who was widely seen as the inspiration for the White House dictator at the heart of the Lewis novel).

“He was immensely popular,” says Tony Badger, former professor of American history at the University of Cambridge. Long would engage in the personal abuse of his opponents, often deploying colourful language aimed at mocking their physical characteristics. The judges were a frequent Long target, to the extent that he hounded one out of office – with fateful consequences.

Long went over the heads of the hated press, communicating directly with the voters via a medium he could control completely. In Trump’s day, that is Twitter, but for Long it was the establishment of his own newspaper, the Louisiana Progress (later the American Progress) – which Long had delivered via the state’s highway patrol and which he commanded be printed on rough paper, so that, says Badger, “his constituents could use it in the toilet”.

All this was tolerated by Long’s devotees because they lapped up his message of economic populism, captured by the slogan: “Share Our Wealth”. Tellingly, that resonated not with the very poorest – who tended to vote for Roosevelt, just as those earning below $50,000 voted for Hillary Clinton in 2016 – but with “the men who had jobs or had just lost them, whose wages had eroded and who felt they had lost out and been left behind”. That description of Badger’s could apply just as well to the demographic that today sees Trump as its champion.

Long never made it to the White House. In 1935, one month after announcing his bid for the presidency, he was assassinated, shot by the son-in-law of the judge Long had sought to remove from the bench. It’s a useful reminder that, no matter how hate-filled and divided we consider US politics now, the 30s were full of their own fear and loathing.

“I welcome their hatred,” Roosevelt would say of his opponents on the right. Nativist xenophobia was intense, even if most immigration had come to a halt with legislation passed in the previous decade. Catholics from eastern Europe were the target of much of that suspicion, while Lindbergh and the America Firsters played on enduring antisemitism.

This, remember, was in the midst of the Great Depression, when one in four US workers was out of a job. And surely this is the crucial distinction between then and now, between the Long phenomenon and Trump. As Badger summarises: “There was a real crisis then, whereas Trump’s is manufactured.”

And yet, scholars of the period are still hearing the insistent beep of their early warning systems. An immediate point of connection is globalisation, which is less novel than we might think. For Snyder, the 30s marked the collapse of the first globalisation, defined as an era in which a nation’s wealth becomes ever more dependent on exports. That pattern had been growing steadily more entrenched since the 1870s (just as the second globalisation took wing in the 1970s). Then, as now, it had spawned a corresponding ideology – a faith in liberal free trade as a global panacea – with, perhaps, the English philosopher Herbert Spencer in the role of the End of History essayist Francis Fukuyama. By the 1930s, and thanks to the Depression, that faith in globalisation’s ability to spread the wealth evenly had shattered. This time around, disillusionment has come a decade or so ahead of schedule.

The second loud alarm is clearly heard in the hostility to those deemed outsiders. Of course, the designated alien changes from generation to generation, but the impulse is the same: to see the family next door not as neighbours but as agents of some heinous worldwide scheme, designed to deprive you of peace, prosperity or what is rightfully yours. In 30s Europe, that was Jews. In 30s America, it was eastern Europeans and Jews. In today’s Europe, it’s Muslims. In America, it’s Muslims and Mexicans (with a nod from the so-called alt-right towards Jews). Then and now, the pattern is the same: an attempt to refashion the pain inflicted by globalisation and its discontents as the wilful act of a hated group of individuals. No need to grasp difficult, abstract questions of economic policy. We just need to banish that lot, over there.

The third warning sign, and it’s a necessary companion of the second, is a growing impatience with the rule of law and with democracy. “In the 1930s, many, perhaps even most, educated people had reached the conclusion that democracy was a spent force,” says Snyder. There were plenty of socialist intellectuals ready to profess their admiration for the efficiency of Soviet industrialisation under Stalin, just as rightwing thinkers were impressed by Hitler’s capacity for state action. In our own time, that generational plunge in the numbers regarding democracy as “essential” suggests a troubling echo.

Today’s European nationalists exhibit a similar impatience, especially with the rule of law: think of the Brexiters’ insistence that nothing can be allowed to impede “the will of the people”. As for Trump, it’s striking how very rarely he mentions democracy, still less praises it. “I alone can fix it” is his doctrine – the creed of the autocrat.

The geopolitical equivalent is a departure from, or even contempt for, the international rules-based system that has held since 1945 – in which trade, borders and the seas are loosely and imperfectly policed by multilateral institutions such as the UN, the EU and the World Trade Organisation. Admittedly, the international system was weaker to start with in the 30s, but it lay in pieces by the decade’s end: both Hitler and Stalin decided that the global rules no longer applied to them, that they could break them with impunity and get on with the business of empire-building.

If there’s a common thread linking 21st-century European nationalists to each other and to Trump, it is a similar, shared contempt for the structures that have bound together, and restrained, the principal world powers since the last war. Naturally, Le Pen and Wilders want to follow the Brexit lead and leave, or else break up, the EU. And, no less naturally, Trump supports them – as well as regarding Nato as “obsolete” and the UN as an encumbrance to US power (even if his subordinates rush to foreign capitals to say the opposite).

For historians of the period, the 1930s are always worthy of study because the decade proves that systems – including democratic republics – which had seemed solid and robust can collapse. That fate is possible, even in advanced, sophisticated societies. The warning never gets old.

But when we contemplate our forebears from eight decades ago, we should recall one crucial advantage we have over them. We have what they lacked. We have the memory of the 1930s. We can learn the period’s lessons and avoid its mistakes. Of course, cheap comparisons coarsen our collective conversation. But having a keen ear tuned to the echoes of a past that brought such horror? That is not just our right. It is surely our duty.

The Guardian

Why we should all have a basic income | World Economic Forum – Scott Santens. 

Consider for a moment that from this day forward, on the first day of every month, around $1,000 is deposited into your bank account – because you are a citizen. This income is independent of every other source of income and guarantees you a monthly starting salary above the poverty line for the rest of your life. What do you do? Possibly of more importance, what don’t you do? How does this firm foundation of economic security and positive freedom affect your present and future decisions, from the work you choose to the relationships you maintain, to the risks you take?

The idea is called unconditional or universal basic income, or UBI.

It’s like social security for all, and it’s taking root within minds around the world and across the entire political spectrum, for a multitude of converging reasons: rising inequality, decades of stagnant wages, the transformation of lifelong careers into sub-hourly tasks, exponentially advancing technology like robots and deep neural networks increasingly capable of replacing potentially half of all human labour, world-changing events like Brexit and the election of Donald Trump – all of these and more are pointing to the need to start permanently guaranteeing everyone at least some income.

A promise of equal opportunity

“Basic income” would be an amount sufficient to secure basic needs as a permanent earnings floor no one could fall beneath, and would replace many of today’s temporary benefits, which are given only in case of emergency, and/or only to those who successfully pass the applied qualification tests. UBI would be a promise of equal opportunity, not equal outcome, a new starting line set above the poverty line.

It may surprise you to learn that a partial UBI has already existed in Alaska since 1982, and that a version of basic income was experimentally tested in the United States in the 1970s. The same is true in Canada, where the town of Dauphin managed to eliminate poverty for five years. Full UBI experiments have been done more recently in places such as Namibia, India and Brazil. Other countries are following suit: Finland, the Netherlands and Canada are carrying out government-funded experiments to compare against existing programmes. Organizations like Y-Combinator and GiveDirectly have launched privately funded experiments in the US and East Africa respectively.

I know what you’re thinking. It’s the same thing most people think when they’re new to the idea. Giving money to everyone for doing nothing? That sounds both incredibly expensive and a great way to encourage people to do nothing. Well, it may sound counter-intuitive, but the exact opposite is true on both counts. What’s incredibly expensive is not having basic income, and what really motivates people to work is, on one hand, not taking money away from them for working, and on the other hand, not actually about money at all.

Basic income in numbers

What tends to go unrealized about the idea of basic income, and this is true even of many economists – but not all – is that it represents a net transfer. In the same way it does not cost $20 to give someone $20 in exchange for $10 (the net cost is $10), it does not cost $3 trillion to give every adult citizen $12,000 and every child $4,000, when every household will be paying varying amounts of taxes in exchange for their UBI.

Instead it will cost around 30% of that, or about $900 billion, and that’s before the full or partial consolidation of other programmes and tax credits immediately made redundant by the new transfer. In other words, for someone whose taxes go up $4,000 to pay for $12,000 in UBI, the cost to give that person UBI is $8,000, not $12,000, and it’s coming from someone else whose taxes went up $20,000 to pay for their own $12,000. However, even that’s not entirely accurate, because the consolidation of the safety net and tax code UBI allows could drive the total price even lower.
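To see the net-transfer arithmetic laid out, here is a minimal sketch in Python. The household groups and tax rises below are hypothetical round numbers chosen purely for illustration; they are not the article’s estimates.

```python
# Minimal sketch of the "net transfer" arithmetic behind a UBI.
# The household groups and tax increases below are hypothetical,
# illustrative numbers, not estimates from the article.

UBI_ADULT = 12_000  # annual UBI per adult, as in the article's example

# (tax increase to fund UBI, number of households) - assumptions only
households = [
    (0,      40),  # taxes do not rise at all
    (4_000,  40),  # taxes rise by less than the UBI received
    (20_000, 20),  # taxes rise by more than the UBI received
]

gross_cost = UBI_ADULT * sum(count for _, count in households)
net_cost = sum((UBI_ADULT - tax) * count for tax, count in households)

print(f"Gross (headline) cost: ${gross_cost:,}")
print(f"Net cost after taxes:  ${net_cost:,}")
# Net payers (those taxed more than they receive) subtract from the total,
# which is why the net cost is only a fraction of the headline figure.
```

With these toy numbers the net cost comes out to roughly half the headline figure; the article’s own estimate, which draws on real tax distributions and programme consolidation, puts it far lower still.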

Now, this idea of replacing existing programmes can scare some just as it appeals to others, but the choice is not all or nothing: partial consolidation is possible. As an example of partial consolidation, because most seniors already effectively have a basic income through social security, they could either choose between the two, or a percentage of their social security could be converted into basic income. Either way, no senior would earn a penny less than now in total, and yet the UBI price tag could be reduced by about $220 billion. Meanwhile, just a few examples of existing spending that could and arguably should be fully consolidated into UBI would likely be food and nutrition assistance ($108 billion), wage subsidies ($72 billion), child tax credits ($56 billion), temporary assistance for needy families ($17 billion), and the home mortgage interest deduction (which mostly benefits the wealthy anyway, at a cost of at least $70 billion per year). That’s $543 billion spent on UBI instead of all the above, which represents only a fraction of the full list, none of which need be healthcare or education.

So what’s the true cost?

The true net cost of UBI in the US is therefore closer to an additional tax revenue requirement of a few hundred billion dollars – or less – depending on the many design choices made, and there exists a variety of ideas for crossing such a funding gap in a way that many people might prefer, that would also treat citizens like the shareholders they are (virtually all basic research is taxpayer funded), and that could even reduce taxes on labour by focusing more on capital, consumption, and externalities instead of wages and salaries. Additionally, we could eliminate the $540 billion in tax expenditures currently being provided disproportionately to the wealthiest, and also some of the $850 billion spent on defence.

Universal basic income is thus entirely affordable and essentially Milton Friedman’s negative income tax in net outcome (and he himself knew this), where those earning below a certain point are given additional income, and those earning above a certain point are taxed additional income. UBI does not exist outside the tax system unless it’s provided through pure monetary expansion or extra-governmental means. In other words, yes, Bill Gates will get $12,000 too, but as one of the world’s wealthiest billionaires he will pay far more than $12,000 in new taxes to pay for it. That, however, is not similarly true for the bottom 80% of all US households, who will pay the same or less in total taxes.

To some, this may sound wasteful. Why give someone money they don’t need, and then tax their other income? Think of it this way: is it wasteful to put seat belts in every car instead of only in the cars of those who have gotten into accidents, thus demonstrating their need for seat belts? Good drivers never get into accidents, right? So it might seem wasteful. But it’s not, because we recognize the absurd costs of determining who would and wouldn’t need seat belts, and the immeasurable costs of being wrong. We also recognize that accidents don’t only happen to “bad” drivers. They can happen to anyone, at any time, purely due to random chance. As a result, seat belts for everyone.

The truth is that the costs of people having insufficient incomes are many and collectively massive. It burdens the healthcare system. It burdens the criminal justice system. It burdens the education system. It burdens would-be entrepreneurs, it burdens both productivity and consumer buying power and therefore entire economies. The total cost of all these burdens well exceeds $1 trillion annually, and so the few hundred billion net additional cost of UBI pays for itself many times over. That’s the big-picture maths.

The real effects on motivation

But what about people then choosing not to work? Isn’t that a huge burden too? Well that’s where things get really interesting. For one, conditional welfare assistance creates a disincentive to work through removal of benefits in response to paid work. If accepting any amount of paid work will leave someone on welfare barely better off, or even worse off, what’s the point? With basic income, all income from paid work (after taxes) is earned as additional income so that everyone is always better off in terms of total income through any amount of employment – whether full time, part time or gig. Thus basic income does not introduce a disincentive to work. It removes the existing disincentive to work that conditional welfare creates.
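The disincentive described above can be made concrete with a little arithmetic. The following minimal Python sketch compares total income under a conditional benefit that is withdrawn as earnings rise with total income under a UBI that is not; the benefit amount, withdrawal rate and tax rate are illustrative assumptions rather than figures from the article.

```python
# Minimal sketch of work incentives: conditional welfare vs a UBI.
# BENEFIT, WITHDRAWAL_RATE and TAX_RATE are illustrative assumptions only.

BENEFIT = 12_000        # annual benefit / UBI (hypothetical)
WITHDRAWAL_RATE = 0.8   # benefit withdrawn per dollar earned (conditional scheme)
TAX_RATE = 0.25         # flat tax on earnings (hypothetical)

def conditional_welfare(earnings: float) -> float:
    """Total income when the benefit shrinks as earnings rise."""
    benefit = max(0.0, BENEFIT - WITHDRAWAL_RATE * earnings)
    return benefit + earnings * (1 - TAX_RATE)

def basic_income(earnings: float) -> float:
    """Total income when the benefit is kept regardless of earnings."""
    return BENEFIT + earnings * (1 - TAX_RATE)

for earned in (0, 5_000, 10_000, 20_000):
    print(f"earned {earned:>6,}: conditional {conditional_welfare(earned):>9,.0f}"
          f" | UBI {basic_income(earned):>9,.0f}")

# With these numbers, each extra dollar earned under the conditional scheme
# adds 0.75 in after-tax pay but removes 0.80 in benefit, so working more
# leaves the person slightly worse off until the benefit runs out. Under the
# UBI, every extra dollar earned adds 0.75 to total income.
```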

Fascinatingly, improved incentives are where basic income really shines. Studies of motivation reveal that rewarding activities with money is a good motivator for mechanistic work but a poor motivator for creative work. Combine that with the fact that creative work is set to be what’s left after most mechanistic work is handed off to machines, and we’re looking at a future where, increasingly, the work that’s left for humans is not best motivated extrinsically with money, but intrinsically, out of the pursuit of more important goals. It’s the difference between doing meaningless work for money, and using money to do meaningful work.

Basic income thus enables the future of work, and even recognizes all the unpaid intrinsically motivated work currently going on that could be amplified, for example in the form of the $700 billion in unpaid work performed by informal caregivers in the US every year, and all the work in the free/open source software movement (FOSSM) that’s absolutely integral to the internet.

There is also another way basic income could affect work incentives that is rarely mentioned and somewhat more theoretical. UBI has the potential to better match workers to jobs, dramatically increase engagement, and even transform jobs themselves through the power UBI provides to refuse them.

A truly free market for labour

How many people are unhappy with their jobs? According to Gallup, worldwide, only 13% of those with jobs feel engaged with them. In the US, 70% of workers are not engaged or actively disengaged, the cost of which is a productivity loss of around $500 billion per year. Poor engagement is even associated with a disinclination to donate money, volunteer or help others. It measurably erodes social cohesion.

At the same time, there are those among the unemployed who would like to be employed, but the jobs are taken by those who don’t really want to be there. This is an inevitable result of requiring jobs in order to live. With no real choice, people do work they don’t wish to do in exchange for money that may be insufficient – but that’s still better than nothing – and then cling to that paid work despite being the “working poor” and/or disengaged. It’s a mess.

Basic income – in 100 people

Take an economy without UBI. We’ll call it Nation A. For every 100 working-age adults there are 80 jobs. Half the workforce is not engaged by their jobs, and half again as many are unemployed, with half of them really wanting to be employed, but, as in a game of musical chairs, they’re left without a chair.

Basic income fundamentally alters this reality. By unconditionally providing income outside of employment, people can refuse to do the jobs that aren’t engaging them. This in turn opens up those jobs to the unemployed who would be engaged by them. It also creates the bargaining power for everyone to negotiate better terms. How many jobs would become more attractive if they paid more money or required fewer hours? How would this reorganizing of the labour supply affect productivity if the percentage of disengaged workers plummeted? How much more prosperity would that create?

Consider now an economy with basic income. Let’s call it Nation B. For every 100 working-age adults there are still 80 jobs, at least to begin with. The disengaged workforce says “no thanks” to the labour market as is, enabling all 50 people who want to work to do the jobs they want. To attract those who demand more compensation or shorter work weeks, some employers raise their wages. Others reduce the required hours. The result is a transformed labour market of more engaged, more employed, better paid, more productive workers. Fewer people are excluded.
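For readers who want the counting spelled out, here is a minimal Python tally of the stylised Nation A and Nation B numbers above. The reshuffling in Nation B, with disengaged workers stepping aside and willing workers taking their places, follows the article’s assumption rather than any empirical model.

```python
# Minimal tally of the Nation A / Nation B thought experiment above.
# The splits follow the article's stylised numbers; nothing here is empirical.

ADULTS = 100
JOBS = 80

# Nation A: no basic income
engaged    = JOBS // 2          # 40 workers engaged by their jobs
disengaged = JOBS - engaged     # 40 workers who are not
unemployed = ADULTS - JOBS      # 20 people without jobs
want_work  = unemployed // 2    # 10 of them really want a job

# Nation B: a basic income lets the disengaged step aside
willing_workers = engaged + want_work     # 50 people who want the jobs on offer
open_positions  = JOBS - willing_workers  # 30 jobs employers must make attractive

print(f"Nation A: {engaged} engaged, {disengaged} disengaged, "
      f"{unemployed} unemployed ({want_work} of whom want work)")
print(f"Nation B: {willing_workers} willing workers in jobs they want, "
      f"{open_positions} openings competing for staff with better pay or hours")
```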

Simply put, a basic income improves the market for labour by making it optional. The transformation from a coercive market to a free market means that employers must attract employees with better pay and more flexible hours. It also means a more productive workforce that potentially obviates the need for market-distorting minimum wage laws. Friction might even be reduced, so that people can move more easily from job to job, or from job to education/retraining to job, or even from job to entrepreneur, all thanks to more individual liquidity and the elimination of counter-productive bureaucracy and conditions.

Perhaps best of all, the automation of low-demand jobs becomes further incentivized through rising wages. The work that people refuse to do for less than a machine would cost to do it becomes a job for machines. And thanks to those replaced workers having a basic income, they aren’t just left standing in the cold in the job market’s ongoing game of musical chairs. They are instead better enabled to find new work, paid or unpaid, full-time or part-time, that works best for them.

The tip of a big iceberg

The idea of basic income is deceptively simple-sounding, but in reality it’s like an iceberg, with far more to be revealed as you dive deeper. Its big-picture price tag, an investment in human capital with far greater returns, and its effects on what truly motivates us are but glimpses of those depths. There are many more. Some are already known, like the positive effects on social cohesion and physical and mental health, as seen in the 42% drop in crime in Namibia and the 8.5% reduction in hospitalizations in Dauphin, Manitoba. Debts tend to fall. Entrepreneurship tends to grow. Other effects have yet to be discovered by further experiments. But the growing body of evidence behind cash transfers in general points to basic income as something far more transformative to the future of work than even its long history of consideration has imagined.

It’s like a game of Monopoly where the winning teams have rewritten the rules so players no longer collect money for passing Go. The rule change functions to exclude people from markets. Basic income corrects this. But it’s more than just a tool for improving markets by making them more inclusive; there’s something more fundamental going on.

Humans need security to thrive, and basic income is a secure economic base – the new foundation on which to transform the precarious present, and build a more solid future. That’s not to say it’s a silver bullet. It’s that our problems are not impossible to solve. Poverty is not a supernatural foe, nor is extreme inequality or the threat of mass income loss due to automation. They are all just choices. And at any point, we can choose to make new ones.

Based on the evidence we already have and will likely continue to build, I firmly believe one of those choices should be unconditional basic income as a new equal starting point for all.

World Economic Forum

How economic boom times in the West came to an end – Marc Levinson. 

Unprecedented growth marked the era from 1948 to 1973. Economists might study it forever, but it can never be repeated. Why? 

The second half of the 20th century divides neatly in two. The divide did not come with the rise of Ronald Reagan or the fall of the Berlin Wall. It is not discernible in a particular event, but rather in a shift in the world economy, and the change continues to shape politics and society in much of the world today.

The shift came at the end of 1973. The quarter-century before then, starting around 1948, saw the most remarkable period of economic growth in human history. In the Golden Age between the end of the Second World War and 1973, people in what was then known as the ‘industrialised world’ – Western Europe, North America, and Japan – saw their living standards improve year after year. They looked forward to even greater prosperity for their children. Culturally, the first half of the Golden Age was a time of conformity, dominated by hard work to recover from the disaster of the war. The second half of the age was culturally very different, marked by protest and artistic and political experimentation. Behind that ferment lay the confidence of people raised in a white-hot economy: if their adventures turned out badly, they knew, they could still find a job.

The year 1973 changed everything. High unemployment and a deep recession made experimentation and protest much riskier, effectively putting an end to much of it. A far more conservative age came with the economic changes, shaped by fears of failing and concerns that one’s children might have it worse, not better. Across the industrialised world, politics moved to the Right – a turn that did not avert wage stagnation, the loss of social benefits such as employer-sponsored pensions and health insurance, and the secure, stable employment that had proved instrumental to the rise of a new middle class and which workers had come to take for granted. At the time, an oil crisis took the blame for what seemed to be a sharp but temporary downturn. Only gradually did it become clear that the underlying cause was not costly oil but rather lagging productivity growth – a problem that would defeat a wide variety of government policies put forth to correct it.

The great boom began in the aftermath of the Second World War. The peace treaties of 1945 did not bring prosperity; on the contrary, the post-war world was an economic basket case. Tens of millions of people had been killed, and in some countries a large proportion of productive capacity had been laid to waste. Across Europe and Asia, tens of millions of refugees wandered the roads. Many countries lacked the foreign currency to import food and fuel to keep people alive, much less to buy equipment and raw material for reconstruction. Railroads barely ran; farm tractors stood still for want of fuel.

Everywhere, producing enough coal to provide heat through the winter was a challenge. As shoppers mobbed stores seeking basic foodstuffs, never mind luxuries such as coffee and cotton underwear, prices soared. Inflation set off waves of strikes in the United States and Canada as workers demanded higher pay to keep up with rising prices. The world’s economic outlook seemed dim. It did not look like the beginning of a golden age.

As late as 1948, incomes per person in much of Europe and Asia were lower than they had been 10 or even 20 years earlier. But 1948 brought a change for the better. In January, the US military government in Japan announced it would seek to rebuild the economy rather than exacting reparations from a country on the verge of starvation. In April, the US Congress approved the economic aid programme that would be known as the Marshall Plan, providing Western Europe with desperately needed dollars to import machinery, transport equipment, fertiliser and food. In June, the three occupying powers – France, the United Kingdom and the US – rolled out the deutsche mark, a new currency for the western zones of Germany. A new central bank committed to keeping inflation low and the exchange rate steady would oversee the deutsche mark.

Postwar chaos gave way to stability, and the war-torn economies began to grow. In many countries, they grew so fast for so long that people began to speak of the ‘economic miracle’ (West Germany), the ‘era of high economic growth’ (Japan) and the ‘30 glorious years’ (France). In the English-speaking world, this extraordinary period became known as the Golden Age.

What was it that made the Golden Age exceptional? Part of the answer is that economies were making up for lost time: after years of depression and wartime austerity, enormous needs for housing, consumer goods, equipment for farms, factories, railroads and electric generating plants stood ready to drive growth. But much more lay behind the Golden Age of economic growth than pent-up demand. Two factors deserve special attention.

First, the expanding welfare state. The Second World War shook up the social structures in all the wealthy countries, fundamentally altering domestic politics, in particular exerting an equalising force. As societies embarked on reconstruction, no one could deny that citizens who had been asked to sacrifice in war were entitled to share in the benefits of peace. In many cases, labour unions became the representatives of working people’s claims to peacetime dividends. Indeed, union membership reached historic highs, and union leaders sat alongside business and government leaders to hammer out social policy. Between 1944 and 1947, one country after another created old-age pension schemes, national health insurance, family allowances, unemployment insurance and more social benefits. These programmes gave average families a sense of security they had never known. Children from poor families could visit the doctor without great expense. The loss of a job or the death of a wage-earner no longer meant destitution.

Second, in addition to the growing welfare state, strong productivity growth contributed to rising living standards. Rising productivity – increasing the efficiency with which an economy uses labour, capital and other resources – is the main force that makes an economy grow. Because new technologies and better ways of doing business take time to filter through the economy, productivity improvements are usually slow. But in the postwar years, productivity grew very quickly. A unique combination of circumstances propelled it. In just a few years, millions of people moved from low-productivity farm work – more than 3 million mules still plowed furrows on US farms in 1945 – to construction and factory jobs that used the latest machinery.

In 1940, the average working-age adult in western Europe had less than five years of formal education. As governments invested heavily in high schools and universities after the war, they produced a more educated and literate workforce with the skills to produce far more wealth. Advances in national infrastructure gave direct boosts to national productivity. High-speed motorways enabled truck drivers to carry bigger loads over longer distances at higher speeds, greatly expanding markets for farms and factories. Six rounds of trade negotiations between 1947 and 1967, ultimately involving nearly 50 countries that signed the General Agreement on Tariffs and Trade (GATT), brought a massive increase in cross-border trade, forcing manufacturers to modernise or give up. Firms moved to take advantage of technological innovations to operate more productively, such as jet aircraft and numerically controlled machinery.

Between 1951 and 1973, propelled by strong productivity gains, the world economy grew at an annual rate of nearly 5 per cent. The impact on living standards was dramatic. Jobs were there for the asking; in 1966, West Germany’s unemployment rate touched an unprecedented 0.5 per cent. Electricity, indoor plumbing and television sets became common. Stoves burning coal or peat were replaced by central heating systems. Homes grew larger, and tens of millions of families acquired refrigerators and automobiles. The higher living standards did much more than simply bring new material goods. Retirement by 65, or even earlier, became the norm. Life expectancy jumped. Importantly, in Western Europe, North America and Japan, people across society shared in those gains. Prosperity was not limited to the urban elite. Most people began to live better, and they knew it. In the span of a quarter-century, living standards doubled and then, in many countries, doubled again.

The good times rolled on so long that people took them for granted. Between 1948 and 1973, Australia, Japan, Sweden and Italy had not a single year of recession. West Germany and Canada did almost as well. Governments and the economists who advised them happily claimed the credit. Careful economic management, they said, had put an end to cyclical ups and downs. Governments possessed more information about citizens and business than ever before, and computers could crunch the data to help policymakers determine the best course of action. In a lecture at Harvard University in 1966, Walter Heller, formerly chief economic adviser to presidents John F Kennedy and Lyndon B Johnson, trumpeted the success of what he called the ‘new economics’. ‘Conceptual advances and quantitative research in economics,’ he declared, ‘are replacing emotion with reason.’

The most influential proponent of such ideas was Karl Schiller, who became economy minister of West Germany, Europe’s largest economy, in 1966. A former professor at the University of Hamburg, where his students included the future West German Chancellor Helmut Schmidt, Schiller was a centrist Social Democrat. He stood apart from those on the Left who favoured state ownership of industry, but also from extreme free-market conservatives. His advice called for ‘a synthesis of planning and competition’. Schiller defined his philosophy thus: ‘As much competition as possible, as much planning as necessary.’

Most fundamentally, Schiller believed that government should commit itself to maintaining high employment, steady growth and stable prices. And it should do all this while keeping its international account in balance, within the framework of a free-market economy. These four commitments formed the corners of what he called the ‘magic square’. In December 1966, when Schiller became economy minister in a new coalition government, the magic square became official policy. Following Schiller’s version of Keynesian economics, his ministry’s experts advised federal and state governments how to adjust their budgets to achieve ‘equilibrium of the entire economy’. The ministry’s advice was based on an elaborate planning exercise that churned out five-year projections. In the spring of 1967, the finance ministry was told to adjust taxes and spending plans to increase business investment while slowing the growth of consumer spending. These moves, Schiller’s economic models promised, would bring economic growth averaging 4 per cent through 1971, along with 0.8 per cent unemployment, 1 per cent annual inflation and a 1 per cent current account surplus.

But in an economy that was overwhelmingly privately run, government alone could not reach perfection. Four or five times a year, Schiller summoned corporate executives, union presidents and the heads of business organisations to a conference room in the ministry. There he described the economic outlook and announced how much wages and investment could rise without compromising his national economic targets. Of course, he would add, wages and investment were private decisions, but he hoped that the government’s guidelines would contribute to ‘collective rationality’. Such careful stage management cemented Schiller’s fame. In 1969, for the first time, the Social Democrats outpolled every other party. The election that year became known as the ‘Schiller election’.

Schiller insisted that his policies had brought West Germany to ‘a sunny plateau of prosperity’ where inflation and unemployment were permanently vanquished. Year after year, however, the economy failed to perform as he instructed. In July 1972, when Schiller was denied control over the exchange rate, he stormed out of the cabinet and left elected office forever.

Schiller left with the West German economy roaring. Within 18 months, his claim that the government could ensure stable prices, robust growth and jobs for all blew up.

The headline event of 1973 was the oil crisis. On 6 October, Egyptian and Syrian armies attacked Israeli positions, starting the conflict that became known as the Yom Kippur War. By agreeing to slash production and raise the price of oil, Saudi Arabia, Iraq, Iran and other Middle Eastern oil exporters quickly backed the two Arab countries. Shipments to countries that supported Israel, including the US and the Netherlands, were cut off altogether.

Oil-importing countries responded in dramatic fashion. Western European countries lowered speed limits and rationed diesel supplies. From Italy to Norway, driving was banned on four consecutive Sundays in order to save fuel. The Japanese government shut down factories and told citizens to turn out the pilot lights on their water heaters. US truck drivers blocked highways to protest high fuel prices, and motorists queued for hours to top off their gasoline tanks. In a televised address, the US President Richard Nixon warned Americans: ‘We are heading toward the most acute shortages of energy since the Second World War.’

With petroleum prices far higher, economic growth collapsed in 1974. Around the world, inflation soared. When oil prices receded, the world economy failed to bounce back. Double-digit inflation dramatically undermined workers’ wage gains. From 1973 to 1979, average income per worker grew only half as fast as it had before 1973. Help-wanted signs vanished as unemployment rose. The economic experts, only recently so confident that their rational mathematical analysis had brought permanent prosperity, were flummoxed. Stable economic growth had given way to violent gyrations.

The underlying problem, it turned out, was not expensive petroleum but slow productivity growth. Through the 1960s and early ’70s, across the wealthy world, productivity had risen a strong 5 per cent a year. After 1973, the trend shifted clearly downward. Through the rest of the 20th century, productivity growth in the wealthy economies averaged less than 2 per cent a year. Diminished productivity growth translated directly into sluggish economic growth. The days when people could feel their living standards rising from one year to the next were over. As the good times failed to return, voters turned their fury on political leaders. In fact, there was little any Western politician could do to put their economies back on their previous tracks.

To give a short-term boost to an underperforming economy, central banks and governments have a variety of tools they can use. They can lower interest rates to make it cheaper to buy a car or build a factory. They can lower taxes to give consumers more money to spend. They can increase government spending to pump more cash into the economy. They can change regulations to make it easier for banks to lend money. But when it comes to an economy’s long-term growth potential, productivity is vital. It matters more than anything else – and productivity growth after the early 1970s was simply slower than before.

The reasons behind slowed productivity growth had nothing to do with any government’s economic policy. The historic move of rural peoples to the cities, around the world, could not be repeated. Once masses of peasant farmers and sharecroppers had shifted into more productive work in the cities, it was done. The great flow of previously unemployed women into the labour force was over. In the 1960s, building thousands of miles of superhighways brought massive economic benefits. But once those roads were open to traffic, adding lanes or exit ramps was far less consequential. In rich countries, literacy had risen to almost universal levels. After that historic jump, the effects of additional small increases in average education were comparatively slight. If higher productivity growth were to be regained, it would have to come from developing technological innovations and new approaches to business, and putting them to use in ways that allowed the business sector to operate more effectively.

When it comes to influencing innovation, governments have power. Grants for scientific research and education, and policies that make it easy for new firms to grow, can speed the development of new ideas. But what matters for productivity is not the number of innovations, but the rate at which innovations affect the economy – something almost totally beyond the ability of governments to control. Turning innovative ideas into economically valuable products and services can involve years of trial and error. Many of the basic technologies behind mobile telephones were developed in the 1960s and ’70s, but mobile phones came into widespread use only in the 1990s. Often, a new technology is phased in only over time as old buildings and equipment are phased out. Moreover, for reasons no one fully understands, productivity growth and innovation seem to move in long cycles. In the US, for example, between the 1920s and 1973, innovation brought strong productivity growth. Between 1973 and 1995, it brought much less. The years between 1995 and 2003 saw high productivity gains, and then again considerably less thereafter.

When the surge in productivity following the Second World War tailed off, people around the globe felt the pain. At the time, it appeared that a few countries – France and Italy for a few years in the late 1970s, Japan in the second half of the ’80s – had discovered formulas allowing them to defy the downward global productivity trend. But their economies revived only briefly before productivity growth waned. Jobs soon became scarce again, and improvements in living standards came more slowly. The poor productivity growth of the late 1990s was not due to taxes, regulations or other government policies in any particular country, but to global trends. No country escaped them.

Unlike the innovations of the 1950s and ’60s, which were welcomed widely, those of the late 20th century had costly side effects. While information technology, communications and freight transportation became cheaper and more reliable, giant industrial complexes became dinosaurs as work could be distributed widely to take advantage of labour supplies, transportation facilities or government subsidies. Workers whose jobs were relocated found that their years of experience and training were of little value in other industries, and communities that lost major employers fell into decay. Meanwhile, the welfare state on which they had come to rely began to deteriorate, its financial underpinnings stressed due to the slow growth of tax revenue in economies that were no longer buoyant. The widespread sharing in the mid-century boom was not repeated in the productivity gains at the end of the century, which accumulated at the top of the income scale.

For much of the world, the Golden Age brought extraordinary prosperity. But it also brought unrealistic expectations about what governments can do to assure full employment, steady economic growth and rising living standards. These expectations still shape political life today. Between 1979 and 1982, citizens in one country after another threw out the leaders who stood for the welfare state and voted in a wave of more Right-wing politicians – Margaret Thatcher, Reagan, Helmut Kohl, Yasuhiro Nakasone and many others – who promised to tame big government and let market forces, lower tax rates and deregulation bring the good times back. Today, nearly 40 years on, voters are again turning to the Right, hoping that populist leaders will know how to make slow-growing economies great again.

More than a generation ago, the free-market policies of Thatcher and Reagan proved no more successful at improving productivity and raising economic growth than the policies they supplanted. There is no reason to think that the populists of our day will do much better. The Golden Age was wonderful while it lasted, but it cannot be repeated. If there were a surefire method for coaxing extraordinary performance from mature economies, it likely would have been discovered a long time ago.

Aeon