The former governor of the Bank of England on reforming global finance.
Mervyn King was governor of the Bank of England from 2003 to 2013. In “The End of Alchemy” there is no gossip and few revelations. Instead Lord King uses his experience of the crisis as a platform from which to present economic ideas to non-specialists.
He does a good job of putting complex concepts into plain English. The discussion of the evolution of money, from Roman times to 19th-century America to today, is a useful introduction for those not quite sure what currency really is.
He explains why economies need central banks: at best, they are independent managers of the money supply and rein in the banking system. Central bankers like to give the impression that they have played such roles since time immemorial, but as Lord King points out, the reality is otherwise. The Fed was created only in 1913; believe it or not, before 1994 it revealed its interest-rate decisions to the public only weeks after the event. Even the Bank of England, founded in 1694, got the exclusive right to print banknotes in England and Wales only in 1844.
At times, Lord King can be refreshingly frank. He is no fan of austerity policies, saying that they have imposed “enormous costs on citizens throughout Europe”. He also reserves plenty of criticism for the economics profession. Since forecasting is so hit and miss, he thinks, the practice of giving prizes to the best forecasters “makes as much sense as it would to award the Fields Medal in mathematics to the winner of the National Lottery”.
The problem leading up to the global financial crisis, as Lord King sees it, is that commercial banks had little incentive to hold large quantities of safe, liquid assets. They knew that in a panic, the central bank would provide liquidity, no matter the quality of their balance sheets; in response they loaded up on risky investments.
‘It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity …’ Charles Dickens, A Tale of Two Cities
The End of Alchemy, Mervyn King
The past twenty years in the modern world were indeed the best of times and the worst of times. It was a tale of two epochs: growth and stability in the first, followed in the second by the worst banking crisis the industrialised world has ever witnessed. Within the space of little more than a year, between August 2007 and October 2008, what had been viewed as the age of wisdom was now seen as the age of foolishness, and belief turned into incredulity. The largest banks in the biggest financial centres in the advanced world failed, triggering a worldwide collapse of confidence and bringing about the deepest recession since the 1930s.
How did this happen? Was it a failure of individuals, institutions or ideas? The events of 2007-8 have spawned an outpouring of articles and books, as well as plays and films, about the crisis. If the economy had grown after the crisis at the same rate as the number of books written about it, then we would have been back at full employment some while ago.
Most such accounts, like the media coverage and the public debate at the time, focus on the symptoms and not the underlying causes. After all, those events, vivid though they remain in the memories of both participants and spectators, comprised only the latest in a long series of financial crises since our present system of money and banking became the cornerstone of modern capitalism after the Industrial Revolution in the eighteenth century. The growth of indebtedness, the failure of banks, the recession that followed, were all signs of much deeper problems in our financial and economic system.
Unless we go back to the underlying causes we will never understand what happened and will be unable to prevent a repetition and help our economies truly recover. This book looks at the big questions raised by the depressing regularity of crises in our system of money and banking. Why do they occur? Why are they so costly in terms of lost jobs and production? And what can we do to prevent them? It also examines new ideas that suggest answers.
In the spring of 2011, I was in Beijing to meet a senior Chinese central banker. Over dinner in the Diaoyutai State Guesthouse, where we had earlier played tennis, we talked about the lessons from history for the challenges we faced, the most important of which was how to resuscitate the world economy after the collapse of the western banking system in 2008. Bearing in mind the apocryphal answer of Premier Chou Enlai to the question of what significance one should attach to the French Revolution (it was ‘too soon to tell’), I asked my Chinese colleague what importance he now attached to the Industrial Revolution in Britain in the second half of the eighteenth century.
He thought hard. Then he replied: ‘We in China have learned a great deal from the West about how competition and a market economy support industrialisation and create higher living standards. We want to emulate that.’ Then came the sting in the tail, as he continued: ‘But I don’t think you’ve quite got the hang of money and banking yet.’ His remark was the inspiration for this book.
Since the crisis, many have been tempted to play the game of deciding who was to blame for such a disastrous outcome. But blaming individuals is counterproductive: it leads you to think that if just a few, or indeed many, of those people were punished then we would never experience a crisis again. If only it were that simple. A generation of the brightest and best were lured into banking, and especially into trading, by the promise of immense financial rewards and by the intellectual challenge of the work that created such rich returns. They were badly misled. The crisis was a failure of a system, and the ideas that underpinned it, not of individual policy-makers or bankers, incompetent and greedy though some of them undoubtedly were. There was a general misunderstanding of how the world economy worked. Given the size and political influence of the banking sector, is it too late to put the genie back in the bottle? No: it is never too late to ask the right questions, and in this book I try to do so.
If we don’t blame the actors, then why not the playwright? Economists have been cast by many as the villain. An abstract and increasingly mathematical discipline, economics is seen as having failed to predict the crisis. This is rather like blaming science for the occasional occurrence of a natural disaster. Yet we would blame scientists if incorrect theories made disasters more likely or created a perception that they could never occur, and one of the arguments of this book is that economics has encouraged ways of thinking that made crises more probable. Economists have brought the problem upon themselves by pretending that they can forecast. No one can easily predict an unknowable future, and economists are no exception.
Despite the criticism, modern economics provides a distinctive and useful way of thinking about the world. But no subject can stand still, and economics must change, perhaps quite radically, as a result of the searing experience of the crisis. A theory adequate for today requires us to think for ourselves, standing on the shoulders of giants of the past, not kneeling in front of them.
Economies that are capable of sending men to the moon and producing goods and services of extraordinary complexity and innovation seem to struggle with the more mundane challenge of handling money and banking. The frequency, and certainly severity, of crises has, if anything, increased rather than decreased over time.
In the heat of the crisis in October 2008, nation states took over responsibility for all the obligations and debts of the global banking system. In terms of its balance sheet, the banking system had been virtually nationalised but without collective control over its operations. That government rescue cannot conveniently be forgotten. When push came to shove, the very sector that had espoused the merits of market discipline was allowed to carry on only by dint of taxpayer support. The creditworthiness of the state was put on the line, and in some cases, such as Iceland and Ireland, lost. God may have created the universe, but we mortals created paper money and risky banks. They are man made institutions, important sources of innovation, prosperity and material progress, but also of greed, corruption and crises. For better or worse, they materially affect human welfare.
For much of modern history, and for good reason, money and banking have been seen as the magical elements that liberated us from a stagnant feudal system and permitted the emergence of dynamic markets capable of making the long-term investments necessary to support a growing economy. The idea that paper money could replace intrinsically valuable gold and precious metals, and that banks could take secure short-term deposits and transform them into long-term risky investments, came into its own with the Industrial Revolution in the eighteenth century. It was both revolutionary and immensely seductive. It was in fact financial alchemy, the creation of extraordinary financial powers that defy reality and common sense. Pursuit of this monetary elixir has brought a series of economic disasters from hyperinflations to banking collapses.
Why have money and banking, the alchemists of a market economy, turned into its Achilles heel?
The purpose of this book is to answer that question. It sets out to explain why the economic failures of a modern capitalist economy stem from our system of money and banking, the consequences for the economy as a whole, and how we can end the alchemy. Our ideas about money and banking are just as much a product of our age as the way we conduct our politics and imagine our past.
The twentieth-century experience of depression, hyperinflation and war changed both the world and the way economists thought about it. Before the Great Depression of the early 1930s, central banks and governments saw their role as stabilising the financial system and balancing the budget. After the Great Depression, attention turned to policies aimed at maintaining full employment. But post-war confidence that Keynesian ideas (the use of public spending to expand total demand in the economy) would prevent us from repeating the errors of the past was to prove touchingly naive. The use of expansionary policies during the 1960s, exacerbated by the Vietnam War, led to the Great Inflation of the 1970s, accompanied by slow growth and rising unemployment, the combination known as ‘stagflation’.
The direct consequence was that central banks were reborn as independent institutions committed to price stability. So successful was this that in the 1990s not only did inflation fall to levels unseen for a generation, but central banks and their governors were hailed for inaugurating an era of economic growth with low inflation, the Great Stability or Great Moderation. Politicians worshipped at the altar of finance, bringing gifts in the form of lax regulation and receiving support, and sometimes campaign contributions, in return. Then came the fall: the initial signs that some banks were losing access to markets for short-term borrowing in 2007, the collapse of the industrialised world’s banking system in 2008, the Great Recession that followed, and increasingly desperate attempts by policy-makers to engineer a recovery. Today the world economy remains in a depressed state. Enthusiasm for policy stimulus is back in fashion, and the wheel has turned full circle.
The recession is hurting people who were not responsible for our present predicament, and they are, naturally, angry. There is a need to channel that anger into a careful analysis of what went wrong and a determination to put things right. The economy is behaving in ways that we did not expect, and new ideas will be needed if we are to prevent a repetition of the Great Recession and restore prosperity.
Many accounts and memoirs of the crisis have already been published. Their titles are numerous, but they share the same invisible subtitle: ‘how I saved the world’. So although in the interests of transparency I should make clear that I was an actor in the drama, Governor of the Bank of England for ten years between 2003 and 2013, through the Great Stability, the banking crisis itself, the Great Recession that followed, and the start of the recovery, this is not a memoir of the crisis with revelations about private conversations and behind-the-scenes clashes. Of course, those happened, as in any walk of life. But who said what to whom and when can safely, and properly, be left to dispassionate and disinterested historians who can sift and weigh the evidence available to them after sufficient time has elapsed and all the relevant official and unofficial papers have been made available.
Instant memoirs, whether of politicians or officials, are usually partial and self-serving. I see little purpose in trying to set the record straight when any account that I gave would naturally also seem self-serving. My own record of events and the accompanying Bank papers will be made available to historians when the twenty-year rule permits their release.
This book is about economic ideas. My time at the Bank of England showed that ideas, for good or ill, do influence governments and their policies. The adoption of inflation targeting in the early 1990s and the granting of independence to the Bank of England in 1997 are prime examples. Economists brought intellectual rigour to economic policy and especially to central banking. But my experience at the Bank also revealed the inadequacies of the ‘models’, whether verbal descriptions or mathematical equations, used by economists to explain swings in total spending and production. In particular, such models say nothing about the importance of money and banks and the panoply of financial markets that feature prominently in newspapers and on our television screens.
Is there a fundamental weakness in the intellectual economic framework underpinning contemporary thinking?
An exploration of some of these basic issues does not require a technical exposition, and I have stayed away from one. Of course, economists use mathematical and statistical methods to understand a complex world; they would be remiss if they did not. Economics is an intellectual discipline that requires propositions to be not merely plausible but subject to the rigour of a logical proof. And yet there is no mathematics in this book. It is written in (I hope) plain English and draws on examples from real life. Although I would like my fellow economists to read the book in the hope that they will take forward some of the ideas presented here, it is aimed at the reader with no formal training in economics but an interest in the issues.
In the course of this book, I will explain the fundamental causes of the crisis and how the world economy lost its balance; how money emerged in earlier societies and the role it plays today; why the fragility of our financial system stems directly from the fact that banks are the main source of money creation; why central banks need to change the way they respond to crises; why politics and money go hand in hand; why the world will probably face another crisis unless nations pursue different policies; and, most important of all, how we can end the alchemy of our present system of money and banking.
By alchemy I mean the belief that all paper money can be turned into an intrinsically valuable commodity, such as gold, on demand and that money kept in banks can be taken out whenever depositors ask for it. The truth is that money, in all forms, depends on trust in its issuer. Confidence in paper money rests on the ability and willingness of governments not to abuse their power to print money. Bank deposits are backed by long-term risky loans that cannot quickly be converted into money. For centuries, alchemy has been the basis of our system of money and banking. As this book shows, we can end the alchemy without losing the enormous benefits that money and banking contribute to a capitalist economy.
Four concepts are used extensively in the book: disequilibrium, radical uncertainty, the prisoner’s dilemma and trust. These concepts will be familiar to many, although the context in which I use them may not. Their significance will become clear as the argument unfolds, but a brief definition and explanation may be helpful at the outset.
Disequilibrium is the absence of a state of balance between the forces acting on a system. As applied to economics, disequilibrium is a position that is unsustainable, meaning that at some point a large change in the pattern of spending and production will take place as the economy moves to a new equilibrium. The word accurately describes the evolution of the world economy since the fall of the Berlin Wall, which I discuss in Chapter 1.
Radical uncertainty refers to uncertainty so profound that it is impossible to represent the future in terms of a knowable and exhaustive list of outcomes to which we can attach probabilities. Economists conventionally assume that ‘rational’ people can construct such probabilities. But when businesses invest, they are not rolling dice with known and finite outcomes on the faces; rather they face a future in which the possibilities are both limitless and impossible to imagine. Almost all the things that define modern life, and which we now take for granted, such as cars, aeroplanes, computers and antibiotics, were once unimaginable. The essential challenge facing everyone living in a capitalist economy is the inability to conceive of what the future may hold. The failure to incorporate radical uncertainty into economic theories was one of the factors responsible for the misjudgements that led to the crisis.
The prisoner’s dilemma may be defined as the difficulty of achieving the best outcome when there are obstacles to cooperation. Imagine two prisoners who have been arrested and kept apart from each other. Both are offered the same deal: if they agree to incriminate the other they will receive a light sentence, but if they refuse to do so they will receive a severe sentence if the other incriminates them. If neither incriminates the other, then both are acquitted. Clearly, the best outcome is for both to remain silent. But if they cannot cooperate the choice is more difficult. The only way to guarantee the avoidance of a severe sentence is to incriminate the other. And if both do so, the outcome is that both receive a light sentence. But this non-cooperative outcome is inferior to the cooperative outcome. The difficulty of cooperating with each other creates a prisoner’s dilemma. Such problems are central to understanding how the economy behaves as a whole (the field known as macroeconomics) and to thinking through both how we got into the crisis and how we can now move towards a sustainable recovery. Many examples will appear in the following pages. Finding a resolution to the prisoner’s dilemma problem in a capitalist economy is central to understanding and improving our fortunes.
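The payoff logic described above can be made concrete with a short sketch. The numbers below are illustrative assumptions, not from the text; the structure (acquittal if both stay silent, a light sentence for incriminating, a severe sentence for staying silent while the other incriminates) follows the paragraph above, and the sketch shows why incriminating is the only move that guarantees avoiding the severe sentence, even though mutual silence is better for both:

```python
# Sentences in years (lower is better); the values 0, 1 and 10 are
# illustrative assumptions chosen to match the ordering in the text.
ACQUITTED, LIGHT, SEVERE = 0, 1, 10

def sentence(my_move, their_move):
    """Return my sentence, given both moves ('silent' or 'incriminate')."""
    if my_move == "silent":
        # Silence pays off only if the other prisoner is also silent.
        return ACQUITTED if their_move == "silent" else SEVERE
    # Incriminating guarantees at worst a light sentence.
    return LIGHT

moves = ("silent", "incriminate")

# Worst case for each of my moves: the 'guarantee' reasoning in the text.
worst = {m: max(sentence(m, t) for t in moves) for m in moves}
print(worst)  # silence risks the severe sentence; incriminating caps the risk

# Yet the cooperative outcome (both silent) beats mutual incrimination.
print(sentence("silent", "silent") < sentence("incriminate", "incriminate"))
```

Without a way to cooperate, each prisoner's safe choice leads both to the inferior outcome, which is exactly the dilemma the chapter returns to in macroeconomic settings.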
Trust is the ingredient that makes a market economy work. How could we drive, eat, or even buy and sell, unless we trusted other people? Everyday life would be impossible without trust: we give our credit card details to strangers and eat in restaurants that we have never visited before. Of course, trust is supplemented with regulation: fraud is a crime and there are controls on the conditions in restaurant kitchens; but an economy works more efficiently with trust than without. Trust is part of the answer to the prisoner’s dilemma. It is central to the role of money and banks, and to the institutions that manage our economy. Long ago, Confucius emphasised the crucial role of trust in the authorities: ‘Three things are necessary for government: weapons, food and trust. If a ruler cannot hold on to all three, he should give up weapons first and food next. Trust should be guarded to the end: without trust we cannot stand.’
Those four ideas run through the book and help us to understand the origin of the alchemy of money and banking and how we can reduce or even eliminate that alchemy.
When I left the Bank of England in 2013, I decided to explore the flaws in both the theory and practice of money and banking, and how they relate to the economy as a whole. I was led deeper and deeper into basic questions about economics. I came to believe that fundamental changes are needed in the way we think about macroeconomics, as well as in the way central banks manage their economies.
A key role of a market economy is to link the present and the future, and to coordinate decisions about spending and production not only today but tomorrow and in the years thereafter. Families will save if the interest rate is high enough to overcome their natural impatience to spend today rather than tomorrow. Companies will invest in productive capital if the prospective rate of return exceeds the cost of attracting finance. And economic growth requires saving and investment to add to the stock of productive capital and so increase the potential output of the economy in the future. In a healthy growing economy all three rates (the interest rate on saving, the rate of return on investment, and the rate of growth) are well above zero. Today, however, we are stuck with extraordinarily low interest rates, which discourage saving (the source of future demand) and, if maintained indefinitely, will pull down rates of return on investment, diverting resources into unprofitable projects. Both effects will drag down future growth rates. We are already some way down that road. It seems that our market economy today is not providing an effective link between the present and the future.
I believe there are two reasons for this failure. First, there is an inherent problem in linking a known present with an unknowable future. Radical uncertainty presents a market economy with an impossible challenge: how are we to create markets in goods and services that we cannot at present imagine? Money and banking are part of the response of a market economy to that challenge. Second, the conventional wisdom of economists about how governments and central banks should stabilise the economy gives insufficient weight to the importance of radical uncertainty in generating an occasional large disequilibrium. Crises do not come out of thin air but are the result of the unavoidable mistakes made by people struggling to cope with an unknowable future. Both issues have profound implications and will be explored at greater length in subsequent chapters.
Inevitably, my views reflect the two halves of my career. The first was as an academic, a student in Cambridge, England, and a Kennedy scholar at Harvard in the other Cambridge, followed by teaching positions on both sides of the Atlantic. I experienced at first hand the evolution of macroeconomics from literary exposition, where propositions seemed plausible but never completely convincing, into a mathematical discipline, where propositions were logically convincing but never completely plausible. Only during the crisis of 2007-9 did I look back and understand the nature of the tensions between the surviving disciples of John Maynard Keynes who taught me in the 1960s, primarily Richard Kahn and Joan Robinson, and the influx of mathematicians and scientists into the subject that fuelled the rapid expansion of university economics departments in the same period. The old school ‘Keynesians’ were mistaken in their view that all wisdom was to be found in the work of one great man, and as a result their influence waned. The new arrivals brought mathematical discipline to a subject that prided itself on its rigour. But the informal analysis of disequilibrium of economies, radical uncertainty, and trust as a solution to the prisoner’s dilemma was lost in the enthusiasm for the idea that rational individuals would lead the economy to an efficient equilibrium. It is time to take those concepts more seriously.
The second half of my career comprised twenty-two years at the Bank of England, the oldest continuously functioning central bank in the world, from 1991 to 2013, as Chief Economist, Deputy Governor and then Governor. That certainly gave me a chance to see how money could be managed. I learned, and argued publicly, that this is done best not by relying on gifted individuals to weave their magic, but by designing and building institutions that can be run by people who are merely professionally competent. Of course individuals matter and can make a difference, especially in a crisis. But the power of markets, the expression of hundreds of thousands of investors around the world, is a match for any individual, central banker or politician, who fancies his ability to resist economic arithmetic. As one of President Clinton’s advisers remarked, ‘I used to think if there was reincarnation, I wanted to come back as the president or the Pope or a .400 baseball hitter. But now I want to come back as the bond market. You can intimidate everybody.’ Nothing has diminished the force of that remark since it was made over twenty years ago.
In 2012, I gave the first radio broadcast in peacetime by a Governor of the Bank of England since Montagu Norman delivered a talk on the BBC in March 1939, only months before the outbreak of the Second World War. As Norman left Broadcasting House, he was mobbed by British Social Credits Party demonstrators carrying flags and slogan-boards bearing the words: CONSCRIPT THE BANKERS FIRST! Feelings also ran high in 2012. The consequences of the events of 2007-9 are still unfolding, and anger about their effects on ordinary citizens is not diminishing. That disaster was a long time in the making, and will be just as long in the resolving.
But the cost of lost output and employment from our continuing failure to manage money and banking and prevent crises is too high for us to wait for another crisis to occur before we act to protect future generations.
Charles Dickens’ novel A Tale of Two Cities has not only a very famous opening sentence but an equally famous closing sentence. As Sydney Carton sacrifices himself to the guillotine in the place of another, he reflects: ‘It is a far, far better thing that I do, than I have ever done …’ If we can find a way to end the alchemy of the system of money and banking we have inherited then, at least in the sphere of economics, it will indeed be a far, far better thing than we have ever done.
THE GOOD, THE BAD AND THE UGLY
‘I think that Capitalism, wisely managed, can probably be made more efficient for attaining economic ends than any alternative system yet in sight.’ John Maynard Keynes, The End of Laissez-faire (1926)
‘The experience of being disastrously wrong is salutary; no economist should be spared it, and few are.’ John Kenneth Galbraith, A Life in Our Times (1982)
History is what happened before you were born. That is why it is so hard to learn lessons from history: the mistakes were made by the previous generation. As a student in the 1960s, I knew why the 1930s were such a bad time. Outdated economic ideas guided the decisions of governments and central banks, while the key individuals were revealed in contemporary photographs as fuddy-duddies who wore whiskers and hats and were ignorant of modern economics. A younger generation, in academia and government, trained in modern economics, would ensure that the Great Depression of the 1930s would never be repeated.
In the 1960s, everything seemed possible. Old ideas and conventions were jettisoned, and a new world beckoned. In economics, an influx of mathematicians, engineers and physicists brought a new scientific approach to what the nineteenth-century philosopher and writer Thomas Carlyle christened the ‘dismal science’. It promised not just a better understanding of our economy, but an improved economic performance.
The subsequent fifty years were a mixed experience. Over that period, national income in the advanced world more than doubled, and in the so-called developing world hundreds of millions of people were lifted out of extreme poverty. And yet runaway inflation in the 1970s was followed in 2007-9 by the biggest financial crisis the world has ever seen. How do we make sense of it all? Was the post-war period a success or a failure?
The origins of economic growth
The history of capitalism is one of growth and rising living standards interrupted by financial crises, most of which have emanated from our mismanagement of money and banking. My Chinese colleague spoke an important, indeed profound, truth.
The financial crisis of 2007-9 (hereafter ‘the crisis’) was not the fault of particular individuals or economic policies. Rather, it was merely the latest manifestation of our collective failure to manage the relationship between finance, the structure of money and banking, and a capitalist system.
Failure to appreciate this explains why most accounts of the crisis focus on the symptoms and not the underlying causes of what went wrong. The fact that we have not yet got the hang of it does not mean that a capitalist economy is doomed to instability and failure. It means that we need to think harder about how to make it work.
Over many years, a capitalist economy has proved the most successful route to escape poverty and achieve prosperity.
Capitalism, as I use the term here, is an economic system in which private owners of capital hire wage earners to work in their businesses and pay for investment by raising finance from banks and financial markets.
The West has built the institutions to support a capitalist system: the rule of law to enforce private contracts and protect property rights; intellectual freedom to innovate and publish new ideas; anti-trust regulation to promote competition and break up monopolies; and collectively financed services and networks, such as education, water, electricity and telecommunications, which provide the infrastructure to support a thriving market economy. Those institutions create a balance between freedom and restraint, and between unfettered competition and regulation. It is a subtle balance that has emerged and evolved over time. And it has transformed our standard of living. Growth at a rate of 2.5 per cent a year, close to the average experienced in North America and Europe since the Second World War, raises real total national income twelvefold over one century, a truly revolutionary outcome.
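The twelvefold figure is simple compound arithmetic, easily checked:

```python
# Compound growth: 2.5 per cent a year, sustained for a century,
# multiplies national income by (1.025)^100, roughly twelvefold.
growth_rate = 0.025
years = 100
multiple = (1 + growth_rate) ** years
print(round(multiple, 1))  # prints 11.8, i.e. close to twelvefold
```

The same arithmetic shows how sensitive the outcome is to the rate: at 1.5 per cent a year the century multiple is only about four and a half.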
Over the past two centuries, we have come to take economic growth for granted. Writing in the middle of that extraordinary period of economic change, in the mid-eighteenth century, the Scottish philosopher and political economist Adam Smith identified the source of the breakout from relative economic stagnation (an era during which productivity, output per head, was broadly constant and any increase resulted from discoveries of new land or other natural resources) to a prolonged period of continuous growth of productivity: specialisation. It was possible for individuals to specialise in particular tasks, the division of labour, and by working with capital equipment to raise their productivity by many times the level achieved by a jack-of-all-trades. To illustrate his argument, Smith employed his now famous example of a pin factory:
A workman could scarce, perhaps, with his utmost industry, make one pin in a day, and certainly could not make twenty. But in the way in which this business is now carried on, not only the whole work is a peculiar trade, but it is divided into a number of branches. One man draws out the wire, another straights it, a third cuts it, a fourth points it, a fifth grinds it at the top for receiving the head … The important business of making a pin is, in this manner, divided into about eighteen distinct operations, which, in some manufactories, are all performed by distinct hands.
The factory Smith was describing employed ten men and made over 48,000 pins in a day.
The application of technical knowhow to more and more tasks increased specialisation and raised productivity. Specialisation went hand in hand with an even greater need for both a means to exchange the fruits of one’s labour for an ever wider variety of goods produced by other specialists, money, and a way to finance the purchase of the capital equipment that made specialisation possible, banks.
As each person in the workforce became more specialised, more machinery and capital investment was required to support them, and the role of money and banks increased. After a millennium of roughly constant output per person, from the middle of the eighteenth century productivity started, slowly but surely, to rise. Capitalism was, quite literally, producing the goods. Historians will continue to debate why the Industrial Revolution occurred in Britain: population growth, plentiful supplies of coal and iron, supportive institutions, religious beliefs and other factors all feature in recent accounts.
But the evolution of money and banking was a necessary condition for the Revolution to take off.
Almost a century later, with the experience of industrialisation and a massive shift of labour from the land to urban factories, socialist writers saw things differently. For Karl Marx and Friedrich Engels the future was clear. Capitalism was a temporary staging post along the journey from feudalism to socialism. In their Communist Manifesto of 1848, they put forward their idea of ‘scientific socialism’ with its deterministic view that capitalism would ultimately collapse and be replaced by socialism or communism. Later, in the first volume of Das Kapital (1867), Marx elaborated (at great length) on this thesis and predicted that the owners of capital would become ever richer while excessive capital accumulation would lead to a falling rate of profit, reducing the incentive to invest and leaving the working class immersed in misery. The British industrial working class in the nineteenth century did indeed suffer miserable working conditions, as graphically described by Charles Dickens in his novels. But no sooner had the ink dried on Marx’s famous work than the British economy entered a long period of rising real wages (money wages adjusted for the cost of living). Even the two world wars and the intervening Great Depression in the 1930s could not halt rising productivity and real wages, and broadly stable rates of profit. Economic growth and improving living standards became the norm.
But if capitalism did not collapse under the weight of its own internal contradictions, neither did it provide economic security. During the twentieth century, the extremes of hyperinflations and depressions eroded both living standards and the accumulated wealth of citizens in many capitalist economies, especially during the Great Depression in the 1930s, when mass unemployment sparked renewed interest in the possibilities of communism and central planning, especially in Europe. The British economist John Maynard Keynes promoted the idea that government intervention to bolster total spending in the economy could restore full employment, without the need to resort to fully fledged socialism.
After the Second World War, there was a widespread belief that government planning had won the war and could be the means to win the peace. In Britain, as late as 1964 the newly elected Labour government announced a ‘National Plan’. Inspired by a rather naive version of Keynesian ideas, it focused on policies to boost the demand for goods and services rather than the ability of the economy to produce them. As the former outstripped the latter, the result was inflation. On the other side of the Atlantic, the growing cost of the Vietnam War in the late 1960s also led to higher inflation.
Rising inflation put pressure on the internationally agreed framework within which countries had traded with each other since the Bretton Woods Agreement of 1944, named after the conference held in the New Hampshire town in July of that year. Designed to allow a war-damaged Europe slowly to rebuild its economy and reintegrate into the world trading system, the agreement created an international monetary system under which countries set their own interest rates but fixed their exchange rates among themselves. For this to be possible, movements of capital between countries had to be severely restricted otherwise capital would move to where interest rates were highest, making it impossible to maintain either differences in those rates or fixed exchange rates. Exchange controls were ubiquitous, and countries imposed limits on investments in foreign currency. As a student, I remember that no British traveller in the 1960s could take abroad with them more than £50 a year to spend.
The new international institutions, the International Monetary Fund (IMF) and the World Bank, would use funds provided by their members to finance temporary shortages of foreign currency and the investment needed to replace the factories and infrastructure destroyed during the Second World War. Implicit in this framework was the belief that countries would have similar and low rates of inflation. Any loss of competitiveness in one country, as a result of higher inflation than in its trading partners, was assumed to be temporary and would be met by a deflationary policy to restore competitiveness while borrowing from the IMF to finance a short-term trade deficit. But in the late 1960s differences in inflation across countries, especially between the United States and Germany, appeared to be more than temporary, and led to the breakdown of the Bretton Woods system in 1970-1. By the early 1970s, the major economies had moved to a system of ‘floating’ exchange rates, in which currency values are determined by private sector supply and demand in the markets for foreign exchange.
Inevitably, the early days of floating exchange rates reduced the discipline on countries to pursue low inflation. When the two oil shocks of the 1970s (in 1973, when an embargo by Arab countries led to a quadrupling of prices, and in 1979, when prices doubled after disruption to supply following the Iranian Revolution) hit the western world, the result was the Great Inflation, with annual inflation reaching 13 per cent in the United States and 27 per cent in the United Kingdom.
From the late 1970s onwards, the western world then embarked on what we can now see were three bold experiments to manage money, exchange rates and the banking system better. The first was to give central banks much greater independence in order to bring down and stabilise inflation, subsequently enshrined in the policy of inflation targeting, the goal of national price stability. The second was to allow capital to move freely between countries and encourage a shift to fixed exchange rates both within Europe, culminating in the creation of a monetary union, and in a substantial proportion of the most rapidly growing part of the world economy, particularly China, which fixed its exchange rates against the US dollar, the goal of exchange rate stability. And the third experiment was to remove regulations limiting the activities of the banking and financial system to promote competition and allow banks both to diversify into new products and regions and to expand in size, with the aim of bringing stability to a banking system often threatened in the past by risks that were concentrated either geographically or by line of business, the goal of financial stability.
These three simultaneous experiments might now be best described as having three consequences: the Good, the Bad and the Ugly. The Good was a period between about 1990 and 2007 of unprecedented stability of both output and inflation: the Great Stability. Monetary policy around the world changed radically. Inflation targeting and central bank independence spread to more than thirty countries. And there were significant changes in the dynamics of inflation, which on average became markedly lower, less variable and less persistent.
The Bad was the rise in debt levels. Eliminating exchange rate flexibility in Europe and the emerging markets led to growing trade surpluses and deficits. Some countries saved a great deal while others had to borrow to finance their external deficit. The willingness of the former to save outweighed the willingness of the latter to spend, and so long-term interest rates in the integrated world capital market began to fall. The price of an asset, whether a house, shares in a company or any other claim on the future, is the value today of future expected returns (rents, the value of housing services from living in your own home, or dividends). To calculate that price one must convert future into current values by discounting them at an interest rate. The immediate effect of a fall in interest rates is to raise the prices of assets across the board. So as long-term interest rates in the world fell, the value of assets, especially of houses, rose. And as the values of assets increased, so did the amounts that had to be borrowed to enable people to buy them. Between 1986 and 2006, household debt rose from just under 70 per cent of total household income to almost 120 per cent in the United States and from 90 per cent to around 140 per cent in the United Kingdom.
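The discounting mechanism King describes can be sketched with the simplest valuation formula, a perpetuity. The rent and rate figures below are invented for illustration; the point is only the direction of the effect:

```python
# A perpetuity: an asset paying a constant expected annual return forever
# is worth annual_return / discount_rate today.
def asset_price(annual_return: float, discount_rate: float) -> float:
    return annual_return / discount_rate

rent = 10_000  # hypothetical annual rental value of a house
for rate in (0.06, 0.04, 0.02):
    print(f"discount rate {rate:.0%}: price = {asset_price(rent, rate):,.0f}")

# Halving the discount rate doubles the asset's price, and hence the
# amount a buyer must borrow to purchase it.
```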
The Ugly was the development of an extremely fragile banking system. In the USA, Federal banking regulators’ increasingly lax interpretation of the provisions to separate commercial and investment banking introduced in the 1933 Banking Act (often known as Glass-Steagall, after the senator and representative who respectively led the passage of the legislation) reached its inevitable conclusion with the Gramm-Leach-Bliley Act of 1999, which swept away any remaining restrictions on the activities of banks. In the UK, the so-called Big Bang of 1986, which started as a measure to introduce competition into the Stock Exchange, led to takeovers of small stockbroking firms and mergers between commercial banks and securities houses. Banks diversified and expanded rapidly after deregulation. In continental Europe so-called universal banks had long been the norm. The assets of large international banks doubled in the five years before 2008. Trading of new and highly complex financial products among banks meant that they became so closely interconnected that a problem in one would spread rapidly to others, magnifying rather than spreading risk.
Banks relied less and less on their own resources to finance lending and became more and more dependent on borrowing. The equity capital of banks, the funds provided by the shareholders of the bank, accounted for a declining proportion of overall funding. Leverage, the ratio of total assets (or liabilities) to the equity capital of a bank, rose to extraordinary levels. On the eve of the crisis, the leverage ratio for many banks was 30 or more, and for some investment banks it was between 40 and 50. A few banks had ratios even higher than that. With a leverage ratio of even 25 it would take a fall of only 4 per cent in the average value of a bank’s assets to wipe out the whole of the shareholders’ equity and leave it unable to service its debts.
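The leverage arithmetic can be checked directly. The figures below are the ones quoted in the text, wrapped in a small illustrative calculation:

```python
# Leverage = total assets / equity. At leverage 25, equity is only
# 1/25 = 4% of assets, so a 4% fall in asset values wipes it out.
def equity_after_loss(assets: float, leverage: float, loss_pct: float) -> float:
    equity = assets / leverage
    return equity - assets * loss_pct

print(equity_after_loss(100.0, 25, 0.04))  # 0.0: shareholders' equity gone
print(equity_after_loss(100.0, 50, 0.02))  # 0.0: at leverage 50, 2% suffices
```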
By 2008, the Ugly led the Bad to overwhelm the Good. The crisis, one might say catastrophe, of the events that began to unfold under the gaze of a disbelieving world in 2007 was the failure of all three experiments. Greater stability of output and inflation, although desirable in itself, concealed the build-up of a major disequilibrium in the composition of spending. Some countries were saving too little and borrowing too much to be able to sustain their path of spending in the future, while others saved and lent so much that their consumption was pushed below a sustainable path. Total saving in the world was so high that interest rates, after allowing for inflation, fell to levels incompatible in the long run with a profitable growing market economy. Falling interest rates led to rising asset values and increases in the debt taken out against those more valuable assets. Fixed exchange rates exacerbated the burden of the debts, and in Europe the creation of monetary union in 1999 sapped the strength of many of its economies, as they became increasingly uncompetitive. Large, highly leveraged banks proved unstable and were vulnerable to even a modest loss of confidence, resulting in contagion to other banks and the collapse of the system in 2008.
At their outset the ill-fated nature of the three experiments was not yet visible. On the contrary, during the 1990s the elimination of high and variable inflation, which had undermined market economies in the 1970s, led to a welcome period of macroeconomic stability. The Great Stability, or the Great Moderation as it was dubbed in the United States, was seen, as in many ways it was, as a success for monetary policy. But it was unsustainable. Policy-makers were conscious of problems inherent in the first two experiments, but seemed powerless to do anything about them. At international gatherings, such as those of the IMF, policy-makers would wring their hands about the ‘global imbalances’ but no one country had any incentive to do anything about it. If a country had, on its own, tried to swim against the tide of falling interest rates, it would have experienced an economic slowdown and rising unemployment without any material impact on either the global economy or the banking system. Even then the prisoner’s dilemma was beginning to rear its ugly head.
Nor was it obvious how the unsustainable position of the world economy would come to an end. I remember attending a seminar of economists and policy-makers at the IMF as early as 2002 where the consensus was that there would eventually be a sharp fall in the value of the US dollar, which would produce a change in spending patterns. But long before that could happen, the third experiment ended with the banking crisis of September and October 2008. The shock that some of the biggest and most successful commercial banks in North America and Europe either failed, or were seriously crippled, led to a collapse of confidence which produced the largest fall in world trade since the 1930s. Something had gone seriously wrong.
Opinions differ as to the cause of the crisis. Some see it as a financial panic in which fundamentally sound financial institutions were left short of cash as confidence in the credit-worthiness of banks suddenly changed and professional investors stopped lending to them, a liquidity crisis. Others see it as the inevitable outcome of bad lending decisions by banks, a solvency crisis, in which the true value of banks’ assets had fallen by enough to wipe out most of their equity capital, meaning that they might be unable to repay their debts. But almost all accounts of the recent crisis are about the symptoms, the rise and fall of housing markets, the explosion of debt and the excesses of the banking system rather than the underlying causes of the events that overwhelmed the economies of the industrialised world in 2008. Some even imagine that the crisis was solely an affair of the US financial sector. But unless the events of 2008 are seen in their global economic context, it is hard to make sense of what happened and of the deeper malaise in the world economy.
The story of what happened can be explained in little more than a few pages: everything you need to know but were afraid to ask about the causes of the recent crisis. So here goes.
The story of the crisis
By the start of the twenty-first century it seemed that economic prosperity and democracy went hand in hand. Modern capitalism spawned growing prosperity based on growing trade, free markets and competition, and global banks. In 2008 the system collapsed. To understand why the crisis was so big, and came as such a surprise, we should start at the key turning point, the fall of the Berlin Wall in 1989. At the time it was thought to represent the end of communism, indeed the end of the appeal of socialism and central planning.
For some it was the end of history. For most, it represented a victory for free market economics. Contrary to the prediction of Marx, capitalism had displaced communism. Yet who would have believed that the fall of the Wall was not just the end of communism but the beginning of the biggest crisis in capitalism since the Great Depression?
What has happened over the past quarter of a century to bring about this remarkable change of fortune in the position of capitalist economies?
After the demise of the socialist model of a planned economy, China, countries of the former Soviet Union and India embraced the international trading system, adding millions of workers each year to the pool of labour around the world producing tradeable, especially manufactured, goods. In China alone, over 70 million manufacturing jobs were created during the twenty-first century, far exceeding the 42 million working in manufacturing in 2012 in the United States and Europe combined. The pool of labour supplying the world trading system more than trebled in size. Advanced economies benefited from an influx of cheap consumer goods at the expense of employment in the manufacturing sector.
The aim of the emerging economies was to follow Japan and Korea in pursuing an export-led growth strategy. To stimulate exports, their exchange rates were held down by fixing them at a low level against the US dollar. The strategy worked, especially in the case of China. Its share in world exports rose from 2 per cent to 12 per cent between 1990 and 2013. China and other Asian economies ran large trade surpluses. In other words, they were producing more than they were spending and saving more than they were investing at home. The desire to save was very strong. In the absence of a social safety net, households in China chose to save large proportions of their income to provide self-insurance in the event of unemployment or ill-health, and to finance retirement consumption. Such a high level of saving was exacerbated by the policy from 1980 of limiting most families to one child, making it difficult for parents to rely on their children to provide for them in retirement.
Asian economies in general also saved more in order to accumulate large holdings of dollars as insurance in case their banking system ran short of foreign currency, as happened to Korea and other countries in the Asian financial crisis of the 1990s.
The End of Alchemy: Money, Banking and the Future of the Global Economy
by Mervyn King
Supply creates its own demand?
‘If you build it, they will come.’
It’s a Latin saying, Si tu id aedificas, ei venient, but it’s probably more recognisable because it sounds like what that disembodied voice says to Kevin Costner in the film Field of Dreams (1989). And in the film, Costner does build it, a baseball field, and people do come. In either case, it’s a good way of summing up the case for supply-side economics.
But to understand that case, we need to break it down into its constituent elements. And the thinking behind it goes like this: if you want to stimulate the economy, then cut taxes on the rich, those who invest in and build things, and they will use this extra money to produce more stuff. Why? Because supply creates its own demand, so if they produce more they will sell more, and the economy will expand. An expanding economy, in turn, benefits everybody. There will be more jobs, wages will be higher, and government budget deficits will shrink.
This latter effect, of course, might seem counterintuitive. But the argument is that even though tax rates go down, the amount of economic activity these cuts unleash will grow everyone’s income to such an extent that the total tax collected by the government, even at these lower rates, will actually go up.
That’s what the supply-siders contend.
Given that the supply-side approach has been the policy of the Republican Party for decades, this argument has proved convincing to a lot of people. But let’s look at it a little more carefully.
The notion that supply creates its own demand is known as Say’s law, after the French economist Jean-Baptiste Say (1767-1832), who is credited with its formulation. The thought is that when you produce more, you have to spend additional money to do so. This additional spending, in turn, provides people with extra income, and therefore the wherewithal to buy the additional goods you have created.
Of course, you cannot just build anything. To sell more of something, it has to be something that people actually want. Say himself acknowledged this. But let’s focus on the mechanics of how increasing production is supposed to give people the wherewithal to purchase more of what they do want. Unfortunately, the presumptions here just don’t make sense.
First, at most, building more goods simply introduces funds equivalent to their cost of production into the economy. But things don’t sell for their cost of production: no one builds anything unless they think they can price it at a profit. And if you don’t think people will have enough new money to pay this price, why would you increase production?
Second, a lot of the costs of production are what economists call ‘fixed’ costs; that is, the cost of big things such as factories and office buildings and expensive machines and equipment, rather than the costs of the additional labour and supplies necessary to build one extra thing, which are called ‘marginal’ costs. The total cost of production combines fixed and marginal costs, and fixed costs usually represent the far greater share. This means that when you build more stuff, it is not true that all the costs of production are introduced into the economy as new money.
You have merely injected new money to the extent you have incurred additional marginal costs.
And it is unlikely that a producer would take the risk of ramping up production in a troubled economic environment if all that could be recovered was the marginal costs.
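The shortfall described in the last few paragraphs can be put in numbers. All the figures below are invented for illustration:

```python
# Hypothetical producer: extra production injects less money into the
# economy than buyers would need to purchase the extra goods at a
# profitable price.
fixed_costs = 80_000     # factory and machines: already sunk, not re-spent
marginal_cost = 5.00     # labour and materials per extra unit
price = 7.00             # asking price: marginal cost plus a profit margin
extra_units = 1_000

money_injected = extra_units * marginal_cost  # wages and supplies paid out
revenue_needed = extra_units * price          # what buyers must spend
print(money_injected, revenue_needed)         # 5000.0 7000.0: a shortfall
```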
Third, producers receive many of the goods needed in the production of further goods from their suppliers on credit. Why presume that all the marginal costs of additional production have actually been paid at the time the goods hit the shelves? Or that the ultimate consumer is going to be willing to use credit to increase consumption in troubled times, even if those higher up the chain have used credit to increase production?
More concerning still, if consumers do use credit, unless we later provide them with more income, we will have simply set ourselves up for another financial collapse when the teaser rates on their loans time out and further payments become unaffordable, as happened in 2008.
None of these problems with supply-side thinking will come as a surprise to anyone who runs a business. They are happy to see their taxes cut, sure, but they are not going to use this extra money to increase production unless they think that their customers will have enough new money to buy these additional goods. And no reasonable business person thinks that enough new money can be introduced by increasing production alone. If they did, record amounts of cash wouldn’t be sitting in corporate bank accounts doing nothing, which is what has been happening for years now. Obviously, the people who control this cash don’t believe that supply creates its own demand. They think that increasing production without first seeing an increase in demand would be foolhardy.
History is also not on the supply-siders’ side. To see the failure of the supply-side approach at the national level, all we need do is look at the 2001 and 2003 tax cuts signed into law by the then president of the United States, George W Bush.
These tax cuts did not increase investment or production. Rather, the rich either hoarded this additional money or used it to bid up the price of existing assets, creating asset bubbles and exponentially increasing economic inequality. And because economic activity did not increase enough to offset the loss of government revenue from reduced taxes, the deficit exploded. To see a similar result on the state level, in turn, we can look at the recent supply-side ‘Kansas experiment’. There, massive tax cuts on the rich and corporations almost bankrupted the state.
Remember also that in the 19th century, when Say devised his law, there was a huge amount of untapped demand for new goods; most costs were marginal costs; and most transactions were for cash, not credit. At that time, perhaps it seemed like supply did create its own demand. But not today.
Today, to stimulate the economy, we need to increase demand first. And the best way to do this is by putting more money in the hands of the people whom the economist John Maynard Keynes described in 1936 as having the highest ‘marginal propensity to consume’.
These are not the rich, but rather the poor and middle-class. For, as a group, these are the people who can be counted on to spend all their income whereas, as we have already seen, the rich are likely to keep a chunk of it in cash.
Once demand is increased among the poor and middle class, Keynes argued, production will rise to meet it.
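Keynes’s ‘marginal propensity to consume’ argument is often summarised as the spending multiplier: a transfer is spent, re-spent, and so on, with a fraction (the MPC) passed on at each round, so total spending = transfer / (1 − MPC). A rough sketch with invented MPC values:

```python
# Keynesian multiplier sketch: the geometric series of successive
# rounds of spending sums to transfer / (1 - mpc).
def total_spending(transfer: float, mpc: float) -> float:
    return transfer / (1 - mpc)

print(total_spending(100.0, 0.9))  # high MPC (poor/middle class): ~10x the transfer
print(total_spending(100.0, 0.3))  # low MPC (the rich): much less stimulus
```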
In deciding whether to go with the supply-side or the Keynesian approach to stimulating the economy, there is one more consideration that is relevant.
Recent history has shown that we can’t be sure that economic expansion alone will solve our wider economic problems. Almost all of the benefits of economic growth during the past 30 years or so have accrued to the rich, and mostly to the super-rich. Real income for most people has been stagnant or even declined. The new jobs that have been created are mostly temporary, low-wage, no-benefit jobs. Permanent, good-wage jobs with benefits have continued to disappear.
Rather than giving money to the rich in these circumstances and hoping that it trickles down to the rest of us, as the supply-siders suggest, it would be better to give money to the poor and middle-class, as the Keynesians suggest. The Keynesian approach, after all, has worked many times in the past. Indeed, it’s how the West emerged from the Great Depression. But most importantly, if for some reason it doesn’t work, at least we will have made the right people better off.
Mark R Reiff has taught political, legal and moral philosophy at the University of Manchester, the University of Durham and the University of California, Davis, and he was a Faculty Fellow at the Safra Center for Ethics at Harvard University.
The Government has given the Tax Working Group chaired by Sir Michael Cullen a well nigh impossible task.
Its terms of reference are peppered with the word “fair”. It occurs seven times. How can the tax system be made “fair, balanced and efficient”?
But the terms of reference go on to rule out all the obvious ways of making it fairer.
For a start, the working group is not allowed to recommend any increase in income tax rates.
Yet there might well be a case for a more progressive income tax scale on vertical equity grounds. That is, the principle that those with higher incomes, or ability to pay, should pay a greater amount of tax.
And if they wanted to reduce the impact of bracket creep in the middle of the income distribution, the revenue cost could be reduced by introducing a new top rate for those with most ability to pay. Too bad, it is forbidden.
Inheritance tax is also ruled out of consideration, despite the obvious equity argument for having one. Yet legacies fall within economists’ conception of income (what can be consumed in a given period while keeping real wealth intact), and if you have an income tax, you should tax income. The broader the base, the lower rates can be.
Real capital gains are also income, and the Government has clearly pointed the tax working group’s gaze in that direction.
But the family home is off limits for any capital gains tax, and the ground under it for any land tax.
That is despite the fact that housing equity represents the lion’s share of household wealth, which is distributed even more unequally than income is.
The “third rail” (touch it and you die) status of owner-occupied housing would also rule out any attempt to tax imputed rents, which also count as economic income.
The major part of a homeowner’s return on investment is the avoided cost of renting a similar property. It falls within the purist definition of income, but the electoral fate of Gareth Morgan’s party last year suggests that persuading voters of the merits of such a reform is a forbidding challenge.
But perhaps the most unfortunate restriction on the scope of the Tax Working Group’s task, from the standpoint of fairness, is that it has been instructed not to consider the interaction between the tax system and the transfer (welfare) system.
Victoria University of Wellington economists Simon Chapple and Toby Moore, in a wide-ranging and trenchant submission to the Tax Working Group, argue that this makes no sense.
It is the net effect of both tax and transfers which reduces the stark inequality of market incomes into the (still substantial) inequality of disposable incomes.
And, it turns out, they do so less here than in most other developed countries. Officials, in a background paper for the working group on tax and fairness, point out that the OECD reckons New Zealand has the fourth least redistributive tax and transfer system among the 30 rich countries it looked at.
As one example of the incoherence of the tax benefit system, Chapple and Moore cite child support payments.
For someone on one of the main benefits, like Sole Parent Support, every dollar of child support reduces their benefit by $1, an effective tax rate of 100 per cent until the benefit payment is entirely clawed back by the Government.
For someone else who has a job well enough paid not to need a benefit, the same child support is tax-free income. How fair is that?
The tax/welfare interface is riddled with such anomalies and perverse incentives.
Research by Patrick Nolan at the Productivity Commission, into the effective marginal tax rates which arise from the targeting of income support policies, indicates it is not uncommon for people to find themselves facing rates as high as 100 per cent. That is, they could significantly increase the number of hours they work and be not one dollar better off, because of the thresholds and abatement rates which apply.
These poverty traps are not so much a ditch as a crevasse.
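The arithmetic behind such effective marginal tax rates is simple to sketch. Here is a minimal illustration in Python; the tax rate, benefit level, free area and abatement rate are purely hypothetical round numbers, not New Zealand’s actual parameters:

```python
# Minimal sketch of an effective marginal tax rate (EMTR) calculation when
# a means-tested benefit abates against earnings. All parameters below are
# hypothetical illustrations, not actual New Zealand settings.

def disposable_income(gross, tax_rate, benefit, abatement_rate, free_area):
    """Net weekly income when the benefit abates at `abatement_rate` for
    every dollar of gross earnings above `free_area`."""
    abated_benefit = max(0.0, benefit - abatement_rate * max(0.0, gross - free_area))
    return gross * (1 - tax_rate) + abated_benefit

# Example: 17.5% income tax, a $300/week benefit abating at 70 cents per
# extra dollar earned above a $100/week free area.
before = disposable_income(200, 0.175, 300, 0.70, 100)
after = disposable_income(201, 0.175, 300, 0.70, 100)

# Share of the extra dollar earned that is lost to tax plus abatement.
emtr = 1 - (after - before)
print(round(emtr, 3))  # 0.875, i.e. an 87.5% effective marginal tax rate
```

Set the abatement rate to 1.0, as with the dollar-for-dollar clawback described above, and the effective rate on that income reaches 100 per cent: the extra dollar delivers nothing.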
Chapple and Moore also highlight the inconsistency between the tax and benefit systems in the units they apply to: individuals or families. “The benefit (or negative tax) system assesses need on the basis of family income. The income tax system assesses ability to pay on the basis of individual income. We assess fairness or equity issues in terms of families’, not individuals’, circumstances.
“Yet we have a tax system which bases income tax, which has equity as an important goal, on individuals rather than families.”
They urge the Tax Working Group to look into the pros and cons of a family-based tax system.
“A lot of OECD countries, including the United States, adjust income tax to family circumstances,” says Chapple, a former chief economist at the Ministry of Social Development who now heads the Institute for Governance and Policy Studies.
All this would not be so bad if the newly formed Welfare Expert Advisory Group was going to get to grips with the complexities of the tax/welfare interface.
But it is not clear that it will.
Its brief does include giving “high level” recommendations for improving Working for Families.
And it is instructed to give “due consideration to interactions between the welfare overhaul and related Government work programmes such as the Tax Working Group”.
So it arguably does have a mandate, should it wish, to get to grips with the often toxic interaction between the tax and welfare systems.
But it is far from clear that the Welfare Expert Advisory Group has the expertise, or for that matter the budget, to do so.
More likely, the issue will fall between two stools.
For a Government that proclaims the reduction of child poverty to be a central goal, it is a striking omission.
From an MMT (Modern Monetary Theory) perspective, there are no financial limits on the support governments can provide public education. There is also no sense to the notion that public education should “make profits” in a competitive market.
One of the ways in which the neoliberal era has entrenched itself and, in this case, will perpetuate its negative legacy for years to come is to infiltrate the educational system.
This has occurred in various ways over the decades as the corporate sector has sought more influence over what is taught and researched in universities. The benefits of this influence to capital are obvious: it creates a stream of compliant recruits who have learned to jump through hoops to get delayed rewards.
In the period after full employment was abandoned, firms also realised they no longer had to offer training to their staff in the way they did when vacancies outstripped available workers. As a result they have increasingly sought to impose their ‘job-specific’ training requirements onto universities, which, under pressure from government funding constraints, have erroneously seen this as a way to stay afloat. So traditional liberal arts programs have come under attack: they don’t have a ‘product’ to sell as the market paradigm has become increasingly entrenched. There has also been an attack on ‘basic’ research as the corporate sector demands universities innovate more. That is code for privatising public research to advance profit.
But capital can still see more rewards coming if it can further dictate curriculum and research agendas. So how to proceed? Invent a crisis. If you can claim that universities will become irrelevant in the next decade unless they do what capital desires of them, then the policy debate becomes further skewed away from where it should be. That ‘crisis invention’ happened this week in Australia.
This is a case of a vested interest starting with a series of false assumptions and a non-problem and then creating a series of ‘solutions’ to that problem which have no meaning if the actual situation is correctly understood and appraised.
It is just assumed that education has to be provided on a competitive basis in a market for profit. It is never questioned whether that is an applicable paradigm in which to operate.
Then it is just assumed that within that ‘market’ some ‘firms’ (universities) will go out of ‘business’. Why? Because it is just assumed that governments will not be able to fund them any longer because it has limited ‘money’.
See the trend. One myth creates a construction that leads to further deductions that are equally false and so it goes.
That is public policy formation neoliberal style.
Consider the so-called ‘professional services’ firm Ernst and Young, which began life as an accounting firm and morphed into something much more comprehensive and neoliberal.
Its recent history is littered with scandals involving accounting and audit fraud, including association with the collapses of Akai Holdings (2009), Sons of Gwalia (2009), Moulin Global Eyecare (2010) and Lehman Brothers (2010), along with many other incidents in which EY (as it is now known) was forced into paying settlements.
It was eviscerated by the US government for its part in “criminal tax avoidance” schemes in 2013. In 2010, it paid “$10 million to settle a New York lawsuit accusing the accounting firm of helping Lehman Brothers Holdings Inc deceive investors in the years leading up to its 2008 collapse” and facilitating a “massive accounting fraud”. This unsavoury firm has established a long list of ‘deals’ with various authorities to avoid criminal prosecution. The question is why its executives have not served time for their part in these scandals.
The 2017 book by Jesse Eisinger The Chickenshit Club: Why the Justice Department Fails to Prosecute Executives is worth a read in that regard. He says that the outcomes of increased political lobbying, a decline in culture at the US Department of Justice and the networking of defense lawyers resulted in a “blunting and removal of prosecutorial tools in white-collar corporate investigations.”
He wrote that there was a ‘revolving door’ between government justice officials and the major law firms representing these banksters and financial fraudsters which meant that the Justice Department was skewed to producing outcomes that were “ultimately to the benefit of corporations”.
As the Slate review noted (July 18, 2017), “government lawyers have too often decided they’re satisfied shaking down companies for settlement money paid for by shareholders, instead of taking on the much harder task of bringing charges against individual executives”.
We are facing a similar situation to that outlined in his book in Australia at present with the Royal Commission on Banking. Whether the criminal behaviour being revealed almost daily as the hearings continue will result in jail time remains to be seen. In 2015, though, Australian authorities did lock up a former EY executive for 14 years for his part in “a tax fraud and money-laundering” racket.
So there is hope.
So, overall, I would assess that this firm has been an entrenched part of the neoliberal machine, providing services to all manner of questionable and criminal behaviour around the world.
Anyway, as we have seen throughout history, these characters have no shame and re-emerge from scandal with new names (EY rather than Ernst and Young), new logos, flash new websites and mountains of bluster and push.
In the last week (May 1, 2018), its Oceania office has released a report, The university of the future, which outlines just how insidious these types of outfits really are.
The main claims made by the company in this report are:
“The dominant university model in Australia, a broad-based teaching and research institution, supported by a large asset base and a large, predominantly in-house back office will prove unviable in all but a few cases over the next 10-15 years.”
Because universities will have to “merge parts of the education sector with other sectors” (venture capital and the like).
Increased “contestability of markets and funding”, and the fact that “governments face tight budgetary environments”, mean that “Universities will need to compete for students and government funds as never before”.
The globalisation argument is wheeled out. Why not? It has worked as a smokescreen for some decades now. So, “global mobility will grow for students, academics, and university brands. This will not only intensify competition, but also create opportunities for much deeper global partnerships and broader access to student and academic talent.”
And then the actual agenda is unveiled:
Universities will need to build significantly deeper relationships with industry in the decade ahead to differentiate teaching and learning programs, support the funding and application of research, and reinforce the role of universities as drivers of innovation and growth.
Instrumentalism to the fore.
A spokesperson for the Report told the press that:
We should not underestimate the challenge, it’s not clear that all institutions will be able to make the leap. Universities are faculty focused and prioritise the needs of teaching and research staff over students.
And was quoted as saying:
A lot of the content of degrees no longer matches the actual work that students will be doing.
The neoliberal era has attempted to define every aspect of society in terms of the stylised free market paradigm.
Imposing a mainstream economics textbook model of the market as the exemplar of how we should value things is deeply flawed.
Even within its own logic the model succumbs to “market failure”. The existence of effects external to the transaction means that the private market over-allocates resources to an activity when social costs exceed private costs, and under-allocates resources to it when social benefits exceed private benefits.
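To make the over-allocation point concrete, here is a toy linear version of the textbook model in Python; the demand and cost parameters are invented for illustration only:

```python
# Toy textbook externality model with linear marginal benefit and cost.
# Marginal benefit of consumption: a - b*q. Producers' marginal cost: c + d*q.
# Each unit may also impose a constant external cost on third parties.
# All parameter values are hypothetical.

def equilibrium_quantity(a, b, c, d, external_cost=0.0):
    """Quantity where marginal benefit equals marginal cost plus any
    external cost (set external_cost=0 for the private market outcome)."""
    return (a - c - external_cost) / (b + d)

# Demand: MB = 100 - q; private supply: MC = 20 + q;
# each unit imposes an external cost of 10 on third parties (e.g. pollution).
q_private = equilibrium_quantity(100, 1, 20, 1)  # the market ignores the externality
q_social = equilibrium_quantity(100, 1, 20, 1, external_cost=10)

print(q_private)  # 40.0
print(q_social)   # 35.0, so the unregulated market over-allocates resources
```

With a positive externality the sign flips: pass a negative external_cost and the social optimum exceeds the private outcome, i.e. the market under-allocates.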
But they persist in championing the concept and primacy of ‘consumer sovereignty’, which in textbooks is held out as being the force that delivers the optimal allocation of resources because competitive firms provide goods and services at the lowest cost to satisfy the desires of the consumers.
Even in these simplistic textbook stories the dominance of the ‘supply-side’ is ignored (advertising, collusion, etc).
If ever we needed a reminder of how firms can monopolise information and break laws (consumer protections etc.), we just have to think about the behaviour of the banksters in the lead-up to the GFC and beyond.
While the demand-side sovereignty story is compromised by supply-side dominance, in the area of education, it is totally inapplicable, given the nature of the process.
Education cannot be reduced to being a ‘product’ that consumers choose. Education is a process of transferring knowledge that the ‘Master’ possesses to the ‘Apprentice’ who has no knowledge (in the area). By definition, the Apprentice doesn’t know what they do not know and cannot be in a position to ‘choose’ optimal outcomes. That has to be the prerogative of the ‘Master’, who has spent years amassing knowledge and craft.
In the case of education, how can the child know what is best? How can they meaningfully appraise what is a good quality education and what is a poor quality education?
The fact that the funding cuts have led to a stream of fly-by-night education providers in Australia who have left thousands of students stranded when they have gone broke is evidence of the failure of a market model.
The reality is that children do not demand programs. The universities are increasingly pressured by politicians (via funding) and corporations (via grants etc) to tailor the programs to the “market” agenda.
Higher education can only ever be a supply-determined activity and at that point the “market model” breaks down irretrievably.
But notwithstanding all this, the neo-liberal era has imposed a very narrow conception of value in relation to our consideration of human activity and endeavour. We have been hectored and bullied into thinking that value equals private profit and that public life has to fit this conception. In doing so we severely diminish the quality of life.
In the education sphere, the bean-counters have no way of knowing what these social costs or benefits are, and so the decision-making systems become cruder: how much money will an academic program make relative to how much it costs in dollars?
In some cases, this is drilled down to how much money an individual academic makes relative to his/her cost? This is a crude application of the private market calculus. It is a totally unsuitable way of thinking about education provision. It has little relevance to deeper meaning and the sort of qualities which bind us as humans to ourselves, into families, into communities, and as nations. It imposes a poverty on all of us by diminishing our concept of knowledge and forcing us to appraise everything as if it should be “profitable”.
So constructing educational activity in terms of “what students will be doing” is a fundamentally flawed way of thinking about it.
This is really what the agenda is. The Ernst and Young spokesperson claimed that:
There will most likely be much more work-integrated learning in tertiary courses, which is not necessarily students doing work experience but firms co-developing the curriculum and actually getting students to work through complex real-life problems under the mentorship of academic and industry leaders.
So the firms want to set what students are exposed to.
Education becomes training, and specific, profit-oriented training at that. It is anathema to a progressive future and the exemplar of the complete infiltration of neoliberal values into our core social institutions. The neoliberal era has created a conflict in the schooling and higher education sector between traditional liberal approaches and the so-called instrumentalist paradigm.
The assault on public education is one of the neoliberal battlefronts along with labour and product market regulations, public ownership, trade rules, etc. This conflict has come from three sources:
First, governments have become infested with the neoliberal myths and have imposed various cutbacks to school and higher education spending in the misguided attempt to ‘save’ money and cut fiscal deficits.
Second, this fiscal attack has been accompanied by an elevation in the view that education should be more market oriented and models of ‘consumer-driven’ structures have been imposed on educational institutions.
Schooling system administrators and a new breed of university managers took up the neoliberal agenda with relish, not least because their own pay skyrocketed and the previous relativities within the academic hierarchy, between the staff who taught and researched and those who took management roles, lost all sense of proportion.
Instead of rebelling and making a political issue of the funding cuts, the increased demand for STEM-type activity and the disregard for liberal arts/humanities curricula (which in Australia at least would have seen the government back down), the higher education managers embraced the new agenda without demur.
Come in, the bean-counters! The overpaid managers then created a phalanx of managerial bean-counters who have become obsessed with KPIs and ‘busy work’, harassing staff with ever-expanding lists of requirements and measurements. The bean-counters (for example, the finance divisions within universities) are largely unproductive drains on institutional revenue and are increasingly drawn from the corporate sector with little experience in education. This trend has then dovetailed with the third source of conflict between liberal and instrumentalist views on education.
Third, capitalists have always tried to embrace the educational system as a tool for their own advancement but social democratic movements have, in varying ways, resisted the sheer instrumentalism that the business sector seeks. The education system is continually pressured by the dominant elites to act as a breeder for ‘capitalist values’ and to reproduce the hierarchical and undemocratic social relationships that are required to keep the workers at bay and expand the interests of capital.
So there is an overlap between the way education is organised and the way the workplace is organised.
Capital also sees education as being primarily involved in the development of job-specific skills (vocational, instrumental) rather than serving any broader goals. The neo-liberal era has seen this type of corporate instrumentalism within education advanced to new heights. The revolving door between profit-seeking corporations and senior management positions within the educational sector is testimony of how corporate values are being elevated above traditional educational aspirations.
You only have to consider Ernst and Young’s “Framework for Assessing and Designing a University Future Model”, which they summarise in a graphic.
Consider the language: “Customers” (not students), “Products” (not knowledge creation), “Role within Value Chain” (not pure knowledge), “Brand and market position”, etc.
I don’t consider this graphic to be remotely relevant to the educational process where knowledge is imparted in a heterodox environment and critical reasoning capacities are developed. The idea that education is a product sold in a market is as far from a progressive ideal as you can get.
From an MMT (Modern Monetary Theory) perspective, there are no financial limits on the support governments can provide public education. There is also no sense to the notion that public education should “make profits” in a competitive market.
The only way that these sorts of debates will progress, however, is to take them out of the fiscal policy realm where they are largely inapplicable and start talking about rights and higher human values and what different interpretations of these rights and value concepts have for real resources allocations and redistribution.
Apart from their scandalous history, Ernst and Young are, in my view, disqualified from being taken seriously by their inputs to the public macroeconomics debate.
In their Budget 2018 report, Feeding the animal spirits, the spokesperson claims that:
There are good reasons to worry about persistent budget deficits and the national finances do need to be fixed.
And that summarises how stupid and venal the company is.
The postwar generation, now retiring in luxury, stands accused of a wilful failure to safeguard young people’s interests.
The late 1940s were about bombsites, rationing, loss and mourning, but amid the gloom a new generation was emerging. In the grim, grey aftermath of war, children were born on an unprecedented scale in a population explosion: the baby boomers born between 1946 and the mid 60s had arrived. It was time for a new life. It was time for the young to grow up with faith in a better tomorrow.
When we baby boomers reached adolescence, creating the teenager in the process, it was as if the floodlights had been switched on, revealing a colourful, contrary, anti-authoritarian Britain. In our teens, with rock’n’roll if not much cash, we were the lucky, cocky generation.
Anthropologist Helen Fisher inelegantly described the maturing of this huge postwar bulge in the population as “like a pig moving through a python”, changing society as we grew older on a scale never known before. We challenged the Victorian puritanism, censorship, class snobbery and inhibitions of the establishment. Full employment put money in the pockets of managers and factory workers alike. In spanking new houses with inside lavatories and proper bathrooms, hire purchase allowed him (and less so, her) to spend, spend, spend as if, overnight, everyone had become a toff. It could only get better.
Yet today, “baby boomer” is a toxic phrase, shorthand for greed and selfishness, for denying the benefits we took for granted to subsequent generations, notably beleaguered millennials, who reached adulthood in the early years of this century. So, where did it all go so very wrong?
It may have been due partly to our irritating habit of hogging the cultural limelight, with constant references to the swinging 60s serenaded by endless revivals of Lazarus-like pop groups who refuse to die. But there is a much more serious set of charges, too. We were, and are, accused of sabotaging our children’s future, hoarding power and money while expecting those with the least to foot the potentially hefty bills as we march towards our 90s.
Leading the flood of critics has been David Willetts, author in 2010 of The Pinch, a book that sparked furious debate. Lord Willetts, then a Conservative minister, is now chair of the Resolution Foundation think tank.
“The charge,” he declared, “is that the boomers have been guilty of a monumental failure to protect the interests of future generations.”
Next week, one jury will deliver its verdict: the Resolution Foundation’s intergenerational commission publishes its groundbreaking two-year investigation into millennial Britain on 8 May. The commission rightly says that intergenerational fairness is a major issue, but so too are the troubling inequalities within the generations. “Intergenerational war doesn’t reflect how people feel about the issues or how they live their lives as families,” says Torsten Bell, director of the foundation.
The commission signals that it is time to remodel the welfare state, which was crafted at a time when a pension was only expected to last a handful of years before death took its toll. Young people are deeply anxious about housing, jobs and pensions, while those who are older are concerned about health, social care, the fragility of the welfare state and the future of their own children.
“Everybody wants to fix this,” says Bell. “If we want to keep our promises to the different generations and maintain the welfare state we have to think radically about how we do this.”
The commission has already revealed a profound cross-generational pessimism about the prospects of the young. The escalator that for decades ensured the younger generation had a better standard of living than their parents has stalled, and that has ramifications for life. Britain, along with Greece, is now the most pessimistic of the advanced economies, a mood that has potentially catastrophic implications for a country’s wellbeing and resilience.
“When an entire cohort is not succeeding, everyone understandably feels a real need to fight tooth and nail to keep what they’ve got. The rich are much better suited to win that war,” Bell warns. “That’s a real challenge for social mobility, and a disaster for politics.”
Pessimism is not new. In the 1970s and 80s, inflation reached 21%, three million people were on the dole, poverty was soaring and Thatcherism had laid waste to manufacturing and industry. On a full grant, and from a working class background, I was one of the 8% who went to university, charged up with notions of (women’s) liberation, revolution and an envy of Mary Quant’s unaffordable wardrobe.
My best friend had left school at 16 and moved up the managerial ladder, earning what our mothers called “a good wage”. Pessimism was the national trait but we all had hope. I was 28 and single when I put a deposit on a tiny flat; my parents were in their late 40s before they took out a hefty mortgage on a modest bungalow. That escalator in action.
Life, money and opportunities had an elasticity that they lack today. A minority of millennials are rich but the majority are definitely not, while almost two million older pensioners (mainly female, the silent generation) live in deep poverty.
The Resolution Foundation has provided forensic detail of how this has come about and why. A complex mix of reasons includes the financial crisis, austerity and the reluctance of successive governments (albeit ones filled with baby boomers) to radically tackle the challenges of housing, health, social care, employment and a woefully deregulated market at a time when people are living so much longer; baby boomer banditry is not among them. Politicians have failed to decide equitably, in this different climate, who pays, how, and what they receive in return.
“We are facing a new set of problems,” says Frances O’Grady, general secretary of the TUC and a member of the commission. “We have people with degrees doing Mickey Mouse jobs and young people who will have no occupational pension and no house to sell to see them through old age. That’s not the fault of mum and dad. If we think that, we are tackling the wrong problems. It’s not about redistributing the crumbs from the rich man’s table but restoring fairness.”
So, according to the foundation, what is life like for the average millennial, in and outside London? For some, including women, ethnic minorities and LGBT groups, progress has come in terms of rights and personal freedoms. However, even at a time when unemployment is at its lowest and inflation minimal, aspirations of having a home, a fulfilling job and a higher income than the generation before are being extinguished.
Only a third of millennials own their own home, compared with almost two thirds of baby boomers at the same age. It will take a millennial on average 19 years to save for a deposit, compared with three years in the 1980s. A third of millennials will, it is predicted, have a lifetime of renting with less space, poorer conditions, longer commutes and more insecurity than the baby boomers experienced.
In the 1960s, those born before the second world war spent 8% of their income on housing; that has risen to almost a quarter of net income for millennials. Right to buy has reduced the social housing stock (55,000 homes sold in 10 years) and house building is running at half the annual need. At the same time, private landlords lack proper regulation, so families can be evicted at two months’ notice, and rents can be astronomic.
Low wages are also endemic. Two out of five non-graduate jobs are filled by people with degrees, so the less qualified, who formerly went into proper apprenticeships, are pushed into self-employment, zero-hours and agency work that suits some but not all. The proportion of low-paid work done by young men increased by 45% between 1993 and 2015. Millennials who changed jobs in their mid-20s enjoyed 14% pay rises, higher still if they moved around the country for work. But insecurity reduces the appetite for risk, so many are staying put, receiving minimal salary increases.
“More and more young men have moved into the low paid, part time, service sector jobs that women have traditionally done,” Bell says. “They never expected to do that, and their dads didn’t do it. If a measure of your self esteem is how well you have done compared with your parents, that’s a blow. The good news is that women’s full time employment prospects have significantly improved but the goal wasn’t to improve poor job options for women while making them bad for young men.”
On current trends, given high rents, low wages, Brexit and, for some, the debt of university tuition fees, will millennials have sufficient funds in retirement? Under auto-enrolment, 5% of a wage will go into a pension pot by 2019, but on a low income, will increasing numbers of millennials opt out? In several decades’ time, millions of older people may be dependent on housing benefit, living in rented accommodation, and surviving on a state pension which, currently at £7,000 a year, is already not fit for purpose.
In contrast, some of us baby boomers, newly retired, are apparently living in an experiential paradise: cruising the globe, contemplating buying a second home, albeit dazed at how good pensions, secure employment and the fluke of buying at the outset of a period of rapid house-price inflation have catapulted us so much further, financially, than many of us expected to travel.
Some baby boomers are extremely wealthy in assets. We have power insofar as, currently, we are more likely to vote than the young. But we also pay taxes, help with childcare, volunteer and subsidise our grown-up children. “Families are doing their bit,” says Lord Willetts. “The state needs to as well.”
The commission heard research carried out by Professor Karen Rowlingson, from the University of Birmingham, on the scale of support given by families. The middle classes, she says, pay for their grandchildren’s education and children’s housing; the working class take out loans and sell possessions to help with their children’s debts and day-to-day living. It’s self-help that further entrenches deep inequality.
“The younger generations, aged 20-45, make up the majority of the electorate. The older voters are dying off,” says Rowlingson. “We can change policies and tackle this as a partnership between the generations.”
According to the Resolution Foundation, the cost of education, health and social security as a slice of GDP is predicted to rise at today’s prices by £24bn each year to 2030 and by £63bn a year to 2040. It’s plain that the tax burden on young people, already struggling to match the living standards of older generations, has to ease.
Ending the triple lock that uprates the state pension by the highest of annual inflation, earnings growth or 2.5% could see 700,000 more pensioners living in poverty by 2040, Age UK has warned. “It’s not who gets the bigger pension,” O’Grady says. “It’s why no state pension is sufficient.”
The report’s proposals will be revealed next month. Options being considered include help with rental deposits, better regulation of the private rented sector, a re-evaluation of council tax (a family in a £100,000 house may pay five times more than a person living in a £1m property), a look at inheritance and a radical reform of property taxation to help first-time buyers.
The intergenerational commission’s invaluable work exposes how urgently capitalism has to be brought under control. It’s time to restore fairness to the contributory principle at the heart of a renewed welfare state, re-establish social justice and repair the damage done to confidence, trust, wellbeing and optimism by a situation where 90% of people manage on less and less, while the other 10% rapidly accrue more and more.
“Wealth differences also risk bleeding into other areas of life where they do not belong,” Bell warns. “Wealth status could determine not only where you can live, but the education you can get, the risks you can take and the job you can do. Wealth is profoundly reshaping Britain.”
When the political class adopted Neoliberalism, it effectively transferred significant amounts of political power, the democratic power of governments, to private corporations.
We need to take it back! (Hans)
David McKnight makes the case for a people power that doesn’t scapegoat immigrants or minorities.
Here’s a quick quiz. What do the following political figures have in common: Pauline Hanson, Bill Shorten, Donald Trump, Jeremy Corbyn and Bernie Sanders?
Answer: all have been accused of populism. Whether they’ve bashed banks, billionaires or boat people, they’ve been damned as populists. Yet these political figures come from wildly different parts of the Left and Right. Can they all be populists?
Mostly, when I hear people damning someone as a populist, they are talking about a right-wing version. But it’s not that simple. In this book, I argue that a progressive version of populism exists too.
A progressive populism takes up the genuine economic grievances of everyday Australians without scapegoating migrants or minorities in the way Donald Trump and the pro-Brexit forces have done. In fact, a progressive form of populism is the best way of defeating the racist backlash of right-wing populism because it addresses the social and economic problems which partly drive the rise of right-wing populism. As well, it asserts our common humanity, whatever diversity we also express.
I first discovered populism when I began teaching investigative journalism in the late 1990s at university. I had some understanding of the subject already, having worked on the ABC’s investigative TV program Four Corners. Like other journalists, I knew about the role of investigative journalism in the Watergate scandal of the early 1970s. However, to teach it as an academic course I needed to know about its historical origins. I found that investigative journalism (originally called muckraking) began in the United States around 1900 during what Americans call ‘the Progressive Era’. It was called this because it was a period of radical ideas and activism about social reform. One expression of this was the emergence of a new political party, the People’s Party, in 1890-91. It stood for the interests of ordinary people, farmers and workers, against the ‘robber barons’ in the privately owned banking, oil and railway industries. Friends and enemies alike described the approach of the People’s Party as Populism and its supporters as Populists.
The muckraking journalists were crusaders on issues which they shared with the Populists. For example, in his book The Jungle, writer Upton Sinclair exposed the dangerous and filthy conditions endured by the Chicago meatworkers. Years later his book was recognised as one of the forces behind the introduction of food safety laws. One of the first female muckrakers, Ida Tarbell, exposed the ruthless practices of Standard Oil in crushing rival companies in a series of articles published in McClure’s Magazine, and eventually a book, The History of the Standard Oil Company. Today, Standard Oil is better known as Exxon and remains a ruthless corporation. Lincoln Steffens’ book The Shame of the Cities exposed the corruption of political machines linked to gambling, prostitution and bribery. Other muckrakers attacked the role of big money in government and the power of Wall Street. Their journalism, I realised, was a key contribution to the progressive causes shared with the Populists.
The key idea of the Populists was that the interests of ordinary people were in conflict with those of the elite. Some of the Populists had conspiratorial ideas about money and power but their movement was a powerful challenge to aggressive, unregulated big business. Having been on the Left of politics since my teens, I found this history of a forgotten reform movement fascinating. Its goals of economic and social justice for ordinary people are still relevant today.
Years later I rediscovered American populism when I read a book by journalist Thomas Frank, What’s the Matter with Kansas? Published in the wake of the election of George W Bush, his book pointed out that Kansas, now a conservative Republican state, was once a centre of radical activity. One Kansas town produced a socialist newspaper, Appeal to Reason, which sold hundreds of thousands of copies. In the 1890s its farmers, driven to the brink of ruin by years of bad prices and debt, held huge meetings where Kansas radicals like Mary Elizabeth Lease urged the farmers to ‘raise less corn and more hell’. From this situation, the People’s Party emerged as the enemy of the ‘money power’ and as an alternative to both Democrats and Republicans. It advocated publicly owned railways and banks along with a progressive income tax on the rich. For this, Frank tells us, they were reviled ‘for their bumpkin assault on free market orthodoxy’.
In 2015 and 2016 I found myself hearing commentators talk about the rise of modern forms of populism during the looming US presidential election. Both Donald Trump and Bernie Sanders were referred to as populists. Sanders had opened his campaign with the statement: ‘This country and our government belongs to all of us, not just a handful of billionaires’. It was a modern echo of the progressive side of the American populist tradition. Although he didn’t win the Democrats’ presidential nomination, Sanders shifted the political agenda and challenged the untrammelled power of the wealthy in the name of ordinary people.
Trump, a right-wing populist, represented the worst aspects of popular prejudice. Yet he won. Like many others, I was stunned as I read the first online news reports announcing this. How could it have happened? One of the most illuminating insights came from Thomas Frank, who argued that Trump’s populist campaign on economic issues was far more important than most people realised at the time and had been the key to him winning crucial states. The abandoned factories and crumbling buildings in cities devastated by free trade deals had created a ‘heartland rage’ that swamped the Democrats.
All of this was ‘the utterly predictable fruit of the Democrats’ neoliberal turn’, he said. ‘Every time our liberal leaders signed off on some lousy trade deal, figuring that working-class people had “nowhere else to go”, they were making what happened last November, Trump’s win, a little more likely.’
Such sentiments inspired this book. And all of this is relevant to Australia because both our Labor and Liberal politicians have, in recent decades, largely accepted the principles of deregulation, privatisation and small government, together known as neoliberalism. In part, this book is an investigation into the failures of these principles in Australia.
The final reason for writing this book is more personal. I grew up in a single-income, blue-collar family with my mother suffering from a severe mental illness. Yet we survived and thrived thanks in part to a strong public sector, especially in health and education. This public sector was grounded in the major parties’ consensus that it was both morally obligatory and economically sound that important public services should be equally available to all and provided collectively. Now this consensus is being broken apart and discarded. This is not some misty-eyed memory about a non-existent golden age, an error often made by right-wing populists when they equate the White Australia Policy years with better conditions overall. Australia is a better and more open society today, not least because it is more culturally diverse. But in terms of simple practical things such as expecting a secure well-paid job, social services and a home to live in, we are going backwards.
When I started researching this book in the wake of the shock Trump victory and the vote for Brexit I was already a critic of neoliberalism. But as I probed more deeply I grew angrier and angrier. My research revealed that the orthodoxies of deregulation and privatisation, regarded as supreme common sense by the political and economic elite, are radically transforming Australia. The gulf between billionaires and the poor is widening as old egalitarian Australia crumbles; deregulated banks have become parasitic on the rest of the economy; corporate tax avoidance is out of control; and our pay and conditions are being eroded. As it did me, this has angered many ordinary Australians. Some falsely blame migrants and refugees while others rightly blame a corporate and political elite. To change things, we need to build a new progressive agenda which unites ordinary Australians against these elite-driven policies.
Of prime importance in such a renewed progressive agenda is genuine action on the biggest danger of all, irreversible climate change, which will hit ordinary Australians first. A progressive populist approach aims to unite Australians in the broadest possible new movement, one that will provide the necessary people power to avert the worst kinds of changes in the future. Nothing less than the survival of humanity is at stake.
THE POLITICS OF POPULISM
We forced discussions on issues the establishment had swept under the rug for too long. We brought attention to the grotesque level of income and wealth inequality in this country and the importance of breaking up the large banks… We are stronger when we stand together and do not allow demagogues to divide us by race, gender, sexual orientation or where we were born.
US presidential candidate, Bernie Sanders
The establishment complains I don’t play by the rules. By which they mean their rules. We can’t win, they say, because we don’t play their game. We don’t fit in their cosy club. We don’t accept that it is natural for Britain to be governed by a ruling elite, the City and the tax-dodgers, and we don’t accept that the British people just have to take what they’re given.
British Labour leader, Jeremy Corbyn
With Donald Trump’s successful campaign to win the US presidency and Britain’s decision to ‘Brexit’ from Europe, we suddenly began to hear a lot of the word ‘populism’ in the political discourse. At first it was used to describe the attack Donald Trump made on illegal Mexican immigration when he announced his candidacy for the Republican nomination in mid-2015. With his trademark bombast, he declaimed, ‘When Mexico sends its people they’re not sending their best… They’re sending people who have lots of problems… They’re bringing drugs. They’re bringing crime. They’re rapists’. He then added, ‘and some, I assume, are good people’. His call to build a wall on the US-Mexico border (‘which Mexico will pay for’) became a recurrent theme of his campaign and later, his presidency.
Nor was his abuse limited to Mexicans. After a Muslim US citizen committed a terrorist attack in San Bernardino, California, Trump called for a ban preventing Muslims from entering the United States, at one point including those who were American citizens currently abroad. Trump’s campaign received what seemed to be a certain death blow in October 2016, when the Washington Post revealed an audio tape of his boast that, because he was ‘a star’, he could grab women ‘by the pussy’ and get away with it.
By the normal rules of elections in the United States and elsewhere, his popular support should have shrunk. Trump’s coded appeals to racism, crude misogyny and calculated abuse should have fatally wounded his bid for the White House. But his popular support grew and Trump eventually attained the most powerful position in the world. In office, he has confirmed the worst expectations, responding to North Korea’s threat to the United States with a warning that North Korea ‘will be met with fire and fury like the world has never seen before’, a thinly disguised threat to unleash a nuclear war.
How did we get into this situation? Trump’s election victory owed a lot to two factors. One was his economic populism, which criticised free trade and globalisation. This received a warm response from many working Americans. He threatened to withdraw the United States from the North American Free Trade Agreement. He promised to impose high tariffs on runaway US companies which moved production overseas. He threatened restrictions on imported Chinese goods. Globalisation, he said, helped ‘the financial elite’ while leaving ‘millions of our workers with nothing but poverty and heartache’. All the while he targeted the states hardest hit by economic globalisation. Much of this was downplayed or never reported by both social media and the traditional news media, which preferred to concentrate on his more colourful outbursts and tweets.
The second key to his victory was his skilful use of social media, which he credited with being a way to counteract what he called the ‘fake news’ propagated by mainstream news media. On Facebook and Twitter his popularity eclipsed that of Hillary Clinton and it was there that he circulated his own ‘alternative facts’. The algorithms of social media, which suggest news based on past activity, transformed this popularity into self-reinforcing echo chambers of Trump supporters. And all of this was fed by the crisis in traditional journalism, whose capacity to report news had been eroded by the power of that selfsame social media.
The election of Donald Trump has taken us all into a new and dangerous place. If it had been an isolated incident it would not matter so much. But it was far from that. A few months before Trump’s election, Britain went to the polls to decide whether or not to leave the European Union (EU). The vote was voluntary but the turnout was high. More than 30 million people voted, with a majority in favour of Britain’s exit, styled Brexit. Another victory for populism, said the commentators.
The British vote to leave the EU spanned traditional Right and Left and drew support from unexpected places. While the ‘Leave’ vote was highest in traditionally Conservative areas, it was also high in some working-class Labour strongholds. For some, voting to leave the EU was a protest against the economic effects of the globalised economy, with its problems of unemployment and low wages. For others, their main concern was the immigration which had ensued from open borders. ‘We want our country back!’ was a common cry. Donald Trump, then campaigning for president, hailed the Brexit vote as a ‘great victory’ and drew parallels to his own opposition to ‘rule by the global elite’. A new populist Right was on the move globally.
Soon populism seemed to be everywhere. In Europe the established parties saw their dominance challenged by right-wing populism. In France in 2017 the anti-immigrant and anti-Muslim National Front achieved 34 per cent of the presidential vote, its highest yet. That same year the far-right Alternative for Germany won an unprecedented 13 per cent of the vote and 90 seats in the Bundestag. In the Netherlands Geert Wilders’ xenophobic Party for Freedom advanced in the 2017 general elections.
In Australia too Trump-style political disaffection is taking hold. A reputable study by academics at the Australian National University (ANU) shows that key indicators, including satisfaction with democracy, trust in government and loyalty to major parties, are at record lows among Australians. The study was conducted following the July 2016 election and found that only 26 per cent of Australians think the government can be trusted (the lowest level since it was first measured in 1969). Forty per cent of Australians were not satisfied with democracy (the lowest level of satisfaction since the period after Gough Whitlam was dismissed in 1975), and there was a record low level of interest (30 per cent) in the 2016 election.
The study’s lead researcher, Professor Ian McAllister, said that we are seeing ‘the stirrings among the public of what has happened in the United States of the likes of Trump, Brexit in Britain, in Italy and a variety of other European countries. It’s coming here, and I would have thought this a wake-up call for the political class’. Australian conservatives, hoping to take advantage of this disillusion, welcomed Trump’s victory, with Tony Abbott tweeting: ‘Congrats to the new president who appreciates that Middle America is sick of being taken for granted’. Mining magnate Gina Rinehart urged Australia to follow Trump’s lead and Andrew Bolt told his audience: ‘The revolution is on!’ Very much part of this phenomenon, Pauline Hanson’s One Nation party achieved an unprecedented four seats in the Senate in the 2016 election.
But what is populism?
To many, ‘populism’ is a shorthand term for pandering to people’s baser instincts, exemplified in Trump’s campaign and his presidency. It inflames a desire to blame ethnic and religious minorities; it is a lust for cheap popularity and a phony hostility to the Establishment and to ‘the elite’. Such, at least, is the common understanding. Populist leaders are seen to be posing as outsiders and as representatives of the underdog. Above all, populism is regarded as a right-wing phenomenon.
But it’s not that simple. This book argues that a progressive version of populism exists too. A progressive populism fights for the genuine economic grievances of everyday people without blaming minorities or migrants. In fact, a progressive populism is a very good way to neutralise this sort of scapegoating because it addresses the social and economic problems which partly drive the rise of right-wing populism.
Populism is a notoriously loose description of a political stance. In many ways it is a style of doing politics rather than a series of particular policies. Some people think populism means trying to be popular, but this is misleading. The words populist and populism come from the Latin word for ‘the people’ (populus), what today we’d call the public. The meaning survives in the expression vox populi, the voice of the people. Generally speaking, populism is a style of politics which frames political life as a conflict between the people and an elite. But the identity of the people and the nature of the elite can vary widely. On this basis populism can be either a right-wing or a left-wing phenomenon. In some countries today, the traditional battle between right and left is being channelled through a populist filter.
Academic Margaret Canovan conducted one of the early studies of populism. She argues that there are two broad strands to populist movements. The first is rural, based on organisations of peasants or farmers, a kind which typically emerges when these people are confronting modernisation. The second is characterised by highlighted tensions between the elite and the grassroots. This can take the form of ‘idealisations of the man in the street or of politicians’ attempts to hold together shaky coalitions in the name of “the people”’. Canovan concludes that populism can take right-wing or left-wing forms but that ‘all forms of populism without exception involve some kind of exaltation of and appeal to “the people” and all are in one sense or another anti-elitist’.
The American writer John Judis, author of the recent book The Populist Explosion, also argues that populism is ‘not an ideology but a way of thinking about politics’. He too supports the view that populism can exist in both left and right forms. Left-wing populists champion the people against an elite or establishment (as in Occupy Wall Street’s slogan about the One Per Cent versus the 99 per cent). Right-wing populists are against an elite ‘that they accuse of coddling a third group, which can consist, for instance, of immigrants, Islamists or African American militants’.
Judis notes that the original US People’s Party was formed in the 1890s when Kansas farmers united with an early workers’ organisation and challenged the existing establishment of Republicans and Democrats. The People’s Party developed policies against monopolistic railroads and greedy banks and in favour of progressive income tax and expanding public controls. As one populist writer said, they aimed to get rid of ‘the plutocrats, the aristocrats, and all the other rats’. To the Australian Labor Party, emerging in the same tumultuous decade of the 1890s, the US People’s Party was something of a model and there were early proposals to call the new Australian party the People’s Party, rather than the Labour Party.
This progressive strand within American populism re-emerged in 2015-16 when Bernie Sanders competed with Hillary Clinton to become the Democrats’ presidential candidate. At the start of that campaign he was seen as little more than an eccentric, rumpled 70-plus-year-old running an unusual campaign. One newspaper described him as a ‘grumpy grandfather type’ who ‘embraces his reputation for being gruff, abrupt and honest and promises to be bold’. As time went on, observers began to note the cheering, youthful crowds that he drew, his calls for a ‘political revolution’ and his strong social media campaign on Facebook.
Although he did not win the Democratic nomination, Sanders surprised everyone by doing well enough in the battle for the presidential nomination to win 23 primary and caucus races to Clinton’s 34. With no big corporate donors, he raised millions of dollars in small donations from a growing support base, especially from the young. Most surprising of all were his campaign’s public statements and appeals. Sanders attacked ‘the One Per Cent’ of super-rich people who had benefitted enormously from the globalised economy while others struggled to survive. In one speech at Liberty University, he said: ‘In my view there is no economic justice when the 15 wealthiest people in this country in the last two years saw their wealth increase by $170 billion’. It was a fact he repeated all through his energetic campaign.
Another Sanders target was the deregulated banking system that had caused the global financial crisis. Sanders charged: ‘Wall Street used their wealth and power to get Congress to do their bidding for deregulation and then, when Wall Street collapsed, they used their wealth and power to get bailed out’. The contrast he pointed out in several speeches was with the 41 per cent of American workers who didn’t take a single day of paid vacation in 2015 and with the third of workers in the private sector who cannot even claim paid sick leave.
Like Trump, Bernie Sanders was also widely regarded as a populist, reviving a long American tradition in which the central conflict is seen to be between the people and the elite.
Sanders happily described himself as a democratic socialist and pointed to the social-democratic states of Scandinavia as models. In his platform, Sanders said he supported: a national public healthcare system; an end to corporate welfare; abolishing fees for college degrees; a full employment policy; raising the minimum wage to $15 an hour; and preventing ‘greed and profiteering of the fossil fuel industry’. The money to achieve these aims was to be raised by compelling wealthy individuals and corporations to pay their fair share of tax.
All of these policies, advocating a stronger role for government, effectively rejected the decades-long dominance of the ideology known as neoliberalism: the ideology of small government, of globalisation in the form of deregulated markets and of faith in market forces to guide and manage the economy.
Progressive populism in the Sanders mould attributes today’s social and economic problems not to migrants or minorities nor to the ‘politically correct’ mainstream media, but to the failure of neoliberal policies. And because progressive populism addresses the forces driving the rise of right-wing populism, it is the most effective antidote.
The political theorist Chantal Mouffe is not surprised by the rise of right-wing populism:
In a context where the dominant discourse proclaims that there is no alternative to the current neoliberal form of globalisation and that we have to submit to its diktats, it is small wonder that more and more workers are keen to listen to those who claim that alternatives do exist, and that they will give back to the people the power to decide.
And this is just what Trump promised. Unlike Hillary Clinton’s, his winning campaign featured issues of economic injustice heavily. Just a few days before the November election, Trump told a huge crowd in an aircraft hangar in Pittsburgh: ‘When we win, we are bringing steel back, we are going to bring steel back to Pennsylvania, like it used to be. We are putting our steel workers and miners back to work’. Trump touched a raw nerve. No steel mills now exist in Pittsburgh and hundreds of thousands of steelworkers had lost their jobs since the 1980s, in part due to freer global trade. Whether Trump was sincere in (or even capable of delivering) his promise to bring steel jobs back to Pittsburgh is not the point. Identifying economic grievances and blaming them on free trade and globalisation was almost unprecedented for a Republican candidate. More importantly, it was a challenge which Hillary Clinton, as a long-time supporter of neoliberal free trade, could not rebut. As it turned out, Trump did win in Pennsylvania. It was one of the three ‘rust belt’ states that made the difference between victory and defeat in the presidential election.
Both Trump and Sanders were outsiders in US politics. Both denounced the domination of big business and the banks and blamed them for much of America’s economic woes. Both based their campaigns on appeals to ordinary Americans and both were described as populists. Unlike Trump, Sanders was a progressive populist. When he talked about the elite and the establishment, he meant the economic elite and the corporate establishment. Unlike Trump, Sanders did not scapegoat immigrants or ethnic minorities.
The groundswell grows
The groundswell of populism soon saw Sanders joined by the leader of the British Labour Party, Jeremy Corbyn. When he began his election campaign in April 2017, Corbyn faced deep opposition from many of his fellow Labour members of parliament. Like most media commentators, they also believed that because of his left-wing history and left-wing policies he could not possibly win. And certainly he was in trouble at the beginning of the campaign, when polls were placing Labour up to 24 points behind the Conservatives.
From the start of the 2017 election campaign Corbyn framed the contest in the language of progressive populism. He described the election as a battle of ‘the establishment versus the people’ and promised to overturn ‘a rigged system’ that favoured the rich and powerful. Under him, Labour would not be part of the ‘cosy club’ whose members think it is natural for Britain to be ‘governed by a ruling elite, the City and the tax dodgers’, he said. His opponents believed such deeply controversial rhetoric was guaranteed to result in a huge loss.
But his message was straightforward and cut through the spin and PR fog of traditional political rhetoric. And these policies proved popular among the British people. Early opinion polling showed that up to 71 per cent of people supported his proposal to raise the minimum wage to ten pounds an hour. A similar proportion of the British public (62 per cent) supported his plan to raise taxes on the rich and high income earners.
Corbyn’s manifesto broke other unspoken rules of the economic consensus of neoliberalism. He argued that the railways and water supply should return to public ownership. He promised to extend free school meals by a tax on private school fees. He also urged increased funding for social housing, and his pledge to abolish university fees helped build a powerful momentum among young people, who registered to vote at unprecedented levels and voted Labour on election day.
The Conservatives had called the election confident they would increase their majority in parliament, but Corbyn’s campaign of progressive populism destroyed that majority and almost beat them.
There were close parallels between the movements around Bernie Sanders and Jeremy Corbyn. Officials from Sanders’ campaign helped Corbyn with ideas on strategy and fundraising. Sanders himself visited Britain just days before the election campaign and drew comparisons between his own policies and Corbyn’s:
Too many people run away from the grotesque levels of income and wealth inequality that exist in the United States, the UK and all over the world… Globalisation has left far too many people behind. Workers all over the world are seeing a decline in their standard of living. Unfettered free trade has allowed multinational companies to enjoy huge profits and make the very rich even richer while workers are sucked into a race to the bottom.
The spread of progressive populist ideas has not been confined to the United States and Britain. In Spain the progressive populist party Podemos emerged in 2014 and grew so rapidly that it secured 20 per cent in the 2015 elections, campaigning on an anti-austerity platform, supporting increased public spending and strong anti-corruption measures. In the 2016 election it retained its electoral support. In Greece, another new progressive party, Syriza, formed out of a coalition of left-wing and environment groups and received 35 per cent of the vote in the 2015 elections, later forming government. While the majority trend within European populism is right-wing, the significance of a new left populism should not be underestimated.
Driving the emergence of right and left-wing populism is the set of policies known as neoliberalism.
Neoliberalism became the mindset of the political class in the 1980s and was a very deliberate project to wind back the welfare state, reducing the public sphere, with its public goods of health, education, transport and culture, along with the tax system which paid for it.
The neoliberal project is based on the idea that the market is the most efficient distributor of goods because it combines the profit motive and competition. It takes no account of justice, inequality or social cohesion. Ultimately this promotes the transformation of all human relationships (not just economic ones) into commercial transactions.
It was neoliberalism with its floating currencies and deregulated markets which drove the present form of globalisation. But neoliberal globalisation means much more than a loosening of trade. It means the unplanned transfer of blue- and white-collar jobs from the old industrial countries to less developed nations. It also means national governments are less able to control what happens in their own society and economy.
When the political class adopted neoliberalism, it effectively transferred significant amounts of political power, the democratic power of governments, to private corporations. While benefitting a corporate elite, the neoliberal experiment demonstrably failed in the global financial crisis and the effects of that failure are still with us.
What had been a crisis of private debt was transformed by government bail-outs into an alleged crisis of public debt. This sleight of hand reinforced the neoliberal dogma that the problem was always governments. The ideology of ‘small government’ meant that governments imposed even more stringent cost-cutting measures.
The failure of neoliberalism in Australia
The populist groundswell in the United States, Britain, Europe and elsewhere is reflected by similar movements in Australia, prompted by similar causes. In the following chapters I examine the ways in which neoliberalism has failed to produce a good society, as well as its role in fostering a populist backlash.
First and most significantly, 30 years of neoliberal globalisation and deregulation have produced a polarisation of wealth which has undermined Australia’s egalitarian ethos. The gulf between the super-rich and the rest of us is widening. We are becoming a more divided society with a tiny wealthy elite at one extreme and a significant group of poor at the other.
Nor is it solely a matter of fuelling material inequality. As important as inequality (and more important in the long term) is climate change. The ideology of small government and deregulation is impeding our response to accelerating climate change despite the clear warning signs in record high temperatures and the bleaching of the Great Barrier Reef in Australia. Whatever combination of market and state arrangements is best at fostering renewable energy, it will need tough government action to implement this and to defeat the power of the coal and oil industries. To support such action we need a broad populist coalition of all the diverse forces demanding real action on climate.
And just as it has in the United States and Britain, privatisation is spreading throughout Australian society, changing services that used to be provided to all citizens into profit-making enterprises. The sale of public assets like seaports, airports and electricity poles and wires has simply created expensive monopolies. Billions have also been wasted in attempting to privatise technical and vocational education. Despite these failures, private companies are now being encouraged to move deeper into education, aged care and disability services.
Likewise, Australia has its own rust belt of closed factories and, for those in employment, jobs are increasingly casual, part-time and less secure. The deregulation of Australian workplaces means that for younger workers, jobs with paid holidays and fair wages are becoming less common. And thanks to a variety of temporary overseas visa schemes, a casualised, cash-in-hand underclass is spreading in the agriculture, retail and hospitality sectors. Such workers are exploited and their labour conditions undermine those of local workers. This is not occurring accidentally but because economic orthodoxy (backed by employers) demands this labour deregulation. The resulting job insecurity combined with low wages is one factor stoking a right-wing populist backlash based on xenophobia and hostility to overseas workers.
While low-paid workers are made increasingly vulnerable, at the other end of the scale big corporations do everything they can to avoid paying tax, a practice made easier in the globalised world of neoliberalism. In 2014, the Australian branch of the tech giant Apple paid $80 million in tax, just 1 per cent of its total Australian income of $6 billion. Its rival, Microsoft, paid just 5 per cent of its income. Over several years big mining companies like BHP and Rio Tinto shifted billions through Singapore, where tax rates can be a mere 2.5 per cent. Nor is it just corporations. Some of Australia’s richest families and individuals pay little or no tax. When the Panama Papers were leaked, up to 800 wealthy Australians were associated with shell companies in tax havens like Panama. Meanwhile, ordinary Australians are left to pick up the tab for hospitals, roads and schools, effectively subsidising those who refuse to pay their share.
Finally, compounding the problem of wealth inequality, the banking and finance sector has swollen enormously since it was deregulated. In Australia we have some of the biggest and most profitable banks in the world. Together they form a rapacious oligopoly which extracts more than $30 billion in profits each year from the rest of Australia.
In their zeal to lend money, deregulated banks have fuelled a housing price boom, the result of which is that fewer Australians now own their own home than 40 years ago.
It’s now time to look again at regulating banks and the finance industry to ensure that they act in the public interest.
Overall, the spread of neoliberal orthodoxy through society has corroded many of the institutions and relationships on which citizens rely and which offer protection from the vagaries of the market. This orthodoxy has shrunk the democratic space by removing all sorts of functions from the public to the private sphere. The real meaning of ‘small government’ is that we have ended up with a small democracy, because governments are still the only institutions we have for exercising our democratic, collective voice. The zealous advocacy of theories of self-interest, competition and small government has led to a dead end.
All of this spawns populisms of both the Right and Left. The crucial point of difference between them concerns the meaning of and response to globalisation. Are the problems of globalisation primarily issues of economics and economic justice or are they mainly an issue of immigrants and of changing the ethnic mix? Progressive populists are alarmed by the damage that open economic borders, which import cheap products and export jobs, do to local jobs and the national economy. Right-wing populism dredges the deepest and most dangerous emotions to reject the changing ethnic mix which results after years of relatively open immigration.
When right-wing populists define what they mean by the ‘elite’ they take aim at the progressive middle class, the so-called politically correct, who abhor racism and gender inequality. Progressive populists, by contrast, define the ‘elite’ in economic terms as the super-rich and corporate moguls. When talking about ‘the people’, progressives seek to unify the middle class and working class in an alliance for reform. Progressive populists emphasise the common ground which the majority of people share on issues of economic justice.
By focussing attention on genuine economic grievances, a progressive populist agenda can undercut the way ethnic and religious minorities are demonised.
Some see progressive populism as the natural continuation and revival of social-democratic and labour politics which have been compromised by their turn to ‘third way’ politics. One critic is political theorist Chantal Mouffe. She argues that the neoliberal consensus between conservative and once-radical workers’ parties has created a favourable ground for the rise of populism because many people feel their voices are unheard and ignored in the representative system. The problem is that this often takes the form of a right-wing populism which sees ‘the people’ defined to exclude immigrants and minorities.
On this basis, many people view ‘populism’ as an inherently negative term. She responds:
This is a mistake, because populism represents an important dimension of democracy. Democracy understood as ‘the power of the people’ requires the existence of a ‘demos’, a people. Instead of rejecting the term populist, we should reclaim it.
In this book my intention is to reclaim populism by fostering a progressive version of it which puts the interests of the common man and woman first, ahead of the priorities of a wealthy global elite whose interests and priorities have dominated for far too long.
THE RISE AND RISE OF THE SUPER-RICH
There’s class warfare, all right, but it’s my class, the rich class, that’s making war and we’re winning.
Warren Buffett, billionaire investor
With their collective wealth estimated at $US7.7 trillion, the global elite of the super-rich are the natural opponents of progressive populism. Some of that global elite are household names in Australia. In July 2016 over 200 of them gathered for a huge celebration on the dazzling blue waters of the Mediterranean. Trucking billionaire Lindsay Fox was throwing an all-expenses-paid birthday party and had invited his closest friends to enjoy a cruise from Athens to Venice via Corfu. Fox’s ship of choice was Seabourn Odyssey, renting for over a million dollars a week and containing 225 luxury suites.
Lindsay Fox’s own wealth totals $2.9 billion. His fellow passengers included the mining billionaires Gina Rinehart and Andrew Forrest.
According to the Australian Financial Review’s 2017 Rich List they are worth (in Australian dollars) $10.4 billion and $6.8 billion respectively. Shopping centre king John Gandel ($6.1 billion) and retail giant Solomon Lew ($2.3 billion) also took part in the exclusive celebration on the high seas. Further down the guest list for the stylish cruise were former Liberal Treasurer Joe Hockey, former Liberal Victorian Premier Jeff Kennett, media businessman Harold Mitchell and golfer Greg Norman.
This public airing of the details of a party for Australian billionaires is rare. It’s not always easy to get information on the super-rich. A key source is the Financial Review’s annual Rich List. The 2017 edition identified 60 Australian billionaires, headed by the paper manufacturing magnate Anthony Pratt ($12.6 billion). The most modest billionaire (scraping in at just $1 billion) is Melbourne-based Peter Gunn, who owns PGA Group, a private investment business that holds a property empire in office and industrial blocks and is also involved in cattle production. Others among the ten wealthiest are James Packer ($4.75 billion), whose money is in casinos; Harry Triguboff ($11.45 billion), who made a fortune from building tower blocks; and Frank Lowy ($8.26 billion), the Westfield shopping centre magnate. The eighth wealthiest Australian is Hui Wing Mau ($6 billion), who made much of his money in Hong Kong property and took out Australian citizenship after studying in South Australia in the early 1990s.
by David McKnight
Marshall Steinbaum, Eric Harris Bernstein and John Sturm.
As workers, as consumers, and as citizens, Americans are increasingly powerless in today’s economy. A 40-year assault on antitrust and competition policy, the laws and regulations meant to guard against the concentration of power in private hands, has tipped the economy in favor of powerful corporations and their shareholders. Under the false assumption that the unencumbered ambitions of private business will align with the public good, the pro-monopoly policies of the “Chicago School” of antitrust lurk behind today’s troubling trends: high profits, low corporate investment, rising markups, low wages, declining entrepreneurship, and lack of access to unbiased information. Market power and lax competition policy ensure our economy serves the few over the many.
In a new report, Marshall Steinbaum, Eric Harris Bernstein and John Sturm build on the growing progressive consensus that the economic threat of market power goes far beyond prices. The paper demonstrates the disastrous consequences that unrestrained market power has had on workers, communities, and democracy.
The authors begin by explaining the dangers of market power and the role of competition policy in maintaining a level playing field. They then outline how lax competition policy has handed incumbent corporations and their shareholders an unfair advantage and a more generous slice of the economic pie. They document the consolidation and exploitation of market power that has occurred in this environment and highlight key pieces of evidence that illustrate how weak competition is harming the economy, holding back new businesses, investment, wages, and growth.
The subsequent section reviews recent research that shows how concentrated corporate power impacts the everyday lives of Americans, surveying these effects through three lenses: the effects on consumers, on workers, and on society at large. In the final section, the authors propose policy remedies that could help rebuild inclusive growth, foster economic innovation, and restore an equitable economy that serves all of its stakeholders.
As workers, as consumers, and as citizens, Americans are increasingly powerless in today’s society. Rhetoric extolling the virtues and power of free markets belies this fact, but instinctively, Americans understand that something is wrong:
The vast majority of Americans believe the economy is “rigged” in favor of corporations. And they are correct: A 40-year assault on antitrust and competition policy, the laws and regulations meant to guard against the concentration of power in private hands, has helped tip the economy in favor of powerful corporations under the false pretense that the unencumbered ambitions of private business will align with the public good.
The single biggest problem with this simplistic view of “free” markets is that it ignores power dynamics and implies the existence of some natural state in which markets flourish without oversight.
In reality, no state of natural market equilibrium exists. Healthy markets depend on rules to create an equitable balance of market power between workers, consumers, and businesses. And when those rules skew the balance of power, markets favor the most powerful to the detriment of others.
In reality, firms use market power to extract from other participants rather than compete to create the best products. This not only hurts those targeted, but also results in less growth and innovation overall. Accordingly, while corporate profits have risen, wages and investment have stagnated; rather than investing in research and development (R&D) to generate innovative products, corporations have relied on lax merger regulation to buy out competitors, or they have employed a litany of anti-competitive practices to prevent new competitors from entering markets in the first place.
Knowing that consumers and workers have few alternatives, powerful corporations have jacked up prices and lowered wages. Additionally, in many instances, technological developments, free of regulatory oversight, have exacerbated these problems, allowing companies like Google, Facebook, and Amazon to achieve market dominance by collecting reams of data and acting as an all-knowing middleman between customers and upstream suppliers.
The pro-monopoly ideology of the so-called “Chicago School” lurks behind all of these trends, ceding already dominant incumbent firms and their shareholders more and more power that they can wield to their sole benefit and at the expense of society at large.
Although the evidence of rising market power and its impact on the economy is often found in broad economic data, the consequences of market power are anything but theoretical. From rising prices, to low wages, to the way we access information, market power and lax competition policy are entrenching the intrinsic advantages of wealth and power in society. Private interests increasingly determine access to critical goods and services, prioritizing privileged groups and thus exacerbating existing inequities of race, gender, and class.
In this paper, we provide evidence supporting our thesis, as well as illustrative examples of how this behavior has manifested itself in the lived experiences of regular Americans.
Finally, we discuss the antitrust reforms that can begin to rebalance the economy in favor of equity, inclusion, and democratic rule.
In Massachusetts, a 19-year-old is forced to forego her summer job as a camp counselor because of a non-compete clause she unknowingly signed with a different summer camp the year before.
In Chicago, a 69-year-old United Airlines passenger is beaten and forced off a plane for refusing to give up his seat to a United Airlines employee.
In Hedgesville, West Virginia, two parents overdose on heroin at their daughter’s softball practice. Like millions of Americans, they became addicted to opioids after being prescribed OxyContin, a painkiller manufactured and marketed under false pretenses by Purdue Pharmaceuticals. OxyContin, part of a class of drugs responsible for 33,000 U.S. deaths in 2015 according to the American Society of Addiction Medicine (2016), has churned out $35 billion in revenue for Purdue. The company has yet to face legal repercussions.
Despite calls for disaster relief and gun control in the fall of 2017, as citizens in Puerto Rico were without water and electricity following the devastation of Hurricane Maria and the city of Las Vegas was reeling after yet another mass shooting, Congress’s attention was elsewhere: Heeding Wall Street lobbyists, the Senate voted to strip Americans of their right to hold banks and credit providers accountable for malfeasance.
As workers, as consumers, and as citizens, Americans are increasingly powerless in today’s society. Rhetoric extolling the virtues and power of free markets belies this fact, but instinctively, Americans understand that something is wrong: The vast majority of Americans believe the economy is “rigged” in favor of corporations, according to a poll by Edison Research (2016). And they are correct: A 40-year assault on antitrust and competition policy, the laws and regulations meant to guard against the concentration of power in private hands, has helped tip the economy in favor of powerful corporations and wealthy shareholders over regular Americans.
Beginning in the 1970s, a concerted movement referred to as the “Chicago School” of antitrust beat back anti-monopoly policy through like-minded executive and judicial appointments, court rulings, and agency actions. The Chicago School argued that large corporations were large because they were efficient and because the free market incentivized them to operate in the best interest of consumers. If they didn’t, so the story went, then new entrants were always at hand to ensure the economy “naturally” served the broad public interest. Government action to break up or regulate corporations, the Chicago School argued, would only impede their efficiency or protect incumbents at the expense of entrants.
Under this regime, corporations and corporate conduct were presumed pro-competitive, or economically efficient. Even for potentially anti-competitive behavior, the burden of proof was raised high enough to forestall regulatory relief.
This marked a dramatic departure from the vigorous antitrust protections that helped make the United States the world’s most robust economy, and among the most equitable, during the postwar era. Prior to the 1970s, dating back to the age of Teddy Roosevelt and railroad robber barons, but especially after the late 1930s, regulators took an active role in ensuring equal footing for workers, consumers, and small businesses. Authorities blocked mergers that would result in dominant businesses, broke up monopolies, and closely regulated networked industries like telecommunications, banning restrictive contractual arrangements likely to benefit incumbents at the expense of consumers and new entrants. In combination with a comprehensive social safety net and powerful labor unions, antitrust protections fostered healthy competition in which firms could succeed only by offering valuable products at reasonable prices and by attracting good workers with fair wages; firms that failed to innovate or satisfy customers were outcompeted by new entrants. In this environment, wages and investment boomed and small businesses fueled strong employment.
In stark contrast, the results of the 40-year experiment in Chicago School antitrust have spelled disaster for the American workforce, middle class, and economy overall. While corporate profits have risen, wages and investment have stagnated.
A recent study by De Loecker and Eeckhout (2017) shows that average firm-level markups, the amount charged over the cost of production, have more than tripled since 1980. And while waves of mergers have led to larger and more powerful corporations, small businesses form less often and struggle to survive. Recovery from the 2008 financial crisis, hastened by government bailouts for those at the top, has yet to benefit those at the middle and the bottom of the income distribution.
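As a back-of-the-envelope illustration only (De Loecker and Eeckhout estimate markups econometrically from firm-level production data, not from a simple formula), the markup concept itself, the ratio of price to marginal cost, can be sketched as:

```python
def markup(price: float, marginal_cost: float) -> float:
    """Markup as the ratio of price to marginal cost.

    A value of 1.0 means pricing at cost; higher values
    indicate pricing power. Numbers below are hypothetical,
    for illustration only.
    """
    if marginal_cost <= 0:
        raise ValueError("marginal cost must be positive")
    return price / marginal_cost

# A good costing $10 to produce and sold at $16 carries
# a markup of 1.6, i.e. 60% over cost.
print(markup(16.0, 10.0))
```

A tripling of average markups since 1980, on this definition, would mean firms charging roughly three times as much over cost, relative to the earlier baseline, which is why the finding is read as evidence of growing market power rather than growing efficiency.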
The pro-monopoly policies of the Chicago School lurk behind all of these trends, ceding already dominant incumbent firms and their shareholders more and more power that they can wield to their sole benefit and at the expense of society at large. Rather than investing in research and development (R&D) to generate innovative products, corporations have relied on lax merger regulation to buy out competitors, or they have employed a litany of anti-competitive practices to prevent new competitors from entering the market in the first place. Knowing that consumers and workers have few alternatives, powerful corporations have jacked up prices and lowered wages. In many instances, technological developments, free of regulatory oversight, have exacerbated these problems, allowing companies like Google, Facebook, and Amazon to achieve market dominance by collecting reams of data and acting as an all-knowing middleman between customers and upstream suppliers. When firms achieve such power, their incentive to produce better products and services disappears, and they act instead to maintain their market stranglehold by any means necessary.
We define “market power” as the ability to skew market outcomes in one’s own interest, without creating value or serving the public good.
We argue that market power, and the anti-competitive behavior that it enables, is a negative-sum game: Anti-competitive economies, like the one we have today, produce fewer jobs at lower wages, with more expensive goods and less innovation. We aim to both document the rise of market power and illustrate how it has affected the day-to-day lives and general well-being of American workers and consumers, and the productivity of the economy overall. In short, we show that Chicago School-inspired deregulation has enabled the rich and powerful to profit by taking a larger share of the economic pie, rather than making the pie bigger by offering valuable products and services at better prices.
Increased market power of consolidated firms is especially threatening to marginalized communities, which tend to have the fewest alternatives to exploitative goods and services providers. ACA exchanges in large swathes of rural America have only one health insurance provider, and that provider is free to charge exorbitant rates. For many urban neighborhoods, gentrification is the only hope of attracting a decent broadband connection, in which case the threat of rising rent sours the payoff. In markets for labor, consumer goods, or financial services, the first victims of predatory practices are the most vulnerable, be they young people, women, or people of color.
The Chicago School has championed the benefits of “free markets” but has in fact worked to thwart them. Conflating power with freedom, this Reagan-era ideology has used the free market as a rallying cry to justify policy changes that in reality benefit wealthy incumbent businesses at the expense of all others. This is the antithesis of the diffusion of economic power that is required to ensure that the economy rewards honest work, erodes privileged rent extraction in all of its forms, and ultimately operates in the public interest.
While professing to champion competition, the Chicago School has acted only to protect the unearned profits of monopolists while stifling entrepreneurship.
A recommitment to active antitrust policy is key not only to overturning the accumulations of wealth and power we see today, but also to reaping all of the societal benefits that come from undoing market power.
This report begins by explaining the dangers of market power and the role of competition policy in maintaining a level playing field. We then outline how lax competition policy has handed incumbent corporations and their shareholders an unfair advantage and a more generous slice of the economic pie. We document the consolidation and exploitation of market power that has occurred in this environment and highlight key pieces of evidence that illustrate how weak competition is harming the economy, holding back new businesses, investment, wages, and growth. In the subsequent section, we review recent research that shows how concentrated corporate power impacts the everyday lives of Americans. We survey these effects through three lenses: the effects on consumers, on workers, and on society at large. In the final section, we discuss policy remedies that could help rebuild inclusive growth, foster economic innovation, and restore an equitable economy that serves all of its stakeholders.
The Theoretical and Institutional Background for Antitrust and Competition Policy
THE “FREE” MARKET IN THEORY
Market economies rest on the theory that private self-interest can be aligned with the public good. In its simplest form, this theory holds that, because market interactions require the willing participation of workers, consumers, and businesses, each party will only participate in an interaction if it makes that party better off. If a worker is paid a satisfactory wage, a business owner receives a return on his or her investment, and a consumer is able to purchase a product they value at an acceptable price, then each individual benefits. In this context, firms that develop better products or reduce prices are rewarded with a larger share of the market and are thus incentivized to innovate; similarly, firms that offer a higher quality of life for employees through better pay and working conditions will attract the most productive workers and are therefore encouraged to raise wages. Competition among firms thus drives productive innovation and higher standards of living. Conversely, firms that overcharge for their products, fail to innovate, or pay low wages will lose out.
It is an elegant and important theory, but it is also just that: theory.
THE RULES MATTER
The single biggest problem with this simplistic view of markets is that it ignores power dynamics and implies the existence of some natural state in which markets flourish without oversight. In reality, no state of natural market equilibrium exists, and the entrenched power of wealth poses an omnipresent threat to the equity of outcomes. Healthy markets depend on rules to create an equitable balance of market power between workers, consumers, and businesses, and when those rules skew the balance of power, markets favor the most powerful to the detriment of others. Thus, an economy with no labor protections will favor employers over workers, while a society with confiscatory tax rates on investment returns will make it difficult for shareholders to exercise power over the businesses they own.
As this analysis implies, setting the rules to achieve an equitable balance of power between market participants is crucial. Furthermore, beyond laws and regulations, the rules include all manner of social, cultural, and political factors, from the things we invest in and the things we neglect, to which groups are discriminated against and which are privileged. When the rules privilege one group over another, that group may skew transactions in its own favor, driving up profits at others’ expense. For example, redlining policies of the New Deal’s federal housing finance agencies made it impossible for black communities to accumulate wealth through homeownership.
Historically, marginalized communities have been underserved by public goods, including transportation and communications infrastructure. Physically and economically isolated, these populations became prey for firms that could get away with offering poor service and high prices due to a lack of alternatives, leading to today’s discriminatory, segmented markets. Examples like this illustrate how the rules bear a deep and complex relationship to market outcomes.
We define market power as the ability to skew market outcomes in one’s own interest, without creating value or serving the public good.
This definition acknowledges that harm to workers, consumers, or other businesses can be wrought in a number of ways not connected directly to price. For example, if a firm eliminates the threat of competition by raising barriers to entry, consumers can feel the negative impact through the reduction in service, even if quoted prices remain the same. This definition allows us to consider the broader impact that market power has on innovation, wages, and other considerations beyond consumer price.
MARKET POWER AND MARKET FAILURE
When firms possess market power and use it to extract from other participants rather than compete to create the best products, it not only hurts those targeted, but also results in less growth and innovation overall. In the 1990s, for example, Microsoft sought to dominate the software market by leveraging its ubiquitous operating system, Windows.
At the time, Microsoft made its Office software compatible only with its Windows operating system, and Microsoft also ensured that Windows was the exclusive option for newly purchased desktop hardware. Crucially, it tied licensing contracts for Windows to its proprietary web browser, Internet Explorer. Thus, the company attempted to systematically eliminate competition in every market where it competed. This drove up Microsoft’s share of the market for both operating systems and software, at the direct expense of its competitors. Because it sought to exclude rather than outperform competitors at every level of the supply chain, and because it interfered with healthy competition, this sort of behavior is referred to as “predatory” or “anti-competitive”, behavior that the federal government sought to address in its eventual antitrust suit against Microsoft in the late 1990s and early 2000s.
Anti-competitive behavior redistributes the surplus from market transactions away from less advantaged firms, customers, and workers and toward the wealthy and powerful. Taking the analogy of a basketball game, we can liken anti-competitive behavior to repeatedly elbowing an opponent in the face, bribing referees, or rigging the scoreboard. Such strategies can lead to victory in a narrow sense, but to anyone with an understanding of basketball, they are anathema to the sport and defeat the purpose of the game. If a basketball game turns into a brawl, it’s not “bad basketball” or “tough basketball”, it’s not basketball at all.
It bears repeating that market power and the exercise of anti-competitive strategies shrink the economic pie overall. When firms like Microsoft erect barriers to entry, they prevent new competitors from entering the market, strangling new businesses and depriving the economy of the benefits of those businesses, namely jobs and innovation. Thus, “anti-competitive” strategies do not simply result in higher prices for consumers, but in a slower pace of innovation and growth for the economy as a whole, notwithstanding empirically questionable research that ostensibly shows monopoly power promotes innovation. In short, when firms exercise market power, everyone else loses. Markets are only valuable when the rules provide for an even playing field between market participants.
MAKING MARKETS WORK: COMPETITION POLICY AND ITS ORIGINS
Recognizing the threat that disproportionate corporate power posed to the economy and our society, policymakers sought tools to combat market domination by large firms as early as the late 19th century. When the monopoly power of trusts like Standard Oil and the Pennsylvania Railroad sparked public outrage over high prices and poor service, Congress passed the Sherman and Clayton Antitrust Acts, providing the legal means by which to regulate firms so that their size and power, and their use of predatory behavior, would not upend markets.
This new body of laws and regulations, dubbed “antitrust” in the United States and, more generally in an international context, “competition policy”, was intended to guarantee that firms competed with one another on a level playing field, and that they did not become so powerful as to dominate workers, consumers, or smaller firms. In concert with labor and consumer protections, antitrust laws are one of three policy prongs intended to create an equitable balance of power between market actors. So, while competition policy cannot wholly eliminate market power from the economy, it is an important tool in limiting the ways in which market power can be deployed. Antitrust laws seek to do this through three primary objectives:
1. Limiting the consolidation of power by regulating market structure.
Market structure generally refers to the number of firms in a given market, as well as their relationship to consumers and suppliers. The structure of a market plays a large part in determining how much influence certain firms have over a market and how they are able to use it. So, in some ways, regulating market structure is the most foundational component of competition policy.
In the era of active antitrust regulation, this was accomplished by policing a range of factors that included:
Merger enforcement: Regulating concentration was primarily accomplished by reviewing the impact of mergers. If authorities deemed that a merger resulted in a detrimental reduction in competition, they would seek to block or undo it.
Monopoly regulation: When a firm becomes the sole seller of a given good or service, it is a monopoly. When a firm achieved a monopoly, authorities would either attempt to break up the firm or begin to closely regulate its practices to ensure it did not abuse its powerful position.
Vertical integration: Firms active in more than one market, especially when they compete with their own suppliers or customers, were once considered anti-competitive due to their ability to favor themselves and exclude entrants, leveraging market power in one market to dominate others. But under the influence of the Chicago School, scrutiny of vertical integration was reduced almost to nonexistence; consequently, vertically integrated firms and even whole industries are far more prevalent today.
2. Curtailing anti-competitive behaviors.
Firms are often able to engage in anti-competitive practices when structural regulation fails to eliminate market power or when firms simply break the rules. Collusion, for example, through which would-be competitors collaborate to set prices artificially high, is always possible, even in a theoretically competitive market. In Microsoft's case, its attempt to limit competition in software markets was based, in part, on the fact that it held a large share of the market for operating systems, but it was also based on the technological fact that competing companies' software depended on those same operating systems to run, an example in which market structure creates the scope for anti-competitive conduct.
So, while anti-competitive practices are often enabled by some market power advantage that structural regulation failed to address, they must sometimes be dealt with on a sectoral or case-by-case basis. Antitrust laws, therefore, made anti-competitive practices expressly illegal, and agencies tasked with identifying and prosecuting violations were created. Some key concerns included:
Collusion or "price fixing": As described above, collusion is when multiple ostensibly competing firms conspire to create a de facto monopoly and set prices artificially high.
Predatory pricing: In order to drive out would-be competitors, powerful firms sometimes sold goods and services below the cost of production. Seeing how such behavior threatened entrepreneurship and robust competition, authorities prohibited this practice.
Vertical restraints: This involves imposing restrictive contractual arrangements on counterparties (e.g., Microsoft requiring any hardware equipped with its Windows operating system to also carry Internet Explorer).
Barriers to entry: Once they are established, firms may seek to prevent competition by erecting barriers to entry. Such strategies can vary enormously, but some popular methods include: aggressive patent protections, leveraging relationships with federal regulators responsible for approving products, or collaborating with outside businesses to squeeze out new entrants.
3. Establishing public utility regulation of essential industries and “natural monopolies.”
Certain goods, such as water or electricity, are necessities of modern life. Consumers cannot simply choose to eschew these goods or find substitutes like they can with other products. This places firms that sell these goods and services at an enormous advantage. In many instances, the need for universal provision, combined with the cost of the infrastructure necessary to supply them, naturally lends these industries to monopoly, especially within a given region. Recognizing this, and also recognizing that limited private sector competition for the provision of utilities could be positive, authorities established laws to more strictly regulate these markets. This guaranteed access and prevented firms from exploiting the widespread need for their products or services, essentially holding consumers hostage, in order to extract undue profits. Key industries included telephone networks, railroads, and electricity.
Beyond these economic concerns, 20th century antitrust was founded on the notion that the concentration of private power threatens not just economic equality, but democratic legitimacy as well. This view was perhaps best articulated and championed by Supreme Court Justice Louis Brandeis, who argued that, if left unchecked, wealthy firms and individuals would leverage their power to exert disproportionate influence over government decision-making. The danger of lax protections was not just the potential for runaway economic inequality, but also for the deterioration of democratic governance.
THE LIMITS OF ANTITRUST
Though antitrust protections of the mid-20th century were crucially important, recent history has proven that they were not enough. The success of these laws hinged on how they were interpreted and enforced by regulators and the judiciary. As interpretations have grown more lax, unanticipated threats have multiplied. Data aggregation and the proliferation of digital platforms like Google and Amazon, for example, pose new, unforeseen threats to workers, consumers, and society as a whole that remain completely unaddressed by existing competition policy. These challenges will require new laws, as well as new applications of existing ones.
Although antitrust reform is essential to limiting the consolidation of power by the wealthiest corporations and individuals, it will by no means ensure a just and equitable society on its own. Reining in the dominance of those at the top will be impossible without also building power to oppose it through worker organizing. Political reform, to curtail or outlaw the sort of big-money lobbying and advocacy that currently dominates our political system, is also essential, as is a stronger social safety net, to ensure that the price of economic competition is not widespread poverty. So, although this report is focused on the evolution of and challenges presented by market power, stronger antitrust is by no means a panacea for all economic challenges. Reform in these and many other areas, including on issues of gender and race inequality, is essential to the health of the economy and American society.
ANTITRUST IN PERIL
As early as the 1940s, despite the clear success of revolutions in antitrust and pro-labor policies, wealthy firms and conservative political interests began a concerted effort to roll back strong competition policies in the hopes of taking a larger share of the economic pie for themselves. Ultimately, these interests hit on a novel argument: private firms were naturally inclined to innovate, so any regulation only impeded growth and efficiency; rules that favored large firms would benefit consumers through lower prices; and any price markups in the event of reduced competition would be more than offset by cost savings in production.
But what this theory offered in novelty and an appealing sort of counterintuitive logic, it lacked in empirical support and rigor. The pro-business camp placed enormous faith in optimistic predictions of firm behavior based on theoretical assumptions that were never tested. In doing so, it wholly ignored the importance of an equitable balance of power between market participants.
Despite the lack of evidence for the theories they articulated, the Chicago School's intellectual proponents, academics like George Stigler, Harold Demsetz, and Robert Bork, benefited from an air of scholarly and technocratic authority. Emanating from reputable institutions and influential think tanks in Washington, proponents of this new laissez-faire competition policy claimed superior insight on the basis of their original work in economic theory. They downplayed the risks of consolidated private power and labeled Brandeis's approach to antitrust as dated and simplistic. (In reality, Brandeis himself was noted for introducing empirical research into his jurisprudence.)
Bork and others from the Chicago School hoped to cast out all but the most minimal of antitrust enforcement. As Baker (2015) summarizes, they held that "law should be reformed and refocused to strike at only three classes of behavior: 'naked' horizontal agreements to fix prices or divide markets, horizontal mergers to duopoly or monopoly, and a limited class of exclusionary conduct." This view reflected the Chicago School's founding assumption that, other than in the most extreme cases, firms would only ever work to increase market share through price reduction, and that, thus, regulation of all but the most blatant forms of anti-competitive conduct could be eliminated.
Starting in the 1980s, this formed the dominant view of competition policy in the United States, and with the election of the enthusiastically supportive Ronald Reagan, the Chicago School’s vision began a swift conversion to reality.
Soon, a wave of executive and judicial appointees beholden to its core tenets set to work regulating markets in the mold of the philosophy's pro-corporation, anti-statist beliefs, benefiting large incumbent firms over consumers, workers, and small businesses.
The success and prevalence of this ideology has been so profound that today, even senior antitrust regulators appointed by the Democratic Party assert "we are all Chicago School now" in their public appearances. The economic impact of that elite consensus is clear.
KEY CHANGES IN ANTITRUST UNDER THE CHICAGO SCHOOL:
Relaxed merger guidelines: Under this new regime, mergers that may once have been deemed anti-competitive were increasingly permissible, leading to higher levels of consolidation along with the proliferation of market power.
An end to the scrutiny of vertical integration: Mergers in which the parties did not compete directly were presumed to be motivated by efficiencies in production rather than the ability to exclude rivals from the market and siphon their market share.
Elevated burdens of proof: Very broadly, the Chicago School raised the burdens of proof for a range of predatory behaviors. Thus, while the behaviors remained potentially illegal, it became increasingly difficult to prevent or punish harmful practices because spurious defenses were given undue credence, e.g., that through some convoluted and empirically unproven mechanism, exclusionary conduct benefited consumers.
The Market Power Economy
In this lax regulatory environment, we have seen precisely the sort of economic outcomes that we would expect from a lopsided economy. A growing body of evidence indicates that whatever mechanism once translated economic surplus into shared growth is now broken. As seen in Figure 1, corporate profits, when measured as a share of the economy, are at a historic peak. And even though the cost of borrowing is low, incumbents are not investing or expanding operations to outcompete one another (Furman and Orszag, 2015; Barkai, 2016; Gutierrez and Philippon, 2017).
This suggests that powerful firms, operating with little competition, have been able to profit by raising prices and cutting wages, rather than by investing in new, valuable products. Recent work by De Loecker and Eeckhout (2017) finds that firm-level markups have increased from 18 percent to 67 percent since 1980, a pattern that holds across all industries.
In keeping with the market power hypothesis, we also see that profits have increased most in the industries that have become more concentrated, and that wage growth has been most stagnant in these same concentrated industries (Barkai, 2016; Gutierrez and Philippon, 2017; Grullon et al., 2016).
Of course, concentration is not synonymous with market power, but when combined with substantial policy changes at the federal level and a host of qualitative observations, the case that rising market power and anti-competitive behavior have caused our current growth and wage stagnation looks compelling. Below we present six pieces of evidence that, when taken together, strongly support this view:
Fact 1: Fewer firms, less competition
Since the rise of Chicago School antitrust policy, U.S. markets have consolidated dramatically. The number of mergers and acquisitions has skyrocketed, increasing from fewer than 2,000 in 1980 to roughly 14,000 per year since 2000. As a result, Grullon et al. (2016) found that between 1997 and 2012, more than 75 percent of U.S. industries became more concentrated, meaning a smaller number of larger firms account for most of the revenues. The number of publicly traded corporations and their share of the total market are also lower than at any time in the last 100 years.
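To make "became more concentrated" concrete: regulators commonly summarize market structure with the Herfindahl-Hirschman Index (HHI), the sum of squared market shares. A minimal sketch follows; the firm shares in it are illustrative, not drawn from the studies cited here.

```python
def hhi(market_shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares.

    Shares are expressed as percentages, so the index runs from
    near 0 (atomistic competition) to 10,000 (pure monopoly).
    """
    assert abs(sum(market_shares) - 100) < 1e-6, "shares must sum to 100"
    return sum(s ** 2 for s in market_shares)

# Ten equal competitors: a relatively unconcentrated market.
print(hhi([10] * 10))         # 1000
# After mergers leave four firms at 40/30/20/10 percent, the index
# triples, crossing the 2,500 "highly concentrated" threshold used
# in the DOJ/FTC 2010 Horizontal Merger Guidelines.
print(hhi([40, 30, 20, 10]))  # 3000
```

The squaring is what makes the index sensitive to dominance: one 40 percent firm contributes as much as sixteen firms of 10 percent each.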
Furthermore, conventional measures of concentration do not even capture concerns over common ownership: As institutional investors have come to dominate stock markets, they have bought shares of multiple firms in the same industry. Azar et al. (2016) document that individual institutional investors, firms like Vanguard and BlackRock, own large fractions of all main "competitors" in the technology, drug store, banking, and airline industries.
It is increasingly apparent that this consolidation has had detrimental effects on the overall economy. Recent research highlights several key indicators.
Fact 2: Higher prices
A spate of recent studies shows consumer prices rising in conjunction with consolidation. Gutierrez and Philippon (2017) document that markups of prices over the cost of production have increased in line with aggregate trends in consolidation, and that these shifts are driven by large firms and concentrating industries.
Kwoka (2013) conducts a meta-analysis of merger retrospectives, studies comparing the prices that companies charged before and after they merged. Combining the data from retrospectives on 46 mergers since 1970, Kwoka finds an average price increase of 7.29 percent. This study doesn't include enough mergers to conclusively settle the debate, but it's enough to cast serious doubt on the theory that underlies the past 40 years of competition policy.
Most recently, De Loecker and Eeckhout (2017) use a database of publicly traded firms to find that markups, the amount a firm charges above its costs, have risen to an astounding average of 67 percent, compared to just 18 percent in 1980. Although De Loecker and Eeckhout do not offer a causal analysis, other studies of markups and consolidation lend credence to the link between consolidation, market power, and rising prices.
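These markup figures can be read as price relative to marginal cost: a 67 percent average markup means prices averaging 1.67 times the cost of production. A small illustration of the arithmetic follows; the dollar amounts are hypothetical, and only the two percentages come from De Loecker and Eeckhout.

```python
def markup_pct(price, marginal_cost):
    """Markup as a percentage of marginal cost: how far the sale
    price sits above the cost of producing one more unit."""
    return (price / marginal_cost - 1) * 100

# A hypothetical good costing $10.00 to produce:
print(markup_pct(11.80, 10.00))  # the 1980 average markup, ~18 percent
print(markup_pct(16.70, 10.00))  # the 2017 estimate, ~67 percent
```

Put differently, holding cost fixed, the shift from an 18 percent to a 67 percent average markup means the same $10 good selling for $16.70 instead of $11.80.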
But consolidation is not the only way that market power can impact prices in today’s economy. The increasing role of institutional investors in capital markets has exacerbated the lack of competition and the rise of prices in consumer markets. As previously noted, investors like Vanguard and BlackRock own large shares of multiple businesses within an industry. In terms of competition and price, this “common ownership” can have similar, or even more severe, ramifications as a merger.
In a recent paper, Azar, Schmalz, and Tecu (2016) measure the effects of common ownership in the airline industry. Comparing routes of independent airlines to those owned by similar shareholders, the economists find that prices would be 3 to 7 percent lower if all airlines were owned independently. Importantly, these studies did not count the much-documented rise of ancillary fees, which, in addition to being a thorn in the side of cash-strapped flyers, have grown appreciably in recent years. In other work, Azar, Raina, and Schmalz (2016) show that common ownership of banks decreases interest rates and increases fees for depositors.
Fact 3: New and small businesses are struggling while large incumbents thrive
Furman (2016) documents that for 40 years, the rate of firm entry has decreased, as has the share of sales and employment corresponding to young businesses. This suggests that it has become harder for new companies, facing larger, often predatory incumbents, to overcome barriers to entry. This is especially problematic, given that new businesses, as disproportionate creators of jobs, are essential to a healthy economy.
At the same time, the largest firms are thriving: Gutierrez and Philippon (2017) document that since 1980, measures of profitability have increased for the largest firms while remaining constant for small ones. Other data show that the gap between the profitability of median and high-performing firms has increased dramatically with time, and that the most profitable firms tend to maintain their high returns year after year.
Fact 4: Corporate investment is low, especially in concentrated industries
If barriers to entry and other predatory practices are indeed insulating incumbents from competition, then we would expect them to exercise their monopoly power by producing less and charging more, rather than by making new investments to scale up operations or develop cost-cutting technologies. And indeed, evidence shows this is precisely what is occurring. Gutierrez and Philippon (2016) document that corporate investment is low compared to what firms' market values would predict, and that this lowered investment corresponds to more consolidated industries.
In a 2017 paper, the same authors go a step further, using two methods to show that this relationship is causal. First, they demonstrate that leading manufacturing firms invested and innovated more in response to increases in Chinese competition. Second, they document higher levels of investment in industries that, likely due to bubbles or optimistic venture capitalists in the 1990s, were less concentrated during the 2000s.
Fact 5: Workers are more productive, but their pay has stagnated
For 40 years, median wages have stagnated, even as workers have become more productive, and the share of GDP paid as income to workers has declined since 2000. While economists have tested many explanations for these shifts (technological change and automation, global competition between workers, the rising cost of benefits), none of the factors considered explains why corporate profits have grown over the same period. Indeed, Barkai (2016) documents that while corporations have paid out less of their revenue as wages, they have also spent less on capital assets like machines, offices, and software, further increasing their profits. Barkai's work points to a different theory: the labor share of income has decreased most in consolidating industries, suggesting that corporations are paying low wages simply because their power and the lack of competition with other firms allow them to.
Even as the total share of private sector revenue paid as wages has declined, the rise of market power has increased wage inequality, by contributing to median wage stagnation and enabling runaway gains at the top. For example, the ratio of a top CEO's compensation to that of an average worker has increased roughly tenfold, from 30 to 1 in 1978, to 271 to 1 in 2016. This relates to market power because, rather than simply paying employees less, large firms have sought to lower labor costs by pushing workers out of direct employment altogether, outsourcing them instead. Powerful "lead firms" are thereby able to avoid liability under substantial components of U.S. labor law, while leveraging their market power to drive down wages through a litany of extractive tactics aimed at the outside firms employing their former workers. This is related both to the "fissured workplace," described by David Weil (2014), and to interfirm inequality, described by Song et al. (2016) and Furman and Orszag (2015).
Fact 6: Workers have a harder time changing and accessing jobs
Trends of consolidation and declining wage growth coincide with decreases in geographic, job, and occupational mobility. Konczal and Steinbaum (2016) argue that with fewer alternative employers, workers are receiving fewer offers to work at other firms, thus forcing them to stay at the same job and tolerate lower wage growth.
Song et al. (2016) compare wages across firms and reveal that workers with similar levels of education and experience receive starkly different pay depending on their employers, and that the degree of wage segregation by firm has increased starkly over the same period in which inequality has risen (since 1980). In a competitive labor market, firm pay differentials for similar work done by similar workers should be driven to zero. Therefore, inequality in interfirm earnings suggests that pay may have less to do with an individual's productivity and more to do with their ability to bargain, or to gain access to particular firms and individuals and benefit from those personal connections. The idea that worker-side variables cannot explain observed wage inequality is a fundamental challenge to the notion that the labor market is competitive.
On their own, these aggregate trends cannot establish individual instances of anti-competitive behavior, but they do imply that there is a significant and growing problem of consolidated market power.
In the following sections, we delve deeper into this body of research, considering evidence of market power and exploring how it affects the everyday lives of American consumers and workers, as well as society at large.
Market Power in Everyday Life
Economic data on rising prices and stagnating wages helps to drive home the point that the ill effects of market power and Chicago School policies are anything but theoretical. Corporations increasingly exert unopposed influence over the lived experience of American consumers, workers, and citizens.
From rising prices, to low wages, to the way we access information, market power and lax competition policy are entrenching the intrinsic advantages of wealth and power in society.
Private interests increasingly determine access to critical goods and services, prioritizing privileged groups and thus exacerbating existing inequities of race, gender, and class.
In this section, we aim to illustrate how the broad policy changes discussed above result in poor outcomes for American consumers, workers, and society. We show how consolidation and predatory behavior lead not only to higher prices, worse service, and less choice for consumers, but also threaten the pace of innovation. We show how consolidation results in fewer jobs and lower pay for workers, and how firms are using anti-competitive and predatory practices in order to further entrench their labor market dominance. Finally, we provide a broader view of the impacts of market power and Chicago School antitrust, showing how the consolidation of power affects geographic inequality, the flow of information, and the long-term health of our democratic system.
MARKET POWER AND CONSUMERS: LESS INNOVATION, HIGHER PRICES
Although it has hinged on alleged benefits to consumers, the Chicago School's lax approach to competition policy has allowed corporations to pursue profit-making strategies through which consumers lose twice: first, because powerful firms facing little competition are able to raise prices at will; second, because these firms choose to reinvest profits in attaining more market power, which not only reduces consumer choice, but also detracts from investment and competition aimed at developing better products and lowering prices.
In this predatory environment, the most significant innovations have been new methods of obtaining unfair gains, by misleading customers, entrapping them, or discriminating to extract consumer surplus. Even when the resulting conglomerates do invest in new technology and gain an innovative edge, weak competition means they face no pressure to pass the value created along to consumers. Across the board, market power enables corporations to profit by taking advantage of consumers, rather than by serving them.
Fewer choices, worse service, less innovation
Massive consolidation has left consumers with fewer choices and firms with less incentive to compete for customers. Walk into a retailer to buy a new pair of eyeglasses and you will likely find yourself overwhelmed with options. Upon closer inspection, however, you may notice striking similarities between models. You may also notice prices that are similarly high. That is because, whether buying from Prada, Oakley, or a Target house brand, you actually have a 4-in-5 chance of buying from Luxottica, the Italian monopoly that owns 80 percent of major eyewear brands. Likewise, think of every food brand you have ever seen on the shelves of any major grocery store. Chances are these products are owned by one of a few international conglomerates like Unilever, Kellogg's, and General Mills.
Until the Chicago School successfully beat back regulatory standards, competition authorities closely monitored such market dominance.
Today, we are left to ask: With such large shares in their respective markets, how hard will companies work to develop new, appealing products and win over new consumers? The ramifications of such broad consolidation and the erosion of competition are severe.
Fewer firms holding more and more power not only means fewer choices for consumers, but also creates less of an incentive for firms to focus on providing the best products and service. Tim Wu (2012) points out that a firm can invest in stifling competitors directly, by erecting barriers to entry or acquiring other firms, rather than investing in the capital or R&D that would help it outcompete them in the marketplace. Indeed, as mentioned in the previous section, recent research shows that corporations are investing at record-low levels, especially in the most consolidated markets. The outcome for consumers can range from irksome to deadly.
Instead of investing in R&D, many pharmaceutical companies plan their business models around their ability to purchase smaller firms that have shouldered the burden of developing new products. Strategies like these are predicated on the notion that lax competition policy will green-light mergers with minimal scrutiny, even though this environment holds innovation back: Ornaghi (2009) finds that after merging, pharmaceutical companies have lower R&D spending, fewer new patents, and fewer patents per dollar of R&D spending, compared to non-consolidated competitors. Among pharmaceutical firms in Europe, Haucap and Stiebale (2016) find that even competitors of merged companies innovate less. Nowhere else could the costs of market power and anti-competitive behavior be more clear or more severe:
Even when the discovery of a new product could save thousands of lives, powerful pharmaceutical companies have based their business strategies on acquiring and maintaining market power.
Even in more benign examples, we can observe that less competition has translated into a worsening consumer experience. A regular survey conducted by Arizona State University's W.P. Carey School of Business (2017) found that customer dissatisfaction in the U.S. has climbed 20 percentage points over the past 40 years, while satisfaction ratings have fallen. Comporting with the thesis that much of this is driven by consolidation and lacking competition, we observe that the decline in service has been led by TV, phone, and Internet service providers, some of the most concentrated industries in the country, with customer satisfaction ratings routinely below that of the IRS. As firms become more powerful, their incentive to please customers will almost always decrease.
Amazon's activity in shoe retail serves as another good illustration of the connection between competition policy, market power, and threats to consumer well-being. When Zappos.com executives refused to sell the company to Amazon in 2007, Amazon began lowering its prices on the shoes Zappos sold and offering additional services like free express shipping in an effort to outcompete the popular online shoe retailer. Normally, this sort of competition would be a good thing for consumers, since it results in lower prices. Amazon's strategy, however, was based on power, not innovation:
Over the course of a two-year battle with Zappos, Amazon drew on its vast wealth, pre-existing distribution network, and large customer base, running up losses of $150 million in an effort to eliminate its competitor. Lacking Amazon's vast wealth and power, Zappos capitulated and sold to Amazon in 2009. The tactic of lowering prices below cost in order to starve competitors, known as predatory pricing, is technically illegal, but Chicago School policymakers have raised the burden of proof so high that companies can employ this strategy without fear of triggering regulatory scrutiny. Amazon alone has used this precise tactic in several high-profile cases, including with Diapers.com, which Amazon purchased and shut down in 2017.
Given Amazon's professed commitment to service in the Zappos example, some may assume that consumers, despite having lost a popular vendor, are not much worse off; this, however, is shortsighted. In the long run, anti-competitive behavior not only reduces the incentive to improve products and services, but also may deter entrepreneurs from entering consolidated industries to begin with. With the disappearance of customer-first firms like Zappos, Amazon lacks the competitive pressure to maintain the high level of service and low prices it offered in the effort to drive them out of business. And while Amazon's customer satisfaction ratings remain high, there is no guarantee that this behavior won't fade as competition dries up. The decision to offer good service, in other words, has been left entirely to the good graces of Amazon's management. This dynamic holds wherever competition is appreciably reduced and should weigh heavily on the minds of consumers and regulators alike.
Examples of the slowed pace of innovation and evaporating consumer choice suggest that lax antitrust may be far from optimal, even if consolidation does drive lower prices, as Chicago School dogma suggests it should. Even in the narrow category of consumer prices, however, we find ample evidence that our anti competitive economy has actually led to higher prices.
If Chicago School antitrust deregulation promised one thing to Americans, it was a dramatic reduction in the costs of the goods and services they rely on to survive. And yet, as described in Fact 2, numerous studies across many industries suggest that consolidation and other anti-competitive practices have actually caused prices to rise for American consumers.
Although undetected by most Americans, the issues of market power and anti-competitive behavior have dramatic consequences in daily life. If less competition results in an additional 5 percent markup on grocery prices, that increase can be enough to break the bank for a working-class family. Lower prices can give consumers flexibility, relieve financial burdens, and make it possible to save for investments like college tuition, so the alleged cost benefits of consolidation, had they materialized, would have mattered. Instead, the higher prices seen today mean higher profits for shareholders and CEOs.
The realization that markups might actually be rising exposes the false promise of Chicago School policies: the implicit trade-off offered by Chicago School antitrust authorities was one of lower prices for less competition. But if less competition actually results in higher prices, as the data suggests it does, then American consumers are being subjected to a lose-lose agreement.
TARGETED PREDATORY BEHAVIOR
Firms also exercise their market power by charging different prices to different customers, a practice called price discrimination. This can be benign, as in the case of discounted movie tickets for children and the elderly, but price discrimination can also serve as a tool for corporations to exploit the most desperate and least informed consumers with the fewest alternatives.
Price discrimination often targets neighborhoods of color, whose populations are disproportionately low income and where firms make use of the structural absence of market access. Bayer, Ferreira, and Ross (2016) show that after controlling for credit scores and other risk factors, African American and Hispanic borrowers are roughly twice as likely to have high-cost home mortgages because they are served by higher-cost mortgage providers, so-called "market segmentation."
A recent analysis by Angwin et al. (2016) of ProPublica finds similar trends in auto insurance. Across several states, auto insurers charge higher premiums in minority neighborhoods, relative to the actual cost of paying out liability claims. ProPublica also discovered a similar trend within the test prep industry: Princeton Review clients of Asian descent are almost twice as likely to be offered a higher price as their non-Asian counterparts (Angwin and Larson, 2015).
Price discrimination is an especially pertinent risk online, where sellers can use IP addresses and browsing data to differentiate between consumers, and where dominant platforms like Amazon may be able to corner whole markets, allowing them to target prices individually without fear of being undercut by competition. Hannak et al. (2014) find instances where major retailers and travel websites show different results and prices depending on a customer’s digital activity. Because of their private and algorithmic nature, these practices are difficult to regulate and are likely to proliferate as companies develop new ways to gather and analyze data.
Crucially, the theoretical possibility of online alternatives has not proven sufficient to discipline behavior and prevent these exploitative practices.
MARKET POWER AND WORKERS: FEWER JOBS, LOWER WAGES, AND LESS POWER
Contemporary antitrust policy mostly ignores the plight of American workers and, as a result, has spelled fewer jobs, lower pay, and worse conditions. For much of the 20th century, American wages grew in step with the productivity of American workers. But around the start of the Reagan era, the growth of workers’ wages and worker productivity began to diverge. While productivity climbed nearly 75 percent from 1973 to 2016, wages climbed by only 12 percent. As consolidation, corporate profits, and top incomes skyrocketed, workers were left behind; the typical male worker made more in 1973 than he did in 2014. And while declining worker protections and union density explain a substantial portion of this stagnation, the runaway power of employers can be seen as the other side of the same coin.
Modern antitrust policy does little to protect, and in fact actively hurts, the standing of the American workforce. Ignoring the impact that market power and anti-competitive behavior have on workers is a feature, not a defect, of Chicago School antitrust policies. Today, in addition to merger-related job loss, we see ample evidence of just how effective these policies have been at weakening worker standing. Firms engage in predatory, wage-suppressing collusion across industries with “no poaching” agreements, and they have standardized anti-competitive contracts designed to strip workers of their mobility and bargaining power. These tactics, endorsed by antitrust policy that is permissive toward non-price vertical restraints, are remaking the American labor market to resemble indentured servitude. Part of the solution must come from antitrust, specifically a ban on such exploitative contract provisions.
Structurally, powerful firms have abused poor antitrust enforcement to restructure labor markets to their own liking. Because the consumer welfare paradigm ignores upstream “monopsony,” the power a firm can wield over its suppliers, including suppliers of labor, firms outsource workers into upstream contractors, which they can more easily dominate thanks to the weakening of antitrust scrutiny for vertical contractual provisions, both price and non-price. Outsourcing labor to subservient contractors not only enables so-called “lead firms” to avoid meaningful negotiation but has also turned wage setting into competitive bidding.
In this subsection, we document how corporate consolidation and the rise of market power hurt the labor market. We discuss three primary mechanisms: First, we analyze the reduction of wages, employment, and worker power that has occurred as a result of general consolidation and decreased economic activity. Second, we outline a number of discrete anti-competitive strategies used by employers to stifle worker mobility and power. And finally, building on our description of predatory labor market practices, we outline the antitrust implications of corporate disaggregation and the so-called “fissured workplace,” showing how this trend places workers at a systematic disadvantage.
Fewer jobs, lower wages
As firms accrue market power and consolidate, employment and wages decrease through two mechanisms. First, firms in concentrated industries tend to lower production and raise prices, reducing the demand for labor. Second, less competition between firms means fewer options and less mobility for workers.
Just as reducing consumers’ options allows businesses to charge more, reducing workers’ options allows businesses to pay less.
Such power is referred to as monopsony, the labor market equivalent of monopoly power in product markets. As Jason Furman and Peter Orszag explain in a 2015 paper, “firms are wage setters rather than wage takers in a less than perfectly competitive marketplace.” The same is true for working conditions: In a concentrated economy, workers are forced to take what they are given. Monopsonistic firms, then, are no less a threat to America’s economic well-being than monopolistic ones.
These theories are easy to square with the experiences of working Americans: In 2009, pharmaceutical giant Pfizer acquired Wyeth and announced it would cut 20,000 jobs worldwide; after combining in 2015, Kraft Heinz announced plans to cut 5 percent of its workforce; most recently, rumors swirled about cuts to Whole Foods’s workforce following its sale to Amazon. And while Chicago School advocates claim that consolidation brings cost savings for consumers, something we called into question in the previous section, the concurrent claim that such savings stimulate enough demand to create more jobs than they destroy is an even greater stretch.
While the impact of consolidation and competition policy on labor markets is a relatively new question for economists, evidence that consolidation is leading to fewer jobs is already mounting. Barkai (2016) shows that the largest decreases in the labor share of income, that is, the total fraction of private sector revenue paid to workers, have come in industries with the largest increases in concentration. This suggests that weak competition causes firms to cut jobs and reduce wages. Konczal and Steinbaum (2016) relate low wage growth to patterns of low job-to-job mobility, scarce outside job offers, and low geographic mobility. They argue that these trends are all indicative of weak labor demand and monopsony power.
The impact of such monopsony power can be every bit as economically damaging as monopoly power, and it is only due to the Chicago School’s myopic focus on consumer welfare that policymakers and the public more broadly have eschewed such considerations. In our current environment, the anti-competitive threats to labor markets have multiplied and intensified.
Again, addressing the loss of worker standing will largely rely on rebuilding worker power, but allowing large firms to accrue and wield unlimited market power is a substantial contributor to existing labor market power disparities, which require a multi-faceted solution.
MARKET POWER AND LABOR MARKET DISCRIMINATION
Similar monopsonistic wage-setting effects also help explain pay gaps between demographic groups. Several studies document that wage gaps between employees of the same business can be explained by assuming that employers systematically pay less to workers who they know are less likely to quit as a result, a practice called wage discrimination. If systemically disadvantaged workers tend to be less sensitive to wages, then they may sort into industries that underpay all of their workers, possibly contributing to the concentration of women of color in the low-paying care industry, as discussed in Folbre and Smith (2017). Looking at data from the Portuguese workforce, Card et al. (2016) show that the combined effects of within-firm wage discrimination and between-firm sorting account for about one-fifth of the gender pay gap.
In perfectly competitive labor markets, where workers are paid according to the value they create, such effects would not exist.
Antitrust policies that examine the effects of monopsony would not only be good for all workers but would be best for society’s most vulnerable workers.
Strategic attacks on worker standing
In addition to the increased leverage over workers gained from consolidation, employers use other anti-competitive tactics to increase their labor market power. The tactics we describe here are expressly anti-competitive, intended to prevent positive competition among employers in order to reduce labor costs and suppress workers. Despite the seeming conflict with the core principles of antitrust policy, the brazen use of anti-competitive practices in labor markets has become increasingly widespread in the Chicago School era.
Non-compete contracts, for instance, prevent workers from joining competing firms until after they have left their employer and waited, presumably unemployed, for an extended period of time. While non-competes have some merit in protecting trade secrets and incentivizing investment in workers, the Treasury Department (2015) points out that they are used with startling frequency among low-income workers and those without a college degree, less than half of whom profess to possess trade secrets.
Far from promoting innovation and investment, these agreements simply discourage workers from searching for new jobs, allowing their employers to pay less and demand more. Crucially, they are best understood as both a symptom and a cause of declining labor market mobility and worker power: A symptom because, in an earlier era, employers would never have been able to get away with inserting such terms in employment agreements; and a cause because any worker who signs one has effectively forfeited the ability to attract a higher wage or better job in the industry of their choice.
Mandatory arbitration is another combination symptom and cause of low worker standing. Gupta and Khan (2017) discuss the severe impact of contractual clauses that force workers to surrender their right to sue their employer, insisting instead that employees enter into confidential arbitration in the event of a dispute. Depriving workers of this core democratic right is a win for powerful corporations, who are able to keep misdeeds out of the media and away from the eyes and ears of other employees. Like non-compete clauses, mandatory arbitration is both a cause and an effect of labor market monopsony. Indeed, it would be difficult to think of a tactic more indicative of massive monopsony power than a firm forcing workers to surrender their legal rights as a condition of employment; as with non-compete agreements, such a clause would never have proliferated in a more pro-worker environment. Meanwhile, mandatory arbitration diminishes worker power by curtailing a worker’s ability to push back against unfair labor practices in cases of abuse.
Labor market power and the fissured workplace
These more granular examples of anti-competitive labor market practices are only part of a broader pattern of firms leveraging their market power to circumvent labor protections and obtain a structural advantage over workers. In his landmark book, The Fissured Workplace, David Weil shows how powerful corporations shifted workers out of formal employment and into alternate arrangements, such as subcontracting and franchising, in order to lessen their obligations to workers. By pushing low-paid workers into separate subcontracting firms, lead firms are able to wield their market power over other firms rather than directly over workers, which could raise issues of liability under labor law.
Whereas direct employees simply receive a regular salary, outsourced workers are forced to competitively bid against one another for every contract, driving down both costs for the lead firm and wages for subcontracted workers. Once pushed outside of the firm’s organizational structure, workers receive a smaller share of the company’s revenue and face steep barriers to bargaining for more.
With less power and wealth than the firms that ultimately pay them, and with competing contractors threatening to undercut them, outsourced workers are driven to the lowest common denominator of workplace standards. Indeed, Dube and Kaplan (2008) find that subcontracted security guards and janitors suffer wage penalties of up to 8 and 24 percent, respectively, while a 2013 study by ProPublica found that temp workers, another large category of outsourced labor, were between 36 and 72 percent more likely to be injured on the job than their full-time counterparts.
Weil’s analysis shows how powerful lead firms place the onus of maintaining brand standards on franchisees and their employees even as they squeeze them to reduce costs, often resulting in the low wages, dangerous working conditions, and labor law violations that are widely observed today. Outsourcing strategies are utilized up and down the supply chains of large companies, from Walmart, which outsources its shipping and logistics operations, to Verizon, which outsources the sale and installation of broadband services. This system allows corporations to have their cake and eat it too: They can secure favorable contracts with suppliers while maintaining a high degree of control over an outsourced workforce.
The National Labor Relations Board’s (NLRB) recent ruling in Browning-Ferris supports this view by asserting that firms contracting out work to external partners may be considered joint employers. But until this view is more widely embodied in the economy, standards will continue to sink as powerful firms subjugate the workers of less powerful firms.
All of these practices are predicated on the immense power of lead firms, from internationally recognized franchises like McDonald’s, to retail powerhouses like Walmart, that are only able to get away with such broad wage and condition setting power because they each represent such large shares of their industries. Although the topic is still nascent among economists, it is not a stretch to say that lax antitrust protection is largely to blame for the fissuring practices of powerful firms.
In fact, hard evidence linking anti-competitive behavior and poor labor market outcomes continues to emerge. In addition to non-compete and mandatory arbitration clauses, Krueger and Ashenfelter (2017) call attention to the negative wage and employment effects of “no poaching” agreements, through which franchises agree not to hire workers from rival businesses in order to suppress wages and worker power. These agreements are plainly anti-competitive, created expressly to disrupt healthy competition in the labor market.
The ill effects of weak antitrust and disaggregation are echoed in the gig economy: Workers who would once have operated either as employees or as truly independent businesses are now finding work in a quasi-independent role for centralized tech firms like Uber and TaskRabbit. Legally, they remain independent contractors without employee benefits; however, they lack the degree of control over their own operations that independent contractors are typically afforded. In fact, research shows these platforms exercise substantial control over participant behavior, disciplining workers for undesirable behavior and controlling the prices they set, despite their lack of employer status.
Like the franchising and subcontracting firms in Weil’s fissured workplace, these firms are leveraging market power, as well as proprietary technology, to have it both ways, controlling their labor supply without shouldering responsibility for it. Much has been said about misclassification of Uber drivers, but few have made the opposite point: If Uber drivers are not employees, then they are businesses, and thus Uber’s price setting amounts to a cartel, an organizational structure that is illegal under existing antitrust policy.
Addressing deficiencies in competition policy will be essential in combating the structural abuse of workers. As we have stated, pro-big-business deregulatory competition policies were sold on the explicit grounds that consumer effects were the only effects that should be considered with regard to competition policy. As long as the consumer came out ahead, in other words, any negative ramifications for small business, and especially for workers, could be tolerated. To anyone concerned with the overall health of the American economy, this raises a simple question, which the Chicago School is unprepared to answer: What good are consumer savings if consumers have no income to save?
AMAZON’S ANTITRUST THREAT
Founded in 1994 as an online book retailer, Amazon has grown into the world’s fourth most valuable company, commanding a sprawling supply chain that offers everything from cloud computing services to audiobooks. Amazon’s recent purchase of Whole Foods for $13.7 billion reignited concerns about the company’s immense size, and yet policymakers and journalists find it difficult to grasp the precise threat that the tech giant poses to competition.
Through Amazon’s example, we aim to illustrate how such powerful tech firms imperil healthy competition in ways that do not align with the Chicago School’s conception of the role of competition policy.
In some respects, Amazon’s devotion to growth and investment is laudable. Rather than passing what would be enormous profits from existing business lines back to its shareholders, the company largely reinvests the proceeds into new product lines and technologies.
The fact that Amazon employs over 1,000 people working on far-off AI technology, for example, is potentially good for the economy. This is to say nothing of the fact that Amazon is wildly popular with a large, devoted user base, compelled by its low prices, streamlined service and delivery system, and wide product offerings. Indeed, it has been evident on many occasions that not only do consumers value the company, but so do the competition authorities, whose use of their regulatory and enforcement powers has allowed Amazon to put would-be competitors out of business. Despite these investments in innovation and consumer favorability, the fact remains that many aspects of Amazon’s conduct are deeply problematic.
To see the threat posed by Amazon, it’s important to understand how the company has risen to its current status. With nearly half of all e-commerce passing through the platform, Amazon’s success is predicated on what economists call “network effects”: The more vendors sell through Amazon, the more customers will want to use it; and the more customers use Amazon, the more vendors will be forced to sell through it.
Even if a new platform were to offer a superior service, say, by taking a smaller cut of sales, no one would use it, simply because no one else does. And to compete with Amazon’s unparalleled logistics network at this point would require an unimaginable upfront investment, one that Amazon could quickly turn into a debacle by further cutting its prices and denying placement to suppliers who did business with the competitor.
This special barrier to entry affords Amazon the ability to set the terms for consumers and vendors. Amazon is able, for instance, to keep 15 percent of every sale on its platform and to attract even reluctant vendors like Nike, which, after resisting selling directly on Amazon because of the copyright infringement that occurs through the sale of unlicensed products on the site, eventually caved in 2017. In another prominent example of Amazon flexing its power, the retailer suspended pre-orders of all books published by Hachette, including House Speaker Paul Ryan’s The Way Forward, in order to gain leverage and secure better terms in its ebook agreements.
The Institute for Local Self-Reliance (2016), among others, has underscored Amazon’s use of predatory pricing, as commonly understood though impossible to prove under existing antitrust precedent, to eliminate competitors such as Zappos.com and Diapers.com.
If established firms are powerless to resist Amazon’s platform, the implications for small businesses selling on Amazon’s Marketplace are enormous. A recent article from The Onion, satirically attributed to Amazon CEO Jeff Bezos, captured the problem well: “My advice to anyone starting a business is to remember that someday I will crush you.”
Amazon compounds its platform advantage with a technological one. Since Amazon both runs a retail platform and sells goods on it, the company not only competes with its own partners but also unilaterally sets the terms of that competition. Amazon privileges its own goods in search results and, by collecting data on all of its transactions and customers, knows both buyers and sellers; this includes the strategic use of its dominant cloud computing business, Amazon Web Services, to monitor profitable third-party vendors and consumer behavior, intelligence that lets Amazon plan future acquisitions. It’s well established that Amazon uses this data to make personalized recommendations to induce customers to make purchases.
More alarming is that the company reorders options so as to extract more from customers whose past purchases and other characteristics indicate a willingness to pay more. This is not the kind of price discrimination found in a grocery store, where quantities are priced differently in order to entice different buyers (e.g., moms and dads opting for a gallon of milk for an additional $1 instead of a quart), but a secretive and personalized form that ensures each consumer pays their maximum and that Amazon captures the difference.
MARKET POWER AND SOCIETY: SEGREGATION, CONTROL OF INFORMATION, AND POLITICAL MANIPULATION
Although market power’s impact on workers, consumers, and businesses is severe, narrow economic analyses can overlook the dangers it poses to society as a whole. Franklin D. Roosevelt recognized these threats in 1938 when he said:
“The first truth is that the liberty of a democracy is not safe if the people tolerate the growth of private power to a point where it becomes stronger than their democratic state itself. That, in its essence, is Fascism, ownership of Government by an individual, by a group, or by any other controlling private power.”
As relevant now as they were then, FDR’s comments suggest that beyond lost jobs, innovation, and businesses, concentrated market power poses a threat to our democracy and national sovereignty. Today, this threat manifests in a number of ways, ranging from the obvious, such as the consequences of corporate lobbying on democracy, to the more subtle, such as the massive power a small number of tech platforms now hold over the distribution of information. In this section, we discuss three broad societal threats posed by market power.
Geography and economic segregation
Increasingly, the wealthy and powerful have isolated themselves geographically and used their market power to prey on vulnerable areas and populations. The stark divide between urban and rural voting patterns in the 2016 election is just one recent example of how economic, social, and political divisions manifest geographically. Market power reinforces this new geography, threatening to calcify a class stratification that is anathema to American values.
Market power redistributes wealth and opportunity away from disadvantaged communities, be they poor, minority, or physically isolated. Wealthy Americans, clustered in wealthy suburbs and a few large cities, do not patronize local businesses, pay taxes, or otherwise engage with the economies of poor rural or urban communities. Nonetheless, they are able to extract profits from them. A merger may result in windfall profits, but Wall Street and Silicon Valley will absorb that money as distant plants close and local economies across the country are decimated by the deal.
In Hanover, Illinois, for example, the purchase of machine part manufacturer Invensys spelled the end of a 50-year-old factory, despite its 18 percent profit margin. The jobs were sent to Mexico, and the profits were shifted to Sun Capital in New York City.
Today, many hollowed out cities and towns remain trapped in a cycle of dependence on the very same corporate giants that eroded their communities.
Unbridled market power threatens locally owned businesses, which play an essential role in their communities, and which cannot be replaced by externally owned and managed corporations. Writing for Washington Monthly, Brian S. Feldman (2017) presents numerous examples of black-owned businesses that were consumed by larger competitors as a result of the relaxed antitrust regime. Not only had these businesses provided jobs and wealth to black workers, but they also served as pillars of the community in a time when many larger, white-owned businesses were either indifferent or actively hostile to the priorities of the black community. Black business owners in Selma, Alabama, for example, provided a physical foothold for civil rights activities in the 1960s. But without antitrust protections, these small businesses could not withstand the power of consolidating giants like Walmart, which used anti-competitive practices, including predatory pricing, to drive small competitors out.
Meanwhile, Bentonville, Arkansas, home of Walmart’s Walton family, flourishes, with a plethora of privately funded parks and schools.
To make matters worse, weak local economies are self-reinforcing: Less economic activity means less tax revenue for schools, public transportation, and other basic needs, which, as shown by Chetty and Hendren (2017), results in less economic mobility for future generations. As geographic segregation becomes more entrenched, it has become easier and easier for firms to identify and prey on vulnerable populations. As Hwang et al. (2015) demonstrate, this predatory behavior has been prevalent in both home mortgages and car insurance, where providers charge high prices and provide restricted service in areas with large minority populations. If areas continue to segregate racially and economically, this behavior will only intensify.
Nowhere is the self-reinforcing mechanism of geographic segregation more evident than in the contemporary struggle to expand broadband coverage to underserved communities, both rural and urban. Many Internet service providers (ISPs) have avoided expanding service to such areas because doing so is less profitable. Through economic, political, and legal means, they have directly blocked efforts to provide multiple options. As Internet access becomes an effective (even necessary) prerequisite to entering the job market, underserved populations remain economically isolated and exploitable by powerful local monopolists. Market power compounds these issues: ISPs spend millions of dollars lobbying against the creation or expansion of proven municipal broadband networks (see introduction). In protecting their market share, entrenched incumbents not only reinforce social inequities but also actively prevent some of the least privileged Americans from accessing the modern economy.
Market power and the flow of information
Recognizing that access to unbiased information was no less essential to a democracy than water is to survival, and seeing the threat that industry consolidation and market power posed to that freedom of information, antitrust regulators once closely monitored the structure and content of newspapers and other media. Ownership of multiple competing news outlets was capped, and the Federal Communications Commission’s (FCC) Fairness Doctrine required media outlets to fairly cover issues of national importance. In the wake of the Chicago School revolution, many of these regulations fell by the wayside with severe consequences for society and our democracy.
Just as the decline of antitrust protections altered the flow of firm revenue, it has also altered the flow of information with grave ramifications for society. The weakening, and in some cases the repeal, of key protections have greatly contributed to the obstruction of quality information that was so evident during the 2016 election. In print, radio, TV, and online, our sources of information have consolidated under openly biased ownership; media conglomerates have purchased local newsrooms en masse and geared them for profit over quality; online news is increasingly filtered by social media giants, with no eye for credibility or fairness, but with ultimate discretion as gatekeepers between readers and journalists; and television and radio stations held by politically biased media companies spew false information with little oversight.
After decades of consolidation, local news, a key source of information for nearly half of all Americans, according to Pew Research (2017), can scarcely be described as such. In 2016, the five largest local TV companies owned 37 percent of all stations. This pattern will likely worsen under Trump’s FCC Chairman, Ajit Pai, who hinted that he would further relax merger scrutiny shortly after his appointment.
As consumers of media, the information we receive is increasingly controlled by a small, select group of very powerful corporations. The decline of quality local reporting is problematic on its own but, as the Knight Commission (2009) points out, will be even more harmful if the loss contributes to the further erosion of community engagement, worsening already substantial distrust of local institutions.
The erosion of decades-old antitrust protections has had serious ramifications for the freedom and quality of journalistic institutions, but online, there is doubt as to whether antitrust authorities will address emerging threats at all. Gatekeepers like Google and Facebook threaten to end the era of democratized information that the Internet was supposed to create. Every day, millions log on to Facebook to see which stories their friends are sharing, creating advertising revenue for Facebook but not for the organizations that created the content. Likewise, Google’s “Incognito Window” feature enables users to avoid paywalls put in place by publishers to protect copyrighted material. This is to say nothing of outright discrimination within search results, which recently drew the ire of European authorities.
Platforms not only capture the profits of journalism; they also control who sees what. Emily Bell, director of the Tow Center for Digital Journalism, discussed this concern in a 2016 piece for the Columbia Journalism Review:
“In truth, we have little or no insight into how each company is sorting its news. If Facebook decides, for instance, that video stories will do better than text stories, we cannot know that unless they tell us or unless we observe it. This is an unregulated field. There is no transparency into the internal working of these systems… We are handing the controls of important parts of our public and private lives to a very small number of people, who are unelected and unaccountable.”
Examples of the problems created by this centralized, unaccountable control of information are everywhere: During the 2016 election, the proliferation of fake news articles on Facebook and automated “bots” on Twitter interfered with the honest and free exchange of information. Unregulated “news” sharing on Facebook and Twitter popularized numerous conspiracy theories. And since Facebook has no obligation to provide a neutral platform, electoral campaigns, interest groups, and repositories of outside “dark money” are free to spend whatever it takes not only to get their content in front of targeted users but also to prevent the opposition’s content from reaching its intended audience. Meanwhile, Woolley and Guilbeault (2017) show how Twitter bots sought to obstruct the social media messaging of supporters on both sides.
When it comes to something as essential for democracy as the free flow of accurate information, the power is simply too great to be left unregulated, in the hands of a few powerful corporations.
By keeping revenue from content creators, by lending a voice to biased and false news outlets, and by artificially amplifying chosen content, powerful platforms will reduce the amount of legitimate news produced (and shared) and will instead encourage the production of whatever content their opaque algorithms favor. Rather than democratizing the spread of information, many online platforms have consolidated this process for private gain. This is a new question for antitrust authorities, and one that must be addressed soon.
Compromising the political system
The influence of large campaign donors and highly paid corporate lobbyists on American politics is no secret, but bears repeating: Individual corporations and industry trade associations, not to mention a host of more secretive “dark money” pools, leverage their wealth to exert enormous influence on legislators, executives, and other government officials at all levels. This influence amplifies the voice of the powerful and, as Jacob Hacker (2011) discusses in Winner Take All Politics, was key in bringing about the Chicago School revolution in antitrust policy in the first place.
Less discussed than the influence of money in politics is how economic policy reinforces the phenomenon. By creating larger firms and enabling them to generate excess profits, consolidation increases the number of businesses with the means necessary to invest in serious lobbying efforts, including the number of firms whose business models depend on doing so. Rather than attempting to satisfy broad constituencies of disparate interests, politicians are tempted to cater to a select few: those who can afford to both amplify their voices and offer campaign funds in exchange for political favors.
Close ties to large corporations not only help politicians fundraise, but also let them access the lucrative “revolving door” to high paying jobs in the private sector throughout or after a political career. Politicians, then, have a considerable incentive to demonstrate their usefulness to the large corporations that hold power over their political and professional wellbeing. The same goes for appointed public officials and bureaucrats.
By enhancing the ability of large corporations to win, not by innovating or improving, but by buying government favoritism, a compromised political system entrenches the concentration of corporate market power. That is why, at the dawn of an earlier revolution in antitrust, Louis Brandeis noted that we may have concentrations of wealth, or we may have democracy, but not both.
Indeed, today’s mega corporations are so large and so powerful that individual politicians may feel powerless to oppose them. Even those with enough integrity or personal wealth to avoid direct dependence on corporate funders still face pressure to conform to the views of their caucuses. And voters are watching: it’s hardly surprising that, according to Gallup polls, the percentage of Americans reporting a “great deal” or “fair amount” of confidence in the federal government has hit record lows in recent years.
Ultimately, the problem of money in politics, and its interplay with market power, threatens to erode the possibility of effective government, both directly and through the erosion of citizens’ trust.
Restoring Competition Policy to Build Healthy Markets and Inclusive Growth
This report has explained the theoretical threat of concentrated market power and demonstrated its real-world consequences: Instead of creating shared value, powerful businesses, specifically their owners and executives, extract wealth from workers, consumers, disadvantaged competitors, and entire communities. This section describes the role that competition policy can play in realigning incentives and reshaping markets to create a level playing field and an equitable, inclusive economy.
Competition policy is the set of laws and institutions that aim to realign private and public interests by changing the structure of markets and governing the actions taken within them. It can be understood as serving three distinct roles:
(1) To regulate market structure and prevent the aggregation of private power, primarily by blocking mergers that concentrate too much power and breaking up pre-existing, overly powerful firms
(2) To curtail anti-competitive behavior by banning and punishing extractive practices, such as colluding to raise prices or deceiving consumers, including practices that may persist even without full monopolization
(3) To regulate “natural monopolies” as utilities and intervene when competition fails, either through more comprehensive regulation or the provision of public options. This applies especially in key natural monopolies, such as telecommunications and energy, where high fixed costs mean that fierce private competition tends to give rise to boom-and-bust cycles that impair the steady provision of necessary services.
To limit the consolidation of power by regulating market structure, authorities should:
– Revise merger guidelines to scrutinize the potential for anti-competitive behavior throughout supply chains, not merely effects on consumers
– Closely examine negative effects of vertical integration and vertical restraints that are likely to arise from proposed and consummated mergers
– Use Section 2 of the Sherman Act to break up existing monopolies and firms whose structure and business models threaten other market participants and the economy more broadly
– Scrutinize the many ways in which ownership and management have consolidated, including the common ownership of multiple firms in an industry by the same major shareholders
– Implement intellectual property (IP) reform to encourage entrepreneurship and weaken protections for incumbents, including by compelling free licensing of patent portfolios
In many instances, market power arises from the structure of a given market, that is, the number and size of firms and the ties between them. Consolidated markets lack competition, often allowing firms to charge high prices, offer bad service, and pay low wages. Such market power can therefore be addressed by preventing consolidation, either by limiting mergers between competitors or by breaking up excessively large firms. Merger review has always been a key facet of competition policy, but it is much less stringent today than it was prior to the 1980s.
Under existing merger guidelines, antitrust agencies assess mergers primarily on their expected short-run effect on price and output. But as we have discussed, this narrow approach overlooks important effects on innovation, wages, jobs, and supply chains; even its track record on price effects is mixed at best. Congress should enact antitrust legislation requiring agencies to revise existing merger guidelines to consider the merging parties’ ability to engage in anti-competitive behavior throughout the supply chain, and to look for ways a merger may harm any and all market participants, not just consumers. That would enable the Department of Justice (DOJ) and the Federal Trade Commission (FTC) to scrutinize vertical consolidation as well as a merger’s effects on innovation, labor markets, data privacy, and discrimination by race, gender, and geography.
The agencies should also consider exercising their authority under Section 2 of the Sherman Act to break up existing firms whose structure and business models render them impossible to regulate in accord with the new standard. Finally, an increase in the antitrust resources of regulatory agencies would complement this agenda.
In some cases, market power arises not through consolidation but through intellectual property rights (IPRs): exclusive, government-enforced rights to profit from an innovation. While IPRs are intended to incentivize investment in R&D, current laws may actually be hindering innovation by slowing the spread of existing knowledge and impeding knock-on discoveries, new ideas built on previous innovations. Policymakers should consider weakening such protections; doing so can promote growth and simultaneously lower prices and expand access to goods, especially medicines.
While policies to prevent and reverse consolidation and restrict anti-competitive behavior at the firm level are necessary to maintain competition, another aspect of our market power crisis is the fusion of management and shareholders into a single corporate interest, a relationship that is then used to profit at the expense of other stakeholders.
Examples of this broad phenomenon can be seen in the rise of private equity; the lifting of regulations on corporate stock buybacks; the use of dual classes of shareholders; the decline in initial public offerings (IPOs) and the shrinking share of the economy accounted for by traditional publicly traded corporations; and the so-called “common ownership” of multiple firms in an industry by the same small set of large institutional investors.
This issue is of sufficient concern to warrant an investigation by a temporary panel with representatives from a number of government agencies with access to the data necessary to understand the potential threat of shareholder-management consolidation. These agencies include the IRS, the U.S. Census Bureau, the Bureau of Economic Analysis, and the Bureau of Labor Statistics, as well as the competition authorities at the FTC and DOJ. The core issue to investigate is whether the various mechanisms that shareholders have for influencing firm behavior and benefiting from firm profits are used for their private benefit (and at other stakeholders’ expense) or for the public good. Such a panel should examine corporate structures, such as private equity, tiered shareholding, private partnerships, and common ownership, as well as behaviors, like labor outsourcing, stock buybacks, and dividend recapitalization, that arguably serve to benefit shareholders at the expense of everyone else.
To curtail anti-competitive behaviors, policy must:
– Increase enforcement at the state and federal level
– Increase funding of federal and state competition authorities
– Increase punishments for anti-competitive behavior
– Expand the scope of antitrust action at the state level
– Use antitrust law against the fissured workplace
– Challenge the ability of tech platforms to extract rents from their supply chain
– Scrutinize price discrimination enabled by “Big Data”
– Protect low-skill workers from non-compete and anti-poaching clauses
– Reform the Federal Arbitration Act
While preventing the accumulation of market power through large-scale consolidation is important, this tactic does not guarantee that firms will always compete fairly. Even in less concentrated markets, firms may seek an advantage through anti-competitive tactics, especially where vertical integration enables them to exploit a strategic position in one market for advantage in another.
Businesses can collude to charge more, take advantage of workers with contracts they don’t understand and cannot litigate, maneuver consumers into disadvantageous terms of service, or discriminate among consumers using data they do not realize has been harvested from them. And although many of these tactics are illegal, weak enforcement, judicial precedents establishing impossibly high burdens of proof, and excessively narrow theories of how conduct can be anti-competitive encourage abuse among firms that see no threat of repercussion.
To deter anti-competitive practices and realign market incentives with the public interest, we advocate strengthening the enforcement of existing antitrust law and broadening the application of that law in key areas, especially as it relates to the emergence of new and unregulated technology.
Ultimately, anti-competitive practices will only be curbed when firms operate on the assumption that nefarious behavior will be discovered and duly punished. More prosecutorial action against anti-competitive behavior, and harsher penalties, are therefore necessary to discourage firms from abusing power.
Larger regulatory budgets for the DOJ Antitrust Division and for enforcement at the FTC would help achieve this goal by providing regulators with more staff and resources to carry out investigations. Most importantly, the burden on plaintiffs for winning an anti-competitive conduct case is far too high.
At the federal level, of course, neither more aggressive prosecution of private firms nor a significant increase in regulatory funding seems likely in the current political climate, so we also propose expanded activity, and funding of these activities, at the state level. State attorneys general have a well-established tradition of antitrust action and, in many cases, are unburdened by the ideological baggage that plagues federal agencies and judicial precedent. These activities should be expanded wherever possible.
While poor enforcement has permitted some anti-competitive behaviors, narrow court interpretations have pushed others out from under regulation entirely. For example, predatory pricing has long been a violation of antitrust statutes, but under current jurisprudence, it can only be recognized when the defendant is found to have both the intent to remove competition and the ability to recoup losses incurred by selling below cost. This extremely high legal hurdle has prevented the prosecution of predatory pricing, despite evident instances of it, such as Amazon’s behavior towards Diapers.com.
The crucial issue for prosecuting conduct cases under Section 2 of the Sherman Act is the requirement to prove monopoly power, generally by a preponderant share of the relevant market. This means that conduct in which powerful companies use their domination of one market to extract concessions in another is very hard to prove. For example, Google has used its dominance in the search market to redirect advertising revenues from content companies back to itself. Those companies must make their content available to Google for fear of losing all Internet traffic, which means Google is the market participant earning the ad revenue. Google claims that it is not a monopolist in search (“competition is just a click away,” as the saying goes), and yet the observed behavior of users of its mobile platform and the restrictions that Google places on third-party applications all but guarantee that the tech giant will control the flow of information, and hence reap the rewards therefrom.
Finally, the rise of digital platforms has revived concerns about the abuse of vertical consolidation and price discrimination, issues the Chicago School had rendered largely dormant. Because they not only host sellers but also compete with them directly, platforms like Google and Amazon have ample opportunity to profit by placing other firms at a disadvantage.
The European Commission recently fined Google 2.4 billion euros for prioritizing its own comparison-shopping service over those of competitors. Regulators must either bar firms from competing on their own platforms, in effect enacting a ban on vertical integration for platform companies, or at the very least closely regulate such behavior by imposing neutrality on crucial internet-era utilities.
A similar principle applies to platforms like Uber that have used their power to extract gains from workers. Since Uber drivers are technically independent businesses, the competition authorities should regard Uber’s price setting as price fixing.
Finally, access to data has enabled a new wave of advanced price discrimination through which platforms are able to charge different customers different amounts based on past behavior and other factors. Antitrust authorities must pursue vigorous enforcement of existing price discrimination laws.
To establish public utility regulation of essential industries and “natural monopolies,” policymakers should:
– Explore public utility regulation to rein in Google, Facebook, Amazon
– Promote expansion of municipal broadband networks
– Consider creation of public options for financial services
Private markets are not always able to sustain competition. This can happen when there are high fixed costs of production, as with utilities or airlines, or when firms experience “network effects” that make one large platform more useful than several small competitors, as with Facebook or Amazon. In these instances, it may be impossible to generate competition by breaking up large corporations horizontally; at the same time, outright bans on abusive practices may be too blunt an instrument to properly regulate firms’ behavior. Such situations call for further intervention, either through more fine-tuned “public utility” regulation or through the introduction of public market players.
Historically, public utility regulation has been used to ensure that essential goods and services are provided accessibly and at fair prices. In domains such as telecommunications, where network effects prevented robust competition, the imposition of “common carrier” status required providers to serve all comers at reasonable rates and without unjust discrimination. More recently, the FCC harkened back to these principles in the context of net neutrality, preventing internet service providers from discriminating based on the content (such as a competitor’s service) they transmit.
Looking forward, policymakers should consider a public utility approach to regulating prominent technology firms. As gatekeepers to essential economic and social goods (Google and Facebook to news and information, Amazon to a vast logistics and shipping architecture), these businesses threaten to limit or influence participation in markets and civil society. Regulatory firewalls could prevent such platforms from privileging their own content (say, in search results), thus maintaining a level playing field for smaller competitors. Further oversight could require firms to respect the interests of the users whose data they collect and sell, and could guard against discriminatory treatment, including as an “unintended” consequence of revenue-optimizing algorithms.
In some instances, barriers to entry prevent competition in a market, even though there is no inherent advantage to consolidation. Here, the government can intervene by simply becoming a competitor, by offering a so called public option. This approach is not a new one: The Tennessee Valley Authority’s provision of rural electrification dates back to the New Deal.
Public options have three main advantages:
First, they offer an additional, straightforward option to consumers at reasonable rates and without discrimination.
Second, they increase market competition, encouraging pre-existing businesses to lower prices and offer better service.
Third, they can expand subsidized access to consumers unable to afford essential goods at market prices.
Policymakers should promote this approach in key areas, such as municipal broadband and banking.
For roughly 40 years, the American economy has been remade to serve the powerful and the wealthy, with no regard for everyday consumers, workers, or the health of our society.
Today, we see just how effectively policymakers and regulators have championed the needs of the few over those of the many. Ironically, the policies that helped get us here were sold on the notion that they would deliver the most good to the most people, the precise opposite of what ultimately occurred.
Furthermore, Chicago School policies were predicated on the idea that providing for the public good is simple. This was a convenient view for policymakers and other parties whose key interest amounted to diminishing the role that government plays in administering society.
But Chicago School ideology was also, as ample evidence shows, precisely incorrect.
Robust competition is every bit as important for economic well being as scions of the Chicago School suggested, but this school of thought presented false and misleading ideas about how to best achieve it. Here, we have combined emerging research with historical narrative aimed at explaining how antitrust policy has been a key contributor to an increasingly top heavy economy.
In the simplest way possible, we aim to show that competition is as essential as it is delicate, and that economies work best when power is evenly distributed among market actors. In this regard, competition policy is an essential tool.
Banks are pushing for deregulation and rollbacks of Dodd-Frank’s regular check-ups on their financial health. We should be worried.
The great financial crisis with its peak in the fall of 2008 was not inevitable; it could and should have been prevented. Had it been, the economy would not have lost trillions of dollars, millions of Americans would have been spared eviction and unemployment, and the U.S.’s vibrant financial sector would have remained intact.
But few saw a crisis of these proportions coming: Almost no one raised red flags when the first mortgage originators filed for bankruptcy, or when a roster of medium-sized banks experienced liquidity shortages. Even when margin calls began putting pressure on larger financial firms, and credit lines that had been put in place only as short-term, stopgap measures were running hot, almost no one sounded alarms.
Every incident was eyed in isolation even as, slowly but surely, stress built in the system as a whole, as it always does in finance, from the weaker, less resilient periphery of mortgage originators to the very core of storied investment banks whose ultimate fall threatened to bring down the entire system.
The 16th of March of this year marked the 10-year anniversary of the marriage between Bear Stearns and JP Morgan Chase, which was arranged and co-financed by the Federal Reserve. It was by no means the beginning of the crisis; rather, it marked the beginning of the end. It was the final stage in which financial firms at the core were still able to keep their heads above water; however, fearing for their own survival, they would extend another lifeline to fellow banks only with government help.
By September, it took a massive government bailout to prevent a financial meltdown.
Of course, the government could have stepped aside and allowed the banking and finance system to self-destruct. Members of Congress who voted against the Troubled Asset Relief Program (TARP) Act at the time certainly thought it should.
Allowing it to implode would have created a much cleaner slate for rebuilding finance on sound footing, just as the collapse of the financial system in the 1930s had. On the downside, no one could say for sure whether the system would recover, or what it would cost to get there. Fearing the abyss, the Fed, the Treasury and Congress stepped in and did what it took (to paraphrase then-Fed Chairman Ben Bernanke) to stabilize the financial system, which has since returned to making huge profits.
No good deed goes unpunished.
Instead of a new foundation for sound finance, all we got for this massive public intervention was the same old system, merely patched up with rules and regulations to make it more resilient.
Among the most far-reaching of the reform measures was a new set of essentially preventive care measures for the financial sector: annual check-ups for banks beyond a certain size, strategies for designing tests that would make it harder for financial intermediaries to game them, and additional discretionary powers for regulators to impose additional prudential measures on banks in the name of system stability. Given the massive regulatory failure in spotting early signs of trouble and preventing the 2008 crisis, that looked like the least Congress could do to keep Americans safe from financial instability.
Yet Republicans, along with a breakaway bloc of Democratic senators, have begun to strip away many of the preventive measures in the post-crisis regulatory legislation, known as Dodd-Frank. Following lobbying by the financial industry, the U.S. Senate just voted to proceed with considering a new bill, which provides that only banks with consolidated assets worth $100 billion or more (up from $50 billion) will be subject to regular stress tests conducted by the Fed; even these check-ups will be “periodic” rather than annual, and they will include only two rather than three stress scenarios. Similarly, under the new design, company-run internal stress tests are required only for firms with more than $50 billion in assets (up from $10 billion). They, too, can dispense with annual checks: periodic ones will do.
This massive act of deregulation will not be accompanied by greater discretion for the Fed to impose supervisory measures on select companies. On the contrary, the Fed’s discretionary powers have been circumscribed in the bill.
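To make the narrowing of coverage concrete, the threshold changes can be sketched in code. The dollar figures, cadence, and scenario counts come from the bill as described above; the function itself and the example bank are invented for illustration, not actual regulatory logic:

```python
# Hypothetical sketch of which Dodd-Frank check-ups apply to a bank of a
# given size (in billions of dollars of consolidated assets), before and
# after the bill. Thresholds and counts are from the text; everything
# else is illustrative.
def stress_test_obligations(assets_bn, post_bill):
    if post_bill:
        fed_run = assets_bn >= 100      # raised from $50bn
        company_run = assets_bn > 50    # raised from $10bn
        cadence = "periodic"            # was annual
        scenarios = 2                   # was three
    else:
        fed_run = assets_bn >= 50
        company_run = assets_bn > 10
        cadence = "annual"
        scenarios = 3
    return {"fed_run": fed_run, "company_run": company_run,
            "cadence": cadence, "scenarios": scenarios}

# A hypothetical $60bn bank: under the old thresholds it faced both
# Fed-run and company-run annual tests; under the new ones it keeps only
# periodic company-run tests and drops out of Fed stress testing entirely.
before = stress_test_obligations(60, post_bill=False)
after = stress_test_obligations(60, post_bill=True)
```

The point the sketch makes is that a whole tier of mid-sized banks, precisely the kind that sat at the "weaker, less resilient periphery" in 2007, falls out of the Fed's preventive net.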
Thresholds are always arbitrary; it is simply impossible to find an optimal point that imposes just enough costs on banks and financial institutions to ensure financial stability. But with these new changes, financial stability has been placed on the back burner. Instead we are presented with back-of-the-envelope calculations about the positive effects scaling back regulation will have on credit expansion, and therefore on growth. It’s a dangerous game, not least because this calculation does not include the costs of future crises that result from reckless credit expansion.
If financial stability were the goal, as it should be, we might want to do away with asset thresholds altogether and instead empower regulators to diagnose and treat threats to financial instability wherever they find them. No doubt, the financial industry would reject such a move, because it craves regulatory “certainty”, the very reason its lobbyists pushed for the thresholds in the first place. But we should not fool ourselves: Limiting financial preventive care to large banks at the core of the system deprives regulators of the tools they need to diagnose a crisis in the making, and leaves us exactly where we were in the run-up to the last one.
Waiting for the Chinese Bear Stearns
Unregulated, speculative lending markets nearly brought down the global financial system 10 years ago. Now, Western banks are exporting this failed model to the developing world.
‘What a difference a decade makes’, mused Mark Carney, the head of the Financial Stability Board (FSB), in a recent speech. Carney was measuring, and applauding, regulatory progress since shadow banking brought Bear Stearns down in March 2008 and Lehman Brothers six months later, and since 2013, when he warned that shadow banking in developing and emerging countries (DECs) was the threat to global financial stability. A lot has changed since.
Shadow banking is no longer used pejoratively. The IMF recently noted that DEC shadow banking ‘might yield greater efficiencies and risk sharing capacity’. In scholarly and policy literature, DEC shadow banking is portrayed as an activity that is confined by national borders; connected closely to banks that move activities into the shadows, circumventing regulation or financial repression; and complementary to traditional banks that underserve (SME) entrepreneurs, whether because of market imperfections or the priorities of the developmental state (China).
Another C is relevant for China: constructed by the Chinese state as a quasi-fiscal lever. After Lehman, China’s fiscal stimulus involved encouraging local governments to tap shadow credit, often from large state-owned banks through Local Government Financing Vehicles. Yet systemic risks pale in comparison to those that gave us the Bear Stearns and Lehman moments, since (a) complex securitization and wholesale funding markets are (still) absent and (b) DECs have preserved autonomy to design regulatory regimes proportional to the risks posed by shadow banks important to economic development. At worst, DECs may have to backstop shadow credit creation, just like high-income countries did after Lehman’s collapse.
The ‘viable alternative’ story has one shortcoming. It stops short of theorizing shadow banking as a phenomenon intricately linked to financial globalization. In so doing, it misses out a recent development.
The global agenda of reforming shadow banking has morphed into a project of constructing resilient market-based finance that seeks to organize DEC financial systems around securities markets. The project re-invigorates a pre-crisis plan designed by G8 countries, led by Germany’s central bank, the Bundesbank, together with the World Bank and the IMF, to promote local currency bond markets, a plan that G20 countries endorsed in 2011. As one Bundesbank official put it then: “more developed domestic bond markets enhance national and global financial stability. Therefore, it is not surprising that this is a topic which generates an exceptional high international consensus and interest even beyond the G20.”
Deeper local securities markets, it is argued, would (a) reduce DEC dependency on short-term foreign currency debt by (b) tapping into growing demand from foreign institutional investors and their asset managers while (c) expanding the investor base to domestic institutional investors that could act as a buffer, increasing DECs’ capacity to absorb large capital inflows without capital controls; and (d) reduce global imbalances, since large DECs (for example, China and other Asian countries) would no longer need to recycle savings in U.S. financial markets. Everyone wins if DECs develop missing (securities) markets.
Despite paying lip service to the potential fragility of capital flows into DEC securities markets, this is a project of policy-engineered financial globalization.
The key to understanding this is in the plumbing. Plumbing, for building and securities markets, holds little to excite the imagination. Until it goes wrong.
The plumbing of securities markets refers to the money markets where securities can be financed. According to the International Monetary Fund (IMF) and the World Bank: “the money market is the starting point to developing… fixed income (i.e. securities) markets.” The institutions refer to a special segment, known as the repo market. Repo is the “plumbing” that circulates securities between asset managers, institutional investors, market-making banks and leveraged investors, “greasing” securities’ liquidity (ease of trading). It allows financial institutions to borrow against securities collateral and to lend securities to those betting on a change in price.
This is why international institutions, from the FSB to the IMF and World Bank, have insisted that DECs seeking to build resilient market-based finance need to (re)model their repo plumbing according to a ‘Western’ blueprint.
The official policy advice coincides with the view of securities markets’ lobbies, as expressed, for instance, by the Asia Securities Industry and Financial Markets Association in the 2013 India Bond Market Roadmap and the 2017 China’s Capital Markets: Navigating the Road Ahead.
The advice ignores economist Hyman Minsky’s insights on fragile plumbing (and the lesson of the Bear Stearns and Lehman moments). Minsky was deeply interested in the plumbing of financial markets, where he looked for signs of evolutionary changes that would make monetary policy less effective while sowing the seeds of fragile finance.
Fragility, he warned, arises where: ‘the viability of loans mainly made because of collateral, however, depends upon the expected market value of the assets that are pledged. An emphasis by bankers on the collateral value and the expected values of assets is conducive to the emergence of a fragile financial structure.’
Western or classic repo plumbing does precisely that. It orients (shadow) bankers towards the daily market value of collateral. For both borrower and lender, the daily market value of the security collateral is critical: the borrower does not want to leave more collateral with the lender than the cash it has borrowed, and vice versa. This is why repo plumbing enables aggressive leverage during good times: when securities prices go up, the borrower gets cash or securities back and can borrow more against them to buy more securities, driving their price up further, and so on. Conversely, when securities prices fall, borrowers have to find, on a daily basis, more cash or more collateral.
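The daily mark-to-market mechanism described above can be sketched in a few lines of code. The haircut, the dollar amounts, and the function name are all invented for illustration; only the mechanism, posting or reclaiming collateral as prices move, comes from the text:

```python
# Hypothetical sketch of daily repo mark-to-market. The lender demands
# that the collateral's market value cover the cash lent plus a cushion
# (the "haircut"); the borrower posts or reclaims collateral daily as
# the securities price moves. All numbers are illustrative.
def daily_margin_call(cash_lent, collateral_units, price, haircut=0.02):
    """Extra collateral value the borrower must post (positive)
    or may reclaim (negative) after marking to market."""
    required_value = cash_lent * (1 + haircut)
    collateral_value = collateral_units * price
    return required_value - collateral_value

# Good times: the price rises, so the borrower reclaims collateral
# and can borrow against it again to lever up further.
reclaim = daily_margin_call(cash_lent=100.0, collateral_units=1.02, price=105.0)

# Bad times: the price falls, and the borrower must find more cash or
# more collateral that same day -- a margin call.
call = daily_margin_call(cash_lent=100.0, collateral_units=1.02, price=90.0)
```

When such calls hit many leveraged holders at once, forced sales of collateral push prices down further and trigger yet more calls, which is the self-reinforcing fragility Minsky warned about.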
Shadow bankers live with daily anxieties. One day, they may find that the repo funding supporting their securities portfolios is no longer there, as Bear Stearns did. Then they have to fire-sell collateral, driving securities prices down and creating more funding problems for other shadow bankers, until they fold, as Lehman Brothers did.
It was such destabilizing processes that prompted the FSB, in 2011, to identify repos as systemic shadow markets in need of tight regulation. Since then, regulatory ambitions to make the plumbing more resilient have been watered down significantly, as the global policy community has turned to the project of constructing market-based finance.
Paradoxically, when the Bundesbank advises DECs to make (shadow) bankers more sensitive to the daily dynamics of securities markets, it ignores its own history. Two decades ago, finance lobbies pressured the Bundesbank to relax its strong grip on German repo markets. The Bundesbank resisted, believing that only tight control would safeguard financial stability and monetary-policy effectiveness. Eventually, it abandoned this Minsky-like stance out of concern that other euro-area securities markets would become more attractive to global investors.
Since the 1980s, the policy engineering of liquid securities markets has been a project of promoting shadow plumbing, first in Europe and the US, now in DECs. Take China. Since 2009, Chinese securities markets have grown rapidly to become the third largest in the world, behind the US and Japan. Such rapid growth reflects policies to re-organize Chinese shadow banking into market-based finance, driven by a broader renminbi (RMB, China’s currency) internationalization strategy that views deep local securities markets as a critical pillar. The repo plumbing of Chinese securities markets expanded equally fast, to around US$8 trillion by June 2017. Chinese plumbing is now roughly comparable in volume to the European and US repo markets, whereas in 2010 it was only a fifth their size. Over the same period, Chinese (shadow) banks increased repo funding from 10% to 30% of their total funding.
Yet China’s repo is fundamentally different. Legal and market practice there does not force the Chinese (shadow) banker to care about, or to profit from, daily changes in securities prices. Without daily collateral-valuation practices, the “archaic” regime makes for patient (shadow) bankers and more resilient plumbing. This is the case in most DECs.
The pressure is on China to open repo markets to foreign investors and to abandon “archaic” rules if it wants RMB internationalization. While China may be able to resist such pressures, it is difficult to see how other DECs will. The global push for market-based finance prepares the terrain for organizing international development interventions via securities markets, as suggested by the growing popularity of green bonds, bond markets for infrastructure, impact investment and digital financial inclusion approaches to poverty reduction. After all, the new mantra is “development’s future is finance, not foreign aid.”
In sum, the shadow-banking-into-resilient-market-based-finance agenda seeks to define the terms on which DECs join the global supply of securities. It silently threatens DECs’ monetary power to manage capital flows and the effects of global financial cycles, a power won in a hard-fought battle against the political clout of what Jagdish Bhagwati termed the “Wall Street-Treasury complex”, which successfully pressured DECs to open their capital accounts.
This policy-engineered financial globalization seeks a clean break from “the engineered industrialization” that involved capital controls, bank credit guided by the priorities of industrial strategies and competitive exchange rate management.
Instead, it seeks to accelerate the global diffusion of the architecture of US securities markets and their plumbing, despite well-documented fragilities and contested social efficiency.
Questions of sustainability, credit creation and growth should not be left to securities markets. Carefully designed developmental states, historical experience suggests, work better.
For parents who have been enjoying the freedom of living child-free, now comes research to spoil it all.
The bedrooms have been redecorated in grown-up colours, the 25-year-old soft toys chucked out, the washing machine is blissfully underused and, thanks to baby boomers’ current apparent raging addiction to cruising, a holiday or two has been booked in the Med, the Antarctic, anywhere that avoids dry land. And then they’re back.
According to a recent study by the London School of Economics (LSE), adult children who return to the family home after a period away, often at university, cause a significant decline in their parents’ quality of life and wellbeing.
The first study of its kind to measure the impact of the “boomerang generation” looked at 17 countries, including France, Germany and Italy. Dr Marco Tosi and Prof Emily Grundy applied quality-of-life measures that included “feelings of control, autonomy, pleasure and self-realisation in everyday life”.
When a child returns home, the researchers found, the score fell by an average of 0.8 points, an effect on quality of life similar to developing an age-related disability such as mobility difficulties. Protestant countries showed a greater decline than Catholic ones, presumably because the latter are more accustomed to living in multigenerational, extended families.
“When children leave the parental home, marital relationships improve and parents find a new equilibrium,” says Tosi. “They enjoy this stage in life, finding new hobbies and activities. When adult children move back, it is a violation of that equilibrium.”
When a grown-up child does return, often reverting to tricky adolescence, there is something comfortingly familiar about doors slamming, noise accelerating and wellbeing sliding down the scale: it’s called parenting. But this time round, it can be particularly gruelling.
It’s not easy for a twentysomething whose aspirations are battered by ridiculous housing costs, student debt and low wages to witness the daily spectacle of baby boomers bent on rediscovering their 60s mojo with late nights and long lie-ins, all the while being hard of hearing, digitally illiterate and short on memory.
Repetition and constant interrogation about the strangeness of modern life are the price the returner must pay. “Did you say you’d be back for supper?” “Six times.” “What’s that thing that works the TV?” “The remote control.”
And the rules of engagement are far from clear given that nowadays it’s more likely to be the baby boomer who is rolling a spliff and starting on a second bottle before the end of The Archers.
Last week, a series of notes from parents admonishing children and teenagers was published. “Every time you don’t eat your sandwich, a unicorn dies. Love Dad,” read one lunchbox note. In a boomerang household, it’s more likely the child will leave an admonishing Post-it stuck to an empty case of wine: “Drink kills!”
Around one in four young adults now live with their parents in the UK, the highest number since records on the trend began in 1996. In the 60s, it was the newly marrieds who returned to live with the in-laws. The UK wasn’t part of the LSE study, but Tosi says refilling the empty nest is likely to have the same impact. And we have history.
In the 18th century, young men would leave home in their teens to serve as apprentices and young women would fly the nest into domestic service, according to the sociologist Wally Seccombe’s history of working-class life, Weathering the Storm. But by the 1850s, the Industrial Revolution had led to mass “in-migration” to cities. “Home ownership was out of the question for the vast majority,” writes Seccombe. Families huddled together, sub-let rooms and took in lodgers.
In 1851, in Preston, housing costs and low wages contributed to eight out of 10 males aged 15 to 19 living at home. It could take a woman, also a wage earner, up to three days to do the weekly wash by hand. Today, a returning adult child may find that the newly liberated woman of the house has resigned from all domestic duties in the name of self-realisation. The nest is no longer what it was.
That said, one vital element is missing from the LSE study: how long does the return of the boomerang child last? A decade, and he or she risks turning into a carer; a year or two has its pluses: someone to feed the cat while Mum and Dad are paddling up the Amazon or, if finances are depleted by more mouths to feed again, down the Ouse.
There are also surprising trade-offs. Research on the brain by two American psychologists, Mara Mather and Susan Turk Charles, involved tests on people up to the age of 80. The results indicated that as we get older, the amygdala, which dictates our fight-or-flight response, reacts less to negative information. We tend to see the good rather than the bad, not least because time is precious. “In younger people, the negative response is more at the ready,” says Charles.
So in what appears to be an age of perpetual anxiety for adult offspring who are perhaps temporarily suspending the quest for independence, to go back home is not just about cheap living (and potential continued warfare if more than one sibling also rejoins the nest). Mum and Dad may find their equilibrium, newfound hobbies and partnership wrecked, but there are compensations in making room for a broke son or daughter. Like all good-enough parents, in tough times they can make things seem not quite as bad as they might otherwise have been. Even while queueing for the shower.
Unlike the Great Depression of the 1930s, which produced Keynesian economics, and the stagflation of the 1970s, which gave rise to Milton Friedman’s monetarism, the Great Recession has elicited no such response from the economics profession. Why?
The tenth anniversary of the start of the Great Recession was the occasion for an elegant essay by the Nobel laureate economist Paul Krugman, who noted how little the debate about the causes and consequences of the crisis has changed over the last decade. Whereas the Great Depression of the 1930s produced Keynesian economics, and the stagflation of the 1970s produced Milton Friedman’s monetarism, the Great Recession has produced no similar intellectual shift.
This is deeply depressing to young students of economics, who hoped for a suitably challenging response from the profession.
Why has there been none?
Krugman’s answer is typically ingenious: the old macroeconomics was, as the saying goes, “good enough for government work.” It prevented another Great Depression. So students should lock up their dreams and learn their lessons.
A decade ago, two schools of macroeconomists contended for primacy: the New Classical, or “freshwater”, school, descended from Milton Friedman and Robert Lucas and headquartered at the University of Chicago; and the New Keynesian, or “saltwater”, school, descended from John Maynard Keynes and based at MIT and Harvard.
Freshwater types believed that budget deficits were always bad, whereas the saltwater camp believed that deficits were beneficial in a slump. Krugman is a New Keynesian, and his essay was intended to show that the Great Recession vindicated standard New Keynesian models.
But there are serious problems with Krugman’s narrative. For starters, there is his answer to Queen Elizabeth II’s now-famous question: “Why did no one see it coming?” Krugman’s cheerful response is that the New Keynesians were looking the other way. Theirs was a failure not of theory, but of “data collection.” They had “overlooked” crucial institutional changes in the financial system. While this was regrettable, it raised no “deep conceptual issue”; that is, it didn’t demand that they reconsider their theory.
Faced with the crisis itself, the New Keynesians had risen to the challenge. They dusted off their old sticky-price models from the 1950s and 1960s, which told them three things. First, very large budget deficits would not drive up interest rates, which were stuck near zero. Second, even large increases in the monetary base would not lead to high inflation, or even to corresponding increases in broader monetary aggregates. And, third, there would be a positive national income multiplier, almost surely greater than one, from changes in government spending and taxation.
These propositions made the case for budget deficits in the aftermath of the collapse of 2008. Policies based on them were implemented and worked “remarkably well.” The success of New Keynesian policy had the ironic effect of allowing “the more inflexible members of our profession [the New Classicals from Chicago] to ignore events in a way they couldn’t in past episodes.” So neither school (sect might be the better word) was challenged to re-think first principles.
This clever history of pre- and post-crash economics leaves key questions unanswered.
First, if New Keynesian economics was “good enough,” why didn’t New Keynesian economists urge precautions against the collapse of 2007-2008? After all, they did not rule out the possibility of such a collapse a priori.
Krugman admits to a gap in “evidence collection.” But the choice of evidence is theory-driven. In my view, New Keynesian economists turned a blind eye to instabilities building up in the banking system, because their models told them that financial institutions could accurately price risk. So there was a “deep conceptual issue” involved in New Keynesian analysis: its failure to explain how banks might come to “underprice risk worldwide,” as Alan Greenspan put it.
Second, Krugman fails to explain why the Keynesian policies vindicated in 2008-2009 were so rapidly reversed and replaced by fiscal austerity. Why didn’t policymakers stick to their stodgy fixed-price models until they had done their work? Why abandon them in 2009, when Western economies were still 4-5% below their pre-crash levels?
The answer I would give is that when Keynes was briefly exhumed for six months in 2008-2009, it was for political, not intellectual, reasons. Because the New Keynesian models did not offer a sufficient basis for maintaining Keynesian policies once the economic emergency had been overcome, they were quickly abandoned.
Krugman comes close to acknowledging this: New Keynesians, he writes, “start with rational behavior and market equilibrium as a baseline, and try to get economic dysfunction by tweaking that baseline at the edges.” Such tweaks enable New Keynesian models to generate temporary real effects from nominal shocks, and thus justify quite radical intervention in times of emergency. But no tweaks can create a strong enough case to justify sustained interventionist policy.
The problem for New Keynesian macroeconomists is that they fail to acknowledge radical uncertainty in their models, leaving them without any theory of what to do in good times in order to avoid the bad times. Their focus on nominal wage and price rigidities implies that if these factors were absent, equilibrium would readily be achieved. They regard the financial sector as neutral, not as fundamental (capitalism’s “ephor,” as Joseph Schumpeter put it).
Without acknowledgement of uncertainty, saltwater economics is bound to collapse into its freshwater counterpart. New Keynesian “tweaking” will create limited political space for intervention, but not nearly enough to do a proper job. So Krugman’s argument, while provocative, is certainly not conclusive. Macroeconomics still needs to come up with a big new idea.
Robert Skidelsky, Professor Emeritus of Political Economy at Warwick University and a fellow of the British Academy in history and economics, is a member of the British House of Lords and the author of a three-volume biography of John Maynard Keynes.
Oxford Review of Economic Policy, 2018
Good enough for government work? Macroeconomics since the crisis
This paper argues that when the financial crisis came, policy-makers relied on some version of the Hicksian sticky-price IS-LM as their default model; these models were “good enough for government work”.
While many incremental changes to the DSGE model have been suggested, there has been no single “big new idea”, because the even simpler IS-LM-type models were what worked well. In particular, the policy responses based on IS-LM were appropriate.
Specifically, these models generated the insights that large budget deficits would not drive up interest rates and, while the economy remained at the zero lower bound, that very large increases in monetary base wouldn’t be inflationary, and that the multiplier on government spending was greater than 1.
The one big exception to this satisfactory understanding was in price behaviour. A large output gap was expected to lead to a large fall in inflation, but did not. If new research is necessary, it is on pricing behaviour. While there was a failure to forecast the crisis, it came down not to a lack of understanding of possible mechanisms, or a lack of data, but to a lack of attention to the right data.
It’s somewhat startling, at least for those of us who bloviate about economics for a living, to realize just how much time has passed since the 2008 financial crisis. Indeed, the crisis and aftermath are starting to take on the status of an iconic historical episode, like the stagflation of the 1970s or the Great Depression itself, rather than that of freshly remembered experience. Younger colleagues sometimes ask me what it was like during the golden age of economics blogging, mainly concerned with macroeconomic debates, which they think of as an era that ended years ago.
Yet there is an odd, interesting difference, both among economists and with a wider audience, between the intellectual legacies of those previous episodes and what seems to be the state of macroeconomics now.
Each of those previous episodes of crisis was followed both by a major rethinking of macroeconomics and, eventually, by a clear victor in some of the fundamental debates. Thus, the Great Depression brought on Keynesian economics, which became the subject of fierce dispute, and everyone knew how those disputes turned out: Keynes, or Keynes as interpreted by and filtered through Hicks and Samuelson, won the argument.
In somewhat the same way, stagflation brought on the Friedman-Phelps natural rate hypothesis (yes, both men wrote their seminal papers before the 1970s, but the bad news brought their work to the top of the agenda). And everyone knew, up to a point anyway, how the debate over that hypothesis ended up: basically everyone accepted the natural rate idea, abandoning the notion of a long-run trade-off between inflation and unemployment. True, the profession then split into freshwater and saltwater camps over the effectiveness, or lack thereof, of short-run stabilization policies, a development that I think presaged some of what has happened since 2008. But I’ll get back to that.
For now, let me instead just focus on how different the economics profession’s response to the post-2008 crisis has been from the responses to depression and stagflation. For this time there hasn’t been a big new idea, let alone one that has taken the profession by storm. Yes, there are lots of proclamations about things researchers should or must do differently, many of them represented in this issue of the Oxford Review. We need to put finance into the heart of the models! We need to incorporate heterogeneous agents! We need to incorporate more behavioural economics! And so on.
But while many of these ideas are very interesting, none of them seems to have emerged as the idea we need to grapple with. The intellectual impact of the crisis just seems far more muted than the scale of crisis might have led one to expect. Why?
Well, I’m going to offer what I suspect will be a controversial answer: namely, macroeconomics hasn’t changed that much because it was, in two senses, what my father’s generation used to call “good enough for government work”. On one side, the basic models used by macroeconomists who either practise or comment frequently on policy have actually worked quite well, indeed remarkably well. On the other, the policy response to the crisis, while severely lacking in many ways, was sufficient to avert utter disaster, which in turn allowed the more inflexible members of our profession to ignore events in a way they couldn’t in past episodes.
In what follows I start with the lessons of the financial crisis and Great Recession, which economists obviously failed to predict. I then move on to the aftermath, the era of fiscal austerity and unorthodox monetary policy, in which I’ll argue that basic macroeconomics, at least in one version, performed extremely well. I follow up with some puzzles that remain. Finally, I turn to the policy response and its implications for the economics profession.
II. The Queen’s question
When all hell broke loose in financial markets, Queen Elizabeth II famously asked why nobody saw it coming. This was a good question but maybe not as devastating as many still seem to think.
Obviously, very few economists predicted the crisis of 2008-9; those who did, with few exceptions I can think of, also predicted multiple other crises that didn’t happen. And this failure to see what was coming can’t be brushed aside as inconsequential.
There are, however, two different ways a forecasting failure of this magnitude can happen, which have very different intellectual implications. Consider an example from a different field, meteorology. In 1987 the Met Office dismissed warnings that a severe hurricane might strike Britain; shortly afterwards, the Great Storm of 1987 arrived, wreaking widespread destruction. Meteorologists could have drawn the lesson that their fundamental understanding of weather was fatally flawed, which they would presumably have done if their models had insisted that no such storm was even possible. Instead, they concluded that while the models needed refinement, the problem mainly involved data collection: the network of weather stations, buoys, etc. had been inadequate, leaving them unaware of just how bad things were looking.
How does the global financial crisis compare in this respect? To be fair, the DSGE models that occupied a lot of shelf space in journals really had no room for anything like this crisis. But macroeconomists focused on international experience, one of the hats I personally wear, were very aware that crises triggered by loss of financial confidence do happen, and can be very severe. The Asian financial crisis of 1997-9, in particular, inspired not just a realization that severe 1930s-type downturns remain possible in the modern world, but a substantial amount of modelling of how such things can happen.
So the coming of the crisis didn’t reveal a fundamental conceptual gap. Did it reveal serious gaps in data collection? My answer would be, sort of, in the following sense: crucial data weren’t so much lacking as overlooked.
This was most obvious on the financial side. The panic and disruption of financial markets that began in 2007 and peaked after the fall of Lehman came as a huge surprise, but one can hardly accuse economists of having been unaware of the possibility of bank runs. If most of us considered such runs unlikely or impossible in modern advanced economies, the problem was not conceptual but empirical: failure to take on board the extent to which institutional changes had made conventional monetary data inadequate.
This is clearly true for the United States, where data on shadow banking, on the repo market, asset-backed commercial paper, etc., were available but mostly ignored. In a less obvious way, European economists failed to pay sufficient attention to the growth of interbank lending as a source of finance. In both cases the institutional changes undermined the existing financial safety net, especially deposit insurance. But this wasn’t a deep conceptual issue: when the crisis struck, I’m sure I wasn’t the only economist whose reaction was not “How can this be happening?” but rather to yell at oneself, “Diamond-Dybvig, you idiot!”
(The Diamond-Dybvig model is an influential model of bank runs and related financial crises. The model shows how banks’ mix of illiquid assets (such as business or mortgage loans) and liquid liabilities (deposits which may be withdrawn at any time) may give rise to self-fulfilling panics among depositors.)
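To make the parenthetical concrete, here is a toy numerical illustration of the Diamond-Dybvig logic (the balance-sheet numbers and liquidation discount are invented for this sketch, not taken from the original model):

```python
# Toy illustration of the Diamond-Dybvig logic (stylized, invented numbers):
# a bank funds illiquid loans with demand deposits. If only the normal share
# of depositors withdraws early, liquid reserves cover them; if everyone
# runs, early liquidation of the loans cannot cover all deposits.

def bank_survives_withdrawals(withdraw_frac):
    deposits = 100.0
    reserves = 20.0           # liquid assets, available immediately
    loans = 80.0              # illiquid assets (mortgages, business loans)
    fire_sale_recovery = 0.5  # loans fetch 50 cents on the dollar if sold today
    demanded = withdraw_frac * deposits
    available = reserves + loans * fire_sale_recovery  # max raisable today
    return demanded <= available

print(bank_survives_withdrawals(0.2))  # normal withdrawals are covered
print(bank_survives_withdrawals(1.0))  # a full run breaks the bank
```

The self-fulfilling part follows from the second line: because a full run breaks the bank, each depositor’s best response to everyone else running is to run too, even though the bank is sound when depositors stay calm.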
In a more subtle way, economists were also under-informed about the surge in housing prices that we now know represented a huge bubble, whose bursting was at the heart of the Great Recession. In this case, rising home prices were an unmistakable story. But most economists who looked at these prices focused on broad aggregates, say, national average home prices in the United States. And these aggregates, while up substantially, were still in a range that could seemingly be rationalized by appealing to factors like low interest rates. The trouble, it turned out, was that these aggregates masked the reality, because they averaged home prices in locations with elastic housing supply (say, Houston or Atlanta) with those in which supply was inelastic (Florida or Spain); looking at the latter clearly showed increases that could not be easily rationalized.
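The averaging effect described here can be illustrated with invented index numbers (hypothetical cities and values, chosen only to show how an aggregate can mask regional extremes):

```python
# Hypothetical house-price indices (base year = 100), invented for illustration:
# supply-elastic markets rose modestly, supply-inelastic ones roughly doubled.
elastic = {"Houston": 105, "Atlanta": 110}    # easy to build -> small price rises
inelastic = {"Miami": 180, "Tampa": 200}      # hard to build -> bubble-like rises

all_markets = {**elastic, **inelastic}
national_avg = sum(all_markets.values()) / len(all_markets)

print(national_avg)             # 148.75: large, but arguably explicable by low rates
print(max(inelastic.values()))  # 200: a doubling that is much harder to rationalize
```

An economist watching only the national average sees a rise one might rationalize; only disaggregating reveals the markets where prices had clearly detached from fundamentals.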
Let me add a third form of data that were available but largely ignored: it’s fairly remarkable that more wasn’t made of the sharp rise in household debt, which should have suggested something unsustainable about the growth of the 2001-7 era. And in the aftermath of the crisis, macroeconomists, myself included (Eggertsson and Krugman, 2012), began taking private-sector leverage seriously in a way they should arguably have been doing before.
So did economists ignore warning signs they should have heeded? Yes. One way to summarize their (our) failure is that they ignored evidence that the private sector was engaged in financial overreach on multiple fronts, with financial institutions too vulnerable, housing prices in a bubble, and household debt unsustainable. But did this failure of observation indicate the need for a fundamental revision of how we do macroeconomics? That’s much less clear.
First, was the failure of prediction a consequence of failures in the economic framework that can be fixed by adopting a radically different framework? It’s true that a significant wing of both macroeconomists and financial economists were in the thrall of the efficient markets hypothesis, believing that financial overreach simply cannot happen or at any rate that it can only be discovered after the fact, because markets know what they are doing better than any observer. But many macroeconomists, especially in policy institutions, knew better than to trust markets to always get it right especially those who had studied or been involved with the Asian crisis of the 1990s. Yet they (we) also missed some or all of the signs of overreach. Why?
My answer may seem unsatisfying, but I believe it to be true: for the most part what happened was a demonstration of the old line that predictions are hard, especially about the future. It’s a complicated world out there, and one’s ability to track potential threats is limited. Almost nobody saw the Asian crisis coming, either. For that matter, how many people worried about political disruption of oil supplies before 1973? And so on. At any given time there tends to be a set of conventional indicators everyone looks at, determined less by fundamental theory than by recent events, and big, surprise crises almost by definition happen due to factors not on that list. If you like, it’s as if meteorologists with limited resources concentrated those resources in places that had helped track previous storms, leading to the occasional surprise when a storm comes from an unusual direction.
A different question is whether, now that we know whence the 2008 crisis came, it points to a need for deep changes in macroeconomic thinking. As I’ve already noted, bank runs have been fairly well understood for a long time; we just failed to note the changing definition of banks. The bursting of the housing bubble, with its effects on residential investment and wealth, was conceptually just a negative shock to aggregate demand.
The role of household leverage and forced deleveraging is a bigger break from conventional macroeconomics, even as done by saltwater economists who never bought into efficient markets and were aware of the risk of financial crises. That said, despite the impressive empirical work of Mian and Sufi (2011) and my own intellectual investment in the subject, I don’t think we can consider incorporating debt and leverage a fundamental new idea, as opposed to a refinement at the margin.
It’s true that introducing a role for household debt in spending behaviour makes the short-run equilibrium of the economy dependent on a stock variable, the level of debt. But this implicit role of stock variables in short-run outcomes isn‘t new: after all, nobody has ever questioned the notion that investment flows depend in part on the existing capital stock, and I’m not aware that many macroeconomists consider this a difficult conceptual issue.
And I’m not even fully convinced that household debt played that large a role in the crisis. Did household spending fall that much more than one would have expected from the simple wealth effects of the housing bust?
My bottom line is that the failure of nearly all macroeconomists, even of the saltwater camp, to predict the 2008 crisis was similar in type to the Met Office failure in 1987, a failure of observation rather than a fundamental failure of concept. Neither the financial crisis nor the Great Recession that followed required a rethinking of basic ideas.
III. Not believing in (confidence) fairies
Once the Great Recession had happened, the advanced world found itself in a situation not seen since the 1930s, except in Japan, with policy interest rates close to zero everywhere. This raised the practical question of how governments and central banks should and would respond, of which more later.
For economists, it raised the question of what to expect as a result of those policy responses. And the predictions they made were, in a sense, out-of-sample tests of their theoretical framework: economists weren’t trying to reproduce the historical time-series behaviour of aggregates given historical policy regimes, they were trying to predict the effects of policies that hadn’t been applied in modern times in a situation that hadn’t occurred in modern times.
In making these predictions, the deep divide in macroeconomics came into play, making a mockery of those who imagined that time had narrowed the gap between saltwater and freshwater schools. But let me put the freshwater school on one side, again pending later discussion, and talk about the performance of the macroeconomists, many of them trained at MIT or Harvard in the 1970s, who had never abandoned their belief that activist policy can be effective in dealing with short-run fluctuations. I would include in this group Ben Bernanke, Olivier Blanchard, Christina Romer, Mario Draghi, and Larry Summers, among those close to actual policy, and a variety of academics and commentators, such as Simon Wren-Lewis, Martin Wolf, and, of course, yours truly, in supporting roles.
I think it’s fair to say that everyone in this group came into the crisis with some version of Hicksian sticky-price IS-LM as their default, back-of-the-envelope macroeconomic model. Many were at least somewhat willing to work with DSGE models, maybe even considering such models superior for many purposes. But when faced with what amounted to a regime change from normal conditions to an economy where policy interest rates couldn’t fall, they took as their starting point what the Hicksian approach predicted about policy in a liquidity trap. That is, they did not rush to develop new theories, they pretty much stuck with their existing models.
These existing models made at least three strong predictions that were very much at odds with what many influential figures in the political and business worlds (backed by a few economists) were saying.
First, Hicksian macroeconomics said that very large budget deficits, which one might normally have expected to drive interest rates sharply higher, would not have that effect near the zero lower bound.
Second, the same approach predicted that even very large increases in the monetary base would not lead to high inflation, or even to corresponding increases in broader monetary aggregates.
Third, this approach predicted a positive multiplier, almost surely greater than 1, on changes in government spending and taxation.
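All three predictions follow from the same piece of analysis. In standard textbook notation (mine, not a quotation from any particular model), the Hicksian logic at the zero lower bound runs:

```latex
% LM side: real money balances equal liquidity preference, with the
% nominal rate bounded below by zero.
\frac{M}{P} = L(i, Y), \qquad i \ge 0 .
% At i = 0 money and short-term bonds are perfect substitutes, so money
% demand becomes perfectly elastic: additions to the monetary base are
% simply held, raising neither interest rates, broad money, nor prices;
% and deficits cannot push i upward while the bound binds.
% IS side: output is demand-determined,
Y = C(Y - T) + I(i) + G ,
% so with i stuck at zero a rise in G raises Y with a multiplier
% greater than 1, since induced consumption adds to the direct effect.
```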
These were not common-sense propositions. Non-economists were quite sure that the huge budget deficits the US ran in 2009-10 would bring on an attack by the ‘bond vigilantes’. Many financial commentators and political figures warned that the Fed’s expansion of its balance sheet would ‘debase the dollar’ and cause high inflation. And many political and policy figures rejected the Keynesian proposition that spending more would expand the economy and spending less would lead to contraction.
In fact, if you’re looking for a post-2008 equivalent to the kinds of debate that raged in the 1930s and again in the 1970s, a conflict between old ideas based on pre-crisis thinking and new ideas inspired by the crisis, your best candidate would be fiscal policy. The old guard clung to the traditional Keynesian notion of a government spending multiplier, somewhat limited by automatic stabilizers but still greater than 1. The new economic thinking that achieved actual real-world influence during the crisis and aftermath (as opposed, let’s be honest, to the kind of thinking found in this issue) mostly involved rejecting the Keynesian multiplier in favour of the doctrine of expansionary austerity: the argument that cutting public spending would crowd in large amounts of private spending by increasing confidence (Alesina and Ardagna, 2010). (The claim that bad things happen when public debt crosses a critical threshold also played an important real-world role, but was less a doctrine than a claimed empirical observation.)
So here, at least, there was something like a classic crisis-inspired confrontation between tired old ideas and a radical new doctrine. Sad to say, however, as an empirical matter the old ideas were proved right, at least insofar as anything in economics can be settled by experience, while the new ideas crashed and burned. Interest rates stayed low despite huge deficits. Massive expansion in the monetary base did not lead to inflation. And the experience of austerity in the euro area, coupled with the natural experiments created by some of the interregional aspects of the Obama stimulus, ended up strongly supporting a conventional, Keynesian view of fiscal policy. Even the magnitude of the multiplier now looks to be around 1.5, which was the number conventional wisdom suggested in advance of the crisis.
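As a purely illustrative back-of-the-envelope calculation (using the rough figures mentioned in this paper, not the results of any one study), a multiplier of that size implies:

```latex
\Delta Y = k\,\Delta G, \qquad k \approx 1.5 ,
% so a fiscal stimulus peaking at about 2 per cent of GDP, the rough
% US figure, would raise output by roughly
\Delta Y \approx 1.5 \times 0.02\,\text{GDP} = 0.03\,\text{GDP} ,
% i.e. about 3 per cent, and a comparable austerity package would
% subtract output on about the same scale.
```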
So the crisis and aftermath did indeed produce a confrontation between innovative new ideas and traditional views largely rooted in the 1930s. But the movie failed to follow the Hollywood script: the stodgy old ideas led to broadly accurate predictions, were indeed validated to a remarkable degree, while the new ideas proved embarrassingly wrong. Macroeconomics didn’t change radically in response to crisis because old-fashioned models, confronted with a new situation, did just fine.
IV. The case of the missing deflation
I’ve just argued that the lack of a major rethinking of macroeconomics in the aftermath of crisis was reasonable, given that conventional, off-the-shelf macroeconomics performed very well. But this optimistic assessment needs to be qualified in one important respect: while the demand side of the economy did just about what economists trained at MIT in the 1970s thought it would, the supply side didn’t.
As I said, the experience of stagflation effectively convinced the whole profession of the validity of the natural-rate hypothesis. Almost everyone agreed that there was no long-run inflation-unemployment trade-off. The great saltwater-freshwater divide was, instead, about whether there were usable short-run trade-offs.
But if the natural-rate hypothesis was correct, sustained high unemployment should have led not just to low inflation but to continually declining inflation, and eventually deflation. You can see a bit of this in some of the most severely depressed economies, notably Greece. But deflation fears generally failed to materialize.
Put slightly differently, even saltwater, activist-minded macroeconomists came into the crisis as ‘accelerationists’: they expected to see a downward-sloping relationship between unemployment and the rate of change of inflation. What we’ve seen instead is, at best, something like the 1960s version of the Phillips curve, a downward-sloping relationship between unemployment and the level of inflation, and even that relationship appears weak.
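The contrast between the two views can be written down directly (standard textbook forms, with $u^{*}$ the natural rate and $\alpha > 0$):

```latex
% Accelerationist Phillips curve: the unemployment gap drives the
% *change* in inflation, so sustained slack means ever-falling
% inflation and eventually deflation.
\pi_t = \pi_{t-1} - \alpha\,(u_t - u^{*}) + \varepsilon_t
% 1960s-style Phillips curve: the gap drives the *level* of inflation,
% so a persistently depressed economy delivers low but stable inflation.
\pi_t = \pi_0 - \alpha\,(u_t - u^{*}) + \varepsilon_t
```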
Obviously this empirical failure has not gone unnoticed. Broadly, those attempting to explain price behaviour since 2008 have gone in two directions. One side, e.g. Blanchard (2016), invokes ‘anchored’ inflation expectations: the claim that after a long period of low, stable inflation, price-setters throughout the economy became insensitive to recent inflation history, and continued to build 2 per cent or so inflation into their decisions even after a number of years of falling below that target. The other side, e.g. Daly and Hobijn (2014), harking back to Tobin (1972) and Akerlof et al. (1996), invokes downward nominal wage rigidity to argue that the natural-rate hypothesis loses validity at low inflation rates.
In a deep sense, I’d argue that these two explanations have more in common than may appear at first sight. The anchored-expectations story may preserve the outward form of an accelerationist Phillips curve, but it assumes that the process of expectations formation changes, for reasons not fully explained, at low inflation rates. The nominal-rigidity story assumes that there is a form of money illusion, opposition to outright nominal wage cuts, that is also not fully explained but becomes significant at low overall inflation rates.
Both stories also seem to suggest the need for aggressive expansionary policy when inflation is below target: otherwise there’s the risk that expectations may become unanchored on the downward side, or simply that the economy will suffer persistent, unnecessary slack because the downward rigidity of wages is binding for too many workers.
Finally, I would argue that it is important to admit that both stories are ex post explanations of macroeconomic behaviour that was not widely predicted in advance of the post-2008 era. Pre-2008, the general view even on the saltwater side was that stable inflation was a sufficient indicator of an economy operating at potential output, and that any persistent negative output gap would lead to steadily declining inflation and eventually outright deflation. This view was, in fact, a key part of the intellectual case for inflation targeting as the basis of monetary policy. If inflation will remain stable at, say, 1 per cent even in a persistently depressed economy, it’s all too easy to see how policymakers might give themselves high marks even while in reality failing at their job.
While this is a subjective impression (I haven’t done a statistical analysis of the recent literature), it does seem that surprisingly few calls for a major reconstruction of macroeconomics focus on the area in which old-fashioned macroeconomics did, in fact, perform badly post-crisis.
There have, for example, been many calls for making the financial sector and financial frictions much more integral to our models than they are, which is a reasonable thing to argue. But their absence from DSGE models wasn’t the source of any major predictive failures. Has there been any comparable chorus of demands that we rethink the inflation process, and reconsider the natural rate hypothesis? Of course there have been some papers along those lines, but none that have really resonated with the profession.
Why not? As someone who came of academic age just as the saltwater-freshwater divide was opening up, I think I can offer a still-relevant insight: understanding wage- and price-setting is hard, basically just not amenable to the tools we as economists have in our kit. We start with rational behaviour and market equilibrium as a baseline, and try to get economic dysfunction by tweaking that baseline at the edges; this approach has generated big insights in many areas, but wages and prices isn’t one of them.
Consider the paths followed by the two schools of macroeconomics.
Freshwater theory began with the assumption that wage and price-setters were rational maximizers, but with imperfect information, and that this lack of information explained the apparent real effects of nominal shocks. But this approach became obviously untenable by the early 1980s, when inflation declined only gradually despite mass unemployment. Now what?
One possible route would have been to drop the assumption of fully rational behaviour, which was basically the New Keynesian response. For the most part, however, those who had bought into Lucas-type models chose to cling to the maximizing model, which was economics as they knew how to do it, despite attempts by the data to tell them it was wrong. Let me be blunt: real business cycle theory was always a faintly (or more than faintly) absurd enterprise, a desperate attempt to protect intellectual capital in the teeth of reality.
But the New Keynesian alternative, while far better, wasn’t especially satisfactory either. Clever modellers pointed out that in the face of imperfect competition the aggregate costs of departures from perfectly rational price-setting could be much larger than the individual costs. As a result, small menu costs or a bit of bounded rationality could be consistent with widespread price and wage stickiness.
To be blunt again, however, in practice this insight served as an excuse rather than a basis for deep understanding. Sticky prices could be made respectable just by allowing modellers to assume something like one-period-ahead price-setting, in turn letting models that were otherwise grounded in rationality and equilibrium produce something not too inconsistent with real-world observation. New Keynesian modelling thus acted as a kind of escape clause rather than a foundational building block.
But is that escape clause good enough to explain the failure of deflation to emerge despite multiple years of very high unemployment? Probably not. And yet we still lack a compelling alternative explanation, indeed any kind of big idea. At some level, wage and price behaviour in a depressed economy seems to be a subject for which our intellectual tools are badly fitted.
The good news is that if one simply assumed that prices and wages are sticky, appealing to the experience of the 1930s and Japan in the 1990s (which never experienced a true deflationary spiral), one did reasonably well on other fronts.
So my claim that basic macroeconomics worked very well after the crisis needs to be qualified by what looks like a big failure in our understanding of price dynamics. But this failure didn’t do too much damage in giving rise to bad advice, and hasn’t led to big new ideas because nobody seems to have good ideas to offer.
V. The system sort of worked
In 2009 Barry Eichengreen and Kevin O’Rourke made a splash with a data comparison between the global slump to date and the early stages of the Great Depression; they showed that at the time of writing the world economy was in fact tracking quite close to the implosion that motivated Keynes’s famous essay ‘The Great Slump of 1930’ (Eichengreen and O’Rourke, 2009).
Subsequent updates, however, told a different story. Instead of continuing to plunge as it did in 1930, by the summer of 2009 the world economy first stabilized, then began to recover. Meanwhile, financial markets also began to normalize; by late 2009 many measures of financial stress were more or less back to pre-crisis levels.
So the world financial system and the world economy failed to implode. Why?
We shouldn’t give policy-makers all of the credit here. Much of what went right, or at least failed to go wrong, reflected institutional changes since the 1930s. Shadow banking and wholesale funding markets were deeply stressed, but deposit insurance still protected a good part of the banking system from runs. There never was much discretionary fiscal stimulus, but the automatic stabilizers associated with large welfare states kicked in, well, automatically: spending was sustained by government transfers, while disposable income was buffered by falling tax receipts.
That said, policy responses were clearly much better than they were in the 1930s. Central bankers and fiscal officials rushed to shore up the financial system through a combination of emergency lending and outright bailouts; international cooperation ensured that there were no sudden failures brought on by shortages of key currencies. As a result, disruption of credit markets was limited in both scope and duration. Measures of financial stress were back to pre-Lehman levels by June 2009.
Meanwhile, although fiscal stimulus was modest, peaking at about 2 per cent of GDP in the United States, governments at least refrained from drastic tightening of fiscal policy during 2008-9, allowing the automatic stabilizers, which, as I said, were far stronger than they had been in the 1930s, to work.
Overall, then, policy did a relatively adequate job of containing the crisis during its most acute phase. As Daniel Drezner (2012) argues, ‘the system worked’: well enough, anyway, to avert collapse.
So far, so good. Unfortunately, once the risk of catastrophic collapse was averted, the story of policy becomes much less happy. After practising more or less Keynesian policies in the acute phase of the crisis, governments reverted to type: in much of the advanced world, fiscal policy became Hellenized, that is, every nation was warned that it could become Greece any day now unless it turned to fiscal austerity. Given the validation of Keynesian multiplier analysis, we can confidently assert that this turn to austerity contributed to the sluggishness of the recovery in the United States and the even more disappointing, stuttering pace of recovery in Europe.
Figure 1 sums up the story by comparing real GDP per capita during two episodes: Western Europe after 1929 and the EU as a whole since 2007. In the modern episode, Europe avoided the catastrophic declines of the early 1930s, but its recovery has been so slow and uneven that at this point it is tracking below its performance in the Great Depression.
Now, even as major economies turned to fiscal austerity, they turned to unconventional monetary expansion. How much did this help? The literature is confusing enough to let one believe pretty much whatever one wants to. Clearly Mario Draghi’s ‘whatever it takes’ intervention (Draghi, 2012) had a dramatic effect on markets, heading off what might have been another acute crisis, but we never did get a clear test of how well outright monetary transactions would have worked in practice, and the evidence on the effectiveness of Fed policies is even less clear.
The purpose of this paper is not, however, to evaluate the decisions of policy-makers, but rather to ask what lessons macroeconomists should and did take from events. And the main lesson from 2010 onwards was that policy-makers don’t listen to us very much, except at moments of extreme stress.
This is clearest in the case of the turn to austerity, which was not at all grounded in conventional macroeconomic models. True, policy-makers were able to find some economists telling them what they wanted to hear, but the basic Hicksian approach that did pretty well over the whole period clearly said that depressed economies near the zero lower bound should not be engaging in fiscal contraction. Never mind, they did it anyway.
Even on monetary policy, where economists ended up running central banks to a degree I believe was unprecedented, the influence of macroeconomic models was limited at best. A basic Hicksian approach suggests that monetary policy is more or less irrelevant in a liquidity trap. Refinements (Krugman, 1998; Eggertsson and Woodford, 2003) suggested that central banks might be able to gain traction by raising their inflation targets, but that never happened.
The point, then, is that policy failures after 2010 tell us relatively little about the state of macroeconomics or the ways it needs to change, other than that it would be nice if people with actual power paid more attention. Macroeconomists aren’t, however, the only researchers with that problem; ask climate scientists how it’s going in their world.
Meanwhile, however, what happened in 2008-9, or more precisely, what didn’t happen, namely utter disaster, did have an important impact on macroeconomics. For by taking enough good advice from economists to avoid catastrophe, policy-makers in turn took off what might have been severe pressure on economists to change their own views.
VI. That 80s show
Why hasn’t macroeconomics been transformed by (relatively) recent events in the way it was by events in the 1930s or the 1970s? Maybe the key point to remember is that such transformations are rare in economics, or indeed in any field. ‘Science advances one funeral at a time,’ quipped Max Planck: researchers rarely change their views much in the light of experience or evidence. The 1930s and the 1970s, in which senior economists changed their minds, e.g. Lionel Robbins converting to Keynesianism, were therefore exceptional.
What made them exceptional? Each case was marked by developments that were both clearly inconsistent with widely held views and sustained enough that they couldn’t be written off as aberrations. Lionel Robbins published The Great Depression, a very classical/Austrian interpretation that prescribed a return to the gold standard, in 1934. Would he have become a Keynesian if the Depression had ended by the mid-1930s? The widespread acceptance of the natural-rate hypothesis came more easily, because it played into the neoclassical mindset, but still might not have happened as thoroughly if stagflation had been restricted to a few years in the early 1970s.
From an intellectual point of view, I’d argue, the Great Recession and aftermath bear much more resemblance to the 1979-82 Volcker double-dip recession and subsequent recovery in the United States than to either the 1930s or the 1970s. And here I can speak in part from personal recollection.
By the late 1970s the great division of macroeconomics into rival saltwater and freshwater schools had already happened, so the impact of the Volcker recession depended on which school you belonged to. But in both cases it changed remarkably few minds.
For saltwater macroeconomists, the recession and recovery came mainly as validation of their pre-existing beliefs. They believed that monetary policy has real effects, even if announced and anticipated; sure enough, monetary contraction was followed by a large real downturn. They believed that prices are sticky and inflation has a great deal of inertia, so that monetary tightening would produce a ‘clockwise spiral’ in unemployment and inflation: unemployment would eventually return to the NAIRU (non-accelerating inflation rate of unemployment) at a lower rate of inflation, but only after a transition period of high unemployment. And that’s exactly what we saw.
Freshwater economists had a harder time: Lucas-type models said that monetary contraction could cause a recession only if unanticipated, and as long as economic agents couldn’t distinguish between individual shocks and an aggregate fall in demand. None of this was a tenable description of 1979-82. But recovery came soon enough and fast enough that their worldview could, in effect, ride out the storm. (I was at one conference where a freshwater economist, questioned about current events, snapped ‘I’m not interested in the latest residual.’)
What I see in the response to 2008 and after is much the same dynamic. Half the macroeconomics profession feels mainly validated by events (correctly, I’d say, although as part of that faction I would say that, wouldn’t I?). The other half should be reconsidering its views, but then, they should have done that 30 years ago; and this crisis, like that one, was sufficiently well-handled by policy-makers that there was no irresistible pressure for change. (Just to be clear, I’m not saying that it was well-handled in an objective sense: in my view we suffered huge, unnecessary losses of output and employment because of the premature turn to austerity. But the world avoided descending into a full 1930s-style depression, which in effect left doctrinaire economists free to continue believing what they wanted to believe.)
If all this sounds highly cynical, well, I guess it is. There’s a lot of very good research being done in macroeconomics now, much of it taking advantage of the wealth of new data provided by bad events. Our understanding of both fiscal policy and price dynamics is, I believe, greatly improved. And funerals will continue to feed intellectual progress: younger macroeconomists seem to me to be much more flexible and willing to listen to the data than their counterparts were, say, 20 years ago.
But the quick transformation of macroeconomics many hoped for almost surely isn’t about to happen, because events haven’t forced that kind of transformation. Many economists, myself included, are actually feeling pretty good about our basic understanding of macro. Many others, absent real-world catastrophe, feel free to take the blue pill and keep believing what they want to believe.
The British Labour Party is currently leading the Tories in the latest YouGov opinion polls (February 19-20): Tories 40 per cent (and declining), Labour 42 per cent (and rising). They should be further in front, given the disarray of the Conservatives as they try to negotiate within their own party something remotely acceptable about Brexit.
When there is this degree of political capital available, in this case for the Labour Party, a party should use it to redefine policy agendas that have gone awry, and to build a narrative that will advance its cause for decades to come.
British Labour has a chance to break out of its recent Blairite neoliberal past and present a truly progressive manifesto to the British people that will force the Tories to move closer to the centre and squeeze the extreme right-wing elements.
In part, under Jeremy Corbyn and John McDonnell, Labour is making progressive noises on a number of fronts. But ultimately, where it really matters, the macroeconomic narrative, it remains firmly neoliberal, and this will blight its chances of pursuing a truly progressive agenda.
One of the glaring mistakes the Labour Party has made is to accept advice from neoliberal economists (so-called New Keynesians) who have instilled in them a need for fiscal rules. This is an analysis of the sort of advice that Jeremy Corbyn and John McDonnell are getting and why they should ignore it.
I have written about fiscal rules in the past. There is only one fiscal rule that a progressive government should adhere to, and I outlined it in the blog post The full employment fiscal deficit condition (April 13, 2011).
See also the suite of blog posts, Fiscal sustainability 101 Part 1, Part 2, and Part 3, to learn how Modern Monetary Theory (MMT) constructs the concept of fiscal sustainability.
The discussion in those blog posts rejects fiscal rules that are defined exclusively in terms of financial ratios, the type that the neoliberals use to reduce the scope of government and bias policy towards austerity and elevated levels of labour underutilisation.
I wrote about the madness of the British Labour Party signing up to neoliberal ‘fiscal rules’ in the blog post British Labour Party is mad to sign up to the ‘Charter of Budget Responsibility’ (September 28, 2015).
One discussion paper that seems to have influenced the Shadow Chancellor in entering these types of neoliberal agreements was published on May 20, 2014 as Discussion Paper No. 429 from the National Institute of Economic and Social Research.
The NIESR paper Issues in the Design of Fiscal Policy Rules was written by Jonathan Portes (who is the Director of the NIESR) and an Oxford academic, Simon Wren-Lewis.
I have noticed that SWL tends to get involved in vituperative exchanges with Twitter participants who challenge him on matters relating to Modern Monetary Theory (MMT). He seems to think it is smart to label people who refuse to accept his New Keynesian blather on Twitter as being plain dumb.
SWL was a member of Labour’s economic advisory committee that John McDonnell formed after becoming the Shadow Chancellor. He later fell out with Corbyn, it seems, and urged the Party to dump Corbyn as leader and install Owen Smith instead.
On July 26, 2016, he wrote that “What seems totally clear to me is that given recent events a Corbyn-led party cannot win in 2020, or even come close.”
Well, that prediction might still prove relevant in 2020, but the last national election outcome, where Corbyn went close (even with many of the Blairites in his own party white-anting him), suggested that SWL hasn’t much grip on reality.
Anyway, we digress.
In their discussion of issues that arise in the design of fiscal rules, Portes and SWL fail to mention the concept of full employment in the NIESR article. Their discussion is pitched entirely in terms of ‘financial ratios’.
It is hard to see that the general public will be enamoured with a government that delivers a target fiscal deficit (for example) but at the expense of elevated levels of unemployment and poverty. Fiscal policy has to relate to things that matter.
The belief (assertion) that running fiscal surpluses or getting public debt below some threshold will automatically deliver prosperity (jobs for all, growing real wages, first-class public services, etc.) is one of the greatest con jobs that mainstream economists have foisted upon us. Fiscal policy has to relate to targets that matter, like jobs, wages growth, and the like.
Depending on what the external and private domestic sectors are doing (with respect to spending and saving), a fiscal deficit of 10 per cent of GDP might be appropriate, just as might a deficit of 2 per cent, or even a surplus of 4 per cent. Context matters, not some particular ratio.
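Why context matters can be seen from the sectoral balances identity of the national accounts (standard notation; this is my illustration rather than a passage from the posts cited above):

```latex
% Government balance = private domestic balance + external balance:
(G - T) = (S - I) + (M - X)
% If the private domestic sector wishes to net save (S > I) while the
% nation runs an external deficit (M > X), the government deficit must
% equal the sum of the two for output and employment to be sustained.
% The 'right' deficit therefore depends on those two balances, not on
% any fixed financial ratio.
```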
As an aside, the NIESR was a foremost Keynesian research group after its founding in 1938, just as the academy was embracing the rejection of neoclassical thinking (which has since morphed into modern-day neoliberalism) and recognising the positive role that government fiscal policy could play.
Its capacity to engage in quantitative research to support policy was valuable.
In more recent times, it has declined and become part of the neoliberal misinformation machine. Its Keynesian roots have given way to New Keynesianism, which eliminates all the meaningful insights of the original.
I have been asked by a lot of people to comment on the NIESR paper (cited above) and I have been reluctant to do so, given how flawed it is.
But given that it has been so influential in framing the way the British Labour Party hierarchy thinks about macroeconomics, I have decided to consider it. It is hard to discuss the paper in non-technical terms accessible to my broad readership, given the way it is framed. So at times this essay will disappear into jargon, though not much. I am trying to convey the message as fairly and simply as I can, so as to demonstrate the stupidity of the analysis without being unfair to (misrepresenting) the authors.
Generally, the NIESR paper falls into the realm of what I call fake knowledge.
The simple response is that it spends several pages outlining the theory of optimal debt and fiscal policy, then admits that such a thesis is “undeveloped”.
Not to be discouraged by the inability of the ‘optimal theory’ to say anything definitive about the real world, the authors then proceed to draw conclusions from the theory anyway, conclusions which just amount to standard assertions.
Wren-Lewis should just stick to Twitter. He seems to like it, and it would save us the time spent reading the other stuff. In effect, the substantive conclusions of the paper have no basis in theory and could have been tweets.
Let me explain why.
The motivation of the authors is to discuss what might be a “simple rule to guide fiscal policymakers”.
They point out that central bankers have used the “Taylor rule for monetary policy”, which is a simplification in itself. But I won’t get bogged down in discussing whether decision-making in central banks has or had become so mechanistic. It has not, but that is another story.
Mainstream monetary economists certainly teach students that central banks operate in the mechanistic way described by the Taylor rule, which is just a formula the textbooks claim is used to set interest rates.
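For reference, the rule in question, in its original Taylor (1993) form, is the textbook statement below, not a formula from the NIESR paper itself:

```latex
i_t = \pi_t + r^{*} + 0.5\,(\pi_t - \pi^{*}) + 0.5\,(y_t - \bar{y}_t)
% i_t: nominal policy rate; \pi_t: current inflation;
% r^{*}: equilibrium real rate (Taylor used 2 per cent);
% \pi^{*}: inflation target (2 per cent);
% y_t - \bar{y}_t: the output gap.
```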
But then these characters also teach students that central banks can control the money supply, that the money multiplier determines how the monetary base scales up into the broad money supply, that expanding bank reserves will allow banks to make loans more easily, that expanding bank reserves is inflationary, and all the rest of the litany of lies.
None of the central propositions that are taught to macroeconomics students in this regard are valid. They are fake knowledge, a stylised world of how these neoliberal economists want to imagine the real world works, because they can then derive their desired policy regimes from it.
In the real world central banks and commercial banks do not function in this way.
Some of these monetary myths spill over into the analysis presented by Portes and SWL, which I will indicate presently.
Their motivation is to “search for such a rule” that might apply to fiscal policy, although they conclude at the outset that “one single simple rule to guide fiscal policy may never be found”.
They surmise that this is because:
1. “basic theory suggests that fiscal policy actions should be very different when monetary policy is constrained in a fundamental way”. They cite the case of the so-called ‘zero lower bound’ as constraining fiscal policy options. In fact, no such constraint exists. Whether interest rates are zero or something else, the currency-issuing government has the same capacities and options.
There is no evidence that monetary policy suddenly becomes effective as a counter-stabilising tool at some positive target policy rate and should be preferred over fiscal policy.
The authors also suggest that the exchange rate regime will constrain fiscal policy. This is correct, which is why Modern Monetary Theory (MMT) theorists argue against pegged arrangements: they reduce the sovereignty of the government.
If a nation pegs its exchange rate then it strictly loses its sovereignty, because the central bank has to conduct monetary policy with a view to stabilising the external value of the currency, which then limits the flexibility of domestic policy.
That is why the Bretton Woods fixed exchange rate system collapsed in August 1971. It biased nations running external deficits towards elevated levels of unemployment and crippling interest rates, which proved to be politically unsustainable.
2. Portes and SWL then say: “The second reason why a fiscal equivalent of a Taylor rule may be elusive also reflects national differences, but in this case differences in political structure.”
Here we get the bizarre notion that theory describes an “optimal policy” but that “there may be a trade-off between rules that mimic optimal policy, and rules that are effective in countering deficit bias”, because politicians cannot be trusted to exhibit the ‘correct’ degree of austerity and instead become drunk on net spending (their concept of a “deficit bias”).
These ‘deficit drunk’ governments are labelled “non benevolent” because they allegedly trash the future of our children. Heard that one before? Sure you have, along with ‘governments running out of money’, ‘tipping points’, etc. To solve the problem of these ‘deficit drunk’ governments, Portes and SWL think technocratic constraints are needed to prevent governments responding to the desires of the population as represented by their mandate.
Of course, imposing technocratic constraints against a democratically elected government has become a major characteristic of the neoliberal era. Portes and SWL fit right in with that trend.
All this is part of the ‘depoliticisation’ trend that has seen elected governments shed political responsibility for key decisions that have damaged the well-being of the vast majority of people in their nations by appealing to ‘external’ authorities.
The ‘we had to do it, we had no choice’ ruse, the ‘Denis Healey, we had to borrow from the IMF because we were running out of money’ ruse, the ‘we need to outsource fiscal policy to economic experts because politicians just want votes’ ruse.
These external authorities might be so-called independent central banks (even though they are not independent, as discussed later), the IMF, and fiscal boards (such as the Office of Budget Responsibility in the UK).
We examine that trend in our new book Reclaiming the State: A Progressive Vision of Sovereignty for a Post-Neoliberal World (Pluto Books, September 2017).
Further, the term ‘deficit bias’ is loaded. Portes and SWL would claim that continuous fiscal deficits illustrate this bias. However, in most nations, such continuity is necessary to support the saving desires of the non-government sector, while sustaining full employment.
There would be no ‘bias’ there. Just responsible fiscal practice. I will discuss that in more detail presently. Refer back to the blog post The full employment fiscal deficit condition.
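The ‘full employment fiscal deficit condition’ can be made concrete with the sectoral balances identity, (S − I) = (G − T) + (X − M): the fiscal balance must accommodate what the other two sectors wish to do. Here is a minimal sketch; the percentages are invented for illustration, not data:

```python
# Sectoral balances identity: (S - I) = (G - T) + (X - M)
# => required fiscal deficit: (G - T) = (S - I) - (X - M)
# All figures are hypothetical shares of full-employment GDP.

def full_employment_deficit(private_net_saving, external_balance):
    """Fiscal deficit (share of GDP) needed to sustain full employment,
    given the non-government sector's desired balances."""
    return private_net_saving - external_balance

# Private domestic sector wants to net save 4% of GDP;
# the nation runs an external deficit of 2% of GDP.
deficit = full_employment_deficit(private_net_saving=0.04, external_balance=-0.02)
print(f"Required fiscal deficit: {deficit:.0%} of GDP")  # 6% of GDP
```

If the government refuses to run that deficit, the private sector cannot realise its desired net saving at full employment income; output and employment adjust downwards instead.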
Further, the so-called New Keynesian ‘optimum‘ is unlikely to have any relevance for the well-being of the population, and, in particular, the most disadvantaged citizens in society.
The standard New Keynesian ‘model’ didn’t even have unemployment in it.
If you understand the dominant New Keynesian framework, which has become the basis for a new consensus emerging among orthodox macroeconomists like Portes and SWL, then you will know the following.
1. The basic New Keynesian approach has three equations which in themselves are problematic. They claim authority based on the microfoundations that are alleged to represent rigorous optimising behaviour by all agents (people, firms, etc) captured by the model structure.
2. Because the ‘optimal’ theory, specified in the basic structure (Calvo pricing, rational expectations, intertemporal utility maximising behaviour by consumers, who face a trade-off between consumption and leisure, etc) cannot say anything much about real world data, the empirical models are modified (adjustment lags are added, etc). As a result ad hocery enters the applied domain where substantive results that are meant to apply to policy are generated.
3. But it is virtually impossible to build these ‘modifications’ into their theoretical models from the first principles (intertemporal optimisation, etc) that they start with.
4. Which means that, like most of the mainstream body of theory, the claim to micro-founded ‘rigour’ is unsustainable once they respond to real world anomalies (of their theory) with ad hoc (non-rigorous) tack-ons.
5. The results they end up producing in empirical papers are not ‘derivable’ from first-order, microfounded principles at all. Their claim to theoretical rigour fails. At the end of the process there is no rigour at all. It becomes a false authority that they hide behind to justify their assertions.
The Portes-SWL paper is no exception.
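For readers unfamiliar with the structure being criticised, the three-equation skeleton (an IS curve, a New Keynesian Phillips curve and a Taylor-type interest rate rule) can be caricatured in a few lines. The actual models are forward-looking with rational expectations; this backward-looking sketch with invented parameter values is meant only to display the mechanical structure, not to endorse it:

```python
# A deliberately simplified, backward-looking caricature of the
# three-equation New Keynesian skeleton. Parameter values are
# illustrative assumptions, not estimates.

def simulate(periods=20, pi_target=2.0, r_natural=2.0,
             sigma=0.5, kappa=0.3, phi_pi=0.5, phi_x=0.5):
    pi, x, i = 2.0, 0.0, 4.0              # start at the steady state
    path = []
    for t in range(periods):
        shock = -2.0 if t == 1 else 0.0   # one-off negative demand shock
        x = -sigma * (i - pi - r_natural) + shock                   # IS curve
        pi = pi + kappa * x                                         # Phillips curve
        i = r_natural + pi + phi_pi * (pi - pi_target) + phi_x * x  # Taylor rule
        path.append((t, round(x, 2), round(pi, 2), round(i, 2)))
    return path

for t, x, pi, i in simulate()[:4]:
    print(f"t={t}: output gap={x}, inflation={pi}, policy rate={i}")
```

Note what is missing from the structure: no unemployment, no financial sector, no fiscal block — exactly the omissions discussed in this post.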
Further, the ’Great Moderation’ was considered a move closer to the New Keynesian utopia (‘the business cycle’ was declared ‘dead’, for example).
Yet all we witnessed during this period, from the 1990s up to the onset of the GFC, was the redistribution of national income to capital as real wages failed to keep pace with productivity growth, increased inequality and private debt, elevated levels of unemployment, the emergence of underemployment, and the dynamics being put in place that manifested as the GFC.
And, the burden of the GFC was not borne by the banksters or the top-end-of-town. Their criminality largely escaped unscathed while millions of workers lost their jobs and many became impoverished.
The belief that one can derive ‘optimal’ rules from a New Keynesian model that have any relevance to people or the world we live in is another characteristic of the neoliberal era. My profession basically went from bad to worse over this period.
However, none of that reality discourages Portes and SWL, who begin their analytical section by outlining this so-called New Keynesian “Optimal debt policy”.
Two propositions enter immediately:
1. taxes impose costs in terms of social welfare because they “are distortionary”. This means that they prevent people from making ‘optimal’ decisions.
The microeconomic theory these authors rely on claims that tax distortions include workers not working hard enough, because the imposition of taxes creates incentives for them to take more leisure.
This is a body of theory that also says unemployment is a choice workers make when the real wage (after tax) is so high that they prefer to take leisure instead of working. No problem: the workers are ‘optimising real income’ by being unemployed, since leisure is part of this ‘real’ income measure in these models.
If you thought that sounded like nonsense then you are right. Quits do not behave countercyclically, which would be required if unemployment was a choice made by workers.
Further, the research evidence suggests that the imposition of taxes does not alter the desire of workers to offer hours of work in any significant way.
For a start, most workers do not have continuous (hours) choices available to them. They work 40 hours (or whatever) or not at all.
But this is a digression.
Further what about carbon taxes and other similar taxes, which, even in the mainstream theory, correct market failure and enhance efficiency?
2. Then we read that the “government would like to minimise these costs [from the taxes] but they need taxes to pay for government spending and any interest on debt”.
Which is an absolute lie in terms of the intrinsic nature of a monetary system where the national government issues its own currency.
It is a convenient lie because they rely on it to derive the results in their paper. They also need this ‘optimality’ smokescreen to persuade politicians to take the results seriously as if their ‘assumptions’ are, in reality, natural constraints on governments.
The lie also implicitly biases the reader toward accepting that ‘lower’ taxes are better than higher taxes, a proposition that depends on other assumptions they choose not to disclose. They are smart enough to know that disclosure would push the discussion into the ideological domain, and these characters want us to pretend that economics is ‘value free’ and that everything they are writing is derivable from ‘optimal’ theory.
One of the first lectures an economics student is forced to endure contains assertions that there is a divide between what mainstream economists call ‘positive’ economics (value free) and ‘normative’ statements (value laden).
Mainstream theory holds itself out as being ‘positive’ and then blames dysfunctional outcomes on the ‘normative’ interventions of policy makers, who choose to depart from the ‘optimal’ world of positive economics.
If you thought this was an elaborate joke played on the students then you would be correct.
And in terms of the above, the correct statement would be that governments impose voluntary constraints on themselves, engineered by conservative ideologues. They have created accounting processes that ‘account’ for tax receipts into, say, Account A, from which they then ‘account’ for their spending. A sort of administrative fiction to give the impression that the tax receipts provide the wherewithal for government spending.
But anyone knows that these institutional practices can be altered by the government whenever they choose (unless they are embedded in constitutions and then it takes more time).
The reality is that, unlike the assertions of Portes and SWL (which drive their overall results):
Governments do not need taxes to pay for government spending. That is an ideological constraint designed to limit spending. Intrinsically, a sovereign government is never revenue constrained because it is the monopoly issuer of the currency.
Modern Monetary Theory (MMT) tells us that taxation serves to create real resource space (idle non-government productive resources), which the government can then bring into productive use to fulfil its elected socio-economic mandate. Taxation thereby reduces the inflation risk of such spending but does not ‘fund’ it.
The fact is that a currency-issuing government can purchase anything that is for sale in its own currency including all idle labour.
MMT also recognises other roles for taxation such as taxes on bads designed to divert consumers or producers away from these goods and services. But that is another story.
Further, a government never needs to issue debt to ‘fund’ deficits.
That is another institutional practice that carries over from the fixed-exchange rate, gold standard days. It is no longer necessary and an understanding of MMT leads one to realise it is largely an exercise in the provision of corporate welfare that should be abandoned.
The point is that if you build ‘economic models’ based on these voluntary constraints, as if they are intrinsic constraints, then the results turn out radically different to the outcomes of an analytical exercise where you assume, correctly, that the government does not need taxes to ‘fund’ spending or to issue debt to fund deficits. Then the mainstream results largely collapse.
I suspect the authors in question implicitly know this. If they don’t then you can draw your own conclusions.
The paper I am using to represent the New Keynesian approach has, by all indications, been somewhat influential in the formation of the macroeconomic approach currently being espoused by the British Labour Party. In that sense, the critique aims to dissuade the Labour politicians and their apparatchiks from building policy options on fake economic knowledge, and, instead, to encourage them to embrace the principles of Modern Monetary Theory (MMT), which provides an accurate depiction of how the monetary system actually operates, of the policy options available to a currency-issuing government such as Britain’s, and of the likely consequences of deploying those options.
The one major lesson that comes out is that the New Keynesian approach is an elaborate fraud. It plays around with so-called ‘optimising’ models asserting human behaviour that no other social scientist believes remotely captures the essence of human decision-making, and then derives conclusions from these models that are claimed to apply to the world we live in. Prior to the GFC, these ‘models’ didn’t even consider the financial sector.
The fact is that nothing of value in terms of specifying what a government should do can be gleaned from a New Keynesian approach. It is barren.
Above, we noted that one discussion paper that seems to have influenced the Shadow British Chancellor was published on May 20, 2014 as Discussion Paper No. 429 from the National Institute of Economic and Social Research.
The NIESR paper Issues in the Design of Fiscal Policy Rules was written by Jonathan Portes (who at the time of writing was the Director of the NIESR before he was ‘let go’) and an Oxford academic, Simon Wren-Lewis.
Here I begin by examining the way that the authors try to use the New Keynesian theory as an authority for specific policy conclusions, which they essentially admit (not in those words) cannot, in fact, be derived from the ‘optimal’ theory.
To specify what they call the ‘optimal’ state, Portes and SWL write out some simple mathematical expressions and note that “the government must satisfy its budget constraint (there is no default), and we ignore financing through printing money”.
It is interesting that in defending the New Keynesian position against say Modern Monetary Theory (MMT), proponents make a claim for superiority based on their mathematical reasoning and the apparent absence of such optimising mathematics in MMT.
MMT uses formal language (mathematics) sparingly, and only when useful. Mostly, its propositions can be established without resort to mathematics, which avoids erecting a wall of notation that most people cannot break through.
Further, there is nothing sophisticated about the mathematics that New Keynesians use. It is just simple calculus really, the sort that I learned as an undergraduate studying mathematics. Hardcore mathematicians laugh at the way economists deploy these tools and parade them as if they are generating something deep and meaningful.
We move on.
Note that while the imposition of taxes is deemed a “cost” by Portes and SWL (discussed earlier), their ‘model’ doesn’t allow the interest payments on the debt to be a ‘benefit’. They are silent on that. Conveniently so.
Anyway, the equation they write out which captures the constrained optimisation process is claimed to be an ex ante financial constraint, akin to the financial constraints facing a household that must earn income, borrow, reduce savings or sell assets in order to spend.
As we know, the ‘household budget analogy’ applied to a currency-issuing government is wrong at the most elemental level.
Nothing relating to the experience of a household (the currency user) is relevant to assessing the capacities of or the choices available to such a government (currency issuer).
Further, why do they ignore “financing through printing money”? Not that “printing money” is a term that could be associated with the real world practice of government spending anyway.
They ignore it because it would not allow them to generate the results they desire.
The reality is that these so-called ‘budget constraints’ do not depict real ex ante financial constraints. They are, at best, ex post accounting statements, meaning they have to add up. There is nothing much more about them than that.
They may also reflect current institutional practice which is a political rather than an intrinsic financial artifact.
But, if the authors were to be stock-flow (accounting) consistent (which most mainstream models are not, meaning they deliberately leave things out and flows do not accumulate properly into the corresponding stocks), then they would have to include the change in bank reserves arising from central bank monetary operations associated with fiscal policy (for example, crediting bank accounts on behalf of the government).
But those operations are absent in their approach, which means their analysis is incomplete in an accounting sense. Conveniently so.
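The ex post accounting point can be shown in a few lines. After the fact, whatever government spending is not matched by tax receipts or bond sales necessarily shows up as a change in bank reserves; the identity has to add up, but it dictates nothing ex ante. The figures below are invented:

```python
# Ex post accounting: G + iB = T + dB + dM, so the change in bank
# reserves (dM) is the residual that makes the books balance.
# All figures are hypothetical currency units.

def reserve_change(spending, interest_paid, taxes, bonds_issued):
    """Net addition to bank reserves implied by the period's fiscal flows."""
    return (spending + interest_paid) - taxes - bonds_issued

dM = reserve_change(spending=500, interest_paid=20, taxes=450, bonds_issued=40)
print(dM)  # 30: spending not 'covered' by taxes or bonds sits as reserves
```

Nothing in this arithmetic says the taxes or the bonds had to come first; it simply records how the flows net out.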
Of course, one of the glaring omissions of the New Keynesian models that people learned about after the GFC was that they didn’t even have a financial sector embedded in their basic structure. But that is also another story again.
The upshot of Portes and SWL’s mathematical gymnastics, simple though they are, is that the ‘optimal’ fiscal policy requires “tax smoothing”, so that:
if for a period government spending has to be unusually high (classically a war, but also perhaps because of a recession or natural disaster), it would be wrong to try and match this higher spending with higher tax rates. Instead taxes should only be raised by a small amount, with debt increasing instead, but taxes should stay high after government spending has come back down, to at least pay the interest on the extra debt and perhaps also to bring debt back down again.
So, they are saying:
1. Taxes are necessary to fund government spending but temporary deficits (to cope with wars or deep recessions) should be funded by debt.
2. When economic activity improves, there should be a primary fiscal surplus (“taxes at least pay the interest on the extra debt”) and spending should be cut to allow that outcome.
3. Public debt should be a target policy variable (the lower the better) but in the short-term is a “shock absorber to avoid sharp movements in taxes or government spending”.
4. This is a ‘deficit dove’ construction. We will have austerity but it will be delayed.
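The arithmetic of the ‘tax smoothing’ prescription described in the points above can be traced numerically. The figures, the 2 per cent interest rate and the small permanent tax rise (tax_bump) are all assumptions chosen for illustration; the sketch traces their logic, it does not endorse it:

```python
# Tax smoothing: a temporary spending spike is mostly debt-financed;
# taxes rise only a little but stay up to service the extra debt.
# All figures are hypothetical currency units.

def smoothed_path(base_spend=100, spike=50, spike_years=(0, 1),
                  tax_bump=2, rate=0.02, years=10):
    debt, path = 0.0, []
    for t in range(years):
        spend = base_spend + (spike if t in spike_years else 0)
        interest = rate * debt
        taxes = base_spend + (tax_bump if t >= spike_years[0] else 0)
        debt += spend + interest - taxes   # each deficit adds to the debt stock
        path.append((t, taxes, round(debt, 1)))
    return path

for t, taxes, debt in smoothed_path()[:4]:
    print(f"year {t}: taxes={taxes}, debt={debt}")
```

Taxes settle at 102 indefinitely while the debt drifts slowly down — the delayed austerity of the ‘deficit dove’ position described above.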
The question one needs to ask is: under what conditions would a primary surplus be a responsible state for a government to achieve? Portes and SWL want primary surpluses to be a target goal for government. But such a target is unlikely to be a desirable state.
Remember, the primary fiscal balance is the difference between government spending and taxation flows, excluding interest payments on outstanding public debt.
One could imagine a situation where a government would sensibly run a primary surplus or even an overall fiscal surplus (inclusive of interest payments on public debt) if it was accompanied by a robust external surplus, which was pumping net spending in the economy and financing the desire of the private domestic sector to save overall.
Then a fiscal surplus would be required to prevent inflationary pressures from emerging. But it would also be consistent with full employment, the provision of first-class public services, and the fulfilment of the overall saving desires of the private domestic sector. Think Norway.
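That Norway-style configuration can be checked with simple sectoral arithmetic. The numbers below are invented percentages of GDP, chosen only to show that an overall fiscal surplus, a primary surplus and private net saving can coexist when the external surplus is large enough:

```python
# Primary balance = fiscal balance excluding interest payments.
# Sectoral balances: (S - I) = (G - T) + (X - M).
# Hypothetical figures, % of GDP.

def balances(taxes, spending, interest_paid, external_balance):
    overall = taxes - spending - interest_paid   # overall fiscal balance
    primary = taxes - spending                   # excludes interest on debt
    private = external_balance - overall         # private domestic balance
    return primary, overall, private

primary, overall, private = balances(taxes=40, spending=35,
                                     interest_paid=1, external_balance=8)
print(primary, overall, private)  # 5 4 4: all three in surplus at once
```

Shrink the external balance to zero in this sketch and the private sector is pushed into deficit by the same fiscal surplus — which is the situation most nations, unlike Norway, actually face.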
That is not remotely descriptive of where the UK (or nearly any other nation) is at or has been in recent decades.
The absurdity of the reasoning that arises from the sort of economic framework that Portes and SWL deploy is illustrated when they start tinkering with the parameters of the ‘model’ to see what transpires.
The exercise is trivial. The model has some equations with parameters that link the variables that describe the equation structures. The parameters are conceptual but to get certain results one has to make assumptions about their values (at the most basic level whether they are positive or negative or above or below unity, etc).
Then one muses about what specific assumptions imply for the results.
One such tinkering by Portes and SWL generates an interpretation that taxes:
gradually fall to zero. How can this happen, given that the government has spending to finance? The answer is that debt gradually declines to zero, and then the government starts to build up assets. Eventually it has enough assets that it can finance all its spending from the interest on those assets, and so taxes can be completely eliminated.
Which then raises the question of how the government gets access to any of the real resources that are available for productive use in the society.
If taxes are zero, why would people offer their labour (and other resources) for public use? And how will the government create non-inflationary real resource space in which to spend (command real resources from the non-government sector)?
But discussing those issues will take us away from the main focus.
In essence, none of their mathematical ‘cases’ (the scenarios they defined with differing parameter values) can be established in reality. This is a common problem of this sort of economic reasoning.
What happens next? They ditch the ‘optimal’ results derived from the calculus and start making stuff up, asserting their ‘priors’.
So as not to spoil their story, the authors just assert that “there are two reasons for believing that policy should aim to steadily reduce debt in normal times” even if the ‘optimal’ condition indicates the opposite.
First, they introduce the standard argument that “shocks may be asymmetric” with “large negative shock”(s) not being offset in the other direction.
This is a sort of ‘war chest’ argument. That a government will not be able to respond fully in a major downturn if it starts with high levels of public debt.
Why? It will not be attractive to bond investors, it will run out of money, etc.
Tell that to Japan! Fake knowledge.
Second, they write that:
large negative shocks like a financial crisis might mean that we enter a liquidity trap, so that fiscal expansion is required to assist monetary policy, while large positive shocks could be dealt with by monetary rather than fiscal contraction. There is no equivalent upper bound for interest rates, so prudent policy would reduce debt in normal times to make room for the liquidity trap possibility.
This is the standard mainstream claim that monetary policy is the more effective counter-stabilising (and preferred) policy, except in a deep recession when interest rates are cut to zero and have no further room to fall.
So to counter that ineffectiveness when rates are zero, fiscal policy has to be used. But, in general, monetary policy should be prioritised.
But then the same assertion follows. So that fiscal policy can be on standby for those times when interest rates are zero, the government should have low levels of outstanding debt.
Why? The same argument. It will not be able to fund a new fiscal stimulus if it hasn’t eliminated the impacts from a previous stimulus exercise.
That is a plain lie.
The authors just assert that the capacity of a government to net spend is inversely related to the current stock of outstanding debt.
Why? No reason can be derived from their ‘optimal’ models to justify that assertion.
And, again, tell that to Japan!
The post-GFC period has demonstrated that ’monetary policy’ is not a very effective counter-stabilising tool. Governments that used fiscal policy aggressively in the GFC resumed growth much more quickly than those that didn’t. The stimulatory effects of monetary policy are, at best, ambiguous.
Further, the truth is that the capacity of the government to spend is in no way constrained by its past fiscal stance whether it be surplus, balance or deficit.
A surplus today does not mean that the government is better placed to run a deficit tomorrow. It can always run a deficit if the non-government spending and saving decisions push it that way.
The same goes for outstanding debt, which under current institutional arrangements, will be influenced by the shifts in the flows that make up fiscal policy.
But the level of debt doesn’t constrain or alter the government’s ability to net spend.
The authors might claim that bond markets will rebel and stop funding the deficits. Even if the recipients of this corporate welfare decided to cut off their noses to spite their faces and stopped buying the debt that would not alter the government’s capacity to spend.
First, if it persisted in the unnecessary practice of issuing debt, it could instruct the central bank to set the yield and buy all the debt that the private bond markets didn’t want at that (low to zero) yield. Including all of it!
In other words, the government can always play the private bond markets out of the game if it chooses. Even in the Eurozone, where the Member States are not sovereign, the ECB has demonstrated it can set yields at whatever level it chooses. It can drive yields on long-term public debt into the negative! Who would have thought? No New Keynesian that is for sure. They think deficits ‘crowd out’ private investment spending via higher rates (see below).
Second, the government can also alter a rule or two or change legislation that embodies these voluntary accounting constraints that I noted earlier. That is the right of the legislature and beyond the power of bond markets!
In another one of their musings about parameter values, Portes and SWL tell us that:
there is an additional reason why it might be desirable to eliminate government debt completely, and that is because it crowds out productive capital. In simple overlapping generation models, agents save to fund their retirement, and this determines the size of the capital stock. If agents have an alternative means of saving, which is to invest in government debt, then this debt displaces productive capital.
Again, the authors are just rehearsing the standard and deeply flawed mainstream macroeconomic theory, which has the loanable funds model of financial markets embedded.
According to this specious approach, savings are finite and investment competes for the scarce resources. The ‘interest rate’ on loans then brings the two into balance.
The logic then says if there is a shift in the investment demand outwards (capturing in this instance the entry of the government bond to compete with corporate bonds), then the interest rate has to rise to ration off the higher demand for loans, given the finite supply (savings). Wrong at the most elemental level.
First, savings are not finite. They rise with income and if net public spending increases (rising deficit) then national income will rise and so will saving.
Second, and more importantly, real world banks do not remotely operate in a loanable funds way. They will generally extend loans to creditworthy borrowers. This lending is not reserve constrained. Banks do not wait around for depositors to drop their cash off, which they can then on-lend.
Loans create deposits (liquidity). Not the other way around, as is assumed by the ‘crowding out’ argument which these authors introduce to their analysis.
So even if the government is selling debt to the non-government sector, the banks still have the capacity (under our current system) to increase private investment.
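The ‘loans create deposits’ point is pure double-entry bookkeeping, which a toy balance sheet makes plain. This is purely illustrative; real banks face capital requirements and other constraints not modelled here:

```python
# A loan (bank asset) and the borrower's deposit (bank liability) are
# created simultaneously, by keystroke; no prior deposit is drawn down.

class ToyBank:
    def __init__(self):
        self.assets = {"loans": 0, "reserves": 100}
        self.liabilities = {"deposits": 100}

    def extend_loan(self, amount):
        # Both sides of the balance sheet expand at once.
        self.assets["loans"] += amount
        self.liabilities["deposits"] += amount

bank = ToyBank()
bank.extend_loan(50)
print(bank.assets)        # {'loans': 50, 'reserves': 100}
print(bank.liabilities)   # {'deposits': 150}
```

Note that reserves are untouched by the loan: nothing finite was ‘used up’, which is exactly why the loanable funds picture fails.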
Further, there is the standard ideological assertion that public spending is ‘less efficient’ (unproductive) compared to “productive capital” (private investment).
The research evidence doesn’t support that assertion. It is just a made-up claim to justify privatisation and cuts to government activities.
It has been used to justify the handing out of millions of dollars of public funds to investment bankers, lawyers, accountants etc to sell off public assets at well below market prices to grasping private investors.
We have a long record now of how disastrous most of these selloffs have been from the perspective of the quality, scope and affordability of services that were previously provided by the state.
The next furphy that Portes and SWL introduce is the intergenerational equity argument, aka the ‘government debt imposes burdens on our grandkids’ claim.
They claim that lower debt will mean that “Future generations will enjoy a world with lower distortionary taxes, while the current generation will bear the cost of achieving that goal.”
Again, this conclusion follows their assumption that taxes pay back the debt so deficits today force future generations to incur higher costs.
Refer to the previous discussion of the actual role of taxes in a fiat monetary system.
The reality is that each generation chooses its own tax and public spending profile via the political process. The way in which intergenerational inequities occur is via real resource utilisation.
We can kill the planet and the kids will then miss out. Alternatively, we can ensure the kids get access to first-class public infrastructure (education, health, recreation, etc) and have jobs to go to when they develop their skills and knowledge.
Then the kids benefit from today’s fiscal deficits.
But after all of their tinkering with mathematical coefficients (which I have only skimmed here), the authors admit that the “analysis of the optimum long run target for government debt is undeveloped” but:
the case for aiming for a gradual reduction in debt levels seems to be reasonably strong in practice, particularly given the currently high levels of debt in most countries
In other words, the mathematical reasoning leads to nothing definitive so we will just assert things anyway.
It helps economists like this gain promotion as academics and other status that they might enjoy such as picking up ’Inside Job’ type commissions and misrepresenting ideological reports as independent research.
Remember Mishkin in Iceland?
Please read my blog Universities should operate in an ethical and socially responsible manner for more discussion on this point.
I make that comment generally rather than specifically about the authors (Portes and SWL) in question. I don’t know what they do on the side.
So, after all that, what have Portes and SWL to fall back on? Not much. Assertion based on false assumptions.
That doesn’t stop them though.
In Section 3, they still claim ‘authority’ from the discussion on optimal fiscal rules to make the following assertion:
It follows from the previous section that a welfare-maximising government would in general be expected to follow fiscal policies which broadly satisfied the following conditions: a gently declining path of debt over the medium term, but with blips in response to shocks; broadly stable tax rates and recurrent government consumption.
Note that it doesn’t follow at all from any results about “welfare-maximising” behaviour that they present. The simple optimising model presented is, by the authors’ own admission, “undeveloped” and incapable of any definitive result.
The results that they claim were derived from the “previous section” are assertions.
But their point is clear. They claim that OECD governments (in general) have not followed these rules and instead the public debt ratios have “steadily increased since the 1970s”, which is evidence of what they call “deficit bias”.
Their claim then is that for various reasons, governments have been acting contrary to “welfare-maximising” behaviour meaning they are acting badly.
Simple isn’t it. Make up a benchmark using flawed assumptions that you know does not apply in the real world. Then label any departures from that fantasy world ‘bad’ and QED, you can then claim in the ‘real world’ that the government is behaving badly.
However, one can contest the benchmark.
If public debt is such an issue, why is the 10-year bond yield for Japanese government bonds at 0.058 per cent (at the time of writing) and why did ‘investors’ pay the Japanese government for the privilege of buying that debt (negative yields) at certain times last year?
Moreover, why are ‘investors’ agreeing to negative yields on all Japanese government bond maturities from 1 year to 8 years at present?
Further, back in the 1990s, the financial commentators and mainstream macroeconomists were claiming the outstanding Japanese government debt was the mother of all ticking time bombs and they have used this scare tactic long and hard for decades across all nations.
I recall reading some commentator claiming long ago Japan was facing the “mother of all debt-bunnies”, whatever that meant. I guess the ‘bunnies’ hopped away somewhere.
I have gone back through the records I keep and found regular references over the last 27 years to the impending insolvency of Japan because it is violating the economists’ notion of ‘welfare maximising’ debt behaviour.
Across the Pacific, the US was apparently “near to insolvency” on Thursday, September 26, 1940.
Here is an Associated Press story from The Portsmouth Times (Ohio), which was headlined in the New York Times on the same day.
The story quotes one Robert M. Hanes, who at the time was the President of the American Bankers’ Association:
“The evangelists of the new social order are undermining the confidence of the American people in political and economic freedom.
It is a matter of grave concern that we have come to accept deficit financing as a permanent fiscal policy. We not only proceed from year to year on an unbalanced federal budget, but we have permitted the compounding of the federal debt to a huge total which threatens the entire country.
Unless we put an end to deficit financing, to profligate spending, and to indifference to the nature and extent of government borrowing, we shall surely take the road to dictatorship.
By subtle propaganda, special pleading and similar devious devices, the American people are being persuaded to surrender more and more of their independence to the direction and control of government. This is an evil that feeds on itself.
Deficits and borrowings call for continually larger taxation, which must be met by private enterprise.”
We can find similar remarks throughout history. And, yet, nothing happens. I guess you can cut the Americans some slack, such is their penchant for the OTT way of doing things.
The point is that these economic models, which claim public debt should be minimised to prevent costly tax burdens, are pie-in-the-neoliberal-sky sort of stuff.
Further, a higher public debt to GDP ratio means that the non-government sector holds more risk-free debt as a proportion of GDP than previously, along with the corresponding income flows.
Oxfam calls for action on gap as wealthiest people gather at World Economic Forum in Davos.
The development charity Oxfam has called for action to tackle the growing gap between rich and poor as it launched a new report showing that 42 people hold as much wealth as the 3.7 billion who make up the poorest half of the world’s population.
In a report published on Monday to coincide with the gathering of some of the world’s richest people at the World Economic Forum in Davos, Oxfam said billionaires had been created at a record rate of one every two days over the past 12 months, at a time when the bottom 50% of the world’s population had seen no increase in wealth. It added that 82% of the global wealth generated in 2017 went to the most wealthy 1%.
The charity said it was “unacceptable and unsustainable” for a tiny minority to accumulate so much wealth while hundreds of millions of people struggled on poverty pay. It called on world leaders to turn rhetoric about inequality into policies to tackle tax evasion and boost the pay of workers.
Mark Goldring, Oxfam GB chief executive, said: “The concentration of extreme wealth at the top is not a sign of a thriving economy, but a symptom of a system that is failing the millions of hardworking people on poverty wages who make our clothes and grow our food.”
Booming global stock markets have been the main reason for the increase in wealth of those holding financial assets during 2017. The founder of Amazon, Jeff Bezos, saw his wealth rise by $6bn (£4.3bn) in the first 10 days of 2018 as a result of a bull market on Wall Street, making him the world’s richest man.
Oxfam said it had made changes to its wealth calculations as a result of new data from the bank Credit Suisse. Under the revised figures, 42 people hold as much wealth as the 3.7 billion people who make up the poorer half of the world’s population, compared with 61 people last year and 380 in 2009. At the time of last year’s report, Oxfam said that eight billionaires held the same wealth as half the world’s population.
The charity added that the wealth of billionaires had risen by 13% a year on average in the decade from 2006 to 2015, with the increase of $762bn (£550bn) in 2017 enough to end extreme poverty seven times over. It said nine out of 10 of the world’s 2,043 dollar billionaires were men.
Goldring said: “For work to be a genuine route out of poverty we need to ensure that ordinary workers receive a living wage and can insist on decent conditions, and that women are not discriminated against. If that means less for the already wealthy then that is a price that we – and they – should be willing to pay.”
An Oxfam survey of 70,000 people in 10 countries, including the UK, showed support for action to tackle inequality. Nearly two-thirds of people – 72% in the UK – said they want their government to urgently address the income gap between rich and poor in their country.
In the UK, when asked what a typical British chief executive earned in comparison with an unskilled worker, people guessed 33 times as much. When asked what the ideal ratio should be, they said 7:1. Oxfam said that FTSE 100 bosses earned on average 120 times more than the average employee.
Goldring said it was time to rethink a global economy in which there was excessive corporate influence on policymaking, erosion of workers’ rights and a relentless drive to minimise costs in order to maximise returns to investors.
Mark Littlewood, director general at the Institute of Economic Affairs, said: “Oxfam is promoting a race to the bottom. Richer people are already highly taxed people – reducing their wealth beyond a certain point won’t lead to redistribution, it will destroy it to the benefit of no one. Higher minimum wages would also likely lead to disappearing jobs, harming the very people Oxfam intend to help.”
Make the Left Great Again
The West is currently in the midst of an anti-establishment revolt of historic proportions. The Brexit vote in the United Kingdom, the election of Donald Trump in the United States, the rejection of Matteo Renzi’s neoliberal constitutional reform in Italy, the EU’s unprecedented crisis of legitimation: although these interrelated phenomena differ in ideology and goals, they are all rejections of the (neo)liberal order that has dominated the world – and in particular the West – for the past 30 years.
Even though the system has thus far proven capable (for the most part) of absorbing and neutralising these electoral uprisings, there is no indication that this anti-establishment revolt is going to abate any time soon. Support for anti-establishment parties in the developed world is at the highest level since the 1930s – and growing. At the same time, support for mainstream parties – including traditional social-democratic parties – has collapsed.
The reasons for this backlash are rather obvious. The financial crisis of 2007–9 laid bare the scorched earth left behind by neoliberalism, which the elites had gone to great lengths to conceal, in both material (financialisation) and ideological (‘the end of history’) terms.
As credit dried up, it became apparent that for years the economy had continued to grow primarily because banks were distributing the purchasing power – through debt – that businesses were not providing in salaries. To paraphrase Warren Buffett, the receding tide of the debt-fuelled boom revealed that most people were, in fact, swimming naked.
The situation was (is) further exacerbated by the post-crisis policies of fiscal austerity and wage deflation pursued by a number of Western governments, particularly in Europe, which saw the financial crisis as an opportunity to impose an even more radical neoliberal regime and to push through policies designed to suit the financial sector and the wealthy, at the expense of everyone else.
Thus, the unfinished agenda of privatisation, deregulation and welfare state retrenchment – temporarily interrupted by the financial crisis – was reinstated with even greater vigour. Amid growing popular dissatisfaction, social unrest and mass unemployment (in a number of European countries), political elites on both sides of the Atlantic responded with business-as-usual policies and discourses.
As a result, the social contract binding citizens to traditional ruling parties is more strained today than at any other time since World War II –and in some countries has arguably already been broken.
Of course, even if we limit the scope of our analysis to the post-war period, anti-systemic movements and parties are not new in the West. Up until the 1980s, anti-capitalism remained a major force to be reckoned with. The novelty is that today – unlike 20, 30 or 40 years ago – it is movements and parties of the right and extreme right (along with new parties of the neoliberal ‘extreme centre’, such as the new French president Emmanuel Macron’s party En Marche!) that are leading the revolt, far outweighing the movements and parties of the left in terms of voting strength and opinion-shaping.
With few exceptions, left parties – that is, parties to the left of traditional social-democratic parties – are relegated to the margins of the political spectrum in most countries.
Meanwhile, in Europe, traditional social-democratic parties are being ‘pasokified’ – that is, reduced to parliamentary insignificance, like many of their centre-right counterparts, due to their embrace of neoliberalism and failure to offer a meaningful alternative to the status quo – in one country after another.
The term refers to the Greek social-democratic party PASOK, which was virtually wiped out of existence in 2014, due to its inane handling of the Greek debt crisis, after dominating the Greek political scene for more than three decades. A similar fate has befallen other former behemoths of the social-democratic establishment, such as the French Socialist Party and the Dutch Labour Party (PvdA). Support for social-democratic parties is today at the lowest level in 70 years – and falling.
How should we explain the decline of the left – not just the electoral decline of those parties that are commonly associated with the left side of the political spectrum, regardless of their effective political orientation, but also the decline of core left values within those parties and within society in general?
Why has the anti-establishment left proven unable to fill the vacuum left by the collapse of the establishment left? More broadly, how did the left come to count so little in global politics? Can the left, both culturally and politically, become a major force in our societies again? And if so, how?
These are some of the questions that we attempt to answer in this book. Though the left has been making inroads in some countries in recent years – notable examples include Bernie Sanders in the United States, Jeremy Corbyn in the UK, Podemos in Spain and Jean-Luc Mélenchon in France – and has even succeeded in taking power in Greece (though the SYRIZA government was rapidly brought to heel by the European establishment), there is no denying that, for the most part, movements and parties of the extreme right have been more effective than left-wing or progressive forces at tapping into the legitimate grievances of the masses – disenfranchised, marginalised, impoverished and dispossessed by the 40-year-long neoliberal class war waged from above.
In particular, they are the only forces that have been able to provide a (more or less) coherent response to the widespread – and growing – yearning for greater territorial or national sovereignty, increasingly seen as the only way, in the absence of effective supranational mechanisms of representation, to regain some degree of collective control over politics and society, and in particular over the flows of capital, trade and people that constitute the essence of neoliberal globalisation. Given neoliberalism’s war against sovereignty, it should come as no surprise that ‘sovereignty has become the master-frame of contemporary politics’, as Paolo Gerbaudo notes.
After all, as we argue in Chapter 5, the hollowing out of national sovereignty and curtailment of popular-democratic mechanisms – what has been termed depoliticisation – has been an essential element of the neoliberal project, aimed at insulating macroeconomic policies from popular contestation and removing any obstacles put in the way of economic exchanges and financial flows.
Given the nefarious effects of depoliticisation, it is only natural that the revolt against neoliberalism should first and foremost take the form of demands for a repoliticisation of national decision-making processes.
The fact that the vision of national sovereignty that was at the centre of the Trump and Brexit campaigns, and that currently dominates the public discourse, is a reactionary, quasi-fascist one – mostly defined along ethnic, exclusivist and authoritarian lines – should not be seen as an indictment of national sovereignty as such. History attests to the fact that national sovereignty and national self-determination are not intrinsically reactionary or jingoistic concepts – in fact, they were the rallying cries of countless nineteenth- and twentieth-century socialist and left-wing liberation movements.
Even if we limit our analysis to core capitalist countries, it is patently obvious that virtually all the major social, economic and political advancements of the past centuries were achieved through the institutions of the democratic nation state, not through international, multilateral or supranational institutions, which in a number of ways have, in fact, been used to roll back those very achievements, as we have seen in the context of the euro crisis, where supranational (and largely unaccountable) institutions such as the European Commission, Eurogroup and European Central Bank (ECB) used their power and authority to impose crippling austerity on struggling countries.
The problem, in short, is not national sovereignty as such, but the fact that the concept in recent years has been largely monopolised by the right and extreme right, which understandably sees it as a way to push through its xenophobic and identitarian agenda. It would therefore be a grave mistake to explain away the seduction of the ‘Trumpenproletariat’ by the far right as a case of false consciousness, as Marc Saxer notes; the working classes are simply turning to the only movements and parties that (so far) promise them some protection from the brutal currents of neoliberal globalisation (whether they can or truly intend to deliver on that promise is a different matter).
However, this simply raises an even bigger question: why has the left not been able to offer the working classes and increasingly proletarianised middle classes a credible alternative to neoliberalism and to neoliberal globalisation? More to the point, why has it not been able to develop a progressive view of national sovereignty?
As we argue in this book, the reasons are numerous and overlapping. For starters, it is important to understand that the current existential crisis of the left has very deep historical roots, reaching as far back as the 1960s. If we want to comprehend how the left has gone astray, that is where we have to begin our analysis.
Today the post-war ‘Keynesian’ era is eulogised by many on the left as a golden age in which organised labour and enlightened thinkers and policymakers (such as Keynes himself) were able to impose a ‘class compromise’ on reluctant capitalists that delivered unprecedented levels of social progress, which were subsequently rolled back following the so-called neoliberal counter-revolution.
It is thus argued that, in order to overcome neoliberalism, all it takes is for enough members of the establishment to be swayed by an alternative set of ideas. However, as we note in Chapter 2, the rise and fall of Keynesianism cannot simply be explained in terms of working-class strength or the victory of one ideology over another, but should instead be viewed as the outcome of the fortuitous confluence, in the aftermath of World War II, of a number of social, ideological, political, economic, technical and institutional conditions.
To fail to do so is to commit the same mistake that many leftists committed in the early post-war years. By failing to appreciate the extent to which the class compromise at the base of the Fordist-Keynesian system was, in fact, a crucial component of that history-specific regime of accumulation – actively supported by the capitalist class insofar as it was conducive to profit-making, and bound to be jettisoned once it ceased to be so – many socialists of the time convinced themselves ‘that they had done much more than they actually had to shift the balance of class power, and the relationship between states and markets’.
Some even argued that the developed world had already entered a post-capitalist phase, in which all the characteristic features of capitalism had been permanently eliminated, thanks to a fundamental shift of power in favour of labour vis-à-vis capital, and of the state vis-à-vis the market. Needless to say, that was not the case.
Furthermore, as we show in Chapter 3, monetarism – the ideological precursor to neoliberalism – had already started to percolate into left-wing policymaking circles as early as the late 1960s. Thus, as argued in Chapters 2 and 3, many on the left found themselves lacking the necessary theoretical tools to understand – and correctly respond to – the capitalist crisis that engulfed the Keynesian model in the 1970s, convincing themselves that the distributional struggle that arose at the time could be resolved within the narrow limits of the social-democratic framework.
The truth of the matter was that the labour–capital conflict that re-emerged in the 1970s could only have been resolved in one of two ways: on capital’s terms, through a reduction of labour’s bargaining power, or on labour’s terms, through an extension of the state’s control over investment and production. As we show in Chapters 3 and 4, with regard to the experience of the social-democratic governments of Britain and France in the 1970s and 1980s, the left proved unwilling to go the latter way. This left it (no pun intended) with no other choice but to ‘manage the capitalist crisis on behalf of capital’, as Stuart Hall wrote, by ideologically and politically legitimising neoliberalism as the only solution to the survival of capitalism.
In this regard, as we show in Chapter 3, the Labour government of James Callaghan (1974–9) bears a very heavy responsibility. In an (in)famous speech in 1976, Callaghan justified the government’s programme of spending cuts and wage restraint by declaring Keynesianism dead, indirectly legitimising the emerging monetarist (neoliberal) dogma and effectively setting up the conditions for Labour’s ‘austerity lite’ to be refined into an all-out attack on the working class by Margaret Thatcher.
Even worse, perhaps, Callaghan popularised the notion that austerity was the only solution to the economic crisis of the 1970s, anticipating Thatcher’s ‘there is no alternative’ (TINA) mantra, even though there were radical alternatives available at the time, such as those put forward by Tony Benn and others. These, however, were ‘no longer perceived to exist’.
In this sense, the dismantling of the post-war Keynesian framework cannot simply be explained as the victory of one ideology (‘neoliberalism’) over another (‘Keynesianism’), but should rather be understood as the result of a number of overlapping ideological, economic and political factors: the capitalists’ response to the profit squeeze and to the political implications of full employment policies; the structural flaws of ‘actually existing Keynesianism’; and, importantly, the left’s inability to offer a coherent response to the crisis of the Keynesian framework, let alone a radical alternative.
These are all analysed in depth in the first chapters of the book. Furthermore, throughout the 1970s and 1980s, a new (fallacious) left consensus started to set in: that economic and financial internationalisation – what today we call ‘globalisation’ – had rendered the state increasingly powerless vis-à-vis ‘the forces of the market’, and that therefore countries had little choice but to abandon national economic strategies and all the traditional instruments of intervention in the economy (such as tariffs and other trade barriers, capital controls, currency and exchange rate manipulation, and fiscal and central bank policies), and hope, at best, for transnational or supranational forms of economic governance.
In other words, government intervention in the economy came to be seen not only as ineffective but, increasingly, as outright impossible. This process – which was generally (and erroneously, as we shall see) framed as a shift from the state to the market – was accompanied by a ferocious attack on the very idea of national sovereignty, increasingly vilified as a relic of the past. As we show, the left – in particular the European left – played a crucial role in this regard as well, by cementing this ideological shift towards a post-national and post-sovereign view of the world, often anticipating the right on these issues.
One of the most consequential turning points in this respect, which is analysed in Chapter 4, was Mitterrand’s 1983 turn to austerity – the so-called tournant de la rigueur – just two years after the French Socialists’ historic victory in 1981. Mitterrand’s election had inspired the widespread belief that a radical break with capitalism – at least with the extreme form of capitalism that had recently taken hold in the Anglo-Saxon world – was still possible. By 1983, however, the French Socialists had succeeded in ‘proving’ the exact opposite: that neoliberal globalisation was an inescapable and inevitable reality. As Mitterrand stated at the time: ‘National sovereignty no longer means very much, or has much scope in the modern world economy. … A high degree of supra-nationality is essential.’
The repercussions of Mitterrand’s about-turn are still being felt today. It is often brandished by left-wing and progressive intellectuals as proof that globalisation and the internationalisation of finance have ended the era of nation states and their capacity to pursue policies that are not in accord with the diktats of global capital. The claim is that if a government tries autonomously to pursue full employment and a progressive/redistributive agenda, it will inevitably be punished by the amorphous forces of global capital.
This narrative claims that Mitterrand had no option but to abandon his agenda of radical reform. To most modern-day leftists, Mitterrand thus represents a pragmatist who was cognisant of the international capitalist forces he was up against and responsible enough to do what was best for France. In fact, as we argue in the second part of the book, sovereign, currency-issuing states –such as France in the 1980s –far from being helpless against the power of global capital, still have the capacity to deliver full employment and social justice to their citizens.
So how did the idea of the ‘death of the state’ come to be so ingrained in our collective consciousness?
As we explain in Chapter 5, underlying this post-national view of the world was (is) a failure on the part of left-wing intellectuals and policymakers to understand – and in some cases an explicit attempt to conceal – that ‘globalisation’ was (is) not the result of inexorable economic and technological changes but was (is) largely the product of state-driven processes. All the elements that we associate with neoliberal globalisation – delocalisation, deindustrialisation, the free movement of goods and capital, etc. – were (are), in most cases, the result of choices made by governments.
More generally, states continue to play a crucial role in promoting, enforcing and sustaining a (neo)liberal international framework – though that would appear to be changing, as we discuss in Chapter 6 – as well as establishing the domestic conditions for allowing global accumulation to flourish. The same can be said of neoliberalism tout court.
There is a widespread belief – particularly among the left – that neoliberalism has involved (and involves) a ‘retreat’, ‘hollowing out’ or ‘withering away’ of the state, which in turn has fuelled the notion that today the state has been ‘overpowered’ by the market. However, as we argue in Chapter 5, neoliberalism has not entailed a retreat of the state but rather a reconfiguration of the state, aimed at placing the commanding heights of economic policy ‘in the hands of capital, and primarily financial interests’.
It is self-evident, after all, that the process of neoliberalisation would not have been possible if governments –and in particular social-democratic governments –had not resorted to a wide array of tools to promote it: the liberalisation of goods and capital markets; the privatisation of resources and social services; the deregulation of business, and financial markets in particular; the reduction of workers’ rights (first and foremost, the right to collective bargaining) and more generally the repression of labour activism; the lowering of taxes on wealth and capital, at the expense of the middle and working classes; the slashing of social programmes; and so on.
These policies were systemically pursued throughout the West (and imposed on developing countries) with unprecedented determination, and with the support of all the major international institutions and political parties.
As noted in Chapter 5, even the loss of national sovereignty – which has been invoked in the past, and continues to be invoked today, to justify neoliberal policies – is largely the result of a willing and conscious limitation of state sovereign rights by national elites.
The reason why governments chose willingly to ‘tie their hands’ is all too clear: as the European case epitomises, the creation of self-imposed ‘external constraints’ allowed national politicians to reduce the political costs of the neoliberal transition – which clearly involved unpopular policies – by ‘scapegoating’ institutionalised rules and ‘independent’ or international institutions, which in turn were presented as an inevitable outcome of the new, harsh realities of globalisation.
Moreover, neoliberalism has been (and is) associated with various forms of authoritarian statism – that is, the opposite of the minimal state advocated by neoliberals – as states have bolstered their security and policing arms as part of a generalised militarisation of civil protest. In other words, not only does neoliberal economic policy require the presence of a strong state, but it requires the presence of an authoritarian state (particularly where extreme forms of neoliberalism are concerned, such as the ones experimented with in periphery countries), at both the domestic and international level (see Chapter 5).
In this sense, neoliberal ideology, at least in its official anti-state guise, should be considered little more than a convenient alibi for what has been and is essentially a political and state-driven project. Capital remains as dependent on the state today as it was under ‘Keynesianism’ – to police the working classes, bail out large firms that would otherwise go bankrupt, open up markets abroad (including through military intervention), etc.
The ultimate irony, or indecency, is that traditional left establishment parties have become standard-bearers for neoliberalism themselves, both while in elected office and in opposition.
In the months and years that followed the financial crash of 2007–9, capital’s – and capitalism’s – continued dependency on the state in the age of neoliberalism became glaringly obvious, as the governments of the US, Europe and elsewhere bailed out their respective financial institutions to the tune of trillions of euros/dollars.
In Europe, following the outbreak of the so-called ‘euro crisis’ in 2010, this was accompanied by a multi-level assault on the post-war European social and economic model aimed at restructuring and re-engineering European societies and economies along lines more favourable to capital. This radical reconfiguration of European societies – which, again, has seen social-democratic governments at the forefront – is not based on a retreat of the state in favour of the market, but rather on a reintensification of state intervention on the side of capital.
Nonetheless, the erroneous idea of the waning nation state has become an entrenched fixture of the left. As we argue throughout the book, we consider this to be central in understanding the decline of the traditional political left and its acquiescence to neoliberalism.
In view of the above, it is hardly surprising that the mainstream left is, today, utterly incapable of offering a positive vision of national sovereignty in response to neoliberal globalisation. To make matters worse, most leftists have bought into the macroeconomic myths that the establishment uses to discourage any alternative use of state fiscal capacities.
For example, they have accepted without question the so-called household budget analogy, which suggests that currency-issuing governments, like households, are financially constrained, and that fiscal deficits impose crippling debt burdens on future generations – a notion that we thoroughly debunk in Chapter 8.
This has gone hand in hand with another, equally tragic, development. As discussed in Chapter 5, following its historical defeat, the left’s traditional anti-capitalist focus on class slowly gave way to a liberal-individualist understanding of emancipation. Waylaid by post-modernist and post-structuralist theories, left intellectuals slowly abandoned Marxian class categories to focus, instead, on elements of political power and the use of language and narratives as a way of establishing meaning. This also defined new arenas of political struggle that were diametrically opposed to those defined by Marx.
Over the past three decades, the left focus on ‘capitalism’ has given way to a focus on issues such as racism, gender, homophobia, multiculturalism, etc. Marginality is no longer described in terms of class but rather in terms of identity. The struggle against the illegitimate hegemony of the capitalist class has given way to the struggles of a variety of (more or less) oppressed and marginalised groups: women, ethnic and racial minorities, the LGBTQ community, etc. As a result, class struggle has ceased to be seen as the path to liberation.
In this new post-modernist world, only categories that transcend Marxian class boundaries are considered meaningful. Moreover, the institutions that evolved to defend workers against capital – such as trade unions and social-democratic political parties – have become subjugated to these non-class struggle foci. What has emerged in practically all Western countries as a result, as Nancy Fraser notes, is a perverse political alignment between ‘mainstream currents of new social movements (feminism, anti-racism, multiculturalism, and LGBTQ rights), on the one side, and high-end “symbolic” and service-based business sectors (Wall Street, Silicon Valley, and Hollywood), on the other’.
The result is a progressive neoliberalism ‘that mix[es] together truncated ideals of emancipation and lethal forms of financialization’, with the former unwittingly lending their charisma to the latter.
As societies have become increasingly divided between well-educated, highly mobile, highly skilled, socially progressive cosmopolitan urbanites, and lower-skilled and less educated peripherals who rarely work abroad and face competition from immigrants, the mainstream left has tended to consistently side with the former. Indeed, the split between the working classes and the intellectual-cultural left can be considered one of the main reasons behind the right-wing revolt currently engulfing the West.
As argued by Jonathan Haidt, the way the globalist urban elites talk and act unwittingly activates authoritarian tendencies in a subset of nationalists. In a vicious feedback loop, however, the more the working classes turn to right-wing populism and nationalism, the more the intellectual-cultural left doubles down on its liberal-cosmopolitan fantasies, further radicalising the ethno-nationalism of the proletariat.
As Wolfgang Streeck writes: ‘Protests against material and moral degradation are suspected of being essentially fascist, especially now that the former advocates of the plebeian classes have switched to the globalization party, so that if their former clients wish to complain about the pressures of capitalist modernization, the only language at their disposal is the pre-political, untreated linguistic raw material of everyday experiences of deprivation, economic or cultural. This results in constant breaches of the rules of civilized public speech, which in turn can trigger indignation at the top and mobilization at the bottom.’
This is particularly evident in the European debate, where, despite the disastrous effects of the EU and monetary union, the mainstream left – often appealing to exactly the same arguments used by Callaghan and Mitterrand 30–40 years ago – continues to cling on to these institutions and to the belief that they can be reformed in a progressive direction, despite all evidence to the contrary, and to dismiss any talk of restoring a progressive agenda on the foundation of retrieved national sovereignty as a ‘retreat into nationalist positions’, inevitably bound to plunge the continent into 1930s-style fascism.
This position, as irrational as it may be, is not surprising, considering that European Economic and Monetary Union (EMU) is, after all, a brainchild of the European left (see Chapter 5). However, such a position presents numerous problems, which are ultimately rooted in a failure to understand the true nature of the EU and monetary union.
First of all, it ignores the fact that the EU’s economic and political constitution is structured to produce the results that we are seeing – the erosion of popular sovereignty, the massive transfer of wealth from the middle and lower classes to the upper classes, the weakening of labour and, more generally, the rollback of the democratic and social/economic gains that had previously been achieved by subordinate classes – and is designed precisely to impede the kind of radical reforms to which progressive integrationists or federalists aspire.
More importantly, however, it effectively reduces the left to the role of defender of the status quo, thus allowing the political right to hegemonise the legitimate anti-systemic – and specifically anti-EU – grievances of citizens. This is tantamount to relinquishing the discursive and political battleground for a post-neoliberal hegemony – which is inextricably linked to the question of national sovereignty – to the right and extreme right. It is not hard to see that if progressive change can only be implemented at the global or even European level – in other words, if the alternative to the status quo offered to electorates is one between reactionary nationalism and progressive globalism – then the left has already lost the battle.
It needn’t be this way, however. As we argue in the second part of the book, a progressive, emancipatory vision of national sovereignty that offers a radical alternative to both the right and the neoliberals – one based on popular sovereignty, democratic control over the economy, full employment, social justice, redistribution from the rich to the poor, inclusivity and the socio-ecological transformation of production and society – is possible. Indeed, it is necessary.
As J. W. Mason writes: Whatever [supranational] arrangements we can imagine in principle, the systems of social security, labor regulation, environmental protection, and redistribution of income and wealth that in fact exist are national in scope and are operated by national governments. By definition, any struggle to preserve social democracy as it exists today is a struggle to defend national institutions.
As we contend in this book, the struggle to defend the democratic sovereign from the onslaught of neoliberal globalisation is the only basis on which the left can be refounded (and the nationalist right challenged). However, this is not enough.
The left also needs to abandon its obsession with identity politics and retrieve the ‘more expansive, anti-hierarchical, egalitarian, class-sensitive, anti-capitalist understandings of emancipation’ that used to be its trademark (which, of course, is not in contradiction with the struggle against racism, patriarchy, xenophobia and other forms of oppression and discrimination).
Fully embracing a progressive vision of sovereignty also means abandoning the many false macroeconomic myths that plague left-wing and progressive thinkers. One of the most pervasive and persistent myths is the assumption that governments are revenue-constrained, that is, that they need to ‘fund’ their expenses through taxes or debt. This leads to the corollary that governments have to ‘live within their means’, since ongoing deficits will inevitably result in an ‘excessive’ accumulation of debt, which in turn is assumed to be ‘unsustainable’ in the long run.
In reality, as we show in Chapter 8, monetarily sovereign (or currency-issuing) governments – which nowadays include most governments – are never revenue-constrained because they issue their own currency by legislative fiat and always have the means to achieve and sustain full employment and social justice.
In this sense, a progressive vision of national sovereignty should aim to reconstruct and redefine the national state as a place where citizens can seek refuge ‘in democratic protection, popular rule, local autonomy, collective goods and egalitarian traditions’, as Streeck argues, rather than a culturally and ethnically homogenised society.
This is also the necessary prerequisite for the construction of a new international(ist) world order, based on interdependent but independent sovereign states. It is such a vision that we present in this book.
The Great Transformation Redux: From Keynesianism to Neoliberalism –and Beyond
1 Broken Paradise: A Critical Assessment of the Keynesian ‘Full Employment’ Era
THE IDEALIST VIEW: KEYNESIANISM AS THE VICTORY OF ONE IDEOLOGY OVER ANOTHER
Looking back on the 30-year-long economic expansion that followed World War II, Adam Przeworski and Michael Wallerstein concluded that ‘by most criteria of economic progress the Keynesian era was a success’.
It is hard to disagree: throughout the West, from the mid-1940s until the early 1970s, countries enjoyed lower levels of unemployment, greater economic stability and higher levels of economic growth than ever before. That stability, particularly in the US, also rested on a strong financial regulatory framework: on the widespread provision of deposit insurance to stop bank runs; strict regulation of the financial system, including the separation of commercial banking from investment banking; and extensive capital controls to reduce currency volatility.
These domestic and international restrictions ‘kept financial excesses and bubbles under control for over a quarter of a century’.
Wages and living standards rose, and – especially in Europe – a variety of policies and institutions for welfare and social protection (also known as the ‘welfare state’) were created, including sustained investment in universally available social services such as education and health. Few people would deny that this was, indeed, a ‘golden age’ for capitalism.
However, when it comes to explaining what made this exceptional period possible and why it came to an end, theories abound. Most contemporary Keynesians subscribe to a quasi-idealist view of history –that is, one that stresses the central role of ideas and ideals in human history. This is perhaps unsurprising, considering that Keynes himself famously noted: ‘Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.’
According to this view, the social and economic achievements of the post-war period are largely attributable to the revolution in economic thinking spearheaded by the British economist John Maynard Keynes.
Throughout the 1920s and 1930s, Keynes overturned the old classical (neoclassical) paradigm, rooted in the doctrine of laissez-faire (‘let it be’) free-market capitalism, which held that markets are fundamentally self-regulating. The understanding was that the economy, if left to its own devices – that is, with the government intervening as little as possible – would automatically generate stability and full employment, as long as workers were flexible in their wage demands.
The Great Depression of the 1930s that followed the stock market crash of 1929 – where minimal financial regulation, little-understood financial products and overindebted households and banks all conspired to create a huge speculative bubble which, when it burst, brought the US financial system crashing down, and with it the entire global economy – clearly challenged traditional laissez-faire economic theories.
This bolstered Keynes’ argument – spelled out at length in his masterpiece, The General Theory of Employment, Interest, and Money, published in 1936 – that aggregate spending determined the overall level of economic activity, and that inadequate aggregate spending could lead to prolonged periods of high unemployment (what he called ‘underemployment equilibrium’). Thus, he advocated the use of debt-based expansionary fiscal and monetary measures and a strict regulatory framework to counter capitalism’s tendency towards financial crises and disequilibrium, and to mitigate the adverse effects of economic recessions and depressions, first and foremost by creating jobs that the private sector was unable or unwilling to provide.
The bottom line of Keynes’ argument was that the government always has the ability to determine the overall level of spending and employment in the economy. In other words, full employment was a realistic goal that could be pursued at all times.
Yet politicians were slow to catch on. When the speculative bubbles in both Europe and the United States burst in the aftermath of the Wall Street crash of 1929, various countries (to varying degrees, and more or less willingly) turned to austerity as a perceived ‘cure’ for the excesses of the previous decade.
In the United States, President Herbert Hoover, a year after the crash, declared that ‘economic depression cannot be cured by legislative action or executive pronouncements’ and that ‘economic wounds must be healed by the action of the cells of the economic body – the producers and consumers themselves’.
At first Hoover and his officials downplayed the stock market crash, claiming that the economic slump would be only temporary. When the situation did not improve, Hoover advocated a strict laissez-faire policy, dictating that the federal government should not interfere with the economy but rather let the economy right itself. He counselled that ‘every individual should sustain faith and courage’ and ‘each should maintain self-reliance’.
Even though Hoover supported a doubling of government expenditure on public works projects, he also firmly believed in the need for a balanced budget. As Nouriel Roubini and Stephen Mihm observe, Hoover ‘wanted to reconcile contradictory aims: to cultivate self-reliance, to provide government help in a time of crisis, and to maintain fiscal discipline. This was impossible.’ In fact, it is widely agreed that Hoover’s inaction was responsible for the worsening of the Great Depression.
If the United States’ reaction under Hoover can be described as ‘too little, too late’, Europe’s reaction in the late 1920s and early 1930s actively contributed to the downward spiral of the Great Depression, setting the stage for World War II.
Austerity was the dominant response of European governments during the early years of the Great Depression. The political consequences are well known. Anti-systemic parties gained strength all across the continent, most notably in Germany. While 24 European regimes had been democratic in 1920, the number was down to 11 in 1939.
Various historians and economists see the rise of Hitler as a direct consequence of the austerity policies indirectly imposed on Germany by its creditors following the economic crash of the late 1920s. Ewald Nowotny, the current head of Austria’s national bank, stated that it was precisely ‘the single-minded concentration on austerity policy’ in the 1930s that ‘led to mass unemployment, a breakdown of democratic systems and, at the end, to the catastrophe of Nazism’.
Historian Steven Bryan agrees: ‘During the 1920s and 1930s it was precisely the refusal to acknowledge the social and political consequences of austerity that helped bring about not only the depression, but also the authoritarian governments of the 1930s.’
Reclaiming the State. A Progressive Vision of Sovereignty for a Post-Neoliberal World
William Mitchell and Thomas Fazi.
As even its harshest critics concede, neoliberalism is hard to pin down. In broad terms, it denotes a preference for markets over government, economic incentives over social or cultural norms, and private entrepreneurship over collective or community action. It has been used to describe a wide range of phenomena—from Augusto Pinochet to Margaret Thatcher and Ronald Reagan, from the Clinton Democrats and Britain’s New Labour to the economic opening in China and the reform of the welfare state in Sweden.
Even though neoliberalism as an ideology springs from a desire to curtail the state’s role, neoliberalism as a political-economic reality has produced increasingly powerful, interventionist and ever-reaching – even authoritarian – state apparatuses.
The process of neoliberalisation has entailed extensive and permanent state intervention, including: the liberalisation of goods and capital markets; the privatisation of resources and social services; the deregulation of business, and financial markets in particular; the reduction of workers’ rights (first and foremost, the right to collective bargaining) and more in general the repression of labour activism; the lowering of taxes on wealth and capital, at the expense of the middle and working classes; the slashing of social programmes, and so on.
These policies were systemically pursued throughout the West (and imposed on developing countries) with unprecedented determination, and with the support of all the major international institutions and political parties.
In this sense, neoliberal ideology, at least in its official anti-state guise, should be considered little more than a convenient alibi for what has been and is essentially a political and state-driven project, aimed at placing the commanding heights of economic policy ‘in the hands of capital, and primarily financial interests’.
Capital remains as dependent on the state today as it was under ‘Keynesianism’ – to police the working classes, bail out large firms that would otherwise go bankrupt, open up markets abroad, etc. In the months and years that followed the financial crash of 2007–9, capital’s – and capitalism’s – continued dependency on the state in the age of neoliberalism became glaringly obvious, as the governments of the US, Europe and elsewhere bailed out their respective financial institutions to the tune of trillions of euros and dollars.
In Europe, following the outbreak of the so-called ‘euro crisis’ in 2010, this was accompanied by a multi-level assault on the post-war European social and economic model aimed at restructuring and re-engineering European societies and economies along lines more favourable to capital.
Nonetheless, the flawed notion that neoliberalism entails a retreat of the state remains a fixture of the left. This is further compounded by the idea that the state has been rendered powerless by the forces of globalisation. Conventional wisdom holds that globalisation and the internationalisation of finance have ended the era of nation states and their capacity to pursue policies not in accord with the diktats of global capital. But does the evidence support the assertion that national sovereignty has truly reached the end of its days?
New Zealanders like to think that we are, in most respects, up with – if not actually ahead of – the play. Sadly, however, as a new government is about to emerge, there is no sign that our politicians and policymakers are aware of recent developments in a crucial area of policy, and that, as a result, we are in danger of missing out on opportunities that others have been ready to take.
The story starts, at least in its most recent form, with two important developments. First, there is the now almost universal recognition that the vast majority of money in circulation is not – as most people once believed – notes and coins issued on behalf of the government by the Reserve Bank, but is actually created by the commercial banks through the credit they advance, using bank entries rather than cash, and usually on mortgage.
The truth of this proposition, so long denied, is now explicitly accepted by the Bank of England, and was explained as long ago as 1994 in a letter written by our own Reserve Bank to an enquirer, which stated in terms that 97% of the money included in the usual definition of money, known as M3, is created by the commercial banks.
The proposition is endorsed by the world’s leading monetary economists – Lord Adair Turner, the former chair of the UK’s Financial Services Authority, and Professor Richard Werner of Southampton University, to name but two. These men are not snake-oil salesmen, to be easily dismissed. They have been joined by leading financial journalists, such as Martin Wolf of the Financial Times.
The second development was the use by western governments of “quantitative easing” in the aftermath of the Global Financial Crisis. “Quantitative easing” was a sanitised term for what is often pejoratively called “printing money” – but, whatever it is called, it was new money created at the behest of the government and used to bail out the banks by adding it to their balance sheets.
These two developments, not surprisingly, generated a number of unavoidable questions about monetary policy. If banks could create billions in new money for their own profit-making purposes (they make their money by charging interest on the money they create), why could governments not do the same, but for public purposes, such as investment in new infrastructure and productive capacity?
And if governments were indeed to create new money through “quantitative easing”, why could that new money not be applied to purposes other than shoring up the banks?
The conventional answer to such questions (and the one invariably given in New Zealand by supposed experts in recent times) is that “printing money” will be inflationary – though it is never explained why it is miraculously non-inflationary when the new money is created by bank loans on mortgage or is applied to bail out the banks.
But, in any case, the master economist, John Maynard Keynes, had got there long before the closed minds and had carefully explained that new money could not be inflationary if it was applied to productive purposes so that new output matched the increased money supply. Nor was there any reason why the new money should not precede the increased output, provided that the increased output materialised in due course.
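The Keynesian point can be made concrete with the textbook quantity-of-money identity MV = PQ. This identity is our illustration, not something the article itself invokes: if the money supply M and real output Q grow in step (with velocity V stable), the price level P implied by the identity does not rise.

```python
# Stylised illustration (ours, not the article's) of the quantity-of-money
# identity M*V = P*Q: solve for the implied price level P.
def price_level(money: float, velocity: float, output: float) -> float:
    """Price level implied by the identity P = M*V / Q."""
    return money * velocity / output

base = price_level(money=100, velocity=2.0, output=200)      # P = 1.0
# Increase money by 10% AND output by 10%, velocity unchanged:
expanded = price_level(money=110, velocity=2.0, output=220)  # still P = 1.0
print(base, expanded)  # no inflation, in this stylised sense
```

This is only the accounting skeleton of the argument; Keynes' actual claim concerned real-world sequencing, namely that new money may precede the new output, provided the output materialises in due course.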
Those timorous souls who doubt the Keynesian argument might care to look instead at practical experience. Franklin Delano Roosevelt used exactly this technique to increase investment in American industry in the year or two before the US entered the Second World War. It was that substantial boost to American industrial capacity that was the decisive factor in allowing the Allies to win the war.
And the great Japanese (and Keynesian) economist, Osamu Shimomura, (almost unknown in the West), took the same approach in advising the post-war Japanese government on how to re-build Japanese industry in a country devastated by defeat and nuclear bombs.
The current Japanese Prime Minister, Shinzo Abe, is a follower of Shimomura. His policies, reapplied today, have Japan growing, after years of stagnation, at 4% per annum and with minimal inflation.
Our leaders, however, including luminaries of both right and left, some with experience of senior roles in managing our economy – and in case it is thought impolite to name them I leave it to you to guess who they are – prefer to remain in their fearful self-imposed shackles, ignoring not only the views of experts and the experience of braver leaders in other countries and earlier times, but – surprisingly enough – denying even our own home-grown New Zealand experience.
Many of today’s generation will have forgotten or be unaware of the brave and successful initiative taken by our Prime Minister in the 1930s – the great Michael Joseph Savage. He created new money with which he built thousands of state houses, thereby bringing an end to the Great Depression in New Zealand and providing decent houses for young families (my own included) who needed them.
Who among our current leaders would disown that hugely valuable legacy?
Bryan Gould, 2 October 2017
“In a very rapidly changing scenario, with a burgeoning population, fast-changing demographic profile, and growth aspirations of people around the world putting pressure on natural resources, our economic thoughts and practices have to change.”
In the beginning there was nothing: no human beings, no animals, no trees, no oceans, no earth, no sun, no stars, not even space or time. A quantum fluctuation leading to the Big Bang almost 14 billion years ago sowed the seeds of the Universe and of space and time as we know them. In the initial phase, stars, black holes and galaxies were formed. The Earth, our home planet, was born almost 10 billion years later, about 4 billion years ago. It was then a fiery ball and took almost 1 billion years to cool down. Seeds of life sprouted about 3 billion years ago – some say spontaneously, others through panspermia; no one knows for sure.
While the earth was cooling, life forms were evolving and the planet was undergoing cataclysmic changes. Continents were shifting and breaking apart, ocean floors were rising and sinking, volcanoes were erupting. Forests, animals, fishes and amphibians came and disappeared, so much so that, by some estimates, 99.9% of the species in existence since the beginning of life on Earth have ceased to exist. These changes, over a period of hundreds of millions of years, left us the legacy of natural resources – coal, crude oil, natural gas – and minerals so necessary for industrial processes and the evolution of a technological civilisation.
Life forms continued to evolve, and humans came on the scene. No one is sure, but it is said that human sub-species evolved about half a million years ago in the African savannah. With human civilisations, human aspirations too continued to develop and grow – slowly, perhaps, compared with the developments of the last 100 years.
The advent of the Industrial Revolution, which started in Europe around 1760, brought in its wake a transformation. Progress brought about by technology encouraged a shift from a primarily agricultural world to an industrial one. Rapid shifts took place in many parts of the world, mainly Europe and North America, and in the earlier part of the last century, in Japan. Such shifts are now taking place in parts of Asia, mainly India and China, Latin America, and Africa. These changes, by themselves great achievements for mankind, have led to a burgeoning population and major demographic changes. An offshoot of this technological progress has been that more intensive and concentrated methods of food production are required for supporting technological societies and longer human life spans, stemming from better healthcare.
Around the time of the birth of Jesus Christ, the planet supported a population of about 200 million human beings which, by the early 19th century – i.e. in a period of about 1,830 years – touched a billion people. In another 185 years, we have expanded seven-fold to over 7.2 billion people, and we are still expanding. Technological change and the exploitation of natural resources have improved the living conditions of human beings: on average, a human being today lives better, is better fed and is better educated than at any other time in the history of mankind.
All this has been brought about by scientific advances in different fields such as Quantum Physics, Relativity, Material Sciences, Chemistry, Agricultural Sciences, and so on and so forth.
The list is endless.
However, a large population and better living standards have created their own challenges in fields as diverse as economics, social sciences, ecology, and environment. At the heart of these is the rapid exploitation of natural resources, be it in the form of energy-generating resources like coal or crude oil, mineral resources like ores, or environmental resources, which are being degraded in the pursuit of economic growth.
These issues are well known, and have been discussed in various fora for decades now. The first Club of Rome report, Limits to Growth, which was published in 1972, raises many issues pertinent to these changes. That landmark report and subsequent Club of Rome reports, which generated extensive debates in the 1970s, now lie peacefully buried in the archives of libraries around the world. While these issues are still relevant, it is not the intent of this book to reiterate them.
Along with technological progress, economic theories evolved as well. A key aspect of these theories was the better and more efficient utilisation of resources, be it capital, land or labour. These concepts and theories optimised the utilisation of resources and went a long way in improving the living standards of mankind across the world.
These economic theories, which have served us well for many decades, need a relook, particularly from the point of view of sustainability. If we lived in a world where resources were infinite, or virtually limitless in relation to our consumption, we would have no issues. But that is not the case, all the more so as our population and resource consumption have been expanding exponentially. Capital allocation based on current methods of economic analysis promotes gross long-term inefficiencies in our resource utilisation. If we continue with these approaches, our societies will become unsustainable.
The authors have long held the view that our economic theories not only lead to unsustainable development, but really amount to stealing from our future generations. We compare our society to a rich man who sells his family silver to sustain his lifestyle and in the end leaves practically nothing for his children. What is worse in our case is that we would leave our children a huge debt, which they would have to repay. This book will provide ample evidence that our economic and capital allocation models do the same thing: promote current consumption at the cost of future generations. The problem is further compounded by the short-sightedness of the political class in most nations of the world, where the focus seems to be the next year, the next election, or, in non-democratic societies, growth in personal wealth or stature. Similarly, the corporate world around us generally thinks of the next quarter, the next shareholders’ meet, and the bonuses which the top managers can persuade the Boards and shareholders to pay them. Few think of long-term strategies for the company, and fewer still about long-term sustainability issues.
Most businesses use capital allocation models to optimise their working. Similar concepts are, at least theoretically, used by countries to utilise national resources (where their leaders are not driven by political considerations, which is rare). Few realise the pitfalls of such models. So wide is the use of these models that the working of all banks would come to a standstill if somehow these formulae were to be erased from their computers.
Capital allocation models are generally skewed in favour of current consumption. They place a premium on current consumption and earlier use of resources, as against saving them for future generations. For example, if we can pump a barrel of oil now and its price is US$100, our benefit (less the pumping cost, which for the sake of simplicity we assume to be zero) is US$100. But if we leave the same barrel of oil underground so that someone else can use it 50 years later, then at a 10% cost of capital the value of that barrel today is 85 cents. If we were more farsighted and did not use it for 100 years, the present value falls to 0.7 cents. So our incentive is to use the resource as fast as possible. Of course, in doing this analysis we conveniently forget that nature took several hundred million years to generate the same barrel of oil.
Another way of looking at the same situation: suppose, for the sake of argument, that through some technological breakthrough it were possible to extract 100 barrels of oil after 50 years, but that if the field were exploited now, only 1 barrel could be extracted and the remaining 99 barrels would be lost forever. Managers would still find it desirable to extract that one barrel of oil now, notwithstanding the fact that future generations would lose 99 barrels. This example may sound extreme, but analogous decisions are routinely taken globally. As a result, the rate of consumption of natural resources is so high that the world reserves of many key resources would be exhausted within a couple of generations. As these resources are exhausted, their availability will decline, although the fall will generally be gradual. But a fall in resource availability would impact industrial production, with all the consequences that would inevitably result from it.
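The figures in these examples come from ordinary compound-interest discounting. A minimal sketch, assuming as the authors do a US$100 price, a 10% cost of capital and zero extraction cost:

```python
# Present value of a future cash flow at a compound annual discount rate.
def present_value(amount: float, rate: float, years: int) -> float:
    """Discount a cash flow received `years` from now back to today."""
    return amount / (1 + rate) ** years

# A $100 barrel used 50 years from now is worth about 85 cents today,
# and about 0.7 cents if used 100 years from now.
print(f"PV of $100 in  50 years: ${present_value(100, 0.10, 50):.2f}")
print(f"PV of $100 in 100 years: ${present_value(100, 0.10, 100):.4f}")

# The 1-barrel-now vs 100-barrels-later comparison: even a hundredfold
# physical gain is outweighed by 50 years of discounting at 10%, so the
# model prefers the single barrel today.
pv_later = present_value(100 * 100, 0.10, 50)
print(f"PV of 100 barrels ($10,000) in 50 years: ${pv_later:.2f}")
```

The same arithmetic explains why a lower discount rate (or a rising expected oil price) would tilt the decision back towards conservation.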
Everybody would be impacted. No one would be spared. But youngsters in their twenties and thirties, with 30 to 40 years of working life remaining, would be most affected. Their hopes, aspirations and dreams of a comfortable and peaceful retirement after years and years of hard work would stand shattered, as money not backed by the availability of goods and services would lose value as its purchasing power falls.
The aim of this book is to bring out the deep lacunae in our economic thought and practices. The existing economic practices were developed when natural resources were plentiful, the global population small, and natural resource consumption minuscule in relation to the reserves. But in a very rapidly changing scenario, with a burgeoning population, fast-changing demographic profile, and growth aspirations of people around the world putting pressure on natural resources, our economic thoughts and practices have to change.
No change is without associated pain. We are all comfortable with the present thought processes, which predict steady and sustained growth based on the implicit assumption that resources are unlimited. But the reality is that we live in a finite world with limited resources, and after that reality is factored in, none of these projections hold true. And the sooner we realise this, the better it is and perhaps less painful too.
This book is divided into two sections. The first section, The Context, highlights the world we live in and how fast we are consuming our resources and impacting the environment. Some readers may find The Context grim and depressing, but we have painted the picture as we see it, based on the best available information. We would ask such readers to bear with us, or simply to move on to the second section, The New Economic Paradigm, and then come back to The Context. In the second section, The New Economic Paradigm, we suggest a new approach to our economic theories, one which would lead to a more sustainable world.
“Humans are extremely intelligent and yet extremely foolish. They have failed to perceive the inter-linkages in the Web of Life; remove a few links and the Web could collapse, threatening their own existence.”
Stealing From Our Children. The real dilemma of growth and the need for New Economics – Kamal K. Kothari and Chitra Chandrasekhar.
The tax system plays a crucial role in New Zealand’s housing markets. At the simplest level, the Goods and Services Tax is applied to new land development and new house construction, raising the price of new housing by 15%.
But the effects of the tax system are more complicated than this.
Since 1986 several tax changes have caused an intergenerational rift in New Zealand society by increasing the prices young people pay to purchase houses.
Some of these tax changes appear justifiable on efficiency grounds, but even these have made it more expensive for young people to purchase or rent property.
In conjunction with other tax changes that have artificially raised property prices, a generation of older property owners have become rich at the expense of current and future generations of New Zealanders.
The scale of the problem
The scale of the problem can be seen by observing how average property prices have increased by over 220% in inflation-adjusted terms since 1989, the highest rate of increase in the developed world.
The average size of new houses has also increased more quickly than in Australia or the United States, the only two countries that publish this data.
The average size of a new dwelling in 2013 was 198 m², up from 125 m² in 1989, and nearly twice as large as the average new house in Europe.
The tax changes that have affected housing can be divided into those that affect the cost of supplying housing and those that affect the demand for housing.
Unfortunately, unravelling the effect of taxes on house prices and rents is challenging.
The effects depend on the extent that the supply of new housing is responsive to prices.
If the supply of housing is very responsive to prices, taxes that affect supply prices (such as GST) become fully reflected in prices, while taxes that affect demand (such as the relative size of taxes on housing income and other assets) do not. Conversely, if the supply of housing is relatively unresponsive to prices, supply taxes such as GST have little effect on prices, but demand taxes have large effects.
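The incidence logic above can be sketched with a stylised linear supply-and-demand model. This is a minimal illustration only: the demand and supply slopes are invented numbers, not estimates for New Zealand's housing market.

```python
def equilibrium_price(a, b, c, d, tax):
    """Linear market: demand Qd = a - b*P, supply Qs = c + d*(P - tax).
    Setting Qd = Qs and solving gives the market-clearing price."""
    return (a - c + d * tax) / (b + d)

def pass_through(b, d):
    """Share of a per-unit supply tax that ends up in the price: d / (b + d)."""
    return d / (b + d)

# Very responsive (elastic) supply: the tax is almost fully reflected in prices.
print(pass_through(b=1.0, d=9.0))   # 0.9
# Unresponsive (inelastic) supply: the tax barely moves prices.
print(pass_through(b=9.0, d=1.0))   # 0.1
```

With the elastic-supply parameters, a $1 supply tax raises the equilibrium price by 90 cents; with the inelastic ones, by only 10 cents. That is the GST intuition described in the text.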
The analysis is further complicated because the supply of land – particularly land in good locations – is less responsive to price than the supply of new houses.
It is quite possible that a particular tax can simultaneously lead to higher land prices but not much new land, and larger houses but not much of an increase in building costs.
Since 1989, the way the tax system affects the demand for housing has been the biggest problem.
The fundamental difficulty is that the returns from other classes of assets such as interest income are more heavily taxed than the returns from housing.
Because interest is more heavily taxed than the returns from owner-occupied housing – which are essentially the rent people get from their own home – people have an incentive to live in larger houses than otherwise, and to pay more for well-located properties.
In the absence of this tax distortion, many people would choose to live in smaller houses and land prices in major cities would be a lot lower.
It is not unreasonable to suspect the premium people pay for well-located properties is twice as high as they would pay under a non-distortionary tax system.
But this is not all. The tax system also gives landlords an incentive to pay a much higher price/rent multiple for the houses they lease, largely because of the absence of a capital gains tax.
Because the house-price/rent multiple could increase either because house prices rise or because rents decline (or some combination of both), the tax system could make buying more expensive or it could make renting more affordable. Most of the evidence suggests that house prices have increased rather than that rents have fallen; either way, the result is a tax-induced decline in home-ownership rates.
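The landlord's price/rent calculus can be illustrated with a simple user-cost sketch. The numbers are invented for illustration (a 6% interest rate, a 33% income-tax rate, 2% expected capital gains), not actual New Zealand figures: a landlord bids up the price until the rental yield plus any untaxed capital gain matches the after-tax return on interest-bearing assets.

```python
def price_rent_multiple(interest, tax_rate, expected_gain, gains_taxed):
    """User-cost sketch: rent/price + (after-tax) capital gain = after-tax interest,
    so price/rent = 1 / (after-tax interest - (after-tax) expected gain)."""
    after_tax_interest = interest * (1 - tax_rate)
    gain = expected_gain * (1 - tax_rate) if gains_taxed else expected_gain
    return 1 / (after_tax_interest - gain)

# No capital gains tax: landlords can justify paying roughly 49.5 years of rent.
print(round(price_rent_multiple(0.06, 0.33, 0.02, gains_taxed=False), 1))
# With a capital gains tax, the justifiable multiple falls to roughly 37.3.
print(round(price_rent_multiple(0.06, 0.33, 0.02, gains_taxed=True), 1))
```

The gap between the two multiples is the tax-induced premium the text describes; whether it shows up as higher prices or lower rents depends on how responsive supply is.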
When the tax system causes artificially high house prices, costs are imposed on current and future generations of young people, who have to borrow more and pay higher mortgage costs.
Why 1989? After all, New Zealanders have never paid tax on the capital gains associated with house-price increases, and the way housing is taxed was not fundamentally changed in 1989.
This is true. But the distortionary effects of taxation depend on the way houses are taxed relative to other asset classes, and in 1989 the government changed the way some other capital income is taxed.
Until 1989, money placed into retirement saving schemes was tax deductible, and the earnings from this money were not taxed as they accumulated.
Under this tax scheme – which is used in most developed countries including the United Kingdom, the United States, France, Germany, and Japan – the money placed in these savings schemes is taxed in a similar way to housing.
It reduces the incentive for owner-occupiers and landlords to overinvest in housing.
While the distortions in the current tax system could be eliminated by introducing a capital gains tax on housing and all other assets, and by taxing the rent you implicitly pay yourself when you own your home, most countries have found this too difficult to do.
As they have discovered, it is far simpler to change the way other savings are taxed.
On the supply side, in addition to GST, the Local Government Act (2002) has also affected the cost of supplying housing by changing taxes.
Instead of levying property taxes (rates) to fund the costs of developing new sections, local governments have progressively imposed development charges.
This change has improved efficiency by moving the costs of a larger city to the new people populating it, but it has also increased the price of housing right across cities.
People who bought before 2002 shifted the cost of new development to others, increasing the value of their houses, even though their development costs had been paid by other ratepayers.
For a long time, economists have pointed out that if you tax the income from housing less than other assets, you tend to increase land prices.
At the macroeconomic level, they have noted that this tends to increase national debt levels, and lower national income.
The first owners of land benefit from these schemes, but everyone else loses.
Perhaps this is a reason why other countries have been concerned to tax housing on a similar basis to other assets.
It is unfortunate New Zealand does not do so, even if the tax changes implemented since the late 1980s have proved very advantageous to middle-aged and older generations.
In November 1980 the prophets returned to Nashville, Tennessee, to be honored. Vanderbilt University hosted a symposium honoring the Southern Agrarians on the fiftieth anniversary of the publication of I’ll Take My Stand: The South and the Agrarian Tradition (1930).
I’ll Take My Stand was an indictment of the industrial civilization of modern America. The authors hoped to preserve the manners and culture of the rural South as a healthy alternative. The book was the inspiration of two Vanderbilt English professors and poets, John Crowe Ransom and Donald Davidson, and their former student, the poet Allen Tate.
It was composed of twelve essays written by twelve separate individuals, the title page declaring them to be Twelve Southerners.
An essayist in Time magazine, claiming that 150 doctoral theses had been written about the book, remarked on the appeal of Agrarianism to modern-day environmentalists and theorists of the “zero-sum” society.
“Why do the Agrarians, with their crusty prophecies and affirmations, still sound so pertinent, half a very non-agrarian century later?” he asked.
The answer, he felt, lay in the power of Agrarianism as a poetic metaphor. This was a view shared by the organizers of the event, who, in a volume derived from it, argued that I’ll Take My Stand was a prophetic book. Once dismissed as a nostalgic, backward-looking defense of a romanticized Old South, the book was rather “an affirmation of universal values” and a defense of the “religious, aesthetic, and moral foundations of the old European civilization.”
Industrial society devalues human labor by replacing it with machines, argued the Twelve Southerners. Machine society undercut the dignity of labor and left modern man bereft of vocation and in an attenuated state of “satiety and aimlessness,” glutted with the surfeit of consumer goods produced by the industrial economy. Industrialism, they argued, was inimical to religion, the arts, and the elements of a good life, leisure, conversation, hospitality.
The Twelve Southerners were frankly reactionary and seriously proposed returning to an economy dominated by subsistence agriculture.
The theory of agrarianism, they declared, “is that the culture of the soil is the best and most sensitive of vocations, and that therefore it should have the economic preference and enlist the maximum number of workers.” Why, they asked, should modern men accept a social system so manifestly inferior to what had gone before? “If a community, or a section, or a race, or an age, is groaning under industrialism, and well aware that it is an evil dispensation,” the Twelve Southerners declared, “it must find the way to throw it off. To think that this cannot be done is pusillanimous.”
I’ll Take My Stand was a self-conscious defense of the South, undertaken sixty-five years after Robert E. Lee surrendered at Appomattox Court House.
The passage of years revealed an almost protean quality to Agrarianism. It came to mean very different things to a variety of different thinkers. Indeed, the contributors themselves, over the years, interpreted and reinterpreted their original impulse in light of changing convictions and interests. In 1930, I’ll Take My Stand was an indictment of industrial capitalism and a warning of its potential to destroy what the Agrarians considered a more humane and leisurely social order.
For some, it later came to be a statement of Christian humanism. For others, it was a rousing defense of the southern heritage and southern culture, which, in turn, meant a defense of the Western tradition. For others, Agrarianism was merely a metaphor for the simple life—one not consumed with materialism. For others still, the symposium was part of a traditional southern political discourse, which warned against centralized power and a strong state and which stood against bourgeois liberalism.
After World War II, the nascent conservative movement—poised against what it perceived to be an unwise liberal elite and in defense of traditional values and American capitalism—subsumed the Agrarians within its intellectual tradition. The Agrarians became respected, if quixotic, dissenters from the main trend of American progressivism.
Since the founding of the nation, southerners had sought a way to reconcile modernity and tradition, to participate in the modern market economy while retaining the shockingly premodern (yet profitable) system of slave labor. Slaveholders were alternately beguiled by the riches of the capitalist marketplace and appalled at the prospect of a society based on the pecuniary impulse and the self-interest, chicanery, and competitiveness of the market.
The 1920s offered a richer discourse on the crises of faith, morals, and science produced by modernity than any decade since. Agrarianism was an attempt to respond to questions being asked by others besides southerners: Is it possible to satisfy the felt needs for community, leisure, and stability in the dizzying whirl of modern life? How do we validate values in a disenchanted and secular age?
The Twelve Southerners’ response was both radical and conservative. They rejected industrial capitalism and the culture it produced. In I’ll Take My Stand they called for a return to the small-scale economy of rural America as a means to preserve the cultural amenities of the society they knew. Ransom and Tate believed that only by arresting the progress of industrial capitalism and its imperatives of science and efficiency could a social order capable of fostering and validating humane values and traditional religious faith be preserved.
The South as a symbolic marker of both traditional society and Western civilization became the central element of the Agrarian discourse. The critiques of modernism and of modernization were no longer deeply related; what had been a radical conservatism now read as southern traditionalism. This bifurcation of economic and cultural analysis, which the Agrarians had originally resisted, reflects a distinctive attribute of the conservative movement that was emerging in the 1950s and transforming the leadership of the American Right.
Conservatives, southern and otherwise, constitute the final group to preserve the memory of the Agrarians. Conservatives have proudly honored the Agrarians as perceptive forefathers and tend to present them as southern traditionalists—proponents of a social order based on religion, opponents of a godless and untraditional leviathan state, critics of a rootless individualism, and, above all, stout defenders of the South, which necessarily entails a defense of southern tradition, culture, and values.
The paradox for southerners of the Agrarians’ generation, to change but to remain loyal to history, remained a continuing source of division for the Agrarians and undercut the radical conservatism of I’ll Take My Stand.
The growth of great nation-states, even if democratic, had marginalized the individual. Indeed, the individual was reduced to meaninglessness, with no sense of responsibility, no sense of past and place. In this context, the Agrarian image of a better antebellum South came to represent for Warren a potential source of spiritual revitalization. The past recalled not as a mythical “golden age” but “imaginatively conceived and historically conceived in the strictest readings of the researchers” could be a “rebuke to the present.”
In the end, the history of the Agrarian tradition was shaped by the pressure of the past on this group of southern intellectuals, a past whose legacy included segregation and white supremacy.
The southerner, Ransom wrote in I’ll Take My Stand, “identifies himself with a spot of ground, and this ground carries a good deal of meaning; it defines itself for him as nature.”
This may be so, but the interpretation of this meaning has been the subject of much conflict among southerners, white and black, throughout the century. At the heart of Agrarianism was the question not only of where do I stand, but also, who belongs? And it was not the ground that provided the answers but the human beings who took their stands upon it.
THE REBUKE OF HISTORY. The Southern Agrarians and American conservative thought.
Paul V. Murphy
get it at Amazon.com
In 1955 the U.S. Supreme Court issued its second Brown v. Board of Education ruling, calling for the dismantling of segregation in public schools with “all deliberate speed.”
Thirty-seven-year-old James McGill Buchanan liked to call himself a Tennessee country boy. No less a figure than Milton Friedman had extolled Buchanan's potential. As Colgate Whitehead Darden Jr., the president of the University of Virginia, reviewed the document, he might have wondered if the newly hired economist had read his mind. For without mentioning the crisis at hand, Buchanan's proposal put in writing what Darden was thinking: Virginia needed to find a better way to deal with the incursion on states' rights represented by Brown.
States’ rights, in effect, were yielding in preeminence to individual rights. It was not difficult for either Darden or Buchanan to imagine how a court might now rule if presented with evidence of the state of Virginia’s archaic labor relations, its measures to suppress voting, or its efforts to buttress the power of reactionary rural whites by underrepresenting the moderate voters of the cities and suburbs of Northern Virginia. Federal meddling could rise to levels once unimaginable.
What the court ruling represented to Buchanan was personal. Northern liberals—the very people who looked down upon southern whites like him, he was sure—were now going to tell his people how to run their society. And to add insult to injury, he and people like him with property were no doubt going to be taxed more to pay for all the improvements that were now deemed necessary and proper for the state to make.
Find the resources, he proposed to Darden, for me to create a new center on the campus of the University of Virginia, and I will use this center to create a new school of political economy and social philosophy. It would be an academic center, rigorously so, but one with a quiet political agenda: to defeat the “perverted form” of liberalism that sought to destroy their way of life, “a social order,” as he described it, “built on individual liberty,” a term with its own coded meaning but one that Darden surely understood. The center, Buchanan promised, would train “a line of new thinkers” in how to argue against those seeking to impose an “increasing role of government in economic and social life.”
Buchanan fully understood the scale of the challenge he was undertaking and promised no immediate results. But he made clear that he would devote himself passionately to this cause.
Buchanan’s team had no discernible success in decreasing the federal government’s pressure on the South all the way through the 1960s and ’70s. But take a longer view—follow the story forward to the second decade of the twenty-first century—and a different picture emerges, one that is both a testament to Buchanan’s intellectual powers and, at the same time, the utterly chilling story of the ideological origins of the single most powerful and least understood threat to democracy today: the attempt by the billionaire-backed radical right to undo democratic governance.
A quest that began as a quiet attempt to prevent the state of Virginia from having to meet national democratic standards of fair treatment and equal protection under the law would, some sixty years later, become the veritable opposite of itself: a stealth bid to reverse-engineer all of America, at both the state and the national levels, back to the political economy and oligarchic governance of midcentury Virginia, minus the segregation.
The goal of all these actions was to destroy our institutions, or at least to change them so radically that they became shadows of their former selves.
This, then, is the true origin story of today’s well-heeled radical right, told through the intellectual arguments, goals, and actions of the man without whom this movement would represent yet another dead-end fantasy of the far right, incapable of doing serious damage to American society.
When I entered Buchanan’s personal office, part of a stately second-floor suite, I felt overwhelmed. There were papers stacked everywhere, in no discernible order. Not knowing where to begin, I decided to proceed clockwise, starting with a pile of correspondence that was resting, helter-skelter, on a chair to the left of the door. I picked it up and began to read. It contained confidential letters from 1997 and 1998 concerning Charles Koch’s investment of millions of dollars in Buchanan’s Center for Study of Public Choice and a flare-up that followed.
Catching my breath, I pulled up an empty chair and set to work. It took me time—a great deal of time—to piece together what these documents were telling me. They revealed how the program Buchanan had first established at the University of Virginia in 1956 and later relocated to George Mason University, the one meant to train a new generation of thinkers to push back against Brown and the changes in constitutional thought and federal policy that had enabled it, had become the research-and-design center for a much more audacious project, one that was national in scope. This project was no longer simply about training intellectuals for a battle of ideas; it was training operatives to staff the far-flung and purportedly separate, yet intricately connected, institutions funded by the Koch brothers and their now large network of fellow wealthy donors. These included the Cato Institute, the Heritage Foundation, Citizens for a Sound Economy, Americans for Prosperity, FreedomWorks, the Club for Growth, the State Policy Network, the Competitive Enterprise Institute, the Tax Foundation, the Reason Foundation, the Leadership Institute, and more, to say nothing of the Charles Koch Foundation and Koch Industries itself.
I learned how and why Charles Koch first became interested in Buchanan’s work in the early 1970s, called on his help with what became the Cato Institute, and worked with his team in various organizations. What became clear is that by the late 1990s, Koch had concluded that he’d finally found the set of ideas he had been seeking for at least a quarter century by then—ideas so groundbreaking, so thoroughly thought-out, so rigorously tight, that once put into operation, they could secure the transformation in American governance he wanted. From then on, Koch contributed generously to turning those ideas into his personal operational strategy to, as the team saw it, save capitalism from democracy—permanently.
In his first big gift to Buchanan’s program, Charles Koch signaled his desire for the work he funded to be conducted behind the backs of the majority. “Since we are greatly outnumbered,” Koch conceded to the assembled team, the movement could not win simply by persuasion. Instead, the cause’s insiders had to use their knowledge of “the rules of the game”—that game being how modern democratic governance works—“to create winning strategies.” A brilliant engineer with three degrees from MIT, Koch warned, “The failure to use our superior technology ensures failure.” Translation: the American people would not support their plans, so to win they had to work behind the scenes, using a covert strategy instead of open declaration of what they really wanted.
Future-oriented, Koch’s men (and they are, overwhelmingly, men) gave no thought to the fate of the historical trail they left unguarded. And thus, a movement that prided itself, even congratulated itself, on its ability to carry out a revolution below the radar of prying eyes (especially those of reporters) had failed to lock one crucial door: the front door to a house that let an academic archive rat like me, operating on a vague hunch, into the mind of the man who started it all.
What animated Buchanan, what became the laser focus of his deeply analytic mind, was the seemingly unfettered ability of an increasingly more powerful federal government to force individuals with wealth to pay for an increasing number of public goods and social programs they had had no personal say in approving. Better schools, newer textbooks, and more courses for black students might help the children, for example, but whose responsibility was it to pay for these improvements? The parents of these students? Others who wished voluntarily to help out? Or people like himself, compelled through increasing taxation to contribute to projects they did not wish to support? To Buchanan, what others described as taxation to advance social justice or the common good was nothing more than a modern version of mob attempts to take by force what the takers had no moral right to: the fruits of another person’s efforts. In his mind, to protect wealth was to protect the individual against a form of legally sanctioned gangsterism. Where did this gangsterism begin? Not in the way we might have expected him to explain it to Darden: with do-good politicians, aspiring attorneys seeking to make a name for themselves in constitutional law, or even activist judges. It began before that: with individuals, powerless on their own, who had figured out that if they joined together to form social movements, they could use their strength in numbers to move government officials to hear their concerns and act upon them.
The only fact that registered in his mind was the “collective” source of their power—and that, once formed, such movements tended to stick around, keeping tabs on government officials and sometimes using their numbers to vote out those who stopped responding to their needs. How was this fair to other individuals? How was this American?
Even when conservatives later gained the upper hand in American politics, Buchanan saw his idea of economic liberty pushed aside. Richard Nixon expanded government more than his predecessors had, with costly new agencies and regulations, among them a vast new Environmental Protection Agency. George Wallace, a candidate strongly identified with the South and with the right, nonetheless supported public spending that helped white people. Ronald Reagan talked the talk of small government, but in the end, the deficit ballooned during his eight years in office.
Had there not been someone else as deeply frustrated as Buchanan, as determined to fight the uphill fight, but in his case with much keener organizational acumen, the story this book tells would no doubt have been very different. But there was. His name was Charles Koch. An entrepreneurial genius who had multiplied the earnings of the corporation he inherited by a factor of at least one thousand, he, too, had an unrealized dream of liberty, of a capitalism all but free of governmental interference and, at least in his mind, thus able to achieve the prosperity and peace that only this form of capitalism could produce. The puzzle that preoccupied him was how to achieve this in a democracy where most people did not want what he did.
Ordinary electoral politics would never get Koch what he wanted. Passionate about ideas to the point of obsession, Charles Koch had worked for three decades to identify and groom the most promising libertarian thinkers in hopes of somehow finding a way to break the impasse. He subsidized and at one point even ran an obscure academic outfit called the Institute for Humane Studies in that quest. “I have supported so many hundreds of scholars” over the years, he once explained, “because, to me, this is an experimental process to find the best people and strategies.”
The goal of the cause, Buchanan announced to his associates, should no longer be to influence who makes the rules, to vest hopes in one party or candidate. The focus must shift from who rules to changing the rules. For liberty to thrive, Buchanan now argued, the cause must figure out how to put legal, indeed constitutional, shackles on public officials, shackles so powerful that no matter how sympathetic these officials might be to the will of majorities, no matter how concerned they were with their own reelections, they would no longer have the ability to respond to those who used their numbers to get government to do their bidding. There was a second, more diabolical aspect to the solution Buchanan proposed, one that we can now see influenced Koch's own thinking. Once these shackles were put in place, they had to be binding and permanent. The only way to ensure that the will of the majority could no longer influence representative government on core matters of political economy was through what he called "constitutional revolution."
By the late 1990s, Charles Koch realized that the thinker he was looking for—the one who understood how government became so powerful in the first place and how to take it down in order to free up capitalism, the one who grasped the need for stealth because only piecemeal, yet mutually reinforcing, assaults on the system would survive the prying eyes of the media—was James Buchanan.
The Koch team’s most important stealth move, and the one that proved most critical to success, was to wrest control over the machinery of the Republican Party, beginning in the late 1990s and with sharply escalating determination after 2008. From there it was just a short step to lay claim to being the true representatives of the party, declaring all others RINOS—Republicans in name only. But while these radicals of the right operate within the Republican Party and use that party as a delivery vehicle, make no mistake about it: the cadre’s loyalty is not to the Grand Old Party or its traditions or standard-bearers. Their loyalty is to their revolutionary cause.
Our trouble in grasping what has happened comes, in part, from our inherited way of seeing the political divide. Americans have been told for so long, from so many quarters, that political debate can be broken down into conservative versus liberal, pro-market versus pro-government, Republican versus Democrat, that it is hard to recognize that something more confounding is afoot, a shrewd long game blocked from our sight by these stale classifications.
The Republican Party is now in the control of a group of true believers for whom compromise is a dirty word. Their cause, they say, is liberty. But by that they mean the insulation of private property rights from the reach of government, and the takeover of what was long public (schools, prisons, western lands, and much more) by corporations, a system that would radically reduce the freedom of the many. In a nutshell, they aim to hollow out democratic resistance. And by its own lights, the cause is nearing success.
The 2016 election looked likely to bring a big presidential win with across-the-board benefits. The donor network had so much money and power at its disposal as the primary season began that every single Republican presidential front-runner was bowing to its agenda. Not a one would admit that climate change was a real problem or that guns weren’t good, and the more widely distributed, the better. Every one of them attacked public education and teachers’ unions and advocated more charter schools and even tax subsidies for religious schools. All called for radical changes in taxation and government spending. Each one claimed that Social Security and Medicare were in mortal crisis and that individual retirement and health savings accounts, presumably to be invested with Wall Street firms, were the best solution.
Although Trump himself may not fully understand what his victory signaled, it put him between two fundamentally different, and opposed, approaches to political economy, with real-life consequences for us all. One was in its heyday when Buchanan set to work. In economics, its standard-bearer was John Maynard Keynes, who believed that for a modern capitalist democracy to flourish, all must have a share in the economy’s benefits and in its governance. Markets had great virtues, Keynes knew—but also significant built-in flaws that only government had the capacity to correct.
As a historian, I know that his way of thinking, as implemented by elected officials during the Great Depression, saved liberal democracy in the United States from the rival challenges of fascism and Communism in the face of capitalism’s most cataclysmic collapse. And that it went on to shape a postwar order whose operating framework yielded ever more universal hope that, by acting together and levying taxes to support shared goals, life could be made better for all.
The most starkly opposed vision is that of Buchanan’s Virginia school. It teaches that all such talk of the common good has been a smoke screen for “takers” to exploit “makers,” in the language now current, using political coalitions to “vote themselves a living” instead of earning it by the sweat of their brows. Where Milton Friedman and F. A. Hayek allowed that public officials were earnestly trying to do right by the citizenry, even as they disputed the methods, Buchanan believed that government failed because of bad faith: because activists, voters, and officials alike used talk of the public interest to mask the pursuit of their own personal self-interest at others’ expense. His was a cynicism so toxic that, if widely believed, it could eat like acid at the foundations of civic life. And he went further by the 1970s, insisting that the people and their representatives must be permanently prevented from using public power as they had for so long. Manacles, as it were, must be put on their grasping hands.
Is what we are dealing with merely a social movement of the right whose radical ideas must eventually face public scrutiny and rise or fall on their merits? Or is this the story of something quite different, something never before seen in American history? Could it be—and I use these words quite hesitantly and carefully—a fifth-column assault on American democratic governance?
The term “fifth column” has been applied to stealth supporters of an enemy who assist by engaging in propaganda and even sabotage to prepare the way for its conquest.
This cause is different. Pushed by relatively small numbers of radical-right billionaires and millionaires who have become profoundly hostile to America’s modern system of government, an apparatus decades in the making, funded by those same billionaires and millionaires, has been working to undermine the normal governance of our democracy. Indeed, one such manifesto calls for a “hostile takeover” of Washington, D.C. That hostile takeover maneuvers very much like a fifth column, operating in a highly calculated fashion, more akin to an occupying force than to an open group engaged in the usual give-and-take of politics. The size of this force is enormous. The social scientists who have led scholars in researching the Koch network write that it “operates on the scale of a national U.S. political party” and employs more than three times as many people as the Republican committees had on their payrolls in 2015.
For all its fine phrases, what this cause really seeks is a return to oligarchy, to a world in which both economic and effective political power are to be concentrated in the hands of a few. It would like to reinstate the kind of political economy that prevailed in America at the opening of the twentieth century, when the mass disfranchisement of voters and the legal treatment of labor unions as illegitimate enabled large corporations and wealthy individuals to dominate Congress and most state governments alike, and to feel secure that the nation’s courts would not interfere with their reign. The first step toward understanding what this cause actually wants is to identify the deep lineage of its core ideas. And although its spokespersons would like you to believe they are disciples of James Madison, the leading architect of the U.S. Constitution, it is not true.
Their intellectual lodestar is John C. Calhoun. He developed his radical critique of democracy a generation after the nation’s founding, as the brutal economy of chattel slavery became entrenched in the South, and his vision horrified Madison.
Democracy in Chains: The Deep History of the Radical Right’s Stealth Plan for America
by Nancy MacLean
Nancy K. MacLean is an American historian. She is the William H. Chafe Professor of History and Public Policy at Duke University and the author of numerous books and articles on various aspects of twentieth-century United States history.
The G20 became the G19 as it ended. On the Paris climate accords the United States was left isolated and friendless.
It is, apparently, where this US President wants to be as he seeks to turn his nation inward.
Donald Trump has a particular, and limited, skill-set. He has correctly identified an illness at the heart of the Western democracy. But he has no cure for it and seems to just want to exploit it.
He is a character drawn from America’s wild west, a travelling medicine showman selling moonshine remedies that will kill the patient.
And this week he underlined he has neither the desire nor the capacity to lead the world.
Given the US was always going to be one out on climate change, a deft American President would have found an issue around which he could rally most of the leaders.
He had the perfect vehicle — North Korea’s missile tests.
So, where was the G20 statement condemning North Korea, one that would have put pressure on China and Russia? Other leaders expected it and were prepared to back it, but it never came.
There is a tendency among some hopeful souls to confuse the speeches written for Mr Trump with the thoughts of the man himself.
He did make some interesting, scripted, observations in Poland about defending the values of the West.
And Mr Trump is in a unique position — he is the one man who has the power to do something about it.
But it is the unscripted Mr Trump that is real. A man who barks out bile in 140 characters, who wastes his precious days as President at war with the West’s institutions — like the judiciary, independent government agencies and the free press.
Mr Trump is a man who craves power because it burnishes his celebrity. To be constantly talking and talked about is all that really matters. And there is no value placed on the meaning of words. So what is said one day can be discarded the next.
So, what did we learn this week?
We learned Mr Trump has pressed fast forward on the decline of the US as a global leader. He managed to diminish his nation and to confuse and alienate his allies.
He will cede that power to China and Russia — two authoritarian states that will forge a very different set of rules for the 21st century.
Some will cheer the decline of America, but I think we’ll miss it when it is gone.
And that is the biggest threat to the values of the West which he claims to hold so dear.
“Neoliberal economic policies have failed, and an important aspect of that failure has been that most of such new wealth as has been created has gone to the richest people in society.”
Jim Bolger, former NZ Prime Minister
Jim Bolger headed a government that set about cutting taxes and therefore public services, and weakening trade unions, policies often seen as the hallmarks of neo-liberalism, and that is to say nothing of Ruth Richardson and her boast of delivering “the mother of all budgets”.
It is beyond dispute that the countries which have enjoyed the best economic outcomes have been those – like the Scandinavian countries – which have at the same time most stoutly resisted the growth of inequality. As for the rest, the application of neo-liberal policies has meant a poorer economic performance, accompanied by greater social division.
We do not have to choose, in other words and as is so often asserted, between social justice and economic success. The former is an essential element in producing the latter and is not just a “luxury” we can do without.
Or, to put it in another way, the failure of neo-liberal policies is largely attributable to their inevitable tendency to exacerbate inequality and to foster a lack of concern for the less fortunate.
And a moment’s reflection will tell us why that is so. An economy will always be more successful if it engages with and uses all of its productive capacity – and that means its human resources – rather than leaving some of them under-used and undervalued.
The loss and damage we sustain, if we fail to take account of the interests of the whole of society, creates not only a weaker economy, but a more divided and unhappier society.
In today’s politics, it is the right that is ideologically driven while it is the left that constantly seeks merely pragmatic solutions to pressing problems. The left’s difficulties in attracting majority public support suggest that solutions to problems will stand a better chance of being accepted if they are seen to be grounded in a coherent analysis of what has gone wrong.
It may be that, in their anxiety to gain support from the “middle ground”, the left has too easily been frightened away from developing such an analysis. Surprisingly, they seem reluctant to engage in an ideological debate and prefer to leave the territory uncontested.
If Jim Bolger can do it, and link outcomes to policy frameworks, why not the left? But, if there were to be a next time, Jim, could you please see the light and find the road to Damascus a little sooner?
Neoliberalism: the deep story that lies beneath Donald Trump’s triumph.
The events that led to Donald Trump’s election started in England in 1975. At a meeting a few months after Margaret Thatcher became leader of the Conservative party, one of her colleagues, or so the story goes, was explaining what he saw as the core beliefs of conservatism. She snapped open her handbag, pulled out a dog-eared book, and slammed it on the table. “This is what we believe,” she said. A political revolution that would sweep the world had begun.
The book was The Constitution of Liberty by Friedrich Hayek. Its publication, in 1960, marked the transition from an honest, if extreme, philosophy to an outright racket. The philosophy was called neoliberalism. It saw competition as the defining characteristic of human relations. The market would discover a natural hierarchy of winners and losers, creating a more efficient system than could ever be devised through planning or by design. Anything that impeded this process, such as significant tax, regulation, trade union activity or state provision, was counter-productive. Unrestricted entrepreneurs would create the wealth that would trickle down to everyone.
This, at any rate, is how it was originally conceived. But by the time Hayek came to write The Constitution of Liberty, the network of lobbyists and thinkers he had founded was being lavishly funded by multimillionaires who saw the doctrine as a means of defending themselves against democracy. Not every aspect of the neoliberal programme advanced their interests. Hayek, it seems, set out to close the gap.
He begins the book by advancing the narrowest possible conception of liberty: an absence of coercion. He rejects such notions as political freedom, universal rights, human equality and the distribution of wealth, all of which, by restricting the behaviour of the wealthy and powerful, intrude on the absolute freedom from coercion he demands.
Democracy, by contrast, “is not an ultimate or absolute value”. In fact, liberty depends on preventing the majority from exercising choice over the direction that politics and society might take.
He justifies this position by creating a heroic narrative of extreme wealth. He conflates the economic elite, spending their money in new ways, with philosophical and scientific pioneers. Just as the political philosopher should be free to think the unthinkable, so the very rich should be free to do the undoable, without constraint by public interest or public opinion.
The ultra rich are “scouts”, “experimenting with new styles of living”, who blaze the trails that the rest of society will follow. The progress of society depends on the liberty of these “independents” to gain as much money as they want and spend it how they wish. All that is good and useful, therefore, arises from inequality. There should be no connection between merit and reward, no distinction made between earned and unearned income, and no limit to the rents they can charge.
Inherited wealth is more socially useful than earned wealth: “the idle rich”, who don’t have to work for their money, can devote themselves to influencing “fields of thought and opinion, of tastes and beliefs”. Even when they seem to be spending money on nothing but “aimless display”, they are in fact acting as society’s vanguard.
Hayek softened his opposition to monopolies and hardened his opposition to trade unions. He lambasted progressive taxation and attempts by the state to raise the general welfare of citizens. He insisted that there is “an overwhelming case against a free health service for all” and dismissed the conservation of natural resources. It should come as no surprise to those who follow such matters that he was awarded the Nobel prize for economics.
By the time Thatcher slammed his book on the table, a lively network of thinktanks, lobbyists and academics promoting Hayek’s doctrines had been established on both sides of the Atlantic, abundantly financed by some of the world’s richest people and businesses, including DuPont, General Electric, the Coors brewing company, Charles Koch, Richard Mellon Scaife, Lawrence Fertig, the William Volker Fund and the Earhart Foundation. Using psychology and linguistics to brilliant effect, the thinkers these people sponsored found the words and arguments required to turn Hayek’s anthem to the elite into a plausible political programme.
Thatcherism and Reaganism were not ideologies in their own right: they were just two faces of neoliberalism. Their massive tax cuts for the rich, crushing of trade unions, reduction in public housing, deregulation, privatisation, outsourcing and competition in public services were all proposed by Hayek and his disciples. But the real triumph of this network was not its capture of the right, but its colonisation of parties that once stood for everything Hayek detested.
Bill Clinton and Tony Blair did not possess a narrative of their own. Rather than develop a new political story, they thought it was sufficient to triangulate. In other words, they extracted a few elements of what their parties had once believed, mixed them with elements of what their opponents believed, and developed from this unlikely combination a “third way”.
It was inevitable that the blazing, insurrectionary confidence of neoliberalism would exert a stronger gravitational pull than the dying star of social democracy. Hayek’s triumph could be witnessed everywhere from Blair’s expansion of the private finance initiative to Clinton’s repeal of the Glass-Steagall Act, which had regulated the financial sector. For all his grace and touch, Barack Obama, who didn’t possess a narrative either (except “hope”), was slowly reeled in by those who owned the means of persuasion.
As I warned in April, the result is first disempowerment then disenfranchisement. If the dominant ideology stops governments from changing social outcomes, they can no longer respond to the needs of the electorate. Politics becomes irrelevant to people’s lives; debate is reduced to the jabber of a remote elite. The disenfranchised turn instead to a virulent anti-politics in which facts and arguments are replaced by slogans, symbols and sensation. The man who sank Hillary Clinton’s bid for the presidency was not Donald Trump. It was her husband.
The paradoxical result is that the backlash against neoliberalism’s crushing of political choice has elevated just the kind of man that Hayek worshipped. Trump, who has no coherent politics, is not a classic neoliberal. But he is the perfect representation of Hayek’s “independent”; the beneficiary of inherited wealth, unconstrained by common morality, whose gross predilections strike a new path that others may follow. The neoliberal thinktankers are now swarming round this hollow man, this empty vessel waiting to be filled by those who know what they want. The likely result is the demolition of our remaining decencies, beginning with the agreement to limit global warming.
Those who tell the stories run the world. Politics has failed through a lack of competing narratives. The key task now is to tell a new story of what it is to be a human in the 21st century. It must be as appealing to some who have voted for Trump and Ukip as it is to the supporters of Clinton, Bernie Sanders or Jeremy Corbyn.
A few of us have been working on this, and can discern what may be the beginning of a story. It’s too early to say much yet, but at its core is the recognition that – as modern psychology and neuroscience make abundantly clear – human beings, by comparison with any other animals, are both remarkably social and remarkably unselfish. The atomisation and self-interested behaviour neoliberalism promotes run counter to much of what comprises human nature.
Hayek told us who we are, and he was wrong. Our first step is to reclaim our humanity.
JUST LIKE NEW ZEALAND’S, LABOUR NEEDS TO WISE UP!
Europe today is in crisis. Economically, much of the continent suffers from low growth, high unemployment and rising inequality, while politically, disillusionment with the European community as well as domestic institutions and elites is widespread. Partially as a result, right-wing populism is growing, increasing political instability and uncertainty even further. Although many have noted a correlation between the rise of populism and the decline of the social democratic or centre-left, the causal relationship between them has not been sufficiently stressed. Indeed, to a large degree the failures of the latter explain the surprising popularity of the former.
The historical role of the centre or social democratic left
Although the decline of social democracy and the rise of populism have become particularly noticeable since the financial crisis that began in 2008, the roots of both lie much earlier, in the 1970s. During this decade economic and social/cultural changes began unsettling long-standing voting and political patterns. Economically, the postwar order was running out of steam, and a noxious mix of unemployment and inflation hit Europe. However, social democrats lacked well thought out plans for getting economies moving again or for using the democratic state to protect citizens from the changes brought by ever-evolving capitalism.
Such plans, of course, had been precisely what social democracy had offered after 1945. Back then, social democrats had not only insisted that it was possible to reform and even improve capitalism – they devised concrete policy proposals for accomplishing this task. These policies enabled governments to contain and cushion the most destructive and destabilising consequences of markets without fettering them entirely. In contrast, during the late twentieth and early twenty-first centuries, social democrats offered either rearguard defences of socioeconomic policies that may have made sense decades ago but which are now out of touch with the realities of a changing global economy, or else watered-down versions of neoliberalism (such as the English “Third Way” or the German “Neue Mitte”) that left many citizens wondering why they should bother to vote for the social democratic or centre-left at all.
The absence of a distinctive, effective social democratic response to economic problems allowed a neoliberal right that had been organising and thinking about what it saw as the drawbacks of the postwar order to begin freeing capitalism from many of the restrictions that had been placed on it beginning in the 1970s. And this unfettered capitalism, in turn, not only helped create the financial crisis of the early twenty-first century, it also drove many voters to the populist right which explicitly promised to rein it in and protect “true” citizens from its harshest effects.
At the same time that European economies were changing, so were European societies. Social and cultural changes unleashed in the late 1960s threatened traditional identities, communities and mores, a process further exacerbated by growing immigration. Together these trends helped erode the social solidarity and sense of shared national purpose that had supported the social democratic postwar order and helped to stabilise European democracies in the decades following the Second World War.
Historically, social democrats recognised and indeed promoted social solidarity and a sense of shared national purpose, identifying these as necessary to the legitimacy of high taxes and a strong welfare state. During the last decades of the twentieth century, however, this basic fact was all too often forgotten or wished away by a centre or social democratic left that lacked distinctive, effective responses to the social, cultural and demographic changes that weakened the sense of solidarity and shared national purpose across one European country after another.
The absence of a distinctive, effective social democratic response to growing diversity allowed the extreme or multicultural left to become the loudest left-wing voice on this issue. This camp tends to see society as divided into irreconcilable groups, with different values and traditions all around. Efforts to find common ground or ease differences, in this view, are undesirable and counterproductive.
This emphasis on the “politics of recognition” – as opposed to the centre-left’s traditional emphasis on the “politics of redistribution” – was bad for the left and bad for democracy. It led many intellectuals away from a focus on economic issues and fragmented the left in a way that makes it hard to build majority coalitions and win elections. It also makes it almost impossible to generate the social solidarity or shared sense of national purpose that is necessary to support the rest of the centre-left agenda or healthy democracy more generally. And of course, a stress on the primacy of racial, religious, or sexual identity over class or even national identity, along with the implicit and often explicit denigration of those worried about the rapidly changing nature of their societies, has also helped to drive many voters to the nationalist, populist right.
The current crisis
It is now fairly commonplace to note the support given by traditionally left or social democratic voters to the populist right. This connection was on obvious display in the Brexit referendum, where many traditional Labour strongholds and supporters voted to leave the EU, and it has been a prominent feature of elections across the continent as working-class voters in particular have flocked to right-wing populist parties. And of course, a version of this was present in the United States, where Donald Trump garnered disproportionate support from less-educated and working-class voters. What is still worth stressing, however, is the causal connection between the failures or missteps of the centre or social democratic left and the rise of right-wing populism.
During the decades following the Second World War, centre-left and social democratic parties offered attractive solutions to the economic and social challenges facing European democracies. They promised citizens an economic order that neither erased capitalism (as many on the far left desired) nor gave it free rein (as classical liberals and contemporary neoliberals favour). Instead, they promised citizens the benefits of capitalist economic dynamism and innovation as well as to shield them from capitalism’s sometimes destructive effects.
The centre or social democratic left also promoted social solidarity and a sense of national purpose – welfare states would protect the health and well-being of all citizens and government would commit itself to creating an equal and prosperous society that benefited all. By the last decades of the twentieth century, however, the centre or social democratic left no longer had convincing responses to the most pressing economic and social challenges facing European societies, and voters accordingly began looking for other political alternatives.
For many former or traditionally left voters, the most attractive alternative turned out to be the populist right, which offered simple, straightforward solutions to citizens’ economic and social fears. Economically, the populist right promises to promote prosperity, via increased government control of the economy and limits on globalisation. Socially, the populist right promises to restore social solidarity and a sense of shared national purpose, by expelling foreigners or severely limiting immigration; diminishing the influence of the European Union, and protecting traditional values, identities and mores.
For those who bemoan the rise of the populist right, the challenge is clear: you can’t beat something with nothing and if the left can’t come up with more viable and attractive solutions to contemporary problems than those offered by its competitors it can expect to continue its slide into the dustheap of history.
John Key’s legacy will not be defined by great policy achievements; it’s his success as the model of a neoliberal leader, a poster boy for trickle-down economics, that he will be remembered for.
Key presided over increasing and gross social inequality.
Like another poster boy for trickle-down economics, Tony Blair, the New Zealand prime minister had the Teflon gene. Even while presiding over record levels of child poverty, his popularity remained high.
Despite ignoring public opposition to privatising state-owned enterprises, increasing the GST, and more-or-less ignoring New Zealand’s chronic child poverty because he blamed the victims, none of it stuck.
Only because the average greedy Kiwi is not only politically naive but has also succumbed to neoliberal doctrine and decided: ‘To hell with everybody else, I’m getting as much as I can for me, me, me.’
Key was like a Tony Blair of the South Seas: a certain level of personal charisma and a socially inclusive façade allowed both Key and Blair to sell the nasty side of neoliberalism.
Never mind the hundreds of thousands of children living under the poverty line in New Zealand, a country of only four million, and him brushing off the recommendations of the government panel charged with improving their lot; Key was seen as a good guy and a safe pair of hands.
Key was a person who fitted the narrative of neoliberalism perfectly; he was a man of his time. He came of age at a time when a neoliberal coup turned the more-or-less socialist mixed economy of New Zealand on its head.
As financial markets were deregulated and the Keynesian social consensus dismantled, Key began his ascendency to banking-money heaven.
The heart of the Key narrative, like the Trump narrative, is money.
They both have a personal story about business acumen and the notion that making money is the high art of society and the hallmark of good character.
It’s no coincidence that this “good with money” story has found such great traction and continues to propel wealthy business people into political power. In New Zealand, and many other countries, including Australia, the UK and the US, there’s a powerful narrative that says that running a government is very much like running a company; you must balance the books first of all.
Equating the values of entrepreneurship and fiscal discipline with the judgment required to legislate in the public interest is crude nonsense.
But this money story not only has currency, it is the currency of the reigning monetarist fiscal discourse.
Many of the opponents of neoliberalism, including those who tried to unseat Key, still haven’t figured out how to counter the “money story”.
Labour parties around the world have long been experiencing an identity crisis: they are divided between their complicity in creating a neoliberal society, their adoption of Keynesian responses to the global financial crisis and the ideological opposition to neoliberalism among their ranks, in the form of, for example, Jeremy Corbyn and Bernie Sanders.
Until you pull that money story apart, and New Zealand Labour still need to do this, people do buy into it, and they kept voting Key in because they believed in the equation that “good with money” equals “morally upstanding”.
People don’t want this bubble popped.
As economic and political power have once again moved into the hands of a relative few large corporations and wealthy individuals, “freedom” is again being used to justify the multitude of ways they entrench and enlarge that power by influencing the rules of the game.
These include escalating campaign contributions, as well as burgeoning “independent” campaign expenditures, often in the form of negative advertising targeting candidates whom they oppose; growing lobbying prowess, both in Washington and in state capitals; platoons of lawyers and paid experts to defend against or mount lawsuits, so that courts interpret the laws in ways that favor them; additional lawyers and experts to push their agendas in agency rule-making proceedings; the prospect of (or outright offers of) lucrative private-sector jobs for public officials who define or enforce the rules in ways that benefit them; public relations campaigns designed to convince the public of the truth and wisdom of policies they support and the falsity and deficiency of policies they don’t; think tanks and sponsored research that confirm their positions; and ownership of, or economic influence over, media outlets that further promote their goals.
Robert Reich, from his book ‘Saving Capitalism’
People who follow and support you pretty much know what the policy is going to look like, Gareth. Good luck; this will hopefully turn out to be a defining moment in New Zealand’s political history. We urgently need to rethink our socio-economic policies for the betterment of the 90% and the long-term survival of our economy.
The Lesson of Neoliberalism.
Some argue that these days, it hardly matters anymore who you vote for. Though we still have a right and a left, neither side seems to have a very clear plan for the future.
In an ironic twist of fate, the neoliberal brainchild of two men who devoutly believed in the power of ideas (Hayek and Friedman) has now put a lockdown on the development of new ones. It would seem that we have arrived at “the end of history,” with liberal democracy as the last stop and the “free consumer” as the terminus of our species.
By the time Milton Friedman was named president of the Mont Pèlerin Society in 1970, most of its philosophers and historians had already decamped, the debates having become overly technical and economic. In hindsight, Friedman’s arrival marked the dawn of an era in which economists would become the leading thinkers of the Western world. We are still in that era today.
We inhabit a world of managers and technocrats. “Let’s just concentrate on solving the problems,” they say. “Let’s just focus on making ends meet.” Political decisions are continually presented as a matter of exigency – as neutral and objective events, as though there were no other choice.
John Maynard Keynes observed this tendency emerging even in his own day. “Practical men, who believe themselves to be quite exempt from any intellectual influences,” he wrote, “are usually the slaves of some defunct economist.”
When Lehman Brothers collapsed on September 15, 2008, and inaugurated the biggest crisis since the 1930s, there were no real alternatives to hand. No one had laid the groundwork. For years, intellectuals, journalists, and politicians had all firmly maintained that we’d reached the end of the age of “big narratives” and that it was time to trade in ideologies for pragmatism.
Naturally, we should still take pride in the liberty that generations before us fought for and won. But the question is, what is the value of free speech when we no longer have anything worthwhile to say? What’s the point of freedom of association when we no longer feel any sense of affiliation? What purpose does freedom of religion serve when we no longer believe in anything?
On the one hand, the world is still getting richer, safer, and healthier. Every day, more and more people are arriving in Cockaigne. That’s a huge triumph. On the other hand, it’s high time that we, the inhabitants of the Land of Plenty, stake out a new utopia. Let’s rehoist the sails. “Progress is the realisation of Utopias,” Oscar Wilde wrote many years ago. A 15-hour workweek, universal basic income, and a world without borders… They’re all crazy dreams – but for how much longer?
People now doubt that “human ideas and beliefs are the main movers of history,” as Hayek argued back when neoliberalism was still in its infancy. “We all find it so difficult to imagine that our beliefs might be different from what they in fact are.” It could easily take a generation, he asserted, before new ideas prevail. For this very reason, we need thinkers who not only are patient, but also have “the courage to be ‘utopian.’” Let this be the lesson of Mont Pèlerin. Let this be the mantra of everyone who dreams of a better world, so that we don’t once again hear the clock strike midnight and find ourselves just sitting around, empty-handed, waiting for an extraterrestrial salvation that will never come.
Ideas, however outrageous, have changed the world, and they will again. “Indeed,” wrote Keynes, “the world is ruled by little else.”
Rutger Bregman, from his book ‘Utopia for Realists’
Three weeks ago Bill English was crowing about the government’s books being in surplus. Today I read of 30 people going blind in Southland while waiting for eye operations.
English was born in Lumsden in Southland. He represented Clutha-Southland for many years before he became a list MP. You’d think he might be interested in this problem.
Yet the talk is of tax cuts rather than increasing the health budgets of DHBs whose patients are going blind waiting for surgery.
In the end all economic decisions are moral decisions. How you spend your money and what you spend it on reveals a lot about your core beliefs and moral values.
Neoliberal economic theory, introduced by Labour and put on steroids by National, is morally bankrupt. It has blinded many of our politicians to the fact that allowing a few people to get very rich at the expense of the many is taking us back to the nineteenth century and the days when if you could afford a doctor you got treated. If not, you suffered.
We have got this all wrong, folks. We need to decide what kind of society we want and then figure out how to pay for it, not live in the society some self-centred economic theory wants us to have.
As Nobel Prize Winning economist Prof Muhammad Yunus astutely observed during an interview with me for MIND THE GAP …
“We follow the theory. Theory should be following us!”
Conservative criticism of the old nanny state hits the nail on the head.
The current tangle of red tape keeps people trapped in poverty. It actually produces dependence.
Whereas employees are expected to demonstrate their strengths, social services expects claimants to demonstrate their shortcomings; to prove over and over that an illness is sufficiently debilitating, that a depression is sufficiently bleak, and that chances of getting hired are sufficiently slim. Otherwise your benefits are cut.
Forms, interviews, checks, appeals, assessments, consultations, and then still more forms – every application for assistance has its own debasing, money-guzzling protocol.
“It tramples on privacy and self-respect in a way inconceivable to anyone outside the benefit system. It creates a noxious fog of suspicion,” said a British social services worker.
Rutger Bregman, from his book ‘Utopia for Realists’
We need to scrap our whole money-guzzling welfare state and adopt a Universal Basic Income for everyone over 18. It’s the only way of saving our economy in the face of an ageing population and available jobs disappearing due to automation. Here in New Zealand we will have 2.5 working people to every pensioner in twenty years. What, are these people going to be paying 100 percent tax?
Who’s winning the “class struggle” between business and workers? Bryce Edwards
New Zealand is the best place in the world to do business. This is declared today in the World Bank annual report, “Doing Business 2017” – see Simon Maude’s World Bank names NZ best country for business. There are many factors in New Zealand’s reputation for ease of doing business, and of course the flexibility of the labour market, with very limited employment regulation, is one of the well-known benefits for business here. But can “what is good for business” also be “good for workers”? Or does the success of business come at the expense of workers? Or is the “class struggle” dead, along with a union movement that is struggling for relevance? NZ Herald
“Debt is a cleverly managed reconquest of Africa.” “He who feeds you, controls you.”
Even at gunpoint, I refuse to accept that borrowing money from the IMF to the detriment of Zambia’s future is part of their mandate. Their mandate is to think of alternative sustainable ways to resuscitate the economy, as opposed to rushing to the IMF for a bailout that strangles the country’s future, especially for the poorest citizens. A government that cannot think of alternative ways to regrow the economy apart from borrowing from these money-lending institutions is not fit to hold public office. It is now clear, our ministers are appointed, not to think on behalf of the ministries they lead, but to ceremonially occupy such positions while shamelessly enjoying free housing, transport, electricity, airtime, state security and gallivanting around the globe at the expense of taxpayers’ money. Lusaka Times
I like to treat neoliberalism not as some kind of coherent political philosophy, but more as a set of interconnected ideas that have become commonplace in much of our discourse. That the private sector entrepreneur is the wealth creator, and the state typically just gets in their way. That what is good for business is good for the economy, even when it increases monopoly power or involves rent seeking. Interference in business or the market, by governments or unions, is always bad. And so on. …
I do not think austerity could have happened on the scale that it did without this dominance of the neoliberal ethos. Mark Blyth has described austerity as the biggest bait and switch in history. It took two forms. In one, the financial crisis, caused by an under-regulated financial sector lending too much, led to bank bailouts that increased public sector debt. This led to an outcry about public debt, rather than about the financial sector. In the other, the financial crisis caused a deep recession which – as it always does – created a large budget deficit. “Spending like drunken sailors” went the cry; we must have austerity now.
In both cases the nature of what was going on was pretty obvious to anyone who bothered to find out the facts. That so few did so, which meant that the media largely went with the austerity narrative, can be partly explained by a neoliberal ethos. Having spent years seeing the big banks lauded as wealth-creating titans, it was difficult for many to comprehend that their basic business model was fundamentally flawed and required a huge implicit state subsidy. On the other hand, they found it much easier to imagine that past minor indiscretions by governments were the cause of a full-blown debt crisis. …
While in this sense austerity might have been a useful distraction from the problems with neoliberalism made clear by the financial crisis, I think a more important political motive was that it appeared to enable the more rapid accomplishment of a key neoliberal goal: shrinking the state. It is no coincidence that austerity typically involved cuts in spending rather than higher taxes… In that sense too austerity goes naturally with neoliberalism. …
An interesting question is whether the same applies to right wing governments in the UK and US that used immigration/race as a tactic for winning power. We now know for sure, with both Brexit and Trump, how destructive and dangerous that tactic can be. As even the neoliberal fantasists who voted Leave are finding out, Brexit is a major setback for neoliberalism. Not only is it directly bad for business, it involves (for both trade and migration) a large increase in bureaucratic interference in market processes. To the extent she wants to take us back to the 1950s, Theresa May’s brand of conservatism may be very different from Margaret Thatcher’s neoliberal philosophy.
The massive, gaudy houses lining the streets of America’s upscale suburbs began to look like the epitome of bad taste and poor judgement once the foreclosure crisis hit. The writer behind the blog “McMansion Hell” tells why they’ll eventually be gone for good. Huffington Post
Bill English is announcing a $1.8 Billion surplus.
Yet… we have families living in cars.
Well-documented research shows our health system is desperately underfunded. We have thousands of school kids being fed by charity.
And what is English talking about? Tax breaks! It’s time to ask…
What’s an economy for? To benefit a select few? Or deliver the greatest good to the greatest number of our citizens over the longest time?
If the National-led government also collected the estimated $5 billion in tax evasion that happens every year, we would have plenty of money to provide good housing and health care for everyone.
That’s the kind of break I’d like to see.
The prize matters to everyone, because of market liberalism, which advocates marketisation, deregulation, union-busting, financialisation, inequality, outsourcing of healthcare, pensions and education, low taxes for the rich, and globalisation. In the 1990s, this rightwing platform was endorsed by New Labour, Clinton Democrats, and their equivalents elsewhere.
Like market liberalism, economics regards buying and selling in markets as the template for human relations and claims that market choices scale up to the social good. But the doctrines of economics are not well founded: premises are unrealistic, models inconsistent, predictions often wrong. The halo of the prize has lent credibility to policies that harm society, to inequality and financial disorder.
In the meantime, the me-first assumptions of economics have led to corruption and tax inequity, and an escalating public mistrust of governing elites. Valid economic doctrine has come into disrepute. Disdain for experts and disaffection with economic reasoning have energised a politics of the excluded: of Jeremy Corbyn, Bernie Sanders, Marine Le Pen, Donald Trump and now Brexit. The Guardian