Category Archives: Capitalism

The Economists Who Stole Christmas – Yanis Varoufakis. 

To welcome the New Year with a cheeky take on the clash of economic ideologies, how might opposing camps’ representatives view Christmas presents? Levity aside, the answer reveals the pomposity and vacuity of each and every economic theory.

Neoclassicists: Given their view of individuals as utility-maximizing algorithms, and their obsession with a paradigm of purely utility-driven transactions, neoclassical economists can see no point in such a fundamentally inefficient form of exchange as Christmas gift-giving. When Jill receives a present from Jack that cost him $X, but which gives her less utility than she would gain from commodity Y, which retails for $Y (that is less than or equal to $X), Jill is forced either to accept this utility loss or to undertake the costly and usually imperfect business of exchanging Jack’s gift for Y. Either way, there is a deadweight loss involved.
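The neoclassical argument above is just a comparison of two small losses, which can be sketched in a few lines of Python. All of the numbers here are invented for illustration; only the logic (keep the gift and lose utility, or pay to exchange it) comes from the text:

```python
# Hypothetical figures illustrating the "deadweight loss" of gift-giving.
price_of_gift = 50.0     # $X: what Jack paid for the present
utility_to_jill = 30.0   # dollar-equivalent utility Jill gets from it
best_alternative = 45.0  # utility of commodity Y, retailing at $Y <= $X
exchange_cost = 8.0      # cost of the "costly and imperfect" exchange

keep_loss = best_alternative - utility_to_jill  # loss if Jill keeps the gift
swap_loss = exchange_cost                       # loss if she exchanges it for Y
deadweight_loss = min(keep_loss, swap_loss)     # she chooses the smaller loss

print(f"Deadweight loss: ${deadweight_loss:.2f}")
```

Either way the loss is positive, which is the whole neoclassical complaint; only an envelope of cash drives it to zero.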

In this sense, the only efficient gift is an envelope of cash. But, because Christmas is about exchanging gifts, as opposed to one-sided offerings, what would be the purpose in Jack and Jill exchanging envelopes of cash? If they contain the same amounts, the exercise is pointless. If not, the exchange is embarrassing to the person who has given less and can damage Jack and Jill’s relationship irreparably. The neoclassicist thus endorses the Scrooge hypothesis: the best gift is no gift.

Keynesians: To prevent recessions from turning into depressions, a fall in aggregate demand must be reversed through increased investment, which requires that entrepreneurs believe that increased consumption will mop up the additional output that new investments will bring about. The neoclassical elimination of Christmas gift exchange, or even the containment of Christmas largesse, would be disastrous during recessionary periods.

Indeed, Keynesians might go so far as to argue that it is the government’s job to encourage gift exchanges (as long as the gifts are purchased, rather than handcrafted or home produced), and even to subsidize gift giving by reducing sales taxes during the holiday season. And why stop at just one holiday season? During recessionary times, two or three Christmases might be advisable (preferably spaced out during the year).

But Keynesians also stress the importance of reining in the government deficit, as well as overall consumption, when the economy is booming. To that end, they might recommend a special gift or sales tax during the festive season once growth has recovered, or even canceling Christmas when the pace of GDP growth exceeds that consistent with full employment.

Monetarists: Convinced that the money supply should be the government’s sole economic-policy tool, and that it should be used solely to maintain price stability by equilibrating the money supply vis-à-vis aggregate production, monetarists would have the central bank gradually increase nominal interest rates once summer ends and reduce them sharply every January. The changes in nominal interest rates they recommend depend on the central bank’s inflation target and the economy’s underlying real interest rate, and must reflect the rates necessary to keep the pace of change in consumption demand and large retailers’ inventories balanced. (Yes, it’s true: monetarists are the dullest economists ever to have walked the planet!)

Rational Expectations: These Chicago School economists disagree with both Keynesians and monetarists. Unlike the Keynesians, they think a fiscal stimulus of Christmas gift spending in recessionary festive seasons will not encourage gift producers to boost output. Entrepreneurs will not be fooled by government intervention, and will foresee that the current increase in demand for gifts will be offset in the long run by a sharp drop (as government subsidies turn into increased taxation and fewer Christmases are observed during the good times). With output and employment remaining flat, government subsidies and additional Christmases will merely produce more debt and higher prices.

Austrian School libertarians: Supporters of Friedrich von Hayek and Ludwig von Mises have two major objections to Christmas. First, there is the illiberal aspect of the holiday season: the state has no right, and no reason, to force entrepreneurs to close down against their will for four days (December 25 and 26, and January 1 and 2) over the course of a fortnight. Second, the ever-lengthening pre-Christmas consumption boom tends to expand credit, thus causing bubbles in the toy and electronics market during the fall that will burst in January, with potentially damaging consequences for the rest of the year.

Empiricists: Convinced that observation is our only tool against economic ignorance, empiricists are certain that the only defensible theoretical propositions are those derived from discerning patterns whereby changes in exogenous variables consistently precede changes in endogenous variables, thus establishing empirically (for example, through Granger tests) the direction of causality. This perspective leads empiricists to the safe conclusion that Christmas, and a spurt in gift exchanges, is caused by a prior increase in the money supply and, ceteris paribus, a drop in savings.

Marxists: In societies in which profit is derived exclusively from surplus value “donated” (as part of the capitalist labor process) by workers, and which reflects the capitalists’ extractive power (bequeathed to them by one-sided property rights over the means of production), the Christmas tradition of gift exchange packs dialectical significance.

On one hand, Christmas gift giving is an oasis of non-market exchange that points to the possibility of a non-capitalist system of distribution. On the other hand, it offers capital another opportunity to harness humanity’s finest instincts to profit maximization, through the commodification of all that is pure and good about the festive season. And purists – those who still defend the “law of the falling (long-term) rate of profit” – would say that capital’s capacity to profit from Christmas diminishes from year to year, thus giving rise to social and political forces which, in the long run, will undermine the festive season.

Obviously, none of these theories can possibly account for why people participate, year in and year out, in the ritual of holiday gift giving. For that, we should be grateful.


Yanis Varoufakis, a former finance minister of Greece, is Professor of Economics at the University of Athens.

Project Syndicate 

Capitalism without Capital: The Rise of the Intangible Economy – Jonathan Haskel and Stian Westlake. 


Valuation, the Old-Fashioned Way: or, a Thousand Years in Essex. 

Colin Matthews was vexed. To have valuers crawling all over his airport was the last thing he wanted, but it could no longer be stopped. It was the summer of 2012. For three years he had been fighting the UK competition authorities’ attempts to break up British Airports Authority (BAA), the company he ran and which owned most of Britain’s large airports.

He had exhausted his legal options and was giving up. So now the men and women with suits and spreadsheets and high-viz vests were going round his airports, working out how much they were worth to potential buyers. Accountants and lawyers and surveyors and engineers measured and counted, and bit by bit, they came up with a value for the whole of Stansted, Britain’s fourth-busiest airport, to the northeast of London.

They priced up the tarmac, the terminal, the baggage equipment. There was an agreed value for the parking lots, the bus station, and the airport hotel. There was some argument about the underground fuel pumps, but the calculation was not out of the ordinary for BAA’s accountants: the cost of the asset less its depreciation, with some adjustment for inflation.
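The accountants’ rule of thumb described here (historic cost, less depreciation, with an adjustment for inflation) can be sketched as a small function. The figures, useful life, and straight-line depreciation schedule below are all invented for illustration, not taken from BAA’s books:

```python
# A minimal sketch of the book-value rule: historic cost less straight-line
# depreciation, restated in today's prices. All figures are hypothetical.

def book_value(cost, age_years, useful_life_years, annual_inflation):
    """Value an asset at inflation-adjusted cost less straight-line depreciation."""
    remaining_fraction = max(0.0, 1.0 - age_years / useful_life_years)
    inflation_adjusted_cost = cost * (1 + annual_inflation) ** age_years
    return inflation_adjusted_cost * remaining_fraction

# e.g. a £10m baggage system, 5 years into a 20-year life, with 2% inflation
value = book_value(10_000_000, 5, 20, 0.02)
print(f"£{value:,.0f}")
```

The point of the passage is that this calculation is routine: every input is observable, so reasonable valuers converge on similar answers.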

Sure enough, when Stansted was sold in 2013 (for £1.5 billion), the price was pretty close to what the accountants had valued the business at.

In one sense, the valuation of Stansted looked like a quintessentially twenty-first-century scene. There was the airport itself. What could be a better emblem of globalized high modernity than an airport? There was the troupe of accountants and lawyers, those ubiquitous servants of financial capitalism. And, of course, there was the economic logic of the process: from the privatization that put BAA in the private sector in the first place, to the competition policy that caused the breakup, to the infrastructure funds that circled to buy the assets after breakup; all very modern.

But at the same time, the valuation of Stansted was the kind of thing that had been going on for centuries. The business of working out how much something was worth by counting up and measuring physical stuff has a long and noble tradition.

Nine and a quarter centuries before, Stansted, then just another country village, had played host to a similar scene. Reeves and messengers, the eleventh-century forerunners of the accountants and lawyers that had so vexed Colin Matthews, had converged on the place to assess its value for Domesday Book, the vast survey of England’s wealth carried out by William the Conqueror. Using tally-sticks rather than laptops, they carried out their own valuation. They talked to people and counted things. They recorded that Stansted had a mill, sixteen cows, sixty pigs, and three slaves. Then, on the basis of what they had counted and measured, they valued the manor of Stansted at £11 per year.

And although the value they put on the medieval village of Stansted was rather less than the £1.5 billion BAA got for selling the airport in 2013, the reeves and envoys who did the measuring for William the Conqueror were doing something fundamentally similar to what Colin Matthews’s accountants were doing.

For centuries, when people wanted to measure how much something ought to be worth—an estate, a farm, a business, a country—they counted and measured physical stuff. In particular, they measured things with lasting value. These things became the fixed assets on accountants’ balance sheets and the investments that economists and national statisticians counted up in their attempts to understand economic growth.

Over time, the nature of these assets and investments changed: fields and oxen became less important, animals gave way to machinery and factories and vehicles and computers. But the idea that assets are for the most part things you could touch, and that investment means building or buying physical things, was as true for twentieth-century accountants and economists as it was for the scribes of Domesday Book.

Why Investment Matters

The nature of investment is important to all sorts of people, from bankers to managers. Economists are no exception: investment occupies a central place in much economic thought. Investment is what builds up capital, which, together with labor, constitutes the two measured inputs to production that power the economy, the sinews and joints that make the economy work.

Gross domestic product is defined as the sum of the value of consumption, investment, government spending, and net exports.

Of these four, investment is often the driver of booms and recessions, as it tends to rise and fall more dramatically in response to monetary policy and business confidence.
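The expenditure identity just stated, GDP = C + I + G + (X − M), can be made concrete with toy numbers. The figures below are invented purely for illustration:

```python
# GDP by the expenditure approach: Y = C + I + G + NX.
# Hypothetical figures in £bn.
consumption = 1300.0
investment = 350.0
government_spending = 450.0
exports, imports = 600.0, 640.0

net_exports = exports - imports
gdp = consumption + investment + government_spending + net_exports
print(f"GDP: £{gdp:.0f}bn")
```

In this toy economy investment is the second-smallest of the four components, yet, as the text notes, it is the one whose swings typically drive booms and recessions.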

The investment element of GDP is where the animal spirits of the economy bark, and where a recession first bites. As a result, the statisticians whose job it is to work out national income have put long and sustained efforts into measuring how much businesses invest, year after year, quarter after quarter. Since the 1950s, national statistical agencies have sent out regular questionnaires to businesses to find out how much businesses are investing. Periodic studies are done to understand how long particular assets last and, especially for high-tech investments like computers, how much they are improving over time.

Until very recently, the investments that national statistical offices measured were all tangible assets. Although these investments represented the modern age in all its industrial glory (in 2015 in the UK, for example, businesses invested £78bn in new buildings; £60bn in IT, plant, and machinery; and £17bn in vehicles), the basic principle that investment was about physical goods would have made sense to William the Conqueror’s reeves.

The Dark Matter of Investment

But, of course, the economy does not run on tangible investment alone. Stansted Airport, for example, owned not just tarmac and terminals and trucks, but also things that were harder to see or touch: complex software; valuable agreements with airlines and retailers; internal know-how. All these things had taken time and money to build up and had a lasting value to whoever owned the airport, but they consisted not of physical stuff but of ideas, knowledge, and social relations. In the language of economists, they were intangible.

The idea that an economy might come to depend on things that were immaterial was an old one. Futurists like Alvin Toffler and Daniel Bell had begun to talk about the “post-industrial” future as long ago as the 1960s and 1970s. As the power of computers and the Internet became more apparent in the 1990s, the idea that immaterial things were economically important became increasingly widely accepted. Sociologists talked of a “network society” and a “post-Fordist” economy. Business gurus urged managers to think about how to thrive in a knowledge economy. Economists began to think about how research and development and the ideas that resulted from it might be incorporated into their models of economic growth, an economy parsimoniously encapsulated by the title of Diane Coyle’s book The Weightless World.

Authors like Charles Leadbeater suggested we might soon be “living on thin air.”

The bursting of the dot-com bubble in 2000 dampened some of the wilder claims about a new economy, but research continued among economists to understand what exactly was changing.

It was in this context that a group of economists assembled in Washington in 2002 at a meeting of the Conference on Research in Income and Wealth to think about how exactly to measure the types of investment that people were making in what they were calling “the new economy.” At this conference and afterwards, Carol Corrado and Dan Sichel of the US Federal Reserve Board and Charles Hulten of the University of Maryland developed a framework for thinking about different types of investment in the new economy.

To get an idea of what these sorts of investment are, consider the most valuable company in the world at the time of the conference: Microsoft. Microsoft’s market value in 2006 was around $250bn. If you looked at Microsoft’s balance sheet, which records its assets, you would find a valuation of around $70bn, $60bn of which was cash and various financial instruments. The traditional assets of plant and equipment were only $3bn, a trifling 4 percent of Microsoft’s assets and 1 percent of its market value.

By the conventional accounting of assets then, Microsoft was a modern-day miracle. This was capitalism without capital.

Not long after the conference, Charles Hulten combed through Microsoft’s accounts to explain why it was worth so much (Hulten 2010). He identified a set of intangible assets, assets that “typically involve the development of specific products or processes, or are investments in organizational capabilities, creating or strengthening product platforms that position a firm to compete in certain markets.”

Examples included the ideas generated by Microsoft’s investments in R&D and product design, the value of its brands, its supply chains and internal structures, and the human capital built up by training. Although none of these intangible assets are physical in the way that Microsoft’s office buildings or servers are, they all share the characteristics of investments: the company had to spend time and money on them up-front, and they delivered value over time that Microsoft was able to benefit from.

But they were typically hidden from company balance sheets and, not surprisingly, from the nation’s balance sheet in the official National Accounts. Corrado, Hulten, and Sichel’s work provided a big push to develop ways to estimate intangible investment across the economy, using surveys, existing data series, and triangulation.

A Funny Thing Happened on the Way to the Future

And so the intangibles research program developed. In 2005 Corrado, Hulten, and Sichel published their first estimates of how much American businesses were investing in intangibles. In 2006 Hulten visited the UK and gave a seminar on their work at Her Majesty’s Treasury, which immediately commissioned a team (that included one of this book’s authors) to extend the work to the UK. Work also began in Japan. Agencies like the Organisation for Economic Co-operation and Development (OECD), which were very early on the intangible scene (see, e.g., Young 1998), promoted the idea of intangible investment in policy and political circles, and the idea attracted some attention among commentators and the emerging economic blogosphere.

As figure 1.1 shows, mention of “intangible” became steadily more fashionable even in dry academic journals.

Figure 1.1. “Intangibles” references in scientific journals. Data are the number of mentions of the word “intangible” in the abstract, title, or keywords of academic journal articles in the field “Economics, Econometrics and Finance” recorded in the ScienceDirect database. Source: authors’ calculations from ScienceDirect.

But then something happened that changed the economic agenda: the global financial crisis. Economists and economic policymakers were, quite reasonably, less interested in understanding a purported new economy than in preventing the economy as a whole from collapsing into ruin. Once the most dangerous part of the crisis had been averted, a set of new and rather bleak problems came to dominate economic debate: how to fix a financial system that had so calamitously failed, the growing awareness that inequality of wealth and income had risen sharply, and how to respond to a stubborn stagnation in productivity growth.

To the extent that the idea of the new economy was still discussed, it was mostly framed in pessimistic, even dystopian terms: Had technological progress irreversibly slowed, blasting our economic hopes? Would technology turn bad, producing robots that would steal everyone’s jobs, or give rise to malign and powerful forms of artificial intelligence?

But while these grim challenges were dominating public debate on economics in op-ed columns and blogs, the project to measure new forms of capital was quietly progressing. Surveys and analyses were undertaken to produce data series of intangible investment, first for the United States, then for the UK, and then for other developed countries. Finance ministries and international organizations continued to support the work, and national statistical agencies began to include some types of intangibles, notably R&D, in their investment surveys. Historical data series were built, estimating how intangible investment had changed over time.

And, as we shall see, intangible investment has, in almost all developed countries, been growing more and more important. Indeed, in some countries, it now outweighs tangible investment.

Why Intangible Investment Is Different

Now, there is nothing inherently unusual or interesting from an economic point of view about a change in the types of things businesses invest in. Indeed, nothing could be more normal: the capital stock of the economy is always changing. Railways replaced canals, the automobile replaced the horse and cart, computers replaced typewriters, and, at a more granular level, businesses retool and change their mix of investments all the time.

Our central argument in this book is that there is something fundamentally different about intangible investment, and that understanding the steady move to intangible investment helps us understand some of the key issues facing us today: innovation and growth, inequality, the role of management, and financial and policy reform.

We shall argue there are two big differences with intangible assets.

First, most measurement conventions ignore them. There are some good reasons for this, but as intangibles have become more important, it means we are now trying to measure capitalism without counting all the capital.

Second, the basic economic properties of intangibles make an intangible-rich economy behave differently from a tangible-rich one.

Measurement: Capitalism without Capital

As we will discuss, conventional accounting practice is not to measure intangible investment as creating a long-lived capital asset. And there is something to be said for it. Microsoft’s investment in a desk and an office building can be observed, and the market for secondhand office equipment and rented office space tells you more or less daily the value of that investment. But there is no market where you can see the raw value of its investment in developing better software or redesigning its user interface. So trying to measure the “asset” that’s associated with this investment is a very, very hard task, and accountants, who are cautious people, typically prefer not to do so, except in limited circumstances (typically when the program has been successfully developed and sold, so there is an observable market price).

This conservative approach is all very well in an economy where there is little investment in this type of good. But as such investment starts to exceed tangible investment, it leaves larger and larger areas of the economy uncharted.

Properties of Intangibles: Why the Economy Is Becoming So Different

The shift to intangible investment might be a relatively minor problem if all that was at stake was mismeasurement. It would be as if we were counting most of the new trucks in the economy but missing some of them: an interesting issue for statistics bureaus, but little more.

But there is, we will argue, a more important consequence of the rise of intangibles: intangible assets have, on the whole, quite different economic characteristics from the tangible investment that has traditionally predominated.

First of all, intangible investment tends to represent a sunk cost. If a business buys a tangible asset like a machine tool or an office block, it can typically sell it should it need to. Many tangible investments are like this, even large and unusual ones. If you’ve ever fancied one of those giant Australian mining tractors, you can buy them secondhand at an online auction site called Machinery Zone; World Oils sells gently used drilling rigs; and a business called UVI Sub-Find deals in secondhand submarines.

Intangible assets are harder to sell and more likely to be specific to the company that makes them. Toyota invests millions in its lean production systems, but it would be impossible to separate these investments from their factories and somehow sell them off. And while some research and development gives rise to patents that can in some cases be sold, far more of it is tailored to the specific needs of the business that invests in it, certainly sufficiently so to make intellectual property markets very limited.

The second characteristic of intangible investments is that they generate spillovers. Suppose you run a business that makes flugelbinders, and you own a tangible asset in the form of a factory, and an intangible asset in the form of an excellent new design for a flugelbinder. It’s almost trivially easy to make sure that your firm gets most of the benefits from the factory: you put a lock on the door. If someone asks to use your factory for free, you politely refuse; if they break in, you can call the police and have them arrested; in most developed countries, this would be an open-and-shut case. Indeed, making sure you get the benefit from tangible assets you own, like a factory, is so simple that it seems a silly question to ask.

The designs, however, are a different business altogether. You can keep them secret to prevent their being copied, but competitors may be able to buy some flugelbinders and reverse-engineer them. You might be able to obtain a patent to discourage people from copying you, but your competitors may be able to “invent around” it, changing just enough aspects of the product that your patent offers no protection. Even if your patent is secure, getting redress against patent infringement is far more complicated than getting the police to sling intruders out of your factory—you may be in for months or years of litigation, and you may not win in the end.

After their world-leading first flight, the Wright brothers spent much of their time not developing better aircraft, but fighting rival developers who they felt were infringing on their patents. The tendency for others to benefit from what were meant to be private investments—what economists call spillovers—is a characteristic of many intangible investments.

Intangible assets are also more likely to be scalable. Consider Coke: the Coca Cola Company, based in Atlanta, Georgia, is responsible for only a limited number of the things that happen to produce a liter of Coke. Its most valuable assets are intangible: brands, licensing agreements, and the recipe for how to make the syrup that makes Coke taste like Coke. Most of the rest of the business of making and selling Coke is done by unrelated bottling companies, each of which has signed an agreement to produce Coke in its part of the world. These bottlers typically own their own bottling plants, sales forces, and vehicle fleets.

The Coca Cola Company of Atlanta’s intangible assets can be scaled across the whole world. The formula and the Coke brand work just the same whether a billion Cokes are sold a day or two billion (the actual number is currently about 1.7 billion). The bottlers’ tangible assets scale much less well. If Australians dramatically increase their thirst for Coke, Coca Cola Amatil (the local bottler) will likely need to invest in more trucks to deliver it, bigger production lines, and eventually new plants.
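The contrast drawn above (a formula that scales for free versus trucks that must be bought per liter) can be caricatured numerically. Everything below is a hypothetical sketch, not data about Coca-Cola:

```python
# Caricature of scalability: an intangible asset (the formula) is a one-off
# fixed cost, while tangible capacity (trucks) must grow with volume.
# All numbers are hypothetical.

def intangible_cost(liters):
    """The formula costs the same no matter how much Coke is sold."""
    return 100.0

def tangible_cost(liters):
    """Trucks must be added in step with volume."""
    cost_per_truck, liters_per_truck = 50.0, 1_000_000
    trucks_needed = -(-liters // liters_per_truck)  # ceiling division
    return cost_per_truck * trucks_needed

for liters in (1_000_000, 2_000_000):
    print(liters, intangible_cost(liters), tangible_cost(liters))
```

Doubling output leaves the intangible cost flat while the tangible cost doubles, which is exactly the asymmetry between Atlanta and its bottlers.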

Finally, intangible investments tend to have synergies (or what economists call complementarities) with one another: they are more valuable together, at least in the right combinations. The MP3 protocol, combined with the miniaturized hard disk, Apple’s licensing agreements with record labels, and Apple’s design skills, created the iPod, a very valuable innovation.

These synergies are often unpredictable. The microwave oven was the result of a marriage between a defense contractor, which had accidentally discovered that microwaves from radar equipment could heat food, and a white goods manufacturer, which brought appliance design skills.

Tangible assets have synergies too (between the truck and the loading bay, say, or between a server and a router), but typically not on the same radical and unpredictable scale.


These unusual economic characteristics mean that the rise of intangibles is more than a trivial change in the nature of investment.

Because intangible investments, on average, behave differently from tangible investments, we might reasonably expect an economy dominated by intangibles to behave differently too. In fact, once we take into account the changing nature of capital in the modern economy, a lot of puzzling things start to make sense.

In the rest of this book, we’ll look at how the shift to intangible investment helps us understand four issues of great concern to anyone who cares about the economy: secular stagnation, the long-run rise in inequality, the role of the financial system in supporting the nonfinancial economy, and the question of what sort of infrastructure the economy needs to thrive.

Armed with this understanding we then see what these economic changes mean for government policymakers, businesses, and investors. Our journey will take us past the appraisers of old into the unmapped territory that is modern intangible investment.



The Rise of the Intangible Economy

Capital’s Vanishing Act

Investment is one of the most important activities in the economy. But over the past thirty years, the nature of investment has changed. This chapter describes the nature of that change and considers its causes. In chapter 3, we look at how this change in investment can be measured. In chapter 4, we explore the unusual economic properties of these new types of investment, and why they might be important.

Investment is central to the functioning of any economy. The process of committing time, resources, and money so that we can produce useful things in the future is, from an economic point of view, a defining part of what businesses, governments, and individuals do.

The starting point of this book is an observation: Over the last few decades, the nature of investment has been gradually but significantly changing.

The change isn’t primarily about information technology. The new investment does not take the form of robots, computers, or silicon chips, although, as we will see, they all play supporting roles in the story.

The type of investment that has risen inexorably is intangible: investment in ideas, in knowledge, in aesthetic content, in software, in brands, in networks and relationships. This chapter describes this change and why it has happened.

A Trip to the Gym

Our story begins in the gym, or rather in two gyms. We’re going to step inside a commercial gym in 2017 and in 1977 and look at some of the differences. As we will see, gyms provide a vivid but typical example of how even industries that are not obviously high-tech have subtly changed the types of investment they make.

Gyms are an interesting place to begin our search for the intangible economy because at first glance there’s nothing much intangible about them. Even if you avoid gyms like the plague, you probably have an idea of the sort of things you would find there.

Our gym in 2017 is full of equipment that the business needs to run: a reception desk with a computer and maybe a turnstile, exercise machines, some weights, shower fittings, lockers, mats, and mirrors (“the most heavily used equipment in the gym,” as one gym owner joked).

All this kit is reflected in the finances of businesses that own and run gyms: their accounts typically contain lots of assets that you can touch and see, from the premises they operate in to the treadmills and barbells their customers use.

Now, consider a gym from forty years ago. By 1977 the United States was full of gyms. Arnold Schwarzenegger’s breakout movie Pumping Iron had just been released, featuring scenes of him training in Gold’s Gym in Venice Beach, Los Angeles, which had been established in 1965 and was widely franchised across America. Other gyms contained machines like the Nautilus, the original fixed-weight machine, invented by Arthur Jones in the late 1960s.

If you were to look around a gym of the time, you might be surprised to see many similarities to today’s gym. Granted, there might be fewer weight machines and they would be less advanced. Membership would be recorded on index cards rather than on a computer; perhaps the physical fittings would be more rough-and-ready, but otherwise many of the business’s visible assets would look the same: some workout rooms, some changing rooms, some equipment.

But if we return to our 2017 gym and look more closely, we’ll notice a few differences. It turns out that the modern gym has invested in a range of things that its 1977 counterpart hasn’t. There is the software behind the computer on the front desk, recording memberships, booking classes, and scheduling the staff roster, linked to a central database.

The gym has a brand, which has been built up through advertising campaigns whose sophistication and expense dwarf those of gyms in the 1970s.

There’s an operations handbook, telling the staff how to do various tasks from inducting new members to dealing with delinquent customers. Staff members are trained to follow the handbook and are doing things with a routinized efficiency that would seem strange in the easygoing world of Pumping Iron.

All these things—software, brands, processes, and training—are a bit like the weight machines or the turnstile or the building the gym sits in, in that they cost money in the short run, but over time help the gym function and make money. But unlike the physical features, most of these things can’t be touched—certainly no risk of dropping them on your foot.

Gym businesses are still quite heavy users of assets that are physical (all of the UK’s four biggest gyms are owned by private equity firms, which tend to like asset-intensive businesses), but compared to their counterparts of four decades ago, they have far more assets that you cannot touch.

And the transformation goes deeper than this. In one of its rooms, the gym puts on regular exercise classes for its members; one of the most popular is called Bodypump, or, as the sign on the door significantly puts it, “Bodypump®.” It turns out the company that runs the gym is not the only business operating in the premises—and this second business is even more interesting from an economic point of view.

Bodypump is a type of exercise called “high-intensity interval training” (HIIT), where participants move about vigorously and lift small weights in time to music, but this description does not do justice to the intensity of the workouts or the adrenaline-induced devotion that well-run HIIT classes engender in their customers.

The reason for the registered trademark sign is that Bodypump is designed and owned by the other company at work in the building, a business from New Zealand called Les Mills International. Les Mills was an Olympic weightlifter who set up a small gym in Auckland three years after Joe Gold opened his first gym in Los Angeles. His son Philip, after a visit to LA, saw the potential for merging music with group exercise: he brought it back to New Zealand and added weights to the routines to produce Bodypump in 1997.

He realized that by writing up the routines and synchronizing them with compilations of up-to-date, high-energy music, he had a product that could be sold to other gyms.

By 2005 Les Mills classes like Bodypump and Bodycombat were being offered in some 10,000 venues in 55 countries with an estimated 4 million participants a week (Parviainen 2011); the company’s website now estimates 6 million participants per week.

Les Mills’s designers create new choreography for their programs every three months. They film the routines and dispatch the footage, along with choreography notes and the music files, to their licensed instructors. At the time of writing, they have 130,000 such instructors. To become an instructor, you have to complete three days of training, currently costing around £300, after which you can start teaching. To proceed further, you have to submit a video of a complete class to Les Mills, which checks your technique, choreography, and coaching.

The things that a business like Les Mills uses to make money look very different from the barbells and mats of a 1977 Gold’s Gym. True, some of their assets are physical—recording equipment, computers, offices—but most of them are not. They have a set of very valuable brands (gym customers have been known to mutiny if their gym stops offering Bodypump), intellectual property (IP) protected by copyrights and trademarks, expertise on designing exercise classes, and proprietary relationships with a set of suppliers and partners (such as music distributors and trainers).

The idea of making money from ideas about how to work out is not new—Charles Atlas was selling bodybuilding courses a decade before Les Mills was born—but the scale on which Les Mills International operates, and the way it combines brands, music, course design, and training is remarkable.

Our excursion into the world of gyms suggests that even a very physical business—literally, the business of physiques—has in the last few decades become a lot more dependent on things that are immaterial.

This is not a story of Internet-driven disruption of the kind we are familiar with from a hundred news stories: gyms were not replaced with an app the way record shops were replaced by Napster, iTunes, and Spotify. Software does not replace the need to lift weights. But the business has nevertheless changed in two different ways. The part that looks superficially similar to how it did in the 1970s—the gym itself—has become shot through with systems, processes, relationships, and software. This is not so much innovation, but innervation—the process of a body part being supplied with nerves, making it sensate, orderly, and controllable.

And new businesses have been set up that rely almost entirely for their success on things you cannot touch. In the rest of this chapter, we will look at how the changes in investment and in assets that took place in the gym industry can be seen throughout the economy, and the reasons for these changes. But first, let us look more rigorously at what investment actually is.

What Are Investment, Assets, and Capital?

When we looked at the things that gyms bought or developed to run and make money, we were talking about assets and investments. Investment is very important to economists because it builds up what they call the “capital stock” of the economy: the tools and equipment that workers use to produce the goods and services that together make up economic output. But “investment,” “assets,” and “capital” can be confusing terms.

Take “investment.” Financial journalists typically refer to people who buy and sell securities as “investors,” and nervously diagnose the “mood of investors.” The same journalist might call a long-term financier like Warren Buffett an “investor” and his short-term rivals “speculators.” Someone considering going to college might be advised that “education is the best investment you can make.” The terms “assets” and “capital” are also used in a confusing variety of ways.

In his justly famous Capital in the Twenty-First Century, Thomas Piketty (2014) defined capital as “all forms of wealth that individuals … can own.”

Marxist writers commonly ascribe to “capital” not just an accounting definition, but an entire exploitative system. “Assets” also have different definitions. Many firms think of their business assets as their stock of plant and equipment. For an accountant, business assets commonly include the cash in the firm’s bank account and the bills its customers have yet to pay, which are not machines used in production but rather the results of doing that business.

Because of these multiple meanings, and because we’ll be coming back to these terms frequently, it will be helpful to establish working definitions for investment, capital, and assets. We will stick to the internationally agreed definition of investment used by statistics agencies the world over when they measure the performance of national economies. This has the benefit of being standardized and the fruit of much thought, and of being directly linked to figures like GDP that we are used to seeing in news bulletins.

According to the UN’s System of National Accounts, the bible of national accounting, “investment is what happens when a producer either acquires a fixed asset or spends resources (money, effort, raw materials) to improve it.”

This is a quite dense statement, so let’s unpack what it means.

First of all, let’s look at the definition of assets.

An asset is an economic resource that is expected to provide a benefit over a period of time. If a bank buys a new server or a new office building, it expects to get a benefit that lasts for some time—certainly longer than just a year. If it pays its electricity bill quarterly, the benefit lasts for three months. So the server and the building are assets, but neither the electricity nor the fact of having paid the bill is.

Second, consider the word fixed. A fixed asset is an asset that results from using up resources in the process of its production. A plane or a car or a drug patent all have to be produced—someone has to do work to create something from nothing. This can be distinguished from a financial asset, like an equity stake in a public company. An equity stake is not produced (except in the trivial sense that a share certificate might be printed to represent the claim).

This means that when economists talk about investment they are not talking about investing in the personal finance sense, that is, buying stocks and shares. And because they are talking about fixed assets they are not talking about the accountancy concept of cash in a company bank account.

Third, there is the idea of spending resources. To be deemed an investment, the business doing the investing has to either acquire the asset from somewhere else or incur some cost to produce it themselves.

Finally, there is the word producers. National accounts measure production by firms or government or the third sector. Production by households (say, doing the laundry or cooking at home) is not included, and so neither is investment by a household, say, in a washing machine or stove. This is a definitional feature of the way national accounts are calculated, and it is one of the reasons people criticize GDP (not least because household production is large, and because excluding it leaves out of the record a part of the economy that has historically been run primarily by women).

Perhaps one day “production” will have a broader definition in national accounts; for our purposes, most of the changes we describe in this book would, we believe, apply to the household sector as well as to so-called producers. So, in this book when we talk about “investment” we are not talking about the buying or selling of pieces of paper on a stock market or households paying university tuition. Rather, we are talking about spending by business, government, or the third sector that creates a fixed (i.e., nonfinancial) asset, that is, resources spent that create a long-lived stream of productive services. We shall call such a fixed asset providing these long-lived productive services “capital.”

Because both capital and labor produce such productive services, economists refer to them as “factors of production.”
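The working definition above boils down to four tests: the spender is a producer, resources are actually spent, the asset is fixed (nonfinancial), and the benefit lasts longer than a year. As a purely illustrative sketch, those tests can be written out as a small function (the function name, category labels, and example cases are our own invention, not part of any national accounting standard):

```python
# Illustrative sketch of the national-accounts tests for "investment".
# All names and example cases are hypothetical, for exposition only.

def is_investment(spender: str, resources_spent: bool,
                  asset_is_fixed: bool, benefit_years: float) -> bool:
    """Apply the four tests from the working definition:
    1. the spender is a producer (business, government, third sector),
    2. resources are spent to acquire or produce the asset,
    3. the asset is fixed (produced and nonfinancial),
    4. the benefit lasts longer than a year."""
    is_producer = spender in {"business", "government", "third sector"}
    return (is_producer and resources_spent
            and asset_is_fixed and benefit_years > 1)

# A bank's new server: producer, cost incurred, produced asset, lasts years.
print(is_investment("business", True, True, 5))       # True
# The same bank's electricity bill: the benefit lasts only three months.
print(is_investment("business", True, False, 0.25))   # False
# A household's washing machine: not a producer in national accounts.
print(is_investment("household", True, True, 10))     # False
# Buying shares: a financial claim, not a produced (fixed) asset.
print(is_investment("business", True, False, 10))     # False
```

The point of the sketch is only that all four conditions must hold at once; failing any one of them (as in the last three cases) means the spending is not investment in the national-accounts sense.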

Not All Investments Are Things You Can Touch

One of the examples of an investment in the section above was a drug patent, say, one owned by a pharmaceutical company. The pharmaceutical company is obviously a producer, not a household; the company has to expend resources to produce the patent or acquire it; the patent arises from a process of production—in this case, the work of scientists in a lab—and if the patent is any good, it will have a long-term value, since the company can develop it for future use and perhaps sell medicines based on it.

The patent is an example of an intangible asset, created by a process of intangible investment. So too were the various assets in the gym story, from the gym’s membership software to Les Mills International’s Bodypump brand. They arose from a process of production, were acquired or improved by producers, and provide a benefit over time.

These kinds of investments can be found throughout the economy. Suppose a solar panel manufacturer researches and discovers a cheaper process for making photovoltaic cells: it is incurring expense in the present to generate knowledge it expects to benefit from in the future. Or consider a streaming music start-up that spends months designing and negotiating deals with record labels to allow it to use songs the record labels own—again, short-term expenditure to create longer-term gain. Or imagine a training company pays for the long-term rights to run a popular psychometric test: it too is investing.

Some of these investments are new technological ideas. Some are other sorts of ideas that have less to do with high technology: new product designs or new business models. Some take the form of lasting or proprietary relationships, such as a taxi app’s network of drivers. Some are codified information, like a customer loyalty card database. What they have in common is that they are not physical. Hence we call them intangible investment.



Capitalism without Capital: The Rise of the Intangible Economy


Jonathan Haskel and Stian Westlake.



Reclaiming the State: A Progressive Vision of Sovereignty for a Post-Neoliberal World – William Mitchell and Thomas Fazi.


Make the Left Great Again 

The West is currently in the midst of an anti-establishment revolt of historic proportions. The Brexit vote in the United Kingdom, the election of Donald Trump in the United States, the rejection of Matteo Renzi’s neoliberal constitutional reform in Italy, the EU’s unprecedented crisis of legitimation: although these interrelated phenomena differ in ideology and goals, they are all rejections of the (neo)liberal order that has dominated the world – and in particular the West – for the past 30 years.

Even though the system has thus far proven capable (for the most part) of absorbing and neutralising these electoral uprisings, there is no indication that this anti-establishment revolt is going to abate any time soon. Support for anti-establishment parties in the developed world is at the highest level since the 1930s – and growing. At the same time, support for mainstream parties – including traditional social-democratic parties – has collapsed.

The reasons for this backlash are rather obvious. The financial crisis of 2007–9 laid bare the scorched earth left behind by neoliberalism, which the elites had gone to great lengths to conceal, in both material (financialisation) and ideological (‘the end of history’) terms. 

As credit dried up, it became apparent that for years the economy had continued to grow primarily because banks were distributing the purchasing power – through debt – that businesses were not providing in salaries. To paraphrase Warren Buffett, the receding tide of the debt-fuelled boom revealed that most people were, in fact, swimming naked.

The situation was (is) further exacerbated by the post-crisis policies of fiscal austerity and wage deflation pursued by a number of Western governments, particularly in Europe, which saw the financial crisis as an opportunity to impose an even more radical neoliberal regime and to push through policies designed to suit the financial sector and the wealthy, at the expense of everyone else. 

Thus, the unfinished agenda of privatisation, deregulation and welfare state retrenchment – temporarily interrupted by the financial crisis – was reinstated with even greater vigour. Amid growing popular dissatisfaction, social unrest and mass unemployment (in a number of European countries), political elites on both sides of the Atlantic responded with business-as-usual policies and discourses.

As a result, the social contract binding citizens to traditional ruling parties is more strained today than at any other time since World War II – and in some countries has arguably already been broken.

Of course, even if we limit the scope of our analysis to the post-war period, anti-systemic movements and parties are not new in the West. Up until the 1980s, anti-capitalism remained a major force to be reckoned with. The novelty is that today – unlike 20, 30 or 40 years ago – it is movements and parties of the right and extreme right (along with new parties of the neoliberal ‘extreme centre’, such as the new French president Emmanuel Macron’s party En Marche!) that are leading the revolt, far outweighing the movements and parties of the left in terms of voting strength and opinion-shaping.

With few exceptions, left parties – that is, parties to the left of traditional social-democratic parties – are relegated to the margins of the political spectrum in most countries.

Meanwhile, in Europe, traditional social-democratic parties are being ‘pasokified’ – that is, reduced to parliamentary insignificance, like many of their centre-right counterparts, due to their embrace of neoliberalism and failure to offer a meaningful alternative to the status quo – in one country after another.

The term refers to the Greek social-democratic party PASOK, which was virtually wiped out of existence in 2014, due to its inane handling of the Greek debt crisis, after dominating the Greek political scene for more than three decades. A similar fate has befallen other former behemoths of the social-democratic establishment, such as the French Socialist Party and the Dutch Labour Party (PvdA). Support for social-democratic parties is today at the lowest level in 70 years – and falling.

How should we explain the decline of the left – not just the electoral decline of those parties that are commonly associated with the left side of the political spectrum, regardless of their effective political orientation, but also the decline of core left values within those parties and within society in general?

Why has the anti-establishment left proven unable to fill the vacuum left by the collapse of the establishment left? More broadly, how did the left come to count so little in global politics? Can the left, both culturally and politically, become a major force in our societies again? And if so, how? 

These are some of the questions that we attempt to answer in this book. Though the left has been making inroads in some countries in recent years – notable examples include Bernie Sanders in the United States, Jeremy Corbyn in the UK, Podemos in Spain and Jean-Luc Mélenchon in France – and has even succeeded in taking power in Greece (though the SYRIZA government was rapidly brought to heel by the European establishment), there is no denying that, for the most part, movements and parties of the extreme right have been more effective than left-wing or progressive forces at tapping into the legitimate grievances of the masses – disenfranchised, marginalised, impoverished and dispossessed by the 40-year-long neoliberal class war waged from above.

In particular, they are the only forces that have been able to provide a (more or less) coherent response to the widespread – and growing – yearning for greater territorial or national sovereignty, increasingly seen as the only way, in the absence of effective supranational mechanisms of representation, to regain some degree of collective control over politics and society, and in particular over the flows of capital, trade and people that constitute the essence of neoliberal globalisation. Given neoliberalism’s war against sovereignty, it should come as no surprise that ‘sovereignty has become the master-frame of contemporary politics’, as Paolo Gerbaudo notes.

After all, as we argue in Chapter 5, the hollowing out of national sovereignty and curtailment of popular-democratic mechanisms – what has been termed depoliticisation – has been an essential element of the neoliberal project, aimed at insulating macroeconomic policies from popular contestation and removing any obstacles put in the way of economic exchanges and financial flows.

Given the nefarious effects of depoliticisation, it is only natural that the revolt against neoliberalism should first and foremost take the form of demands for a repoliticisation of national decision-making processes. 

The fact that the vision of national sovereignty that was at the centre of the Trump and Brexit campaigns, and that currently dominates the public discourse, is a reactionary, quasi-fascist one – mostly defined along ethnic, exclusivist and authoritarian lines – should not be seen as an indictment of national sovereignty as such. History attests to the fact that national sovereignty and national self-determination are not intrinsically reactionary or jingoistic concepts – in fact, they were the rallying cries of countless nineteenth- and twentieth-century socialist and left-wing liberation movements.

Even if we limit our analysis to core capitalist countries, it is patently obvious that virtually all the major social, economic and political advancements of the past centuries were achieved through the institutions of the democratic nation state, not through international, multilateral or supranational institutions, which in a number of ways have, in fact, been used to roll back those very achievements, as we have seen in the context of the euro crisis, where supranational (and largely unaccountable) institutions such as the European Commission, Eurogroup and European Central Bank (ECB) used their power and authority to impose crippling austerity on struggling countries. 

The problem, in short, is not national sovereignty as such, but the fact that the concept in recent years has been largely monopolised by the right and extreme right, which understandably sees it as a way to push through its xenophobic and identitarian agenda. It would therefore be a grave mistake to explain away the seduction of the ‘Trumpenproletariat’ by the far right as a case of false consciousness, as Marc Saxer notes; the working classes are simply turning to the only movements and parties that (so far) promise them some protection from the brutal currents of neoliberal globalisation (whether they can or truly intend to deliver on that promise is a different matter). 

However, this simply raises an even bigger question: why has the left not been able to offer the working classes and increasingly proletarianised middle classes a credible alternative to neoliberalism and to neoliberal globalisation? More to the point, why has it not been able to develop a progressive view of national sovereignty? 

As we argue in this book, the reasons are numerous and overlapping. For starters, it is important to understand that the current existential crisis of the left has very deep historical roots, reaching as far back as the 1960s. If we want to comprehend how the left has gone astray, that is where we have to begin our analysis. 

Today the post-war ‘Keynesian’ era is eulogised by many on the left as a golden age in which organised labour and enlightened thinkers and policymakers (such as Keynes himself) were able to impose a ‘class compromise’ on reluctant capitalists that delivered unprecedented levels of social progress, which were subsequently rolled back following the so-called neoliberal counter-revolution. 

It is thus argued that, in order to overcome neoliberalism, all it takes is for enough members of the establishment to be swayed by an alternative set of ideas. However, as we note in Chapter 2, the rise and fall of Keynesianism cannot simply be explained in terms of working-class strength or the victory of one ideology over another, but should instead be viewed as the outcome of the fortuitous confluence, in the aftermath of World War II, of a number of social, ideological, political, economic, technical and institutional conditions. 

To fail to do so is to commit the same mistake that many leftists committed in the early post-war years. By failing to appreciate the extent to which the class compromise at the base of the Fordist-Keynesian system was, in fact, a crucial component of that history-specific regime of accumulation – actively supported by the capitalist class insofar as it was conducive to profit-making, and bound to be jettisoned once it ceased to be so – many socialists of the time convinced themselves ‘that they had done much more than they actually had to shift the balance of class power, and the relationship between states and markets’.

Some even argued that the developed world had already entered a post-capitalist phase, in which all the characteristic features of capitalism had been permanently eliminated, thanks to a fundamental shift of power in favour of labour vis-à-vis capital, and of the state vis-à-vis the market. Needless to say, that was not the case. 

Furthermore, as we show in Chapter 3, monetarism – the ideological precursor to neoliberalism – had already started to percolate into left-wing policymaking circles as early as the late 1960s. Thus, as argued in Chapters 2 and 3, many on the left found themselves lacking the necessary theoretical tools to understand – and correctly respond to – the capitalist crisis that engulfed the Keynesian model in the 1970s, convincing themselves that the distributional struggle that arose at the time could be resolved within the narrow limits of the social-democratic framework.

The truth of the matter was that the labour–capital conflict that re-emerged in the 1970s could only be resolved in one of two ways: on capital’s terms, through a reduction of labour’s bargaining power, or on labour’s terms, through an extension of the state’s control over investment and production. As we show in Chapters 3 and 4, with regard to the experience of the social-democratic governments of Britain and France in the 1970s and 1980s, the left proved unwilling to take the latter road. This left it (no pun intended) with no other choice but to ‘manage the capitalist crisis on behalf of capital’, as Stuart Hall wrote, by ideologically and politically legitimising neoliberalism as the only solution to the survival of capitalism.

In this regard, as we show in Chapter 3, the Labour government of James Callaghan (1974–9) bears a very heavy responsibility. In an (in)famous speech in 1976, Callaghan justified the government’s programme of spending cuts and wage restraint by declaring Keynesianism dead, indirectly legitimising the emerging monetarist (neoliberal) dogma and effectively setting up the conditions for Labour’s ‘austerity lite’ to be refined into an all-out attack on the working class by Margaret Thatcher.

Even worse, perhaps, Callaghan popularised the notion that austerity was the only solution to the economic crisis of the 1970s, anticipating Thatcher’s ‘there is no alternative’ (TINA) mantra, even though there were radical alternatives available at the time, such as those put forward by Tony Benn and others. These, however, were ‘no longer perceived to exist’.

In this sense, the dismantling of the post-war Keynesian framework cannot simply be explained as the victory of one ideology (‘neoliberalism’) over another (‘Keynesianism’), but should rather be understood as the result of a number of overlapping ideological, economic and political factors: the capitalists’ response to the profit squeeze and to the political implications of full employment policies; the structural flaws of ‘actually existing Keynesianism’; and, importantly, the left’s inability to offer a coherent response to the crisis of the Keynesian framework, let alone a radical alternative.

These are all analysed in-depth in the first chapters of the book. Furthermore, throughout the 1970s and 1980s, a new (fallacious) left consensus started to set in: that economic and financial internationalisation – what today we call ‘globalisation’ – had rendered the state increasingly powerless vis-à-vis ‘the forces of the market’, and that therefore countries had little choice but to abandon national economic strategies and all the traditional instruments of intervention in the economy (such as tariffs and other trade barriers, capital controls, currency and exchange rate manipulation, and fiscal and central bank policies), and hope, at best, for transnational or supranational forms of economic governance.

In other words, government intervention in the economy came to be seen not only as ineffective but, increasingly, as outright impossible. This process – which was generally (and erroneously, as we shall see) framed as a shift from the state to the market – was accompanied by a ferocious attack on the very idea of national sovereignty, increasingly vilified as a relic of the past. As we show, the left – in particular the European left – played a crucial role in this regard as well, by cementing this ideological shift towards a post-national and post-sovereign view of the world, often anticipating the right on these issues.

One of the most consequential turning points in this respect, which is analysed in Chapter 4, was Mitterrand’s 1983 turn to austerity – the so-called tournant de la rigueur – just two years after the French Socialists’ historic victory in 1981. Mitterrand’s election had inspired the widespread belief that a radical break with capitalism – at least with the extreme form of capitalism that had recently taken hold in the Anglo-Saxon world – was still possible. By 1983, however, the French Socialists had succeeded in ‘proving’ the exact opposite: that neoliberal globalisation was an inescapable and inevitable reality. As Mitterrand stated at the time: ‘National sovereignty no longer means very much, or has much scope in the modern world economy. … A high degree of supra-nationality is essential.’

The repercussions of Mitterrand’s about-turn are still being felt today. It is often brandished by left-wing and progressive intellectuals as proof that globalisation and the internationalisation of finance have ended the era of nation states and their capacity to pursue policies that are not in accord with the diktats of global capital. The claim is that if a government tries autonomously to pursue full employment and a progressive/redistributive agenda, it will inevitably be punished by the amorphous forces of global capital.

This narrative claims that Mitterrand had no option but to abandon his agenda of radical reform. To most modern-day leftists, Mitterrand thus represents a pragmatist who was cognisant of the international capitalist forces he was up against and responsible enough to do what was best for France. In fact, as we argue in the second part of the book, sovereign, currency-issuing states –such as France in the 1980s –far from being helpless against the power of global capital, still have the capacity to deliver full employment and social justice to their citizens. 

So how did the idea of the ‘death of the state’ come to be so ingrained in our collective consciousness?

As we explain in Chapter 5, underlying this post-national view of the world was (is) a failure to understand – and in some cases an explicit attempt to conceal – on the part of left-wing intellectuals and policymakers that ‘globalisation’ was (is) not the result of inexorable economic and technological changes but was (is) largely the product of state-driven processes. All the elements that we associate with neoliberal globalisation – delocalisation, deindustrialisation, the free movement of goods and capital, etc. – were (are), in most cases, the result of choices made by governments.

More generally, states continue to play a crucial role in promoting, enforcing and sustaining a (neo)liberal international framework – though that would appear to be changing, as we discuss in Chapter 6 – as well as establishing the domestic conditions for allowing global accumulation to flourish. The same can be said of neoliberalism tout court.

There is a widespread belief – particularly among the left – that neoliberalism has involved (and involves) a ‘retreat’, ‘hollowing out’ or ‘withering away’ of the state, which in turn has fuelled the notion that today the state has been ‘overpowered’ by the market. However, as we argue in Chapter 5, neoliberalism has not entailed a retreat of the state but rather a reconfiguration of the state, aimed at placing the commanding heights of economic policy ‘in the hands of capital, and primarily financial interests’.

It is self-evident, after all, that the process of neoliberalisation would not have been possible if governments – and in particular social-democratic governments – had not resorted to a wide array of tools to promote it: the liberalisation of goods and capital markets; the privatisation of resources and social services; the deregulation of business, and financial markets in particular; the reduction of workers’ rights (first and foremost, the right to collective bargaining) and more generally the repression of labour activism; the lowering of taxes on wealth and capital, at the expense of the middle and working classes; the slashing of social programmes; and so on.

These policies were systematically pursued throughout the West (and imposed on developing countries) with unprecedented determination, and with the support of all the major international institutions and political parties. 

As noted in Chapter 5, even the loss of national sovereignty – which has been invoked in the past, and continues to be invoked today, to justify neoliberal policies – is largely the result of a willing and conscious limitation of state sovereign rights by national elites. 

The reason why governments chose willingly to ‘tie their hands’ is all too clear: as the European case epitomises, the creation of self-imposed ‘external constraints’ allowed national politicians to reduce the political costs of the neoliberal transition – which clearly involved unpopular policies – by ‘scapegoating’ institutionalised rules and ‘independent’ or international institutions, which in turn were presented as an inevitable outcome of the new, harsh realities of globalisation. 

Moreover, neoliberalism has been (and is) associated with various forms of authoritarian statism – that is, the opposite of the minimal state advocated by neoliberals – as states have bolstered their security and policing arms as part of a generalised militarisation of civil protest. In other words, not only does neoliberal economic policy require the presence of a strong state, but it requires the presence of an authoritarian state (particularly where extreme forms of neoliberalism are concerned, such as the ones experimented with in periphery countries), at both the domestic and international level (see Chapter 5). 

In this sense, neoliberal ideology, at least in its official anti-state guise, should be considered little more than a convenient alibi for what has been and is essentially a political and state-driven project. Capital remains as dependent on the state today as it was under ‘Keynesianism’ – to police the working classes, bail out large firms that would otherwise go bankrupt, open up markets abroad (including through military intervention), etc. 

The ultimate irony, or indecency, is that traditional left establishment parties have become standard-bearers for neoliberalism themselves, both while in elected office and in opposition. 

In the months and years that followed the financial crash of 2007–9, capital’s – and capitalism’s – continued dependency on the state in the age of neoliberalism became glaringly obvious, as the governments of the US, Europe and elsewhere bailed out their respective financial institutions to the tune of trillions of euros and dollars. 

In Europe, following the outbreak of the so-called ‘euro crisis’ in 2010, this was accompanied by a multi-level assault on the post-war European social and economic model aimed at restructuring and re-engineering European societies and economies along lines more favourable to capital. This radical reconfiguration of European societies – which, again, has seen social-democratic governments at the forefront – is not based on a retreat of the state in favour of the market, but rather on a reintensification of state intervention on the side of capital. 

Nonetheless, the erroneous idea of the waning nation state has become an entrenched fixture of the left. As we argue throughout the book, we consider this to be central in understanding the decline of the traditional political left and its acquiescence to neoliberalism. 

In view of the above, it is hardly surprising that the mainstream left is, today, utterly incapable of offering a positive vision of national sovereignty in response to neoliberal globalisation. To make matters worse, most leftists have bought into the macroeconomic myths that the establishment uses to discourage any alternative use of state fiscal capacities. 

For example, they have accepted without question the so-called household budget analogy, which suggests that currency-issuing governments, like households, are financially constrained, and that fiscal deficits impose crippling debt burdens on future generations – a notion that we thoroughly debunk in Chapter 8. 

This has gone hand in hand with another, equally tragic, development. As discussed in Chapter 5, following its historical defeat, the left’s traditional anti-capitalist focus on class slowly gave way to a liberal-individualist understanding of emancipation. Waylaid by post-modernist and post-structuralist theories, left intellectuals slowly abandoned Marxian class categories to focus, instead, on elements of political power and the use of language and narratives as a way of establishing meaning. This also defined new arenas of political struggle that were diametrically opposed to those defined by Marx. 

Over the past three decades, the left focus on ‘capitalism’ has given way to a focus on issues such as racism, gender, homophobia, multiculturalism, etc. Marginality is no longer described in terms of class but rather in terms of identity. The struggle against the illegitimate hegemony of the capitalist class has given way to the struggles of a variety of (more or less) oppressed and marginalised groups: women, ethnic and racial minorities, the LGBTQ community, etc. As a result, class struggle has ceased to be seen as the path to liberation. 

In this new post-modernist world, only categories that transcend Marxian class boundaries are considered meaningful. Moreover, the institutions that evolved to defend workers against capital – such as trade unions and social-democratic political parties – have become subjugated to these non-class struggle foci. What has emerged in practically all Western countries as a result, as Nancy Fraser notes, is a perverse political alignment between ‘mainstream currents of new social movements (feminism, anti-racism, multiculturalism, and LGBTQ rights), on the one side, and high-end “symbolic” and service-based business sectors (Wall Street, Silicon Valley, and Hollywood), on the other’. 

The result is a progressive neoliberalism ‘that mix[es] together truncated ideals of emancipation and lethal forms of financialization’, with the former unwittingly lending their charisma to the latter. 

As societies have become increasingly divided between well-educated, highly mobile, highly skilled, socially progressive cosmopolitan urbanites, and lower-skilled and less educated peripherals who rarely work abroad and face competition from immigrants, the mainstream left has tended to consistently side with the former. Indeed, the split between the working classes and the intellectual-cultural left can be considered one of the main reasons behind the right-wing revolt currently engulfing the West. 

As argued by Jonathan Haidt, the way the globalist urban elites talk and act unwittingly activates authoritarian tendencies in a subset of nationalists. In a vicious feedback loop, however, the more the working classes turn to right-wing populism and nationalism, the more the intellectual-cultural left doubles down on its liberal-cosmopolitan fantasies, further radicalising the ethno-nationalism of the proletariat. 

As Wolfgang Streeck writes: ‘Protests against material and moral degradation are suspected of being essentially fascist, especially now that the former advocates of the plebeian classes have switched to the globalization party, so that if their former clients wish to complain about the pressures of capitalist modernization, the only language at their disposal is the pre-political, untreated linguistic raw material of everyday experiences of deprivation, economic or cultural. This results in constant breaches of the rules of civilized public speech, which in turn can trigger indignation at the top and mobilization at the bottom.’ 

This is particularly evident in the European debate where, despite the disastrous effects of the EU and monetary union, the mainstream left – often appealing to exactly the same arguments used by Callaghan and Mitterrand 30–40 years ago – continues to cling to these institutions and to the belief that they can be reformed in a progressive direction, despite all evidence to the contrary. Any talk of restoring a progressive agenda on the foundation of retrieved national sovereignty is dismissed as a ‘retreat into nationalist positions’, inevitably bound to plunge the continent into 1930s-style fascism. 

This position, as irrational as it may be, is not surprising, considering that European Economic and Monetary Union (EMU) is, after all, a brainchild of the European left (see Chapter 5). However, such a position presents numerous problems, which are ultimately rooted in a failure to understand the true nature of the EU and monetary union. 

First of all, it ignores the fact that the EU’s economic and political constitution is structured to produce the results that we are seeing – the erosion of popular sovereignty, the massive transfer of wealth from the middle and lower classes to the upper classes, the weakening of labour and, more generally, the rollback of the democratic and social/economic gains previously achieved by subordinate classes – and is designed precisely to impede the kind of radical reforms to which progressive integrationists or federalists aspire. 

More importantly, however, it effectively reduces the left to the role of defender of the status quo, thus allowing the political right to hegemonise the legitimate anti-systemic – and specifically anti-EU – grievances of citizens. This is tantamount to relinquishing the discursive and political battleground for a post-neoliberal hegemony – which is inextricably linked to the question of national sovereignty – to the right and extreme right. It is not hard to see that if progressive change can only be implemented at the global or even European level – in other words, if the alternative to the status quo offered to electorates is one between reactionary nationalism and progressive globalism – then the left has already lost the battle. 

It needn’t be this way, however. As we argue in the second part of the book, a progressive, emancipatory vision of national sovereignty that offers a radical alternative to both the right and the neoliberals – one based on popular sovereignty, democratic control over the economy, full employment, social justice, redistribution from the rich to the poor, inclusivity and the socio-ecological transformation of production and society – is possible. Indeed, it is necessary. 

As J. W. Mason writes: ‘Whatever [supranational] arrangements we can imagine in principle, the systems of social security, labor regulation, environmental protection, and redistribution of income and wealth that in fact exist are national in scope and are operated by national governments. By definition, any struggle to preserve social democracy as it exists today is a struggle to defend national institutions.’

As we contend in this book, the struggle to defend the democratic sovereign from the onslaught of neoliberal globalisation is the only basis on which the left can be refounded (and the nationalist right challenged). However, this is not enough. 

The left also needs to abandon its obsession with identity politics and retrieve the ‘more expansive, anti-hierarchical, egalitarian, class-sensitive, anti-capitalist understandings of emancipation’ that used to be its trademark (which, of course, is not in contradiction with the struggle against racism, patriarchy, xenophobia and other forms of oppression and discrimination). 

Fully embracing a progressive vision of sovereignty also means abandoning the many false macroeconomic myths that plague left-wing and progressive thinkers. One of the most pervasive and persistent myths is the assumption that governments are revenue-constrained, that is, that they need to ‘fund’ their expenses through taxes or debt. This leads to the corollary that governments have to ‘live within their means’, since ongoing deficits will inevitably result in an ‘excessive’ accumulation of debt, which in turn is assumed to be ‘unsustainable’ in the long run. 

In reality, as we show in Chapter 8, monetarily sovereign (or currency-issuing) governments – which nowadays include most governments – are never revenue-constrained, because they issue their own currency by legislative fiat and always have the means to achieve and sustain full employment and social justice. 
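The accounting logic that the book's Chapter 8 builds on can be illustrated with the standard sectoral-balances identity from national accounting (a well-known result, sketched here as an illustration rather than quoted from the book; the figures are invented for the example): the private domestic balance, the government balance and the foreign balance must sum to zero, so a government deficit necessarily appears as a surplus elsewhere in the economy.

```python
# A minimal sketch of the sectoral-balances identity.
# From the two GDP identities Y = C + I + G + X - M and Y = C + S + T,
# it follows that (S - I) + (T - G) + (M - X) = 0.
# All numbers below are hypothetical, chosen only to make the point.

def sectoral_balances(C, I, G, T, X, M):
    """Return (private, government, foreign) sector balances."""
    Y = C + I + G + X - M       # national income / output
    S = Y - C - T               # private saving
    private = S - I             # private domestic balance
    government = T - G          # government balance (negative = deficit)
    foreign = M - X             # foreign sector balance
    return private, government, foreign

# Hypothetical economy: a government deficit of 100 with balanced trade
# shows up, to the penny, as a private-sector surplus of 100.
p, g, f = sectoral_balances(C=700, I=200, G=300, T=200, X=100, M=100)
assert (p, g, f) == (100, -100, 0)
assert p + g + f == 0           # the identity always holds
```

The identity says nothing by itself about the wisdom of any particular deficit; it only shows that 'government debt' and 'non-government savings' are two sides of the same ledger, which is the accounting starting point of the argument above.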

In this sense, a progressive vision of national sovereignty should aim to reconstruct and redefine the national state as a place where citizens can seek refuge ‘in democratic protection, popular rule, local autonomy, collective goods and egalitarian traditions’, as Streeck argues, rather than a culturally and ethnically homogenised society. 

This is also the necessary prerequisite for the construction of a new international(ist) world order, based on interdependent but independent sovereign states. It is such a vision that we present in this book. 



The Great Transformation Redux: From Keynesianism to Neoliberalism – and Beyond 

1 Broken Paradise: A Critical Assessment of the Keynesian ‘Full Employment’ Era 


Looking back on the 30-year-long economic expansion that followed World War II, Adam Przeworski and Michael Wallerstein concluded that ‘by most criteria of economic progress the Keynesian era was a success’. 

It is hard to disagree: throughout the West, from the mid-1940s until the early 1970s, countries enjoyed lower levels of unemployment, greater economic stability and higher levels of economic growth than ever before. That stability, particularly in the US, also rested on a strong financial regulatory framework: the widespread provision of deposit insurance to stop bank runs; strict regulation of the financial system, including the separation of commercial banking from investment banking; and extensive capital controls to reduce currency volatility. 

These domestic and international restrictions ‘kept financial excesses and bubbles under control for over a quarter of a century’. 

Wages and living standards rose, and –especially in Europe –a variety of policies and institutions for welfare and social protection (also known as the ‘welfare state’) were created, including sustained investment in universally available social services such as education and health. Few people would deny that this was, indeed, a ‘golden age’ for capitalism. 

However, when it comes to explaining what made this exceptional period possible and why it came to an end, theories abound. Most contemporary Keynesians subscribe to a quasi-idealist view of history –that is, one that stresses the central role of ideas and ideals in human history. This is perhaps unsurprising, considering that Keynes himself famously noted: ‘Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.’ 

According to this view, the social and economic achievements of the post-war period are largely attributable to the revolution in economic thinking spearheaded by the British economist John Maynard Keynes. 

Throughout the 1920s and 1930s, Keynes overturned the old classical (neoclassical) paradigm, rooted in the doctrine of laissez-faire (‘let it be’) free-market capitalism, which held that markets are fundamentally self-regulating. The understanding was that the economy, if left to its own devices –that is, with the government intervening as little as possible –would automatically generate stability and full employment, as long as workers were flexible in their wage demands. 

The Great Depression of the 1930s that followed the stock market crash of 1929 – where minimal financial regulation, little-understood financial products and overindebted households and banks all conspired to create a huge speculative bubble which, when it burst, brought the US financial system crashing down, and with it the entire global economy – clearly challenged traditional laissez-faire economic theories. 

This bolstered Keynes’ argument – spelled out at length in his masterpiece, The General Theory of Employment, Interest, and Money, published in 1936 – that aggregate spending determined the overall level of economic activity, and that inadequate aggregate spending could lead to prolonged periods of high unemployment (what he called ‘underemployment equilibrium’). Thus, he advocated the use of debt-based expansionary fiscal and monetary measures and a strict regulatory framework to counter capitalism’s tendency towards financial crises and disequilibrium, and to mitigate the adverse effects of economic recessions and depressions, first and foremost by creating jobs that the private sector was unable or unwilling to provide. 

The bottom line of Keynes’ argument was that the government always has the ability to determine the overall level of spending and employment in the economy. In other words, full employment was a realistic goal that could be pursued at all times. 

Yet politicians were slow to catch on. When the speculative bubbles in both Europe and the United States burst in the aftermath of the Wall Street crash of 1929, various countries (to varying degrees, and more or less willingly) turned to austerity as a perceived ‘cure’ for the excesses of the previous decade. 

In the United States, President Herbert Hoover, a year after the crash, declared that ‘economic depression cannot be cured by legislative action or executive pronouncements’ and that ‘economic wounds must be healed by the action of the cells of the economic body – the producers and consumers themselves’. 

At first Hoover and his officials downplayed the stock market crash, claiming that the economic slump would be only temporary. When the situation did not improve, Hoover advocated a strict laissez-faire policy, dictating that the federal government should not interfere with the economy but rather let the economy right itself. He counselled that ‘every individual should sustain faith and courage’ and ‘each should maintain self-reliance’. 

Even though Hoover supported a doubling of government expenditure on public works projects, he also firmly believed in the need for a balanced budget. As Nouriel Roubini and Stephen Mihm observe, Hoover ‘wanted to reconcile contradictory aims: to cultivate self-reliance, to provide government help in a time of crisis, and to maintain fiscal discipline. This was impossible.’ In fact, it is widely agreed that Hoover’s inaction was responsible for the worsening of the Great Depression. 

If the United States’ reaction under Hoover can be described as ‘too little, too late’, Europe’s reaction in the late 1920s and early 1930s actively contributed to the downward spiral of the Great Depression, setting the stage for World War II. 

Austerity was the dominant response of European governments during the early years of the Great Depression. The political consequences are well known. Anti-systemic parties gained strength all across the continent, most notably in Germany. While 24 European regimes had been democratic in 1920, the number was down to eleven in 1939. 

Various historians and economists see the rise of Hitler as a direct consequence of the austerity policies indirectly imposed on Germany by its creditors following the economic crash of the late 1920s. Ewald Nowotny, the current head of Austria’s national bank, stated that it was precisely ‘the single-minded concentration on austerity policy’ in the 1930s that ‘led to mass unemployment, a breakdown of democratic systems and, at the end, to the catastrophe of Nazism’. 

Historian Steven Bryan agrees: ‘During the 1920s and 1930s it was precisely the refusal to acknowledge the social and political consequences of austerity that helped bring about not only the depression, but also the authoritarian governments of the 1930s.’



Reclaiming the State. A Progressive Vision of Sovereignty for a Post-Neoliberal World


William Mitchell and Thomas Fazi.


It’s Time to Shift the Economy into Fourth Gear Capitalism with Basic Income – Scott Santens. 

The economy is in between gears right now, and that’s a growing problem, because, as is true with all higher gears, we could be accomplishing so much more with so much less, and prosperity could be greatly increased not only for the lucky few but for everyone. What do I mean? Well, let’s look at the gears of capitalism, of which there have so far been three, before moving on to what fourth gear is, what’s stopping us from reaching it, and how we can achieve it.

The Gears of Capitalism

First gear was made possible by the invention of the steam engine which allowed for the beginnings of industry and the bridging of great distances with trains and steam-powered ships.

Second gear was made possible by the invention of electricity which allowed for industrialization to go into overdrive while bridging even greater distances with the telegraph and telephones.

Third gear was made possible by the invention of the computer which allowed for full globalization and the connection of everyone to each other all over the world with information technology and the internet.

So what is fourth gear?

Fourth gear is the handing over of labor to machines, and that includes not only muscle labor, as was already true in lower gears, but mental labor. It is the long-awaited freeing of humanity to pursue human interests, paid or unpaid, as payment is of less concern when machines are working for us… that is, as long as we humans are earning the machines’ paychecks to purchase what they’re producing.

And that’s the rub. That’s why we’ve so far refused to shift into fourth-gear capitalism: in fourth gear, human labor necessarily becomes unnecessary. This can be an obstacle within the mind, for capitalism itself was built to combat scarcity, and the division of labor meant that everyone needed to pull their weight so that all might survive. But each gear along the way has enabled us to do more with less energy expended, so where once the majority of humanity’s time was spent in the fields, now only about one percent of it is.


Democracy in Chains: The Deep History of the Radical Right’s Stealth Plan for America – Nancy Maclean. 

In 1955 the U.S. Supreme Court issued its second Brown v. Board of Education ruling, calling for the dismantling of segregation in public schools with “all deliberate speed.”

Thirty-seven-year-old James McGill Buchanan liked to call himself a Tennessee country boy. No less a figure than Milton Friedman had extolled Buchanan’s potential. As Colgate Whitehead Darden Jr., the president of the University of Virginia, reviewed the document, he might have wondered if the newly hired economist had read his mind. For without mentioning the crisis at hand, Buchanan’s proposal put in writing what Darden was thinking: Virginia needed to find a better way to deal with the incursion on states’ rights represented by Brown.

States’ rights, in effect, were yielding in preeminence to individual rights. It was not difficult for either Darden or Buchanan to imagine how a court might now rule if presented with evidence of the state of Virginia’s archaic labor relations, its measures to suppress voting, or its efforts to buttress the power of reactionary rural whites by underrepresenting the moderate voters of the cities and suburbs of Northern Virginia. Federal meddling could rise to levels once unimaginable.

What the court ruling represented to Buchanan was personal. Northern liberals—the very people who looked down upon southern whites like him, he was sure—were now going to tell his people how to run their society. And to add insult to injury, he and people like him with property were no doubt going to be taxed more to pay for all the improvements that were now deemed necessary and proper for the state to make.

Find the resources, he proposed to Darden, for me to create a new center on the campus of the University of Virginia, and I will use this center to create a new school of political economy and social philosophy. It would be an academic center, rigorously so, but one with a quiet political agenda: to defeat the “perverted form” of liberalism that sought to destroy their way of life, “a social order,” as he described it, “built on individual liberty,” a term with its own coded meaning but one that Darden surely understood. The center, Buchanan promised, would train “a line of new thinkers” in how to argue against those seeking to impose an “increasing role of government in economic and social life.”

Buchanan fully understood the scale of the challenge he was undertaking and promised no immediate results. But he made clear that he would devote himself passionately to this cause.

Buchanan’s team had no discernible success in decreasing the federal government’s pressure on the South all the way through the 1960s and ’70s. But take a longer view—follow the story forward to the second decade of the twenty-first century—and a different picture emerges, one that is both a testament to Buchanan’s intellectual powers and, at the same time, the utterly chilling story of the ideological origins of the single most powerful and least understood threat to democracy today: the attempt by the billionaire-backed radical right to undo democratic governance.

A quest that began as a quiet attempt to prevent the state of Virginia from having to meet national democratic standards of fair treatment and equal protection under the law would, some sixty years later, become the veritable opposite of itself: a stealth bid to reverse-engineer all of America, at both the state and the national levels, back to the political economy and oligarchic governance of midcentury Virginia, minus the segregation.

The goal of all these actions was to destroy our institutions, or at least change them so radically that they became shadows of their former selves.

This, then, is the true origin story of today’s well-heeled radical right, told through the intellectual arguments, goals, and actions of the man without whom this movement would represent yet another dead-end fantasy of the far right, incapable of doing serious damage to American society.

When I entered Buchanan’s personal office, part of a stately second-floor suite, I felt overwhelmed. There were papers stacked everywhere, in no discernible order. Not knowing where to begin, I decided to proceed clockwise, starting with a pile of correspondence that was resting, helter-skelter, on a chair to the left of the door. I picked it up and began to read. It contained confidential letters from 1997 and 1998 concerning Charles Koch’s investment of millions of dollars in Buchanan’s Center for Study of Public Choice and a flare-up that followed.

Catching my breath, I pulled up an empty chair and set to work. It took me time—a great deal of time—to piece together what these documents were telling me. They revealed how the program Buchanan had first established at the University of Virginia in 1956 and later relocated to George Mason University, the one meant to train a new generation of thinkers to push back against Brown and the changes in constitutional thought and federal policy that had enabled it, had become the research-and-design center for a much more audacious project, one that was national in scope. This project was no longer simply about training intellectuals for a battle of ideas; it was training operatives to staff the far-flung and purportedly separate, yet intricately connected, institutions funded by the Koch brothers and their now large network of fellow wealthy donors. These included the Cato Institute, the Heritage Foundation, Citizens for a Sound Economy, Americans for Prosperity, FreedomWorks, the Club for Growth, the State Policy Network, the Competitive Enterprise Institute, the Tax Foundation, the Reason Foundation, the Leadership Institute, and more, to say nothing of the Charles Koch Foundation and Koch Industries itself.

I learned how and why Charles Koch first became interested in Buchanan’s work in the early 1970s, called on his help with what became the Cato Institute, and worked with his team in various organizations. What became clear is that by the late 1990s, Koch had concluded that he’d finally found the set of ideas he had been seeking for at least a quarter century by then—ideas so groundbreaking, so thoroughly thought-out, so rigorously tight, that once put into operation, they could secure the transformation in American governance he wanted. From then on, Koch contributed generously to turning those ideas into his personal operational strategy to, as the team saw it, save capitalism from democracy—permanently.

In his first big gift to Buchanan’s program, Charles Koch signaled his desire for the work he funded to be conducted behind the backs of the majority. “Since we are greatly outnumbered,” Koch conceded to the assembled team, the movement could not win simply by persuasion. Instead, the cause’s insiders had to use their knowledge of “the rules of the game”—that game being how modern democratic governance works—“to create winning strategies.” A brilliant engineer with three degrees from MIT, Koch warned, “The failure to use our superior technology ensures failure.” Translation: the American people would not support their plans, so to win they had to work behind the scenes, using a covert strategy instead of open declaration of what they really wanted.

Future-oriented, Koch’s men (and they are, overwhelmingly, men) gave no thought to the fate of the historical trail they left unguarded. And thus, a movement that prided itself, even congratulated itself, on its ability to carry out a revolution below the radar of prying eyes (especially those of reporters) had failed to lock one crucial door: the front door to a house that let an academic archive rat like me, operating on a vague hunch, into the mind of the man who started it all.

What animated Buchanan, what became the laser focus of his deeply analytic mind, was the seemingly unfettered ability of an increasingly more powerful federal government to force individuals with wealth to pay for an increasing number of public goods and social programs they had had no personal say in approving. Better schools, newer textbooks, and more courses for black students might help the children, for example, but whose responsibility was it to pay for these improvements? The parents of these students? Others who wished voluntarily to help out? Or people like himself, compelled through increasing taxation to contribute to projects they did not wish to support? To Buchanan, what others described as taxation to advance social justice or the common good was nothing more than a modern version of mob attempts to take by force what the takers had no moral right to: the fruits of another person’s efforts. In his mind, to protect wealth was to protect the individual against a form of legally sanctioned gangsterism. Where did this gangsterism begin? Not in the way we might have expected him to explain it to Darden: with do-good politicians, aspiring attorneys seeking to make a name for themselves in constitutional law, or even activist judges. It began before that: with individuals, powerless on their own, who had figured out that if they joined together to form social movements, they could use their strength in numbers to move government officials to hear their concerns and act upon them.

The only fact that registered in his mind was the “collective” source of their power—and that, once formed, such movements tended to stick around, keeping tabs on government officials and sometimes using their numbers to vote out those who stopped responding to their needs. How was this fair to other individuals? How was this American?

Even when conservatives later gained the upper hand in American politics, Buchanan saw his idea of economic liberty pushed aside. Richard Nixon expanded government more than his predecessors had, with costly new agencies and regulations, among them a vast new Environmental Protection Agency. George Wallace, a candidate strongly identified with the South and with the right, nonetheless supported public spending that helped white people. Ronald Reagan talked the talk of small government, but in the end, the deficit ballooned during his eight years in office.

Had there not been someone else as deeply frustrated as Buchanan, as determined to fight the uphill fight, but in his case with much keener organizational acumen, the story this book tells would no doubt have been very different. But there was. His name was Charles Koch. An entrepreneurial genius who had multiplied the earnings of the corporation he inherited by a factor of at least one thousand, he, too, had an unrealized dream of liberty, of a capitalism all but free of governmental interference and, at least in his mind, thus able to achieve the prosperity and peace that only this form of capitalism could produce. The puzzle that preoccupied him was how to achieve this in a democracy where most people did not want what he did.

Ordinary electoral politics would never get Koch what he wanted. Passionate about ideas to the point of obsession, Charles Koch had worked for three decades to identify and groom the most promising libertarian thinkers in hopes of somehow finding a way to break the impasse. He subsidized and at one point even ran an obscure academic outfit called the Institute for Humane Studies in that quest. “I have supported so many hundreds of scholars” over the years, he once explained, “because, to me, this is an experimental process to find the best people and strategies.”

The goal of the cause, Buchanan announced to his associates, should no longer be to influence who makes the rules, to vest hopes in one party or candidate. The focus must shift from who rules to changing the rules. For liberty to thrive, Buchanan now argued, the cause must figure out how to put legal, indeed constitutional, shackles on public officials, shackles so powerful that no matter how sympathetic these officials might be to the will of majorities, no matter how concerned they were with their own reelections, they would no longer have the ability to respond to those who used their numbers to get government to do their bidding. There was a second, more diabolical aspect to the solution Buchanan proposed, one that we can now see influenced Koch’s own thinking. Once these shackles were put in place, they had to be binding and permanent. The only way to ensure that the will of the majority could no longer influence representative government on core matters of political economy was through what he called “constitutional revolution.”

By the late 1990s, Charles Koch realized that the thinker he was looking for, the one who understood how government became so powerful in the first place and how to take it down in order to free up capitalism, the one who grasped the need for stealth because only piecemeal, yet mutually reinforcing, assaults on the system would survive the prying eyes of the media, was James Buchanan.

The Koch team’s most important stealth move, and the one that proved most critical to success, was to wrest control over the machinery of the Republican Party, beginning in the late 1990s and with sharply escalating determination after 2008. From there it was just a short step to lay claim to being the true representatives of the party, declaring all others RINOs (Republicans in name only). But while these radicals of the right operate within the Republican Party and use that party as a delivery vehicle, make no mistake about it: the cadre’s loyalty is not to the Grand Old Party or its traditions or standard-bearers. Their loyalty is to their revolutionary cause.

Our trouble in grasping what has happened comes, in part, from our inherited way of seeing the political divide. Americans have been told for so long, from so many quarters, that political debate can be broken down into conservative versus liberal, pro-market versus pro-government, Republican versus Democrat, that it is hard to recognize that something more confounding is afoot, a shrewd long game blocked from our sight by these stale classifications.

The Republican Party is now in the control of a group of true believers for whom compromise is a dirty word. Their cause, they say, is liberty. But by that they mean the insulation of private property rights from the reach of government, and the takeover of what was long public (schools, prisons, western lands, and much more) by corporations, a system that would radically reduce the freedom of the many. In a nutshell, they aim to hollow out democratic resistance. And by its own lights, the cause is nearing success.

The 2016 election looked likely to bring a big presidential win with across-the-board benefits. The donor network had so much money and power at its disposal as the primary season began that every single Republican presidential front-runner was bowing to its agenda. Not a one would admit that climate change was a real problem or that guns weren’t good, and the more widely distributed, the better. Every one of them attacked public education and teachers’ unions and advocated more charter schools and even tax subsidies for religious schools. All called for radical changes in taxation and government spending. Each one claimed that Social Security and Medicare were in mortal crisis and that individual retirement and health savings accounts, presumably to be invested with Wall Street firms, were the best solution.

Although Trump himself may not fully understand what his victory signaled, it put him between two fundamentally different, and opposed, approaches to political economy, with real-life consequences for us all. One was in its heyday when Buchanan set to work. In economics, its standard-bearer was John Maynard Keynes, who believed that for a modern capitalist democracy to flourish, all must have a share in the economy’s benefits and in its governance. Markets had great virtues, Keynes knew—but also significant built-in flaws that only government had the capacity to correct.

As a historian, I know that his way of thinking, as implemented by elected officials during the Great Depression, saved liberal democracy in the United States from the rival challenges of fascism and Communism in the face of capitalism’s most cataclysmic collapse. And that it went on to shape a postwar order whose operating framework yielded ever more universal hope that, by acting together and levying taxes to support shared goals, life could be made better for all.

The most starkly opposed vision is that of Buchanan’s Virginia school. It teaches that all such talk of the common good has been a smoke screen for “takers” to exploit “makers,” in the language now current, using political coalitions to “vote themselves a living” instead of earning it by the sweat of their brows. Where Milton Friedman and F. A. Hayek allowed that public officials were earnestly trying to do right by the citizenry, even as they disputed the methods, Buchanan believed that government failed because of bad faith: because activists, voters, and officials alike used talk of the public interest to mask the pursuit of their own personal self-interest at others’ expense. His was a cynicism so toxic that, if widely believed, it could eat like acid at the foundations of civic life. And he went further by the 1970s, insisting that the people and their representatives must be permanently prevented from using public power as they had for so long. Manacles, as it were, must be put on their grasping hands.

Is what we are dealing with merely a social movement of the right whose radical ideas must eventually face public scrutiny and rise or fall on their merits? Or is this the story of something quite different, something never before seen in American history? Could it be—and I use these words quite hesitantly and carefully—a fifth-column assault on American democratic governance?

The term “fifth column” has been applied to stealth supporters of an enemy who assist by engaging in propaganda and even sabotage to prepare the way for its conquest.

This cause is different. Pushed by relatively small numbers of radical-right billionaires and millionaires who have become profoundly hostile to America’s modern system of government, an apparatus decades in the making, funded by those same billionaires and millionaires, has been working to undermine the normal governance of our democracy. Indeed, one such manifesto calls for a “hostile takeover” of Washington, D.C. That hostile takeover maneuvers very much like a fifth column, operating in a highly calculated fashion, more akin to an occupying force than to an open group engaged in the usual give-and-take of politics. The size of this force is enormous. The social scientists who have led scholars in researching the Koch network write that it “operates on the scale of a national U.S. political party” and employs more than three times as many people as the Republican committees had on their payrolls in 2015.

For all its fine phrases, what this cause really seeks is a return to oligarchy, to a world in which both economic and effective political power are to be concentrated in the hands of a few. It would like to reinstate the kind of political economy that prevailed in America at the opening of the twentieth century, when the mass disfranchisement of voters and the legal treatment of labor unions as illegitimate enabled large corporations and wealthy individuals to dominate Congress and most state governments alike, and to feel secure that the nation’s courts would not interfere with their reign. The first step toward understanding what this cause actually wants is to identify the deep lineage of its core ideas. And although its spokespersons would like you to believe they are disciples of James Madison, the leading architect of the U.S. Constitution, it is not true.

Their intellectual lodestar is John C. Calhoun. He developed his radical critique of democracy a generation after the nation’s founding, as the brutal economy of chattel slavery became entrenched in the South, and his vision horrified Madison.


Democracy in Chains: The Deep History of the Radical Right’s Stealth Plan for America 

by Nancy Maclean

Nancy K. MacLean is an American historian. She is the William H. Chafe Professor of History and Public Policy at Duke University and the author of numerous books and articles on various aspects of twentieth-century United States history.


How the Postal System and the Printing Press Transformed European Markets – Prateek Raj. 

In the sixteenth century, the Northwest European region of England and the Low Countries underwent transformational change. In this region, a bourgeois culture emerged and cities like Antwerp, Amsterdam, and London became centers of institutional and business innovation, whose accomplishments have influenced the modern world.

For example, one of the first permanent commodity bourses was established in Antwerp in 1531, the first stock exchange emerged in Amsterdam in 1602, and joint stock companies became a promising form of organizing business in London in the late sixteenth century. The sixteenth century transformation was followed by the seventeenth century Dutch Golden Age, and the eighteenth century English Industrial Revolution. What made the Northwest region of Europe so different? The question remains a central concern in social sciences, with scholars from diverse fields researching the subject.

The medieval power of merchant guilds

Markets don’t function well if they are ridden with frictions like lack of information, lack of trust, or high transaction costs. In the presence of frictions, business is often conducted via relationships.

Until the end of the fifteenth century, impartial institutions like courts and police that serve all parties generally—so ubiquitous today in the developed world—weren’t well developed in Europe. In such a world without impartial institutions, trade often was (and in many places still is) heavily dependent on relationships and conducted through networks like merchant guilds. Such relationship-based trade through dense networks of merchant guilds reduced concerns about information access and reliability. Not surprisingly, because the merchant guild system was an effective system in the absence of strong formal institutions, it persisted in Europe for several centuries. In developing countries like India, which lack well-developed formal institutions, networked institutions like castes still play an important role in business.

Before the fourteenth century, merchant guild networks were probably less hierarchical, more voluntary, and more inclusive. But, with time, merchant guilds started to become exclusive monopolies, placing high barriers to entry for outsiders, and they began to resemble cartels with close involvement in local politics. There were two reasons why these guilds erected such tough barriers to entry:

  • Repeated committed interaction was the key to effectiveness of merchant guilds. Uncommitted outsiders could behave opportunistically and undermine the reliability of the system. Therefore, outsiders faced restrictions.
  • Outsiders threatened the position of existing businessmen by increasing competition. So, even genuinely committed outsiders could be barred from entry, as they threatened the domination of existing members.
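The first point, that repeated committed interaction sustains honest trade, can be sketched with a standard repeated-game argument (my illustration, not from the article, and the payoff numbers are invented). Under a grim-trigger strategy, a merchant who cheats gains once but forfeits all future cooperative trade, so honesty pays only when the future matters enough:

```python
# Minimal repeated-game sketch of why guild-style repeated interaction
# sustains honest trade. Payoffs are invented for illustration: honest trade
# pays c per period; a one-off cheat pays d > c, after which the relationship
# collapses to payoff p < c forever. delta is the trader's discount factor.

def cooperation_sustainable(c, d, p, delta):
    """Cooperation is an equilibrium under grim trigger iff the discounted
    value of honesty, c/(1-delta), beats cheating once and being punished
    forever, d + delta*p/(1-delta)."""
    return c / (1 - delta) >= d + delta * p / (1 - delta)

# Rearranging gives the classic threshold: delta >= (d - c) / (d - p).
c, d, p = 2.0, 3.0, 0.0          # invented payoffs
threshold = (d - c) / (d - p)
print(f"cooperation sustainable once delta >= {threshold:.3f}")

assert cooperation_sustainable(c, d, p, 0.5)      # patient trader stays honest
assert not cooperation_sustainable(c, d, p, 0.2)  # impatient trader cheats
```

The same logic explains the guilds' barriers to entry: an uncommitted outsider is effectively a trader with a low delta, for whom cheating is rational, which is why restricting entry protected the system's reliability.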

But, in the sixteenth century, the merchant guild system began to lose its significance as more impersonal markets, where traders could directly trade without the need of an affiliation, began to emerge and rulers stopped granting privileges to merchant guilds. The traders began to rely less on networked and collective institutions like merchant guilds, and directly initiated partnerships with traders who they may not have known well. For example, in Antwerp the domination of intermediaries (called hostellers) who would connect foreign traders declined. Instead, the foreign traders began to conduct such trades directly with each other in facilities like bourses.

Emergence of markets in the 16th century

In a new working paper, I study the emergence of impersonal markets in Europe during the sixteenth century. I survey the 50 largest European cities during the fourteenth through sixteenth centuries and codify the nature of sixteenth century economic institutions in each of the cities. In the survey, I find that merchant guilds were declining in the Northwest region of Europe, while elsewhere in Europe they continued to dominate commerce until much later, although there were some reforms underway in the Milanese and Venetian regions of Italy.

What explains the observed pattern of emergence of impersonal markets in sixteenth century Europe? I focus on the interaction between the commercial and communication revolutions of late fifteenth century Europe. In the paper, I argue that the Northwest European region uniquely benefited from both of these revolutions due to its unique geography.

Commercial revolution at the Atlantic coast

What motivated traders to seek risky opportunities beyond close networks? If traders found partnerships with unfamiliar traders beyond their business networks to be highly beneficial, that would provide good incentives for the rise of impersonal markets. The Northwest Region was close to the sea, notably the Atlantic coast, which was at the time undergoing a commercial revolution with the discovery of new sea routes to Asia and the Americas. So, the region became a hub for long distance trade, attracting unfamiliar traders who came to its coast looking for business opportunity. I find that all cities where merchant guild privileges declined were at the sea, along the Atlantic or North Sea coast. Moreover, all cities where merchant guilds underwent reform (but didn’t decline) were within 150km of a sea port.

The communication revolution of the postal system and the printing press

What made traders feel confident about the reliability of such risky impersonal partnerships? If availability of trade-related information and business practices improved, it could increase confidence traders had in such unfamiliar partnerships. In the sixteenth century, the postal system improved across Europe. The postal system made communication between distant traders easier as traders could correspond regularly with each other and gain more accurate information. This helped expand long distance trade across Europe.

While the Northwest European region didn’t have a particular advantage over other regions in postal communication, it had an advantage in the early diffusion of printed books. The region was close to Mainz, the city where Johannes Gutenberg invented the movable type printing press in the mid fifteenth century, and cities close to Mainz adopted printing sooner than many other regions of Europe in the first few decades of its introduction. So, trade-related books and new (or unknown) business practices like double-entry bookkeeping diffused early and rapidly in the region.

Such a high penetration of printed material reduced information barriers and improved business practices. I find that all cities where guild privileges declined or merchant guilds underwent reform in the sixteenth century enjoyed high penetration of printed material in the fifteenth century. Among cities within 150km of a sea port, cities where merchant guilds declined or reformed had more than twice as many diffused books per capita as cities where merchant guilds continued to dominate.

By comparison, in the sixteenth century there were four major Atlantic port cities where merchant guilds declined (Hamburg, London, Antwerp, and Amsterdam) and four guild-based Atlantic cities (Lisbon, Seville, Rouen, and Bordeaux). Ranked by fifteenth century per capita printing penetration, the cities stack up as: Lisbon < Bordeaux < Hamburg < Seville < Rouen < London < Amsterdam < Antwerp.
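The comparison above can be made concrete with a small sketch (my illustration, not the paper’s data or method): the numeric ranks below are placeholders chosen only to reproduce the ordering quoted in the text, not real per-capita figures. The sketch checks how well a simple cutoff on print penetration alone separates the declined-guild ports from the guild-dominated ones:

```python
# Rank of fifteenth-century printing penetration (1 = lowest), matching the
# ordering in the text. These ranks are illustrative placeholders, not data.
print_rank = {
    "Lisbon": 1, "Bordeaux": 2, "Hamburg": 3, "Seville": 4,
    "Rouen": 5, "London": 6, "Amsterdam": 7, "Antwerp": 8,
}
guilds_declined = {"Hamburg", "London", "Antwerp", "Amsterdam"}

# Reproduce the ordering quoted in the text, lowest penetration first.
ordering = sorted(print_rank, key=print_rank.get)
print(" < ".join(ordering))

# For each cutoff, predict "guilds declined" for cities at or above it,
# and count how many of the eight cities that classifies correctly.
accuracy = {}
for t in range(1, 9):
    predicted = {c for c, r in print_rank.items() if r >= t}
    correct = len(predicted & guilds_declined) \
        + len((set(print_rank) - predicted) - guilds_declined)
    accuracy[t] = correct

best = max(accuracy, key=accuracy.get)
print(f"best cutoff: rank >= {best}, {accuracy[best]}/8 correct")
```

Print penetration alone classifies seven of the eight cities correctly; Hamburg, whose guilds declined despite low print diffusion, is the outlier, which is consistent with the paper’s argument that the commercial revolution at the coast mattered alongside printing.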

The combination of both the commercial revolution along the sea coast, especially the Atlantic coast, and the communication revolution, especially near Mainz, uniquely benefited Northwest Europe, as it began to attract traders who favored impersonal market-based exchange over exchange conducted via guild networks. Rulers began to disfavor privileged monopolies when they realized the feasibility of impersonal exchange and that they could have superior sources of revenue from impersonal markets. In the region, trade democratised, as more people could participate in business.

Regions like Spain and Portugal that benefited only from the commercial revolution of sea trade to Asia and the Americas had low levels of printing penetration. In contrast, regions like Germany, Italy, and France benefited from the communication and print revolution but didn’t enjoy a bustling Atlantic coast. No other region enjoyed the combined benefits of both the commercial and the communication revolution.

Takeaway for policy makers: democratize the market

If information access is poor (lack of transparency) or businesses don’t adopt reliable business practices (poor financial reporting or opaque quality standards), these deficiencies at the business level can make customers and investors question the reliability of new businesses. Politicians, like medieval rulers, may be more willing to enter into a nexus with dominant businesses, like medieval merchant guilds, if 1) market frictions or 2) lack of incentives make the economy dependent on such businesses.

This was the case with the taxi industry for a long time, where customers were willing to pay a high fee to get reliable taxi services because the supply of drivers was low (new drivers in cities like London had to pay a high license fee and fulfill tough training requirements). But better taxi-hailing mobile apps like Lyft and Uber, by giving customers access to real time GPS tracking, have revolutionized the industry, much like the communication revolution did in the late fifteenth and sixteenth centuries. Another area where information access has improved reliability in business is the tourism and travel industry.

While in the past the tourism sector was dominated by travel agents and their recommended offerings, an influx of providers and travel comparison websites, such as AirBnB, has increased the reliability of small, unknown hospitality service providers. Today, many prefer to stay at a stranger’s home over a reputed hotel chain. The revolutions in the taxi and travel industries follow the old historical pattern: a disruption in how information is made available changes how businesses are organized.

Noticing Neoliberalism’s Nakedness – Chris Trotter. 

If the 2017 general election turns into a messy boil-over, it will be the fault of New Zealand’s most successful people. For the best part of 30 years, the high achievers of New Zealand society have aligned themselves with an ideology that has produced consistently negative outcomes. Not for themselves. In fact, they have done extremely well out of the economic and social changes of the past 30 years. For the majority of their fellow citizens, however, the Neoliberal Revolution has been a disaster.

The real puzzle of the past 30 years is, therefore, why a political system intended to empower the majority has not consigned neoliberalism to the dustbin of history. Why have those on the receiving end of economic and social policies designed to benefit only a minority of the population not simply elected a party, or parties, committed to eliminating them?

A large part of the answer is supplied in Hans Christian Andersen’s famous fable, The Emperor’s New Clothes. Those who know the story will recall that the crucial element of the swindlers’ con was their insistence that the Emperor’s magnificent attire could only be seen by the wise. To “anyone who was unfit for his office, or who was unusually stupid”, the Emperor would appear to be wearing nothing at all.

The interesting thing about Andersen’s fable is that it’s actually supported by scientific evidence. If people whose judgement we have no reason to doubt inform us that black is white, most of us will, in an astonishingly short period of time, start disregarding the evidence of our own eyes. Even worse, if an authority figure instructs us to administer punishments to people “for their own good”, most of us will do so. Even when the punishment appears to be causing the recipients intense, even fatal, pain, we will continue flicking the switch for as long as the authority figure insists that the pain is necessary and that we have no alternative except to proceed. (If you doubt this, just google “Stanley Milgram”.)

For 30 years, then, New Zealand’s best and brightest business leaders, academics, journalists and politicians have been telling the rest of us that the only reason neoliberalism appears to be promoting a nakedly brutal and inequitable economic and social system is because we are too stupid to perceive the true beneficence of the free market. In language ominously reminiscent of Professor Milgram’s terrible experiment, we have been told by those in authority that there can be “no long-term gain without short-term pain”, and, God forgive us, we have believed them – and continued flicking the switch.

Nowhere has this readiness to discount the evidence before one’s own eyes been more pronounced than in our politicians. How many of them, when confronted with the social and environmental wreckage of neoliberalism, have responded like the “honest old minister” in Andersen’s fable, who, upon being ushered into the swindlers’ workshop, and seeing nothing, thought: “Heaven have mercy! Can it be that I’m a fool? I’d have never guessed it, and not a soul must know. Am I unfit to be the minister? It would never do to let on that I can’t see the cloth.”

How else are we to explain the unwillingness of the Labour Party and the Greens to break decisively with the neoliberal swindle? Or, the repeated declarations from National and Act praising the beauty and enchantment of its effects: “Such a pattern, what colours!”

Even as the evidence of its malignity mounted before them. Even as the numbers harmed by its poisonous remedies increased. The fear that the best and the brightest might perceive them as unusually stupid and unfit for office led the opposition parties to concentrate all their criticism on the symptoms of neoliberalism; or, in the spirit of Andersen’s tale, to critique the cut of the Emperor’s new clothes instead of their non-existence.

Eventually, of course, the consequences of neoliberalism are felt by too many people to be ignored. Children who cannot afford to buy their own home. Grandchildren who cannot access mental health care. The spectacle of people living in their cars. Of homeless men freezing to death in the streets. Eventually someone – a politician unafraid of being thought unusually stupid, or unfit for office – breaks the swindlers’ spell.

“‘But he hasn’t got anything on,’ a little child said.

‘Did you ever hear such innocent prattle?’ said its father. And one person whispered to another what the child had said, ‘He hasn’t anything on. A child says he hasn’t anything on.’

‘But he hasn’t got anything on!’ the whole town cried out at last.”

Now, the whole New Zealand electorate may not be calling “Time!” on neoliberalism – and certainly not its best and its brightest – but Winston Peters is.

And the town is whispering.

Bowalley Road

‘These problems will not be fixed by the market’ – Bruce Plested. 

Over the years I’ve had a variety of bosses. In seeking to recognise a good boss from one not so good, I asked this question: ‘Would they make a good foreman?’

  • Could they ask the workers to do a difficult or unpleasant job and expect them to do it?
  • Could they do the job themselves?
  • Could they take their people with them?
  • Did they get the job done every day, week and year?

With 2017 being an election year in New Zealand it is worth asking these questions of our politicians.  Too many of them fail the test and are lost in platitudes, jokes, jibes, foxy words and sheer procrastination.


Our houses, through most parts of New Zealand, cost some ten times the net annual income of the family seeking to buy them.

These high prices (three times annual income was normal for many years prior to the early 2000s) have been progressively increasing for the past 15 years, and all governments have been aware of the problem. No government or local government has taken any meaningful action against this rising tide.

As the New Zealand Initiative has stated, “There are not enough homes being built to meet the demand.”


  • Planning restrictions make it difficult to increase population density within the city boundaries.
  • Cities are prevented from growing outwards because of rural and urban boundaries.
  • New developments require infrastructure investment from local councils, which can only pay for such investment by rates increases.

Politicians, both local and national, must take action on this very fixable social disgrace. “The market” cannot sort out this problem. Real leadership and intestinal fortitude are needed now.


Measured by some standards, our education is at satisfactory levels on a global average scale. However only 30% of children from lower decile school areas are reaching the New Zealand average for level 3 NCEA.

This low level of success entrenches a permanent socio-economic group of educational under-achievers, and it is our Māori and Pacific Island people who make up most of this group.

This group of under-achievers is more displaced than ever by rising housing and rent prices. Without educational success its members will continue to make a lesser contribution to society.

Business can play a bigger role in attempting to sustain and assist educational development. If businesses and schools, particularly in lower decile areas, get together in a meaningful way, benefits will evolve. The more children understand how a business (from a farm, to a fruit shop, to an engineering factory, to a quarry) works and interacts, the more they can understand the possibilities. Business people may be able to inspire children and parents to strive for success, and may be able to contribute to financing school wish lists, from computers to sports equipment, books to bus trips.

Can electorate and local politicians help make this happen?


Pollution and degradation of our environment is another area requiring strong political will.

Most cities provide bins for rubbish and bins for recycling. There is, however, no education, or ongoing exhortation, on how to recycle, why to recycle and whether it works. Is an unwashed bottle or can recyclable, or does it go into landfill? Should we recycle bottles with the lid left on? Should wine bottles have the lead seal removed? What happens to polystyrene, what happens to plastic bottles with pumps attached, what about empty aerosol cans? Much of this stuff is going to landfill because our local authorities don’t tell us what is required. If recycling is just a myth, let us know; otherwise teach us to recycle for the benefit of the planet.

Our lack of respect for water and water quality is an indictment of governments going back decades. Various businesses and pressure groups have been allowed to pour chemical waste, animal entrails, milk, and human and animal effluent into our streams, rivers and sea. Freshwater rights for irrigation have been given, to the extent that some rivers run dry most years. And now we are giving water rights to export freshwater in plastic bottles.

Regulators could have stood against many of these past and present excesses, but chose to do nothing and leave the problems to our children and grandchildren.

A couple of years ago I heard a European billionaire being interviewed. When the slightly irritable reporter asked “Well, how much money do you want?” the billionaire answered “Just a wee bit more.”

And it is the “wee bit more” that has done so much to damage our environment – just a few more cows per acre, just a wee bit more water for irrigation, just another water bore in case it doesn’t rain, just a wee bit more sewage mixed with a wee bit more storm water, just a few more years hitting our already depleted fish stocks.

The problems mentioned here are not fixed by the market. They are like law and order – the local and national politicians should be dealing with them and committing to solutions before the next elections.


Bruce Plested, Mainfreight founder. 

The Spinoff

When Shareholder Capitalism Came to Town – Steven Pearlstein.  

The rise in inequality can be blamed on the shift from managerial to shareholder capitalism.

It was only 20 years ago that the world was in thrall to American-style capitalism. Not only had it vanquished communism, but it was widening its lead over Japan Inc. and European-style socialism. America’s companies were widely viewed as the most innovative and productive, its capital markets the most efficient, its labor markets the most flexible and meritocratic, its product markets the most open and competitive, its tax and regulatory regimes the most accommodating to economic growth.

Today, that sense of confidence and economic hegemony seems a distant memory. We have watched the bursting of two financial bubbles, struggled through two long recessions, and suffered a lost decade in terms of incomes of average American households.

We continue to rack up large trade deficits even as many of the country’s biggest corporations shift more of their activity and investment overseas. Economic growth has slowed, and the top 10 percent of households have captured whatever productivity gains there have been. Economic mobility has declined to the point that, by international comparison, it is only middling. A series of accounting and financial scandals, coupled with ever-escalating pay for chief executives and hedge-fund managers, has generated widespread cynicism about business. Other countries are beginning to turn to China, Germany, Sweden, and even Israel as models for their economies.

No wonder, then, that large numbers of Americans have begun to question the superiority of our brand of free-market capitalism. This disillusionment is reflected in the rise of the Tea Party and the Occupy Wall Street movements and the increasing polarization of our national politics. It is also reflected on the shelves of bookstores and on the screens of movie theaters.

Embedded in these critiques is not simply a collective disappointment in the inability of American capitalism to deliver on its economic promise of wealth and employment opportunity. Running through them is also a nagging question about the larger purpose of the market economy and how it serves society.

In the current, cramped model of American capitalism, with its focus on maximizing output growth and shareholder value, there is ample recognition of the importance of financial capital, human capital, and physical capital but no consideration of social capital.

Social capital is the trust we have in one another, and the sense of mutual responsibility for one another, that gives us the comfort to take risks, make long-term investments, and accept the inevitable dislocations caused by the economic gales of creative destruction. Social capital provides the necessary grease for the increasingly complex machinery of capitalism and for the increasingly contentious machinery of democracy. Without it, democratic capitalism cannot survive.

It is our social capital that is now badly depleted. This erosion manifests in the weakened norms of behavior that once restrained the most selfish impulses of economic actors and provided an ethical basis for modern capitalism.

A capitalism in which Wall Street bankers and traders think peddling dangerous loans or worthless securities to unsuspecting customers is just “part of the game.”

A capitalism in which top executives believe it is economically necessary that they earn 350 times what their front-line workers do. 

A capitalism that thinks of employees as expendable inputs. 

A capitalism in which corporations perceive it as both their fiduciary duty to evade taxes and their constitutional right to use unlimited amounts of corporate funds to purchase control of the political system. 

That is a capitalism whose trust deficit is every bit as corrosive as budget and trade deficits.

As economist Luigi Zingales of the University of Chicago concludes in his recent book, A Capitalism for the People, American capitalism has become a victim of its own success. In the years after the demise of communism, “the intellectual hegemony of capitalism, however, led to complacency and extremism: complacency through the degeneration of the system, extremism in the application of its ideological premises,” he writes. “‘Greed is good’ became the norm rather than the frowned-upon exception. Capitalism lost its moral higher ground.”

Pope Francis recently gave voice to this nagging sense that our free-market system had lost its moral bearings. “Some people continue to defend trickle-down theories, which assume that economic growth, encouraged by a free market, will inevitably succeed in bringing about greater justice and inclusiveness in the world,” wrote the new pope in an 84-page apostolic exhortation. “This opinion, which has never been confirmed by the facts, expresses a crude and naïve trust in the goodness of those wielding economic power and in the sacralized workings of the prevailing economic system.”

Our challenge now is to restore both the economic and moral legitimacy of American capitalism. And there is no better place to start than with a reconsideration of the purpose of the corporation.


In the recent history of bad ideas, few have had a more pernicious effect than the one that corporations should be managed to maximize “shareholder value.”

Indeed, much of what we perceive to be wrong with the American economy these days—the slowing growth and rising inequality, the recurring scandals and wild swings from boom to bust, the inadequate investment in research and development and worker training—has its roots in this misguided ideology.

It is an ideology, moreover, that has no basis in history, in law, or in logic. What began in the 1970s and 1980s as a useful corrective to self-satisfied managerial mediocrity has become a corrupting self-interested dogma peddled by finance professors, Wall Street money managers, and overcompensated corporate executives.

Let’s start with the history. The earliest corporations, in fact, were generally chartered not for private but for public purposes, such as building canals or transit systems. Well into the 1960s, corporations were broadly viewed as owing something in return to the community that provided them with special legal protections and the economic ecosystem in which they could grow and thrive.

Legally, no statutes require that companies be run to maximize profits or share prices. In most states, corporations can be formed for any lawful purpose. Lynn Stout, a Cornell law professor, has been looking for years for a corporate charter that even mentions maximizing profits or share price. So far, she hasn’t found one. Companies that put shareholders at the top of their hierarchy do so by choice, Stout writes, not by law.

Nor does the law require, as many believe, that executives and directors owe a special fiduciary duty to the shareholders who own the corporation. The director’s fiduciary duty, in fact, is owed simply to the corporation, which is owned by no one, just as you and I are owned by no one—we are all “persons” in the eyes of the law. Corporations own themselves.

What shareholders possess is a contractual claim to the “residual value” of the corporation once all its other obligations have been satisfied—and even then the directors are given wide latitude to make whatever use of that residual value they choose, just as long as they’re not stealing it for themselves.

It is true, of course, that only shareholders have the power to elect the corporate directors. But given that directors are almost always nominated by the management and current board and run unopposed, it requires the peculiar imagination of corporate counsel to leap from the shareholders’ power to “elect” directors to a sweeping mandate that directors and the executives must put the interests of shareholders above all others.

Given this lack of legal or historical support, it is curious how “maximizing shareholder value” has evolved into such a widely accepted norm of corporate behavior.

Milton Friedman, the University of Chicago free-market economist, is often credited with first articulating the idea in a 1970 New York Times Magazine essay in which he argued that “there is one and only one social responsibility of business—to use its resources and engage in activities designed to increase its profits.” Anything else, he argued, is “unadulterated socialism.”

A decade later, Friedman’s was still a minority view among corporate leaders. In 1981, as Ralph Gomory and Richard Sylla recount in a recent article in Daedalus, the Business Roundtable, representing the nation’s largest firms, issued a statement recognizing a broader purpose of the corporation: “Corporations have a responsibility, first of all, to make available to the public quality goods and services at fair prices, thereby earning a profit that attracts investment to continue and enhance the enterprise, provide jobs and build the economy.” The statement went on to talk about a “symbiotic relationship” between business and society not unlike that voiced nearly 30 years earlier by General Motors chief executive Charlie Wilson, when he reportedly told a Senate committee that “what is good for the country is good for General Motors, and vice versa.”

By 1997, however, the Business Roundtable was striking a tone that sounded a whole lot more like Professor Friedman than CEO Wilson. “The principal objective of a business enterprise is to generate economic returns to its owners,” it declared in its statement on corporate responsibility. “If the CEO and the directors are not focused on shareholder value, it may be less likely the corporation will realize that value.”

The most likely explanation for this transformation involves three broad structural changes that were going on in the U.S. economy—globalization, deregulation, and rapid technological change. Over a number of decades, these three forces have conspired to rob what were once the dominant American corporations of the competitive advantages they had during the “golden era” of the 1950s and 1960s in both U.S. and global markets. Those advantages—and the operating profits they generated—were so great that they could spread the benefits around to all corporate stakeholders. The postwar prosperity was so widely shared that it rarely occurred to stockholders, consumers, or communities to wonder if they were being shortchanged.

It was only when competition from foreign suppliers or recently deregulated upstarts began to squeeze out those profits—often with the help of new technologies—that these once-mighty corporations were forced to make difficult choices. In the early going, their executives found that it was easier to disappoint shareholders than customers, workers, or even their communities. The result, during the 1970s, was a lost decade for investors.

Beginning in the mid-1980s, however, a number of companies with lagging stock prices found themselves targets for hostile takeovers launched by rival companies or corporate raiders employing newfangled “junk bonds” to finance unsolicited bids. Disappointed shareholders were only too willing to sell out to the raiders. So it developed that the mere threat of a hostile takeover was sufficient to force executives and directors across the corporate landscape to embrace a focus on profits and share prices. Almost overnight they tossed aside their more complacent and paternalistic management style, and with it a host of inhibitions against laying off workers, cutting wages and benefits, closing plants, spinning off divisions, taking on debt, moving production overseas. Some even joined in waging hostile takeovers themselves.

Spurred on by this new “market for corporate control,” companies traded in their old managerial capitalism for a new shareholder capitalism, which continues to dominate the business sector to this day. Those high-yield bonds, once labeled as “junk” and peddled by upstart and ethically challenged investment banks, are now a large and profitable part of the business of every Wall Street firm. The unsavory raiders have now morphed into respected private-equity and hedge-fund managers, some of whom proudly call themselves “activist investors.” And corporate executives who once arrogantly ignored the demands of Wall Street now profess they have no choice but to dance to its tune.


An elaborate institutional infrastructure has developed to reinforce shareholder capitalism and its generally accepted corporate mandate to maximize short-term profits and share price. This infrastructure includes free-market-oriented think tanks and university faculties that continue to spin out elaborate theories about the efficiency of financial markets.

An earlier generation of economists had looked at the stock-market boom and bust that led to the Great Depression and concluded that share prices often reflected irrational herd behavior on the part of investors. But in the 1960s, a different theory took hold at intellectual strongholds such as the University of Chicago and quickly spread to other economics departments and business schools. The essence of the “efficient market” hypothesis, first articulated by Eugene Fama (a 2013 Nobel laureate), is that the current stock price reflects all the public and private information known about a company and therefore is a reliable gauge of the company’s true economic value. For a generation of finance professors, it was only a short logical leap from this hypothesis to the broader conclusion that the share price is the best metric around which to organize a company’s strategy and measure its success.

With the rise of behavioral economics, and the onset of two stock-market bubbles, the efficient-market hypothesis has more recently come under serious criticism. Another of last year’s Nobel winners, Robert Shiller, demonstrated the various ways in which financial markets are predictably irrational. Curiously, however, the efficient-market hypothesis is still widely accepted by business schools—and, in particular, their finance departments—which continue to preach the shareholder-first ideology.

Surveys by the Aspen Institute’s Center for Business Education, for example, find that most MBA students believe that maximizing value for shareholders is the most important responsibility of a company and that this conviction strengthens as they proceed toward their degree, in many schools taking courses that teach techniques for manipulating short-term earnings and share prices. The assumption is so entrenched that even business-school deans who have publicly rejected the ideology acknowledge privately that they’ve given up trying to convince their faculties to take a more balanced approach.

Equally important in sustaining the shareholder focus are corporate lawyers, in-house as well as outside counsels, who now reflexively advise companies against actions that would predictably lower a company’s stock price.

For many years, much of the jurisprudence coming out of the Delaware courts—where most big corporations have their legal home—was based around the “business judgment” rule, which held that corporate directors have wide discretion in determining a firm’s goals and strategies, even if their decisions reduce profits or share prices. But in 1986, the Delaware Court of Chancery ruled that directors of the cosmetics company Revlon had to put the interests of shareholders first and accept the highest price offered for the company. As Lynn Stout has written, and the Delaware courts subsequently confirmed, the decision was a narrowly drawn exception to the business-judgment rule that applies only once a company has decided to put itself up for sale. But it has been widely—and mistakenly—used ever since as a legal rationale for the primacy of shareholder interests and the legitimacy of share-price maximization.

Reinforcing this mistaken belief are the shareholder lawsuits now routinely filed against public companies by class-action lawyers any time the stock price takes a sudden dive. Most of these are frivolous and, particularly since passage of reform legislation in 1995, many are dismissed. But even those that are dismissed generate cost and hassle, while the few that go to trial risk exposing the company to significant embarrassment, damages, and legal fees.

The bigger damage from these lawsuits comes from the subtle way they affect corporate behavior. Corporate lawyers, like many of their clients, crave certainty when it comes to legal matters. So they’ve developed what might be described as a “safe harbor” mentality—an undue reliance on well-established bright lines in advising clients to shy away from actions that might cause the stock price to fall and open the company up to a shareholder lawsuit. Such actions include making costly long-term investments, or admitting mistakes, or failing to follow the same ruthless strategies as their competitors. One effect of this safe-harbor mentality is to reinforce the focus on short-term share price.

The most extensive infrastructure supporting the shareholder-value ideology is to be found on Wall Street, which remains thoroughly fixated on quarterly earnings and short-term trading. Companies that refuse to give quarterly-earnings guidance are systematically shunned by some money managers, while those that miss their earnings targets by even small amounts see their stock prices hammered.

Recent investigations into insider trading have revealed the elaborate strategies and tactics used by some hedge funds to get advance information about a quarterly earnings report in order to turn enormous profits by trading on it. And corporate executives continue to spend enormous amounts of time and attention on industry analysts whose forecasts and ratings have tremendous impact on share prices.

In a now-infamous press interview in the summer of 2007, former Citigroup chairman Charles Prince provided a window into the hold that Wall Street has over corporate behavior. At the time, Citi’s share price had lagged behind that of the other big banks, and there was speculation in the financial press that Prince would be fired if he didn’t quickly find a way to catch up. In the interview with the Financial Times, Prince seemed to confirm that speculation. When asked why he was continuing to make loans for high-priced corporate takeovers despite evidence that the takeover boom was losing steam, he basically said he had no choice—as long as other banks were making big profits from such loans, Wall Street would force him, or anyone else in his job, to make them as well. “As long as the music is playing,” Prince explained, “you’ve got to get up and dance.”

It isn’t simply the stick of losing their jobs, however, that causes corporate executives to focus on maximizing shareholder value. There are also plenty of carrots to be found in those generous—some would say gluttonous—pay packages, the value of which is closely tied to the short-term performance of company stock.

The idea of loading up executives with stock options also dates to the transition to shareholder capitalism. The academic critique of managerial capitalism was that the lagging performance of big corporations was a manifestation of what economists call a “principal-agent” problem. In this case, the “principals” were the shareholders and their directors, and the misbehaving “agents” were the executives who were spending too much of their time, and the shareholders’ money, worrying about employees, customers, and the community at large.

In what came to be one of the most widely cited academic papers of all time, business-school professors Michael Jensen of Harvard and William Meckling of the University of Rochester wrote in 1976 that the best way to align the interests of managers with those of the shareholders was to tie a substantial amount of the managers’ compensation to the share price. In a subsequent paper in 1989 written with Kevin Murphy, Jensen went even further, arguing that the reason corporate executives acted more like “bureaucrats than value-maximizing entrepreneurs” was that they didn’t get to keep enough of the extra value they created.

With that academic foundation, and the enthusiastic support of executive-compensation specialists, stock-based compensation took off. Given the tens and, in more than a few cases, the hundreds of millions of dollars lavished on individual executives, the focus on boosting share price is hardly surprising. The ultimate irony, of course, is that the result of this lavish campaign to more closely align incentives and interests is that the “agents” have done considerably better than the “principals.”

Roger Martin, the former dean of the Rotman School of Management at the University of Toronto, calculates that from 1933 until 1976—roughly speaking, the era of “managerial capitalism,” in which managers sought to balance the interests of shareholders with those of employees, customers, and society at large—the total real compound annual return on the stocks of the S&P 500 was 7.5 percent. From 1976 until 2011—roughly the period of “shareholder capitalism”—the comparable return was 6.5 percent. Meanwhile, according to Martin’s calculation, the ratio of chief-executive compensation to corporate profits increased eightfold between 1980 and 2000, almost all of it coming in the form of stock-based compensation.
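Martin’s comparison rests on compound annual returns, where even a one-point difference in the yearly rate compounds into a large gap in ending wealth. A minimal sketch of the arithmetic (the rates and period lengths are taken from Martin’s figures above; the dollar framing is purely illustrative):

```python
def growth_multiple(rate, years):
    """Ending value of $1 compounded at `rate` per year for `years` years."""
    return (1 + rate) ** years

# Managerial-capitalism era: 7.5% real compound annual return, 1933-1976 (43 years).
managerial = growth_multiple(0.075, 43)   # roughly 22x

# Shareholder-capitalism era: 6.5% real compound annual return, 1976-2011 (35 years).
shareholder = growth_multiple(0.065, 35)  # roughly 9x

# Holding the horizon fixed at 35 years isolates the one-point rate gap:
# 7.5% turns $1 into about $12.6, versus about $9.1 at 6.5%.
gap = growth_multiple(0.075, 35) / growth_multiple(0.065, 35)
```

Over a 35-year horizon, the era of “balanced” management would have left a long-term investor nearly 40 percent better off, which is the force of Martin’s point.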


All of this reinforcing infrastructure—the academic underpinning, the business-school indoctrination, the threat of shareholder lawsuits, the Wall Street quarterly earnings machine, the executive compensation—has now succeeded in hardwiring the shareholder-value ideology into the economy and business culture. It has also set in motion a dynamic in which corporate and investor time horizons have become shorter and shorter. The average holding period for corporate stocks, which for decades was six years, is now down to less than six months. The average tenure of a Fortune 500 chief executive is now down to less than six years. Given those realities, it should be no surprise that the willingness of corporate executives to sacrifice short-term profits to make long-term investments is rapidly disappearing.

A recent study by McKinsey & Company, the blue-chip consulting firm, and Canada’s public pension board found alarming levels of short-termism in the corporate executive suite. According to the study, nearly 80 percent of top executives and directors reported feeling the most pressure to demonstrate a strong financial performance over a period of two years or less, with only 7 percent feeling considerable pressure to deliver strong performance over a period of five years or more. It also found that 55 percent of chief financial officers would forgo an attractive investment project today if it would cause the company to even marginally miss its quarterly-earnings target.

The shift on Wall Street from long-term investing to short-term trading presents a dilemma for those directing a company solely for shareholders: which group of shareholders’ interests is the corporation supposed to optimize? Should it be the hedge funds that are buying and selling millions of shares in a matter of seconds to earn hedge fund–like returns? Or the “activist investors” who have just bought a third of the shares? Or should it be the retired teacher in Dubuque who has held the stock for decades as part of her retirement savings and wants a decent return with minimal downside risk?

One way to deal with this quandary would be for corporations to give shareholders a bigger voice in corporate decision-making. But it turns out that even as they proclaim their dedication to shareholder value, executives and directors have been doing everything possible to minimize shareholder involvement and influence in corporate governance. This curious hypocrisy is most recently revealed in the all-out effort by the business lobby to limit shareholder “say on pay” or the right to nominate a competing slate of directors.

For too many corporations, “maximizing shareholder value” has also provided justification for bamboozling customers, squeezing employees, avoiding taxes, and leaving communities in the lurch. For any one profit-maximizing company, such ruthless behavior may be perfectly rational. But when competition forces all companies to behave in this fashion, neither they nor we wind up better off.

Take the simple example of outsourcing production to lower-cost countries overseas. Certainly it makes sense for any one company to aggressively pursue such a strategy. But if every company does it, these companies may eventually find that so many American consumers have suffered job losses and wage cuts that they can no longer buy the goods the companies are producing, even at the cheaper prices. The companies may also find that government no longer has sufficient revenue to educate its remaining American workers or dredge the ports through which their imported goods are delivered to market.

Economists have a name for such unintended spillover effects—negative externalities—and normally the most effective response is some form of government action, such as regulation, taxes, or income transfers. But one of the hallmarks of the current political environment is that every tax, every regulation, and every new safety-net program is bitterly opposed by the corporate lobby as an assault on profits and job creation. Not only must the corporation commit to putting shareholders first—as they see it, the society must as well. And with the Supreme Court’s decision in Citizens United, corporations are now free to spend unlimited sums of money on political campaigns to elect politicians sympathetic to this view.

Perhaps the most ridiculous aspect of shareholder-über-alles is how at odds it is with every modern theory about managing people. David Langstaff, then chief executive of TASC, a Virginia-based government-contracting firm, put it this way in a recent speech at a conference hosted by the Aspen Institute and the business school at Northwestern University: “If you are the sole proprietor of a business, do you think that you can motivate your employees for maximum performance by encouraging them simply to make more money for you?” Langstaff asked rhetorically. “That is effectively what an enterprise is saying when it states that its purpose is to maximize profit for its investors.”

Indeed, a number of economists have been trying to figure out the cause of the recent slowdown in both the pace of innovation and the growth in worker productivity. There are lots of possible culprits, but surely one candidate is that American workers have come to understand that whatever financial benefit may result from their ingenuity or increased efficiency is almost certain to be captured by shareholders and top executives.

The new focus on shareholders also hasn’t been a big winner with the public. Gallup polls show that people’s trust in and respect for big corporations have been on a long, slow decline in recent decades—at the moment, only Congress and health-maintenance organizations rank lower. When was the last time you saw a corporate chief executive lionized on the cover of a newsweekly? Odds are it was the late Steve Jobs of Apple, who wound up creating more wealth for more shareholders than anyone on the planet by putting shareholders near the bottom of his priority list.


The usual defense you hear of “maximizing shareholder value” from corporate chief executives is that at many firms—not theirs!—it has been poorly understood and badly executed. These executives make clear they don’t confuse today’s stock price or this quarter’s earnings with shareholder value, which they understand to be profitability and stock appreciation over the long term. They are also quick to acknowledge that no enterprise can maximize long-term value for its shareholders without attracting great employees, providing great products and services to customers, and helping to support efficient governments and healthy communities.

Even Michael Jensen has felt the need to reformulate his thinking. In a 2001 paper, he wrote, “A firm cannot maximize value if it ignores the interest of its stakeholders.” He offered a proposal he called “enlightened stakeholder theory,” one that “accepts maximization of the long run value of the firm as the criterion for making the requisite tradeoffs among its stakeholders.”

But if optimizing shareholder value implicitly requires firms to take good care of customers, employees, and communities, then by the same logic you could argue that optimizing customer satisfaction would require firms to take good care of employees, communities, and shareholders. More broadly, optimizing any function inevitably requires the same tradeoffs or messy balancing of interests that executives of an earlier era claimed to have done.

The late, great management guru Peter Drucker long argued that if one stakeholder group should be first among equals, surely it should be the customer. “The purpose of business is to create and keep a customer,” he famously wrote.

Roger Martin picked up on Drucker’s theme in “Fixing the Game,” his book-length critique of shareholder value. Martin cites the experience of companies such as Apple, Johnson & Johnson, and Procter & Gamble—companies that put customers first, and whose long-term shareholders have consistently done better than those of companies that claim to put shareholders first. The reason, Martin says, is that customer focus minimizes undue risk taking, maximizes reinvestment, and creates, over the long run, a larger pie.

Having spoken with more than a few top executives over the years, I can tell you that many would be thrilled if they could focus on customers rather than shareholders. In private, they chafe under the quarterly earnings regime forced on them by asset managers and the financial press. They fear and loathe “activist” investors. They are disheartened by their low public esteem. Few, however, have dared to challenge the shareholder-first ideology in public.

But recently, some cracks have appeared.

In 2006, Ian Davis, then managing director of McKinsey, gave a lecture at the University of Pennsylvania’s Wharton School in which he declared, “Maximization of shareholder value is in danger of becoming irrelevant.”

Davis’s point was that global corporations have to operate not just in the United States but in the rest of the world where people either don’t understand the concept of putting shareholders first or explicitly reject it—and companies that trumpet it will almost surely draw the attention of hostile regulators and politicians.

“Big businesses have to be forthright in saying what their role is in society, and they will never do it by saying, ‘We maximize shareholder value.’”

A few years later, Jack Welch, the former chief executive of General Electric, made headlines when he told the Financial Times, “On the face of it, shareholder value is the dumbest idea in the world.” What he meant, he scrambled to explain a few days later, is that shareholder value is an outcome, not a strategy. But coming from the corporate executive (“Neutron Jack”) who had embodied ruthlessness in the pursuit of competitive dominance, his comment was viewed as a recognition that the single-minded pursuit of shareholder value had gone too far. “That’s not a strategy that helps you know what to do when you come to work every day,” Welch told Bloomberg Businessweek. “It doesn’t energize or motivate anyone. So basically my point is, increasing the value of your company in both the short and long term is an outcome of the implementation of successful strategies.”

Tom Rollins, the founder of the Teaching Company, offers as an alternative what he calls the “CEO” strategy, standing for customers, employees, and owners. Rollins starts by noting that at the foundation of all microeconomics are voluntary trades or exchanges that create “surplus” for both buyer and seller that in most cases exceed their minimum expectations. The same logic, he argues, ought to apply to the transactions between a company and its employees, customers, and owners/shareholders.

The problem with a shareholder-first strategy, Rollins argues, is that it ignores this basic tenet of economics. It views any surplus earned by employees and customers as both unnecessary and costly. After all, if the market would allow the firm to hire employees for 10 percent less, or charge customers 10 percent more, then by not driving the hardest possible bargain with employees and customers, shareholder profit is not maximized.

But behavioral research into the importance of “reciprocity” in social relationships strongly suggests that if employees and customers believe they are not getting any surplus from a transaction, they are unlikely to want to continue to engage in additional transactions with the firm. Other studies show that having highly satisfied customers and highly engaged employees leads directly to higher profits. As Rollins sees it, if firms provide above-market returns—surplus—to customers and employees, then customers and employees are likely to reciprocate and provide surplus value to firms and their owners.
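Rollins’s surplus logic can be made concrete with a toy model. All of the numbers below are hypothetical, chosen only to illustrate the tradeoff: a customer who captures some surplus keeps transacting, while one who captures none buys once and leaves.

```python
def surplus(price, customer_value=12.0, firm_cost=6.0):
    """Split of gains from one sale: (customer surplus, firm surplus).
    customer_value and firm_cost are hypothetical illustrative figures."""
    return customer_value - price, price - firm_cost

# Share the surplus: at $10, the customer keeps $2 and the firm keeps $4.
shared_customer, shared_firm = surplus(10.0)

# Extract it all: at $12, the customer keeps nothing and the firm keeps $6.
squeezed_customer, squeezed_firm = surplus(12.0)

# If reciprocity means the surplus-earning customer returns (say) five times
# while the squeezed customer buys only once, the softer bargain wins over
# the life of the relationship.
relationship_profit = 5 * shared_firm   # $20 over repeat purchases
one_off_profit = 1 * squeezed_firm      # $6 from a single sale
```

The repeat-purchase multiplier is an assumption standing in for the behavioral findings on reciprocity, but it captures why driving the hardest possible bargain on every transaction can leave shareholders worse off in aggregate.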

Harvard Business School professor Michael Porter and Kennedy School senior fellow Mark Kramer have also rejected the false choice between a company’s social and value-maximizing responsibilities that is implicit in the shareholder-value model. “The solution lies in the principle of shared value, which involves creating economic value in a way that also creates value for society by addressing its needs and challenges,” they wrote in the Harvard Business Review in 2011. In the past, economists have theorized that for profit-maximizing companies to provide societal benefits, they had to sacrifice economic success by adding to their costs or forgoing revenue. What they overlooked, Porter and Kramer wrote, was that by ignoring social goals—safe workplaces, clean environments, effective school systems, adequate infrastructure—companies wound up adding to their overall costs while failing to exploit profitable business opportunities. “Businesses must reconnect company success with social progress,” Porter and Kramer wrote. “Shared value is not social responsibility, philanthropy or even sustainability, but a new way to achieve economic success. It is not on the margin of what companies do, but at the center.”


If it were simply the law that was responsible for the undue focus on shareholder value, it would be relatively easy to alter it. Changing a behavioral norm, however—particularly one so accepted and reinforced by so much supporting infrastructure—is a tougher challenge. The process will, of necessity, be gradual, requiring carrots as well as sticks. The goal should not be to impose a different focus for corporate decision-making as inflexible as maximizing shareholder value has become but rather to make it acceptable for executives and directors to experiment with and adopt a variety of goals and purposes.

Companies would surely be responsive if investors and money managers would make clear that they have a longer time horizon or are looking for more than purely bottom-line results. There has long been a small universe of “socially responsible” investing made up of mutual funds, public and union pension funds, and research organizations that monitor corporate behavior and publish scorecards based on an assessment of how companies treat customers, workers, the environment, and their communities. While some socially responsible funds and asset managers and investors have consistently achieved returns comparable or even slightly superior to those of competitors focused strictly on financial returns, there is no evidence of any systematic advantage. Nor has there been a large hedge fund or private-equity fund that made it to the top with a socially responsible investment strategy. You can do well by doing good, but it’s no sure thing that you’ll do better.

Nineteen states—the latest is Delaware, where a million businesses are legally registered—have recently established a new kind of corporate charter, the “benefit corporation,” that explicitly commits companies to be managed for the benefit of all stakeholders. About 550 companies, including Patagonia and Seventh Generation, now have B charters, while 960 have been certified as meeting the standards set out by the nonprofit B Lab. Although almost all of today’s B corps are privately held, supporters of the concept hope that a number of sizable firms will become B corps and that their stocks will then be traded on a separate exchange.

One big challenge facing B corps and the socially responsible investment community is that the criteria they use to assess corporate behavior exhibit an unmistakable liberal bias that makes it easy for many investors, money managers, and executives to dismiss them as ideological and naïve. Even a company run for the benefit of multiple stakeholders will at various points be forced to make tough choices, such as reducing payroll, trimming costs, closing facilities, switching suppliers, or doing business in places where corruption is rampant or environmental regulations are weak. As chief executives are quick to remind us, companies that ignore short-term profitability run the risk of never making it to the long term.

Among the growing chorus of critics of “shareholder value,” a consensus is emerging around a number of relatively modest changes in tax and corporate governance laws that, at a minimum, could help lengthen time horizons of corporate decision-making. A group of business leaders assembled by the Aspen Institute to address the problem of “short-termism” recommended a recalibration of the capital-gains tax to provide investors with lower tax rates for longer-term investments. A small transaction tax, such as the one proposed by the European Union, could also be used to dampen the volume and importance of short-term trading.

The financial-services industry and some academics have argued that such measures, by reducing market liquidity, will inevitably increase the cost of capital and result in markets that are more volatile, not less. A lower tax rate for long-term investing has also been shown to have a “lock-in” effect that discourages investors from moving capital to companies offering the prospect of the highest return. But such conclusions are implicitly based on the questionable assumption that markets without such tax incentives are otherwise rational and operate with perfect efficiency. They also raise fundamental questions about the role played by financial markets in the broader economy. Once you assume, as they do, that the sole purpose of financial markets is to channel capital into investments that earn the highest financial return to private investors, then maximizing shareholder value becomes the only logical corporate strategy.

There is also a lively debate on the question of whether companies should offer earnings guidance to investors and analysts—estimates of what earnings per share will be for the coming quarter. The argument against such guidance is that it reinforces the undue focus of both executives and investors on short-term earnings results, discouraging long-term investment and incentivizing earnings manipulation. The counterargument is that even in the absence of company guidance, investors and executives inevitably play the same game by fixating on the “consensus” earnings estimates of Wall Street analysts. Given that reality, they argue, isn’t it better that those analyst estimates are informed as much as possible by information provided by the companies themselves?

In weighing these conflicting arguments, the Aspen group concluded that investors and analysts would be better served if companies provided information on a wider range of metrics with which to assess and predict business performance over a longer time horizon than the next quarter. While it might take Wall Street and its analysts some time to adjust to this richer and more nuanced form of communication, it would give the markets a better understanding of what drives each business while taking some of the focus off the quarterly numbers game.

In addressing the question of which shareholders should have the most say over company strategies and objectives, there have been suggestions for giving long-term investors greater power in selecting directors, approving mergers and asset sales, and setting executive compensation. The idea has been championed by McKinsey & Company managing director Dominic Barton and John Bogle, the former chief executive of the Vanguard Group, and is under active consideration by European securities regulators. Such enhanced voting rights, however, would have to be carefully structured so that they encourage a sense of stewardship on the part of long-term investors without giving company insiders or a few large shareholders the opportunity to run roughshod over other shareholders.

The short-term focus of corporate executives and directors is heavily reinforced by the demands of asset managers at mutual funds, pension funds, hedge funds, and endowments, who are evaluated and compensated on the basis of the returns they generated over the last year and the last quarter. Even though most big companies have now taken steps to stretch the incentive pay plans of top corporate executives out over several years to encourage those executives to take a longer-term perspective, the outsize quarterly and annual bonuses on Wall Street keep the economy’s time horizons fixated on the short term. At a minimum, federal regulators could require asset managers to disclose how their compensation is determined. They might also require funds to justify, on the basis of actual performance, the use of short-term metrics when managing long-term money such as pensions and college endowments.

The Securities and Exchange Commission also could nudge companies to put greater emphasis on long-term strategy and performance in their communications with shareholders. For starters, companies could be required to state explicitly in their annual reports whether their priority is to maximize shareholder value or to balance shareholder interests with other interests in some fashion—certainly shareholders deserve to know that in advance. The commission might require companies to annually disclose the size of their workforce in each country and information on the pay and working conditions of the company’s employees and those of its major contractors. Disclosure of any major shifts in where work is done could also be required, along with the rationale. There could be a requirement for companies to perform and disclose regular environmental audits and to acknowledge other potential threats to their reputation and brand equity. In proxy statements, public companies could be required to explain the ways in which executive compensation is aligned with long-term strategy and performance.

If I had to guess, however, my hunch would be that employees, not outside investors and regulators, will finally free the corporate sector from the straitjacket of shareholder value. Today, young people—particularly those with high-demand skills—are drawn to work that doesn’t simply pay well but also has meaning and social value. You can already see that reflected in what students are choosing to study and where highly sought graduates are choosing to work. As the economy improves and the baby-boom generation retires, companies that have reputations as ruthless maximizers of short-term profits will find themselves on the losing end of the global competition for talent.

In an era of plentiful capital, it will be skills, knowledge, and creativity that will be in short supply, with those who have them calling the corporate tune. Who knows? In the future, there might even be conferences at which hedge-fund managers and chief executives get together to gripe about the tyranny of “maximizing employee satisfaction” and vow to do something about it.

The American Prospect

Steven Pearlstein is a Pulitzer Prize-winning columnist for The Washington Post and a professor of public affairs at George Mason University. 

Gareth Morgan ‘overwhelmed’ at support for new party as hundreds sign up. 

Gareth Morgan’s new political party already has more than 750 paid members – with the entrepreneur saying he has been blown away by the response.

Morgan launched the Opportunities Party (TOP) on Friday, saying problems like inequality and housing affordability could be solved but not by “establishment” politicians.

He plans to gauge public reaction to his campaign before registering the party next year.

Political parties need 500 financial members to register, and that mark was surpassed by TOP within 24 hours of launching.

“I thought it would take us months to get those kinds of numbers, particularly given I haven’t released any policy yet!”

People who follow and support you pretty much know what the policy is going to look like, Gareth. Good luck — this will hopefully turn out to be a defining moment in New Zealand’s political history. We urgently need to rethink our socioeconomic policies for the betterment of the 90% and the long-term survival of our economy.

NZ Herald 

The Five Lies Of Rentier Capitalism. – Guy Standing. 

We live in the age of rentier capitalism. It is the crisis point of the Global Transformation, during which claims made for capitalism have been wholly undermined by a developing system that is radically different from what its advocates say. They assert a belief in ‘free markets’ and want us to believe that they are extending them. That is untrue. Today we have a most unfree market system.

How can politicians say we have a free market system when patents guarantee monopoly incomes for 20 years, preventing anyone from competing? How can they claim there are free markets when copyright rules give guaranteed income for 70 years after a person’s death? Far from trying to stop these and other negations of free markets, governments are creating rules that encourage them.

The twentieth-century income distribution system has broken down. Since the 1980s, the share of income going to labour has shrunk in most economically significant countries. Real wages on average have stagnated or fallen. Today, a tiny minority of people and corporations are accumulating vast wealth, not from ‘hard work’ or productive activity, but from rental income.

‘Rentiers’ derive income from possession of assets that are scarce or artificially made scarce. Most familiar is rental income from land, property, minerals or financial investments, but other sources have grown too. They include the income lenders gain from debt interest; income from ownership of ‘intellectual property’; capital gains on investments; ‘above normal’ company profits (when a firm has a dominant position); income from subsidies; and income of financial intermediaries derived from third-party transactions.

Social Europe