
Hi, my name is Hans Hilhorst. I am just a private individual hoping to contribute to our debate about matters of political economy. I am not affiliated with, or a member of, any commercial entity, political party or organization of any kind.

All my life I have been interested in psychology and economics, the things that define human society. I have also always been an avid reader of everything remotely related to those topics, and I have gained a fair bit of ‘research material’ and ‘empirical evidence’ during my career through the neoliberal nightmare, or ‘User Pays’ as we call our brand of neoliberalism down under.

My opinions are neither here nor there; they are mine alone. I share them merely to throw in my 5 cents, looking for kindred spirits and clear young minds open to new thinking. I endeavor, though, to keep my own contributions restrained, and prefer to put forward the views of the multitude of academics, scientists, thinkers, authors and others of credible authority: those who have done the research, spoken to the victims, walked in their shoes and dealt with the consequences of our economic and social policies. In this way I hope to show that the cry for urgent attention to our socio-economic policies is not just the sound of a leftist mob, but that of a well-supported and professional crowd of experts from around the globe.

I wish to stir the debate on our economic direction. I think it has been lopsided: not because we are biased or stupid, but because we have been deliberately misinformed, censored and deceived.

At this juncture we are truly faced with the consequences of our progress: Robot Domination! Will it be like SkyNet and Arnie’s goldmine, or just a happy invasion of our workplaces, allowing us time for ‘the good life’, the beach, and reading?

Hans

Money. The Unauthorised Biography – Felix Martin.

Simple and intuitive though it may be, there is a drawback to the conventional theory of money. It is entirely false.

Not a single researcher has been able to find a society, historical or contemporary, that regularly conducted its trade by barter.

‘For a century or more, the “civilized” world regarded as a manifestation of its wealth, metal dug from deep in the ground, refined at great labor, and transported great distances to be buried again in elaborate vaults deep under the ground. Is the one practice really more rational than the other?’ Milton Friedman

So if it is so obvious that the conventional theory of money is wrong, why has such a distinguished canon of economists and philosophers believed it? And why does today’s economics profession by and large persist in using the fundamental ideas of this tradition as the building blocks of modern economic thinking?

What is money, and how does it work?

The conventional answer is that people once used sugar in the West Indies, tobacco in Virginia, and dried cod in Newfoundland, and that today’s financial universe evolved from barter.

Unfortunately, there is a problem with this story. It’s wrong. And not just wrong, but dangerous. Money: the Unauthorised Biography unfolds a panoramic secret history and explains the truth about money: what it is, where it comes from, and how it works.

Drawing on stories from throughout human history and around the globe, Money will radically rearrange your understanding of the world and shows how money can once again become the most powerful force for freedom we have ever known.

About the author

Felix Martin was educated in the UK, Italy and the US, and holds degrees in Classics, International Relations and Economics, including a D.Phil. in Economics from Oxford University. He worked for the World Bank and for the European Stability Initiative think tank, and is currently a partner in the fixed income division at Liontrust Asset Management plc.

1 What is Money?

“Everyone, except an economist, knows what ‘money’ means, and even an economist can describe it in the course of a chapter or so . . .” A.H. Quiggin, A Survey of Primitive Money: the Beginnings of Currency

THE ISLAND OF STONE MONEY

THE PACIFIC ISLAND of Yap was, at the beginning of the twentieth century, one of the most remote and inaccessible inhabited places on earth. An idyllic, subtropical paradise, nestled in a tiny archipelago nine degrees north of the equator and more than 300 miles from Palau, its closest neighbour, Yap had remained almost innocent of the world beyond Micronesia right up until the final decades of the nineteenth century. There had, it is true, been a brief moment of Western contact in 1731 when a group of intrepid Catholic missionaries had established a small base on the island. When their supply ship returned the following year, however, it discovered that the balmy, palm-scattered islands of Yap had not proved fertile ground for the Christian gospel. The entire mission had been massacred several months previously by local witch doctors aggrieved at the competition presented by the Good News. Yap was left to its own devices for another one hundred and forty years.

It was not until 1869 that the first European trading post, run by the German merchant firm of Godeffroy and Sons, was established in the Yap archipelago. Once a few years had passed, with Godeffroy not only avoiding summary execution but prospering, Yap’s presence came to the attention of the Spanish, who, by virtue of their colonial possessions in the Philippines a mere 800 miles to the west, considered themselves the natural overlords of this part of Micronesia. The Spanish laid claim to the islands, and believed that they had achieved a fait accompli when in the summer of 1885 they erected a house and installed a Governor in it. They had not counted, however, on the tenacity of Bismarck’s Germany in matters of foreign policy. No island was so small, or so remote, as to be unworthy of the Imperial Foreign Ministry’s attention if it meant a potential addition to German power. The ownership of Yap became the subject of an international dispute. Eventually, the matter was referred (somewhat ironically, given the island’s track record) to arbitration by the Pope, who granted political control to Spain, but full commercial rights to Germany. But the Iron Chancellor had the last laugh. Within a decade and a half, Spain had lost a damaging war with America for control of the Philippines, and its ambitions in the Pacific had disintegrated. In 1899, Spain sold Yap to Germany for the sum of $3.3 million.

The absorption of Yap into the German Empire had one great benefit. It brought one of the more interesting and unusual monetary systems in history to the attention of the world. More specifically, it proved the catalyst for a visit by a brilliant and eccentric young American adventurer, William Henry Furness III. The scion of a prominent New England family, Furness had trained as a doctor before converting to anthropology and making his name with a popular account of his travels in Borneo. In 1903 he made a two-month visit to Yap, and published a broad survey of its physical and social make-up a few years later. He was immediately impressed by how much more remote and untouched it was than Borneo. Yet despite being a tiny island with only a few thousand inhabitants, ‘whose whole length and breadth is but a day’s walk’ as Furness described it, Yap turned out to have a remarkably complex society. There was a caste system, with a tribe of slaves, and special Clubhouses lived in by fishing and fighting fraternities. There was a rich tradition of dancing and songs, which Furness took particular delight in recording for posterity. There was a vibrant native religion (as the missionaries had previously discovered, to their cost) complete with an elaborate genesis myth locating the origins of the Yapese in a giant barnacle attached to some floating driftwood. But undoubtedly the most striking thing that Furness discovered on Yap was its monetary system.

The economy of Yap, such as it was, could hardly be called developed. The market extended to a bare three products: fish, coconuts, and Yap’s one and only luxury, sea cucumber. There was no other exchangeable commodity to speak of; no agriculture; few arts and crafts; the only domesticated animals were pigs and, since the Germans had arrived, a few cats; and there had been little contact or trade with outsiders. It was as simple and as isolated an economy as one could hope to find. Given these antediluvian conditions, Furness expected to find nothing more advanced than simple barter. Indeed, as he observed, ‘in a land where food and drink and ready-made clothes grow on trees and may be had for the gathering’ it seemed possible that even barter itself would be an unnecessary sophistication.

The very opposite turned out to be true. Yap had a highly developed system of money. It was impossible for Furness not to notice it the moment that he set foot on the island, because its coinage was extremely unusual. It consisted of fei: ‘large, solid, thick stone wheels ranging in diameter from a foot to twelve feet, having in the centre a hole varying in size with the diameter of the stone, wherein a pole may be inserted sufficiently large and strong to bear the weight and facilitate transportation’. This stone money was originally quarried on Babelthuap, an island some 300 miles away in Palau, and had mostly been brought to Yap, so it was said, long ago. The value of the coins depended principally on their size, but also on the fineness of the grain and the whiteness of the limestone.

At first, Furness believed that this bizarre form of currency might have been chosen because of, rather than in spite of, its extraordinary unwieldiness: ‘when it takes four strong men to steal the price of a pig, burglary cannot but prove a somewhat disheartening occupation’, he ventured. ‘As may be supposed, thefts of fei are almost unknown.’ But as time went on, he observed that physical transportation of fei from one house to another was in fact rare. Numerous transactions took place, but the debts incurred were typically just offset against each other, with any outstanding balance carried forward in expectation of some future exchange. Even when open balances were felt to require settlement, it was not usual for fei to be physically exchanged.

‘The noteworthy feature of this stone currency,’ wrote Furness, ‘is that it is not necessary for its owner to reduce it to possession. After concluding a bargain which involves the price of a fei too large to be conveniently moved, its new owner is quite content to accept the bare acknowledgement of ownership and without so much as a mark to indicate the exchange, the coin remains undisturbed on the former owner’s premises.’

The stone currency of Yap as photographed by William Henry Furness III in 1903, with people and palm trees for scale.

When Furness expressed amazement at this aspect of the Yap monetary system, his guide told him an even more surprising story:

“There was in the village nearby a family whose wealth was unquestioned, acknowledged by everyone and yet no one, not even the family itself, had ever laid eye or hand on this wealth; it consisted of an enormous fei, whereof the size is known only by tradition; for the past two or three generations it had been and was at that time lying at the bottom of the sea!”

This fei, it transpired, had been shipwrecked during a storm while in transit from Babelthuap many years ago. Nevertheless:

“It was universally conceded . . . that the mere accident of its loss overboard was too trifling to mention, and that a few hundred feet of water offshore ought not to affect its marketable value . . . The purchasing power of that stone remains, therefore, as valid as if it were leaning visibly against the side of the owner’s house, and represents wealth as potentially as the hoarded inactive gold of a miser in the Middle Ages, or as our silver dollars stacked in the Treasury in Washington, which we never see or touch, but trade with on the strength of a printed certificate that they are there.”

When it was published in 1910, it seemed unlikely that Furness’ eccentric travelogue would ever reach the notice of the economics profession. But eventually a copy happened to find its way to the editors of the Royal Economic Society’s Economic Journal, who assigned the book to a young Cambridge economist, recently seconded to the British Treasury on war duty: a certain John Maynard Keynes. The man who over the next twenty years was to revolutionise the world’s understanding of money and finance was astonished. Furness’ book, he wrote, ‘has brought us into contact with a people whose ideas on currency are probably more truly philosophical than those of any other country. Modern practice in regard to gold reserves has a good deal to learn from the more logical practices of the island of Yap.’ Why it was that the greatest economist of the twentieth century believed the monetary system of Yap to hold such important and universal lessons is the subject of this book.

GREAT MINDS THINK ALIKE

What is money, and where does it come from?

A few years ago, over a drink, I posed these two questions to an old friend, a successful entrepreneur with a prospering business in the financial services industry. He responded with a familiar story. In primitive times, there was no money, just barter. When people needed something that they didn’t produce themselves, they had to find someone who had it and was willing to swap it for whatever they did produce. Of course, the problem with this system of barter exchange is that it was very inefficient. You had to find another person who had exactly what you wanted, and who in turn wanted exactly what you had got, and what is more, both at exactly the same time.

So at a certain point, the idea emerged of choosing one thing to serve as a ‘medium of exchange’. This thing could in principle be anything so long as, by general agreement, it was universally acceptable as payment. In practice, however, gold and silver have always been the most common choices, because they are durable, malleable, portable, and rare. In any case, whatever it was, this thing was from then on desirable not only for its own sake, but because it could be used to buy other things and to store up wealth for the future. This thing, in short, was money, and this is where money came from. It’s a simple and powerful story. And as I explained to my friend, it is a theory of money’s nature and origins with a very ancient and distinguished pedigree. A version of it can be found in Aristotle’s Politics, the earliest treatment of the subject in the entire Western canon. It is the theory developed by John Locke, the father of classical political Liberalism, in his Second Treatise of Government. To cap it all, it is the very theory, almost to the letter, advocated by none other than Adam Smith in his chapter ‘Of the Origin and Use of Money’ in the foundation text of modern economics, An Inquiry into the Nature and Causes of the Wealth of Nations:

“But when the division of labour first began to take place, this power of exchanging must frequently have been very much clogged and embarrassed in its operations . . . The butcher has more meat in his shop than he himself can consume, and the brewer and the baker would each of them be willing to purchase a part of it. But they have nothing to offer in exchange, except the productions of their respective trades, and the butcher is already provided with all the bread and beer which he has immediate occasion for . . . In order to avoid such situations, every prudent man in every period of society, after the first establishment of the division of labour, must naturally have endeavoured to manage his affairs in such a manner, as to have at all times by him, besides the peculiar produce of his own industry, a certain quantity of some one commodity or other, such as he imagined few other people would be likely to refuse in exchange for the produce of their industry.”

Smith even shared my friend’s agnosticism as to which commodity would be chosen to serve as money:

“Many different commodities, it is probable, were successively both thought of and employed for this purpose. In the rude ages of society, cattle are said to have been the most common instrument of commerce . . . Salt is said to be the common instrument of commerce and exchange in Abyssinia; a species of shells in some parts of the coast of India; dried cod in Newfoundland; tobacco in Virginia; sugar in some of our West India colonies; hides or dressed leather in some other countries; and there is to this day a village in Scotland where it is not uncommon, I am told, for a workman to carry nails instead of money to the baker’s shop or the alehouse.”

And like my friend, Smith also believed that in general, gold, silver, and other metals were the most logical choices:

“In all countries, however, men seem at last to have been determined by irresistible reasons to give the preference, for this employment, to metals above every other commodity. Metals can not only be kept with as little loss as any other commodity, scarce any thing being less perishable than they are, but they can likewise, without any loss, be divided into any number of parts, as by fusion those parts can easily be re-united again; a quality which no other equally durable commodities possess, and which more than any other quality renders them fit to be the instruments of commerce and circulation.”

So I told my friend he could congratulate himself. Without having studied economics at all, he had arrived at the same theory as the great Adam Smith. But that’s not all, I explained. This theory of money’s origins and nature is not just a historical curiosity like Ptolemy’s geocentric astronomy, a set of obsolete hypotheses long since superseded by more modern theories. On the contrary, it is found today in virtually all mainstream textbooks of economics. What’s more, its fundamental ideas have formed the bedrock of an immense body of detailed theoretical and empirical research on monetary questions over the last sixty years. Based on its assumptions, economists have designed sophisticated mathematical models to explore exactly why one commodity is chosen as money over all others and how much of it people will want to hold, and have constructed a vast analytical apparatus designed to explain every aspect of money’s value and use. It has provided the basis for the branch of economics, ‘macroeconomics’ as it is known, which seeks to explain economic booms and busts, and to recommend how we can moderate these so-called business cycles by managing interest rates and government spending. In short, my friend’s ideas not only had history behind them. They remain today, amongst amateurs and experts alike, very much the conventional theory of money.

By now, my friend was positively brimming with self-congratulation. ‘I know that I’m brilliant,’ he said with his usual modesty, ‘but it does still amaze me that I, a rank amateur, can match the greatest minds in the economic canon without ever having given it a second thought before today. Doesn’t it make you think you might have been wasting your time all those years you were studying for your degrees?’ I agreed that there was certainly something a bit troubling about it all. But not because he had hit upon the theory without any training in economics. It was quite the opposite. It was that those of us who have had years of training regurgitate this theory. Because simple and intuitive though it may be, there is a drawback to the conventional theory of money. It is entirely false.

John Maynard Keynes

STONE AGE ECONOMICS?

John Maynard Keynes was right about Yap. William Henry Furness’ description of its curious stone currency may at first appear to be nothing more than a picturesque footnote to the history of money. But it poses some awkward questions of the conventional theory of money. Take, for example, the idea that money emerged out of barter. When Aristotle, Locke, and Smith were making this claim, they were doing so purely on the basis of deductive logic. None of them had ever actually seen an economy that operated entirely via barter exchange. But it seemed plausible that such an arrangement might once have existed; and if it had existed, then it also seemed plausible that it would have been so unsatisfactory that someone would have tried to invent a way to improve on it.

In this context, the monetary system of Yap came as something of a surprise. Here was an economy so simple that it should theoretically have been operating by barter. Yet it was not: it had a fully developed system of money and currency. Perhaps Yap was an exception to the rule. But if an economy this rudimentary already had money, then where and when would a barter economy be found?

This question continued to trouble researchers over the century after Furness’ account of Yap was published. As historical and ethnographic evidence accumulated, Yap came to look less and less of an anomaly. Seek as they might, not a single researcher was able to find a society, historical or contemporary, that regularly conducted its trade by barter.

By the 1980s, the leading anthropologists of money considered the verdict to be in. ‘Barter, in the strict sense of moneyless market exchange, has never been a quantitatively important or dominant mode of transaction in any past or present economic system about which we have hard information,’ wrote the American scholar George Dalton in 1982. ‘No example of a barter economy, pure and simple, has ever been described, let alone the emergence from it of money; all available ethnography suggests that there has never been such a thing,’ concluded the Cambridge anthropologist Caroline Humphrey.

The news even began filtering through to the more intellectually adventurous fringes of the economics profession. The great American economic historian Charles Kindleberger, for example, wrote in the second edition of his Financial History of Western Europe, published in 1993, that ‘Economic historians have occasionally maintained that evolution in economic intercourse has proceeded from a natural or barter economy to a money economy and ultimately to a credit economy. This view was put forward, for example, in 1864 by Bruno Hildebrand of the German historical school of economics; it happens to be wrong.’

By the beginning of the twenty-first century, a rare academic consensus had been reached amongst those with an interest in empirical evidence that the conventional idea that money emerged from barter was false. As the anthropologist David Graeber explained bluntly in 2011: ‘There’s no evidence that it ever happened, and an enormous amount of evidence suggesting that it did not.’

The story of Yap does not just present a challenge to the conventional theory’s account of money’s origins, however. It also raises serious doubts about its conception of what money actually is. The conventional theory holds that money is a ‘thing’, a commodity chosen from amongst the universe of commodities to serve as a medium of exchange, and that the essence of monetary exchange is the swapping of goods and services for this commodity medium of exchange. But the stone money of Yap doesn’t fit this scheme. In the first place, it is difficult to believe that anyone could have chosen ‘large, solid, thick stone wheels ranging in diameter from a foot to twelve feet’ as a medium of exchange, since in most cases they would be a good deal harder to move than the things being traded. But more worryingly, it was clear that the fei were not a medium of exchange in the sense of a commodity that could be exchanged for any other, since most of the time they were not exchanged at all. Indeed, in the case of the infamous shipwrecked fei, no one had ever even seen the coin in question, let alone passed it around as a medium of exchange. No, there could be no doubt: the inhabitants of Yap were curiously indifferent to the fate of the fei themselves. The essence of their monetary system was not stone coins used as a medium of exchange, but something else.

Closer consideration of Adam Smith’s story of commodities chosen to serve as media of exchange suggests that the inhabitants of Yap were on to something. Smith claimed that at different times and in different places, numerous commodities had been chosen to serve as the money: dried cod in Newfoundland; tobacco in Virginia; sugar in the West Indies; and even nails in Scotland. Yet suspicions about the validity of some of these examples were already being raised within a generation or two of the publication of Smith’s Wealth of Nations.

The American financier Thomas Smith, for example, argued in his Essay on Currency and Banking in 1832 that whilst Smith thought that these stories were evidence of commodity media of exchange, they were in fact nothing of the sort. In every case, these were examples of trade that was accounted for in pounds, shillings, and pence, just as it was in modern England. Sellers would accumulate credit on their books, and buyers debts, all denominated in monetary units. The fact that any net balances that remained between them might then be discharged by payment of some commodity or other to the value of the debt did not mean that that commodity was ‘money’. To focus on the commodity payment rather than the system of credit and clearing behind it was to get things completely the wrong way round. And to take the view that it was the commodity itself that was money, as Smith did, might therefore start out seeming logical, but would end in nonsense. Alfred Mitchell Innes, the author of two neglected masterworks on the nature of money, summed up the problem with Smith’s report of cod-money in Newfoundland bluntly but accurately:

“A moment’s reflection shows that a staple commodity could not be used as money, because ex hypothesi the medium of exchange is equally receivable by all members of the community. Thus if the fishers paid for their supplies in cod, the traders would equally have to pay for their cod in cod, an obvious absurdity.”

If the fei of Yap were not a medium of exchange, then what were they? And more to the point, what, in fact, was Yap’s money if it wasn’t the fei? The answer to both questions is remarkably simple. Yap’s money was not the fei, but the underlying system of credit accounts and clearing of which they helped to keep track. The fei were just tokens by which these accounts were kept.

As in Newfoundland, the inhabitants of Yap would accumulate credits and debts in the course of their trading in fish, coconut, pigs, and sea cucumber. These would be offset against one another to settle payments. Any outstanding balances carried forward at the end of a single exchange, or a day, or a week, might, if the counterparties so wished, be settled by the exchange of currency, a fei to the appropriate value; this being a tangible and visible record of the outstanding credit that the seller enjoyed with the rest of Yap.
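To make the offsetting concrete, here is a minimal sketch in Python of that kind of credit-and-clearing ledger. The parties, trades and amounts are invented for illustration only (they are not drawn from Furness or Martin); the point is simply that once credits and debts are netted against each other, only a small residual balance remains to be settled with a token such as a fei.

```python
# Hypothetical sketch of credit-and-clearing accounting: record each trade as a
# debt from buyer to seller, net all the entries, and only the residual balance
# (if the parties wish to settle it at all) need be acknowledged with a token.
from collections import defaultdict

def net_balances(trades):
    """trades: iterable of (debtor, creditor, amount); returns each party's net position."""
    balance = defaultdict(int)            # positive = net credit, negative = net debt
    for debtor, creditor, amount in trades:
        balance[debtor] -= amount
        balance[creditor] += amount
    return dict(balance)

# Invented example: a few exchanges of fish, coconuts and sea cucumber between two islanders.
trades = [
    ("A", "B", 3),   # A owes B 3 units for fish
    ("B", "A", 2),   # B owes A 2 units for coconuts
    ("A", "B", 1),   # A owes B 1 more unit for sea cucumber
]

print(net_balances(trades))
# {'A': -2, 'B': 2} -> only a net balance of 2 is left to settle, if the parties
# wish, by acknowledging a transfer of fei to that value.
```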

Coins and currency, in other words, are useful tokens to record the underlying system of credit accounts and to implement the underlying process of clearing. They may even be necessary in an economy larger than that of Yap, where coins could drop to the bottom of the sea and yet no one would think to question the wealth of their owner.

But currency is not itself money. Money is the system of credit accounts and their clearing that currency represents.

If all this sounds familiar to the modern reader, even obvious, it should. After all, thinking of money as a commodity and monetary exchange as the swapping of goods for a tangible medium of exchange may have been intuitive in the days when coins were minted from precious metals. It may even have made sense when the law entitled the holder of a Federal Reserve or Bank of England note to present it on Constitution Avenue or Threadneedle Street and expect its redemption for a specified quantity of gold.

But those days are long gone. In today’s modern monetary regimes, there is no gold that backs our dollars, pounds, or euros, nor any legal right to redeem our banknotes for it.

Modern banknotes are quite transparently nothing but tokens. What is more, most of the currency in our contemporary economies does not enjoy even the precarious physical existence of a banknote. The vast majority of our national money (around 90 per cent in the US, for example, and 97 per cent in the UK) has no physical existence at all. It consists merely of our account balances at our banks. The only tangible apparatus employed in most monetary payments today is a plastic card and a keypad. It would be a brave theorist indeed who would try to maintain that a pair of microchips and a Wi-Fi connection are a commodity medium of exchange.

By a strange coincidence, John Maynard Keynes is not the only giant of twentieth-century economics to have saluted the inhabitants of Yap for their clear understanding of the nature of money. In 1991, seventy-nine-year-old Milton Friedman, hardly Keynes’ ideological bedfellow, also came across Furness’ obscure book. He too extolled the fact that Yap had escaped from the conventional but unhealthy obsession with commodity coinage, and that by its indifference to its physical currency it acknowledged so transparently that money is not a commodity, but a system of credit and clearing.

‘For a century or more, the “civilized” world regarded as a manifestation of its wealth, metal dug from deep in the ground, refined at great labor, and transported great distances to be buried again in elaborate vaults deep under the ground,’ he wrote. ‘Is the one practice really more rational than the other?’

To win the praise of one of the two greatest monetary economists of the twentieth century may be regarded as chance; to win the praise of both deserves attention.

MONETARY VANDALISM: THE FATE OF THE EXCHEQUER TALLIES

The economic worldview of Yap, which both Keynes and Friedman applauded, of money as a special type of credit, of monetary exchange as the clearing of credit accounts, and of currency as merely tokens of an underlying credit relationship, has not been without its own forceful historical proponents. Amongst those who have had to deal with the practical business of managing money, especially in extremis, the view of money as credit, rather than a commodity, has always had a strong following. One famous example is provided by the siege of Malta by the Turks in 1565. As the Ottoman embargo dragged on, the supply of gold and silver began to run short, and the Knights of Malta were forced to mint coins using copper. The motto that they stamped on them in order to remind the population of the source of their value would have seemed perfectly sensible to the inhabitants of Yap: Non Aes, sed Fides: ‘Not the metal, but trust’.

Nevertheless, it is undoubtedly the conventional view of money as a commodity, of monetary exchange as swapping goods for a medium of exchange, and of credit as the lending out of the money commodity, that has enjoyed the lion’s share of support from theorists and philosophers over the centuries, and thereby dominated economic thought and, for much of the time, policy as well.

But if it is so obvious that the conventional theory of money is wrong, why has such a distinguished canon of economists and philosophers believed it? And why does today’s economics profession by and large persist in using the fundamental ideas of this tradition as the building blocks of modern economic thinking? Why, in short, is the conventional theory of money so resilient? There are two basic reasons, and they are worth dwelling on.

The first reason has to do with the historical evidence for money. The problem is not that so little of it survives from earlier ages, but that it is virtually all of a single type, coins. Museums around the world heave with coins, ancient and modern. Coins and their inscriptions are one of the main archaeological sources for the understanding of ancient culture, society, and history. Deciphered by ingenious scholars, their graven images and their abbreviated inscriptions give up vast libraries of knowledge about the chronologies of ancient kings, the hierarchy of classical deities, and the ideologies of ancient republics. An entire academic discipline, numismatics, is devoted to the study of coins; and far from being the scholarly equivalent of stamp collecting, as it might appear to the uninitiated, numismatics is amongst the most fruitful fields of historical research.

But of course the real reason why coins are so important in the study of ancient history, and why they have dominated in particular the study of the history of money, is that coins are what have survived. Coins are made of durable metals and very often of imperishable metals, such as gold or silver, which do not rust or corrode. As a result, they tend to survive the ravages of time better than most other things. What is more, coins are valuable. As a result, there has always been a tendency for them to be squirrelled away in buried or hidden hoards, the better to be discovered decades, centuries, or even millennia later by the enterprising historian or numismatist. The problem is that in no field so much as the history of money is an approach fixated upon what physically survives likely to lead us into error.

The unfortunate story of the wholesale destruction of one of the most important collections of source material for the history of money ever to have existed shows why.

For more than six hundred years, from the twelfth to the late eighteenth century, the operation of the public finances of England rested on a simple but ingenious piece of accounting technology: the Exchequer tally. A tally was a wooden stick, usually harvested from the willows that grew along the Thames near the Palace of Westminster. On the stick were inscribed, always with notches in the wood and sometimes also in writing, details of payments made to or from the Exchequer. Some were receipts for tax payments made by landowners to the Crown. Others referred to transactions in the opposite direction, recording the sums due on loans by the sovereign to prominent subjects. ‘£9 4s 4p from Fulk Basset for the farm of Wycombe’ reads one that has survived, for example, relating a debt owed by Fulk Basset, a thirteenth-century Bishop of London, to Henry III. Even bribes seem to have been recorded on Exchequer tallies: one stick in a private collection bears the suspicious-sounding euphemism ‘13s 4d from William de Tullewyk for the king’s good will’.

Once the details of the payment had been recorded on the tally stick, it was split down the middle from end to end so that each party to the transaction could keep a record. The creditor’s half was called the ‘stock’, and the debtor’s the ‘foil’: hence the English use of the term ‘stocks’ for Treasury bonds, which survives to this day. The unique grain of the willow wood meant that a convincing forgery was virtually impossible; while the record of the account in a portable format (rather than just inscribed in the Treasury account books at Westminster, for example) meant that Exchequer credits could be passed from their original holder to a third party in payment of some unrelated debt. Tallies were what are called ‘bearer securities’ in modern financial jargon: financial obligations such as bonds, share certificates, or banknotes, the beneficiary of which is whoever holds the physical record.

Historians agree that the vast majority of fiscal operations in medieval England must have been carried out using tally sticks; and they suppose that a great deal of monetary exchange was transacted using them as well. A credit with the Exchequer, as recorded on a tally stick, would after all have been welcomed in payment by anyone who had taxes of his own coming due. It is, however, impossible to know for certain. For although millions of tallies must have been manufactured over the centuries, and though we know for sure that many thousands survived in the Exchequer archives up until the early nineteenth century, only a handful of specimens exist today. The ultimate culprit for this unfortunate situation is the famous zeal of England’s nineteenth-century advocates of administrative reform.

A collection of English Exchequer tallies: rare survivors of one of the great episodes of historical vandalism of the nineteenth century.

Despite the fact that the tally-stick system had proved itself remarkably efficient over the preceding five hundred years, by the late eighteenth century it was felt that it was time to dispense with it. Keeping accounts with notched sticks (let alone using wooden splints as money alongside the elegant paper notes of the Bank of England) was by then considered little short of barbaric, and certainly out of keeping with the enormous progress being made in commerce and technology. An Act of Parliament of 1782 officially abolished tally sticks as the main means of account-keeping at the Exchequer, though, because certain sinecures still operated on the old system, the Act had to wait almost another half-century, until 1826, to come into effect. But in 1834, the ancient institution of the Receipt of the Exchequer was finally abolished, and the last Exchequer tally replaced by a paper note.

Once the tally-stick system had finally been abolished, the question arose of what to do with the vast archive of tallies left in the Exchequer. Amongst the partisans of reform the general feeling was that they were nothing but embarrassing relics of the way in which the fiscal accounts of the British Empire had been kept, ‘much as Robinson Crusoe kept his calendar on the desert island’, and it was decided without hesitation to incinerate them. Twenty years later, Charles Dickens recounted the unfortunate consequences:

It came to pass that they were burnt in a stove in the House of Lords. The stove, overgorged with these preposterous sticks, set fire to the panelling; the panelling set fire to the House of Lords; the House of Lords set fire to the House of Commons; the two houses were reduced to ashes; architects were called in to build others; we are now in the second million of the cost thereof . . .

The Houses of Parliament could be rebuilt, of course, and were: to leave the splendid Palace of Westminster that stands on the banks of the Thames today. What could not be resurrected from the inferno, however, was the priceless record of England’s fiscal and monetary history constituted by the tallies. Historians have had to rely on a handful of tallies that survived by chance in private collections, and we are fortunate that there are a few contemporary accounts of how they were used. But as for the immense wealth of knowledge that the Westminster archive embodied about the state of England’s money and finances throughout the Middle Ages, it is irretrievably lost.

If this is a problem for the history of money in medieval England, the situation is infinitely worse for the history of money more generally and especially in pre-literate societies. All too often, the only physical trace of money that remains is coins: yet as the example of the English tally-stick system shows, coinage may have been only the very tip of the monetary iceberg. Vast hinterlands of monetary and financial history lie beyond our grasp simply because no physical evidence of their existence and operation survives.

To appreciate the seriousness of the problem we have only to consider what hope the historians of the future would have of reconstructing our own monetary history if a natural disaster were to destroy the digital records of our contemporary financial system. We can only trust that reason would prevail, and that they would not build their understanding of modern economic life on the assumption that the pound and euro coins and nickels and dimes that survived were the sum total of our money.

THE BENEFIT OF BEING A FISH OUT OF WATER

The second reason why the conventional theory of money remains so resilient is directly related to a still more intrinsic difficulty. There is an old Chinese proverb: ‘The fish is the last to know water’. It is a concise explanation of why the ‘social’ or ‘human’ sciences (anthropology, sociology, economics and so on) are different from the natural sciences (physics, chemistry, and biology). In the natural sciences, we study the physical world; and it is at least in principle possible to get an objective view. Things are not so simple in the social sciences. In these fields, we are studying ourselves, as individuals and in groups. Society and our selves have no independent existence apart from us, and, by contrast to the natural sciences, this makes it exceptionally difficult to get an objective view of things. The closer an institution is to the heart of our daily lives, the trickier it is to step outside of it in order to analyse it, and the more controversial will be attempts to do so.

The second reason why the nature of money is so difficult to pin down, and why it has been and remains a subject of such controversy, is precisely because it is such an integral part of our economies. When we try to understand money, we are like the fish of the Chinese proverb, trying to know the very water in which it moves.

This doesn’t mean that all social science is a waste of time, however. It may not be possible to get an absolutely objective view of our own habits, customs, and traditions; but by studying them under different historical conditions we can get a more objective view than otherwise. Just as we can use two different perspectives on a point in the distance to triangulate its position when out hiking, we can learn a lot about a familiar social phenomenon by observing it in other times, in other places, and in other cultures. The only problem in the case of money is that it is such a basic element of the economy that finding opportunities for such triangulation is tricky. Most of the time money is just part of the furniture. It is only when the normal monetary order is disrupted that the veil is snatched from our eyes. When the monetary order dissolves, the water is temporarily tipped out of the fishbowl and we become for a critical moment a fish out of water.

. . .

*

from

Money. The Unauthorised Biography

by Felix Martin

get it at Amazon.com

Swearing is Good for You. The Amazing Science of Bad Language – Emma Byrne.

Swear words are (a) words people use when they are highly emotional and (b) words that refer to something taboo.

Languages are dominated by either religious swearing, copulatory swearing or excretory swearing.

Swearing is a complex social signal that is laden with emotional and cultural significance.

Introduction:

What the Fuck is Swearing?

“Swearing draws upon such powerful and incongruous resonators as religion, sex, madness, excretion, and nationality, encompassing an extraordinary variety of attitudes including the violent, the amusing, the shocking, the absurd, the casual and the impossible.” Geoffrey Hughes

When I was about nine years old, I was smacked for calling my little brother a ‘twat’. I had no idea what a twat was (I thought it was just a silly way of saying ‘twit’) but that smack taught me that some words are more powerful than others and that I had to be careful how I used them.

But, as you’ve no doubt gathered, that experience didn’t exactly cure me of swearing. In fact, it probably went some way towards piquing my fascination with profanity. Since then I’ve had a certain pride in my knack for colourful and well-timed swearing: being a woman in a male-dominated field, I rely on it to camouflage myself as one of the guys. Calling some equipment a fucking piece of shit is often a necessary rite of passage when I join a new team.

So when I discovered that other scientists have been taking swearing seriously for a long time, and that I’m not the only person who finds judicious profanity useful, I was fucking delighted! I first began to realise there was more to swearing than a bit of banter or blasphemy when I happened to read a study that involved sixty-seven brave volunteers, a bucket of ice water, a swear word and a stopwatch. I was working in a neuroscience lab at the time, and that study changed the course of my research. It set me on a quest to study swearing: why we do it, how we do it and what it tells us about ourselves.

But what is swearing and why is it special? Is it the way that it sounds? Or the way that it feels when we say it? Does every language have swearing? Why do we try to teach our children not to swear but always end up having to tell them not to swear? Thanks to a whole range of scientists from Victorian surgeons to modern neuroscientists, we know a lot more about swearing than we used to. But, because swearing is still seen as shocking (there was much agonising about the wisdom or otherwise of using a swear word in the title of this book), that information hasn’t made it into the mainstream. It’s a fucking shame that the fascinating facts about swearing are still largely locked up in journals and textbooks.

For example, I’m definitely not the only person who uses swearing as a way of fitting in at work. On the contrary, research shows that swearing can help build teams in the workplace. From the factory floor to the operating theatre, scientists have shown that teams who share a vulgar lexicon tend to work more effectively together, feel closer and be more productive than those who don’t. These same studies show that managing stress in the same way that we manage pain (with a fucking good swear) is more effective than any number of team-building exercises.

Swearing has also helped to develop the field of neuroscience. By providing us with a useful emotional barometer, swearing has been used as a research tool for over 150 years. It has helped us to discover some fascinating things about the structure of the human brain, such as its division into left and right hemispheres, and the role of cerebral structures like the amygdala in the regulation of emotions.

Swearing has taught us a great deal about our minds, too. We know that people who learn a second language often find it less stressful to swear in their adopted tongue, which gives us an idea of the childhood developmental stages at which we learn emotions and taboos. Swearing also makes the heart beat faster and primes us to think aggressive thoughts while, paradoxically, making us less likely to be physically violent.

And swearing is a surprisingly flexible part of our linguistic repertoire. It reinvents itself from generation to generation as taboos shift. Profanity has even become part of the way we express positive feelings: we know that football fans use ‘fuck’ just as frequently when they’re happy as when they are angry or frustrated.

That last finding is one of my own. With colleagues at City University, London, I’ve studied thousands of football fans and their bad language during big games. It’s no great surprise that football fans swear, and that they are particularly fond of ‘fuck’ and ‘shit’. But we noticed something interesting about the ratio between these two swear words. The ‘fuck-shit’ ratio is a reliable indicator of which team has scored because it turns out that ‘shit’ is almost universally negative while ‘fuck’ can be a sign of something good or bad. Swearing among football fans also isn’t anywhere near as aggressive as you might think; fans on Twitter almost never swear about their opponents and reserve their outbursts for players on their own team.
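For the curious, the ‘fuck-shit’ ratio itself is nothing more than simple counting. The sketch below is an illustration of the idea only, with made-up tweets and an assumed input format; it is not the pipeline actually used in the City University study.

```python
# Illustrative only: count 'fuck'- and 'shit'-family words in a window of fan
# tweets and report the ratio described above (assumed data format, invented tweets).
import re

def fuck_shit_ratio(tweets):
    """Count 'fuck'- and 'shit'-family words in a list of tweets; return counts and ratio."""
    fucks = sum(len(re.findall(r"\bfuck\w*", t, re.IGNORECASE)) for t in tweets)
    shits = sum(len(re.findall(r"\bshit\w*", t, re.IGNORECASE)) for t in tweets)
    ratio = fucks / shits if shits else float("inf")
    return fucks, shits, ratio

# Made-up tweets for illustration only.
window = ["FUCK YES get in!!!", "what a fucking strike", "oh shit not again"]
print(fuck_shit_ratio(window))   # (2, 1, 2.0): a 'fuck'-heavy window can mean celebration
                                 # as easily as despair; a 'shit'-heavy one is almost always bad news.
```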

Publishing that research gave me an insight into the sort of public disapproval that swearing still attracts. We were contacted by a journalist from one of the UK’s most widely read newspapers. I won’t name it, but it’s well known for its thunderously moralising tone while at the same time printing long-lens photographs of women who are then accused of ‘flaunting’ some part of their bodies. We were asked (a) how much money had been spent (wasted) on the research and (b) whether we wouldn’t be better doing something useful (like curing cancer). I replied that the entire cost of the research, the £6.99 spent on a bottle of wine while we came up with the hypothesis, had been self-funded, and that my co-author and I were computer scientists with very limited understanding of oncology, so it was probably best if we stayed away from interfering with anyone suffering from cancer. We didn’t hear back. But this exchange brought home the fact that swearing is still a long way from being a respectable topic of research.

Swearing is one of those things that comes so naturally, and seems so frivolous, that you might be surprised by the number of scientists who are studying it. But neuroscientists, psychologists, sociologists and historians have long taken an interest in bad language, and for good reason. Although swearing might seem frivolous, it teaches us a lot about how our brains, our minds and even our societies work.

This book won’t just look at swearing in isolation. One of the things that makes swearing so fucking amazing is the sheer breadth of connections it has with our lives. Throughout this book I’ll cover many different topics, some of which might seem like digressions. There are plenty of pages that contain no profanity whatsoever but, from the indirectness of Japanese speech patterns to the unintended consequences of potty-training chimpanzees, everything relates back to the way we use bad language.

Is this book simply an attempt to justify rudeness and aggression? Not at all. I certainly wouldn’t want profanities to become commonplace: swearing needs to maintain its emotional impact in order to be effective. We only need to look at the way that swearing has changed over the last hundred years to see that, as some swear words become mild and ineffectual through overuse or shifting cultural values, we reach for other taboos to fill the gap. Where blasphemy was once the true obscenity, the modern unsayables include racist and sexist terms as swear words. Depending on your point of view, this is either a lamentable shift towards political correctness or timely recognition that bigotry is ugly and damaging.

What is Swearing?

Historically, bad language consisted of swearing, oaths and curses. That’s because such utterances were considered to have a particular type of word magic. The power of an oath, a pledge or a curse was potentially enough to call down calamities or literally change the world.

These days, we don’t really believe that swearing has the power to alter reality. No one expects the curse ‘go fuck yourself’ to result in any greater injury than a bit of hurt pride. Nevertheless, there is still a kind of word magic involved: swearing, cursing, bad language, profanity, obscenity (call it what you will) draws on taboos, and that’s where the power lies.

That doesn’t mean that swearing is always used as a vehicle for aggression or insult. In fact, study after study has shown that swearing is as likely to be used in frustration with oneself, or in solidarity, or to amuse someone, as it is to be used as ‘fighting words’. That can be a problem: swearing and abuse are both slippery beasts to pin down, and without clear definitions of a phenomenon, how are we supposed to study it? Among the hundreds of studies I’ve read while writing this book, two common definitions appear over and over again: swear words are (a) words people use when they are highly emotional and (b) words that refer to something taboo. If you think about the words you class as swearing, you’ll find that they tick both of these boxes.

More formally, several linguists have tried to pin down exactly what constitutes swearing. Among them is Professor Magnus Ljung of the University of Stockholm, a respected expert on swearing. In 2011 he published Swearing: A Cross-Cultural Linguistic Study, in which he defines swearing, based upon his study of thousands of examples and what they had in common, as:

– The use of taboo words like ‘fuck’ and ‘shit’,

– Which aren’t used literally,

– Which are fairly formulaic,

– And which are emotive: swearing sends a signal about the speaker’s state of mind.

In his book What the F, Benjamin K. Bergen points out that, of the 7,000 known languages in the world, there is massive variation in the type, the use and even the number of swear words. Russian, for example, with its elaborate rules of inflection, has an almost infinite number of ways of swearing, most of them related to the moral standing of one’s interlocutor’s mother. In Japanese, where the excretory taboo is almost non-existent (hence the friendly poo emoji), there’s no equivalent to ‘shit’ or ‘piss’ but, contrary to popular belief, there are several swear words in the language. Kichigai loosely translates as ‘retard’ and is usually bleeped in the media, as is kutabare (‘drop dead’). And, as in so many languages, the queen of all swear words is manko, which refers to a body part so taboo that artist Megumi Igarashi was arrested in 2014 for making 3D models of her own manko for an installation in Tokyo.

Languages vary in their repertoire of swear words; it’s a natural consequence of the differences in our cultures. Bergen suggests that languages fall into one of four classes, what he calls the Holy Fucking Shit Nigger principle. Languages are dominated by either religious swearing, copulatory swearing or excretory swearing. The fourth category refers to slur-based swearing, but so far I haven’t come across any languages that are dominated by slurs. There are languages whose most frowned upon taboos include animal names. In Germany, for example, you can be fined anywhere from €300 to €600 for calling someone a daft cow, and up to €2,500 for ‘old pig’. Dutch, meanwhile, has a whole host of bad language to do with illness: calling a police officer a cancer sufferer (Kankerlijer) can net you two years’ incarceration.

Bergen also investigates whether the characteristics of swear words set them apart. In American English, swear words do tend to be a bit shorter than average, but that’s not the case in French or Spanish. It’s unlikely to be the sound of the words either, as words that sound innocuous in one language can sound grossly offensive in another. This has been played for laughs since Shakespeare’s time, with the comedy ‘English lesson’ in Henry V. The French Princess Katherine wants to learn English from her maid, Alice. Having mastered ‘elbow’, ‘neck’ and ‘chin’, she asks how to say ‘pied’ and ‘robe’:

Katherine: ‘Ainsi dis-je! “D’elbow, de nick, et de sin”. Comment appelez-vous le pied et la robe?’

(‘That’s what I said! “D’elbow, de nick, et de sin”. How do you say pied and robe?’)

Alice: “‘Le foot”, Madame, et “le count”.’

Katherine proceeds to have hysterics, the gag being that foot sounds a little like foutre and count (Alice’s mangled pronunciation of ‘gown’) sounds a bit like con:

“‘Le foot” et “le count”. O Seigneur Dieu! Ils sont mots de son mauvais, corruptible, gros, et impudique, et non pour les dames d’honneur d’user! Je ne voudrais prononcer ces mots devant les seigneurs de France pour tout le monde. Foh! “Le foot” et “Le count”!’

(“‘Fuck” and “cunt”. Oh my Lord! Those are some awful, corrupted, coarse and rude words, not to be used by a lady of virtue! I would not say those words before the Lords of France for the world! Foh! “Fuck” and “cunt”!’)

If we can’t judge by the length, the spelling or the sound of words to tell us what makes a swear word, what can we go on? Some linguists have tried to define swearing by the parts of the brain involved. In his book The Stuff of Thought, linguist and psychologist Steven Pinker says that swearing is distinct from ‘genuine’ language and suggests that it is not generated by those parts of the brain responsible for ‘higher thought’ (the cortex, or the brain’s outer layers). Instead, swearing comes from the subcortex, the part of the brain responsible for movement, emotions and bodily functions. It is, he suggests, more like an animal’s cry than human language.

In the context of the latest scientific advances, I don’t agree. Certainly, swearing is deeply engrained in our behaviour, but to read Pinker’s definition, you might conclude that swearing is a vestigial, primitive part of our lexicon; something we should try to evolve ourselves away from. There’s a vast body of other research that shows how important swearing is to us as individuals, and how it has developed alongside and even shaped our culture and society. Far from being a simple cry, swearing is a complex social signal that is laden with emotional and cultural significance.

If we want to define swearing, why isn’t it as simple as looking it up in the dictionary? For a start, dictionaries can be incredibly coy about swearing. When he compiled his dictionary in 1538, Sir Thomas Elyot was in no doubt as to the kinds of people who look up dirty words and was having none of it. ‘If anyone wants obscene words with which to arouse dormant desire while reading, let him consult other dictionaries.’ Dr Johnson, on being praised by two society ladies for having left ‘naughty words’ out of his dictionary, replied, ‘What! My dears! Then you have been looking for them?’ At the height of Victorian prudery, the Oxford English Dictionary offered ‘ineffables’ for trousers, and well into the twentieth century, while it included all of the religious and racial swear words, it left out fuck, cunt and ‘the curse’.

As a side note, I find it interesting that there are plentiful euphemisms for menstruation, including ‘the curse’, ‘the crimson tide’, ‘Arsenal playing at home’ and ‘having the decorators in’, but it has never spawned its own class of curse words. The only ones that I’m aware of are ‘bloodclaat’ and ‘rassclaat’ in Jamaican patois.

In the later part of the twentieth century, other lexicographers were still dropping words based on their acceptability in polite society. In 1976 the American Webster’s dictionary dropped ‘dago’, ‘kike’, ‘wop’ and ‘wog’, with a note in the foreword: ‘This dictionary could easily dispense with those true obscenities, the terms of racial or ethnic opprobrium that are, in any case, encountered with diminishing frequency these days.’

The editors of Webster’s had good motives but were perhaps a little naive. Taking words out of the dictionary doesn’t remove them from our language. And while they might have hoped that 1976 marked a new era in racial and ethnic harmony, from the vantage point of forty years on, this seems touchingly optimistic.

So who does get to decide what constitutes a true obscenity? The answer is that we all do. Within our social groups, our own tribes, we decide what is and is not taboo, and which taboos are suitable for breaking for emotional or rhetorical purposes. Even within the same country, social class can have an effect on what constitutes swearing. According to Robert Graves, author of the 1927 essay Lars Porsena or the Future of Swearing, ‘bastard’ was unforgivable among the ‘governed classes’ whereas ‘bugger’ (which Graves can’t even bring himself to render in print, preferring to use ‘one addicted to an unnatural vice’ and the oddly xenophobic ‘Bulgarian heretic’) was a much deadlier insult among the ranks to which Graves himself belonged.

‘In the governing classes there is a far greater tolerance to bastards, who often have noble or even royal blood in their veins,’ he wrote. ‘Bugger’ was less offensive among the governed because they ‘are more free from the homosexual habit’, he rather artlessly theorised. But ‘when some thirty years ago the word was written nakedly up on a club noticeboard as a charge against one of its members’, and here Graves can’t even bring himself to name Oscar Wilde, ‘there followed a terrific social explosion, from which the dust has even now not yet settled’.

But, while swearing varies from group to group, it still manages to be surprisingly formulaic. So much of swearing, in English at least, uses the same few constructions. For example, Geoffrey Hughes, author of Swearing: A Social History of Foul Language, Oaths and Profanity in English, points out that the nouns Christ, fuck, pity and shit have nothing in common except that they can all be used in the construction ‘for ___’s sake’.

I thought about the constructions I regularly use and hear and realised that there are many phrases that are grammatically correct but seldom used (and some that are grammatically incorrect, like ‘cock it’ and ‘oh do cock off’, that I use regularly). For example, ‘shit’ is a verb as well as a noun, but I don’t think I’ve ever heard anyone say ‘Shit it!’ or ‘Shit you,’ as a complete sentence. ‘Shit’ as a verb currently seems to have a very specific meaning: to wind up or lie to, as in ‘You’re shitting me!’ and the charmingly archaic, formulaic reply ‘I shit you not.’ Meanwhile, the ever-flexible fucking and buggery can go into almost any swearing phrase.

Common formulaic swearing constructions in British English

The British broadcasting regulator, Ofcom, recently carried out a survey of public attitudes to swearing on TV and radio, the results of which I have summarised diagrammatically in Figure 1. Of the ‘big four’ types of swearing in British English (religious, copulatory, excretory and slur-based), religious swearing was considered the least offensive, while slurs, particularly race- or sexuality-based slurs, were considered the most offensive. In fact, a soon-to-be-published study of over 10 million words of recorded speech, collected from 376 volunteers, found that many homophobic and racist slurs have disappeared from people’s everyday speech.

Figure 1: The proportions of strong and mild swearing by category

Familiar classics like ‘fuck you’ and ‘bugger off’ seem to have been around for ever, and they certainly don’t lack staying power. Nevertheless, I’m prepared to wager that these swear words will seem as quaint as ‘blast your eyes’ or as archaic as ‘sblood’ in a few generations’ time. As our values change, swearing constantly reinvents itself.

How Swearing Changes Over Time

Swearing is a bellwether, a foul-beaked canary in the coalmine that tells us what our societal taboos are. A ‘Jesus Christ!’ 150 years ago was as offensive as a ‘fuck’ or ‘shit’ is today. Conversely, there are words used by authors from Agatha Christie to Mark Twain, words that used to be sung in nursery rhymes, that these days would not pass muster in polite society.

The acceptability of swearing as a whole waxes and wanes over time. The seriously misnamed Master of the Revels, who presided over London theatre in Shakespeare’s day, banned all profanity from the stage. That’s why the original quarto editions of Othello and Hamlet contain oaths like ‘sblood (God’s blood) and ‘zounds (God’s wounds), both of which were cut completely from the later folio edition. By the time a few generations had passed and ‘zounds was a fossil word found only on paper, the pronunciation had shifted to ‘zaunds’ and the word had lost all connection with its root, thanks to the zealous weeding of the term out of the popular culture of the time.

The censoring of Shakespeare isn’t the only evidence we have of changes in what counts as socially unacceptable language. Linguists and historians have studied trends over the years and identified a huge shift during the Renaissance in Europe. In the Middle Ages, privacy and modesty norms were very different. Talking of bodily parts and functions wasn’t automatically deemed obscene or offensive. But during the Renaissance, those bodily terms began to replace religious oaths and curses as the true obscenities of the time.

That evolution is still unfolding, with terms of abuse that relate to race and sexuality taking on the mantle of the unsayable, and disability following behind. That’s partly because we’re more aware of the effect of a mindset known as ‘othering’. Othering is a powerful mental shortcut that we’ve inherited from way back in our earliest primate societies. We all have the subconscious tendency to identify the differences between ourselves and others and to divide the world into ‘people like us’ and ‘people not like us’. We tend to be more generous towards and more trusting of the people who are most like ourselves. The problem is that for hundreds of years (at least) the more powerful groups have persecuted and exploited the less powerful. And the words we have for those people in the less powerful groups tend to reinforce those patterns of subjugation, leading to some incredibly powerful emotions. Steven Pinker (a white male), writing in the New Republic, said: ‘To hear “nigger” is to try on, however briefly, the thought that there is something contemptible about African Americans.’

. . .

*

Dr Emma Byrne is a scientist, journalist, and public speaker. Her BBC Radio 4 ‘Four Thought’ episode was selected as one of the “best of 2013” by the programme’s editors. She has been selected as a British Science Association Media Fellow and for the BBC Expert Women Training, and has been published in CIO, Forbes, the Financial Times and e-Health Insider. Swearing is Good For You is her first book.

*

from

Swearing is Good for You. The Amazing Science of Bad Language

by Emma Byrne

get it at Amazon.com

“Which of the me’s is me?” An Unquiet Mind. A Memoir of Moods and Madness – Kay Redfield Jamison.

Kay Jamison’s story is not of someone who has succeeded despite having a severe disorder, but of someone whose particular triumphs are a consequence of her disorder. She would not have observed what she has if she had not experienced what she did. The fact that she has endured such battles helps her to understand them in others.

“The disease that has, on several occasions, nearly killed me does kill tens of thousands of people every year: most are young, most die unnecessarily, and many are among the most imaginative and gifted that we as a society have. The major clinical problem in treating manic-depressive illness is not that there are not effective medications, there are, but that patients so often refuse to take them. Freedom from the control imposed by medication loses its meaning when the only alternatives are death and insanity.”

Her remarkable achievements are a beacon of hope to those who imagine that they cannot survive their condition, much less thrive with it.

I doubt sometimes whether a quiet & unagitated life would have suited me, yet I sometimes long for it. – Byron.

For centuries, the prevailing wisdom had been that having a mental illness would prevent a doctor from providing competent care, would indicate a vulnerability that would undermine the requisite aura of medical authority, and would kill any patient’s trust. In the face of this intense stigmatization, many bright and compassionate people with mental illness avoided the field of medicine, and many physicians with mental illness lived in secrecy. Kay Redfield Jamison herself led a closeted life for many years, even as she coauthored the standard medical textbook on bipolar illness. She suffered from the anguish inherent in her illness and from the pain that comes of living a lie.

With the publication of An Unquiet Mind, she left that lie behind, revealing her condition not only to her immediate colleagues and patients, but also to the world, and becoming the first clinician ever to describe travails with bipolar illness in a memoir. It was an act of extraordinary courage, a grand risk taking infused with a touch of manic exuberance, and it broke through a firewall of prejudice. You can have bipolar illness and be a brilliant clinician; you can have bipolar illness and be a leading authority on the condition, informed by your experiences rather than blinded by them. You can have bipolar illness and still have a joyful, fruitful, and dignified life.

Kay Jamison’s story is not of someone who has succeeded despite having a severe disorder, but of someone whose particular triumphs are a consequence of her disorder. Ovid said, “The wounded doctor heals best”, and Jamison’s open-hearted declarations have been a salve for the wounded psyches of untold thousands of people; her unquiet mind has often soothed the minds of others. Her discernments come from a rare combination of observation and experience: she would not have observed what she has if she had not experienced what she did. The fact that she has endured such battles helps her to understand them in others, and her frankness about them offers an antidote to the pervasive shame that cloisters so many mentally ill people in fretful isolation.

Her remarkable achievements are a beacon of hope to those who imagine that they cannot survive their condition, much less thrive with it. Those who address mental illnesses tend to do so with either rigor or empathy; Jamison attains a rare marriage of the two. Just as her clinical work has been strengthened by her personal experience, her personal experience has been informed by her academic insights.

It is different to go through manic and depressive episodes when you know everything there is to know about your condition than it is to go through them in ignorance, constantly ambushed by the apparently inexplicable.

Like many people with mental illness, Jamison has had to reckon with the impossibility of separating her personality from her condition. “Which of the me’s is me?” she asks rhetorically in these pages. She kept up a nearly willful self-ignorance for years before she succumbed to knowledge; she resisted remedy at first because she feared she might lose some of her essential self to it. It took repeated descents and ascents into torment to instigate a kind of acquiescence. She has become glad of that surrender; it has saved a life that turns out to be well worth living. As this book bleached away her erstwhile denial, it has mediated her readers’ denial, too. As a professor of psychiatry at Johns Hopkins University and in her frequent lectures around the globe, Kay Jamison has taught a younger generation of doctors how to make sense of their patients: not merely how to treat them, but how to help them.

Though An Unquiet Mind does not provide diagnostic criteria or propose specific courses of treatment, it remains very much a book about medicine, with a touchingly fond portrait of science. Jamison expresses enormous gratitude to the doctors who have treated her and to the researchers who established the modes of treatment that have kept her alive. She engages medicine’s resonant clarities, and she tolerates the relative primitivism of our understanding of the brain.

Appreciating the biology of her illness and the mechanisms of its therapies allowed her to achieve a truce with her bipolar illness, and science informed her choice to speak openly about her skirmishes with it. That peace has not entirely precluded further episodes, but it makes them easier to tolerate when they come. Equally, it has given her the courage to stay on medication and the resilience to sustain other forms of self-care.

You can feel in Jamison’s writing a bracing honesty unmarred by self-pity. It seems clear Jamison is not by nature an exhibitionist, and making so much of her private life into public property cannot have been easy for her. On every page, you sense the resolve it has required. Her book differs from much confessional writing in that, although she describes certain experiences in agonizing detail, she maintains a vocabulary of discretion. An Unquiet Mind may have been intended as a book about an illness, not about a life, but it is both. There is satisfaction in making your affliction useful to other people; it redeems what seemed in the instance to be useless experiences. That insistence on making something good out of something bad is the vital force in her writing.

I met Kay Jamison in 1995, shortly after the publication of An Unquiet Mind, when I had first decided to write about depression. I contacted her to request an interview, and she suggested we have lunch; she then invited me to my first serious scientific conference, a suicide symposium she had organized, attended by the leading figures in the field. Her kindness to me in the early stages of my research points to a personal generosity that mirrors the brave generosity of her books. The forbearance that has made her a good clinician and a good writer also makes her a good friend.

In the years since then, Jamison has produced a corpus of work that, in a very different kind of bipolarity, limns the glittering revelations of psychosis only to return to its perilous ordeals. Touched with Fire (1993) had already chronicled the artistic achievements of people with bipolar illness; Night Falls Fast (1999) tackles the impossible subject of suicide; Exuberance (2004) tells us how unipolar mania has generated many intellectual and artistic breakthroughs; and Nothing was the Same (2009) is a closely observed and deeply personal account of losing her second husband to cancer, a journey complicated by her unreliable moods. Her illness runs through these books even when it is not her explicit topic. But that recurrent theme does not narrow the books into ego studies; instead, it makes them startlingly, powerfully intimate.

Jamison consistently evinces a romantic attachment to language itself. Her sentences flow out in an often poetic rapture, and she displays a sustaining love for the poetry of others, quoting it by the apposite yard. Few doctors know poetry so well, and few poets understand so much biology, and Jamison serves as a translator between humanism and science, which are so often disparate vocabularies for the same phenomena. While poetry inflects her literary voice, it sits comfortably beside a sense of humor. Irony is among her best defenses against gloom, and the zing of her comic asides makes reading about unbearable things a great deal more bearable. The crossing point of precision, luminosity, and hilarity may be the safest domain for an inconsistent mind, a nexus of relief for someone whose stoicism cannot fully assuage her distress.

Two decades after its publication, An Unquiet Mind remains fresh. There’s been a bit more science in the field and a great deal of social change regarding mental illness, change this book helped to create: a society in which what was relentlessly shameful is more easily and frequently acknowledged. The book delineates not how to treat the condition, but how to live with the condition and its treatments, and that remains relevant even as actual treatments evolve.

Jamison does not stint on her own despair, but she has constructed meaning and built an identity from it. While she might not have opted for this illness, neither does she entirely regret it; she prefers, as she writes so movingly, a life of passionate turbulence to one of tedious calm. Learning to appreciate the things you also regret makes for a good way forward. If you have bipolar illness, this book will help you to forgive yourself for everything that has gone awry; if you do not, it will perhaps show how a steely tenacity can imbue disasters with value, a capacity that stands to enrich any and every life.

Andrew Solomon

Kay Redfield Jamison

Prologue

When it’s two o’clock in the morning, and you’re manic, even the UCLA Medical Center has a certain appeal. The hospital, ordinarily a cold clotting of uninteresting buildings, became for me, that fall morning not quite twenty years ago, a focus of my finely wired, exquisitely alert nervous system. With vibrissae twinging, antennae perked, eyes fast-forwarding and fly-faceted, I took in everything around me. I was on the run. Not just on the run but fast and furious on the run, darting back and forth across the hospital parking lot trying to use up a boundless, restless, manic energy. I was running fast, but slowly going mad.

The man I was with, a colleague from the medical school, had stopped running an hour earlier and was, he said impatiently, exhausted. This, to a saner mind, would not have been surprising: the usual distinction between day and night had long since disappeared for the two of us, and the endless hours of scotch, brawling, and fallings about in laughter had taken an obvious, if not final, toll. We should have been sleeping or working, publishing not perishing, reading journals, writing in charts, or drawing tedious scientific graphs that no one would read.

Suddenly a police car pulled up. Even in my less than totally lucid state of mind I could see that the officer had his hand on his gun as he got out of the car. “What in the hell are you doing running around the parking lot at this hour?” he asked. A not unreasonable question. My few remaining islets of judgment reached out to one another and linked up long enough to conclude that this particular situation was going to be hard to explain. My colleague, fortunately, was thinking far better than I was and managed to reach down into some deeply intuitive part of his own and the world’s collective unconscious and said, “We’re both on the faculty in the psychiatry department.” The policeman looked at us, smiled, went back to his squad car, and drove away. Being professors of psychiatry explained everything.

Within a month of signing my appointment papers to become an assistant professor of psychiatry at the University of California, Los Angeles, I was well on my way to madness; it was 1974, and I was twenty-eight years old. Within three months I was manic beyond recognition and just beginning a long, costly personal war against a medication that I would, in a few years’ time, be strongly encouraging others to take. My illness, and my struggles against the drug that ultimately saved my life and restored my sanity, had been years in the making.

For as long as I can remember I was frighteningly, although often wonderfully, beholden to moods. Intensely emotional as a child, mercurial as a young girl, first severely depressed as an adolescent, and then unrelentingly caught up in the cycles of manic-depressive illness by the time I began my professional life, I became, both by necessity and intellectual inclination, a student of moods. It has been the only way I know to understand, indeed to accept, the illness I have; it also has been the only way I know to try and make a difference in the lives of others who also suffer from mood disorders.

The disease that has, on several occasions, nearly killed me does kill tens of thousands of people every year: most are young, most die unnecessarily, and many are among the most imaginative and gifted that we as a society have.

The Chinese believe that before you can conquer a beast you first must make it beautiful. In some strange way, I have tried to do that with manic-depressive illness. It has been a fascinating, albeit deadly, enemy and companion; I have found it to be seductively complicated, a distillation both of what is finest in our natures, and of what is most dangerous. In order to contend with it, I first had to know it in all of its moods and infinite disguises, understand its real and imagined powers. Because my illness seemed at first simply to be an extension of myself, that is to say, of my ordinarily changeable moods, energies, and enthusiasms, I perhaps gave it at times too much quarter. And, because I thought I ought to be able to handle my increasingly violent mood swings by myself, for the first ten years I did not seek any kind of treatment. Even after my condition became a medical emergency, I still intermittently resisted the medications that both my training and clinical research expertise told me were the only sensible way to deal with the illness I had.

My manias, at least in their early and mild forms, were absolutely intoxicating states that gave rise to great personal pleasure, an incomparable flow of thoughts, and a ceaseless energy that allowed the translation of new ideas into papers and projects. Medications not only cut into these fast-flowing, high-flying times, they also brought with them seemingly intolerable side effects. It took me far too long to realize that lost years and relationships cannot be recovered, that damage done to oneself and others cannot always be put right again, and that freedom from the control imposed by medication loses its meaning when the only alternatives are death and insanity.

The war that I waged against myself is not an uncommon one. The major clinical problem in treating manic-depressive illness is not that there are not effective medications, there are, but that patients so often refuse to take them. Worse yet, because of a lack of information, poor medical advice, stigma, or fear of personal and professional reprisals, they do not seek treatment at all.

Manic-depression distorts moods and thoughts, incites dreadful behaviors, destroys the basis of rational thought, and too often erodes the desire and will to live. It is an illness that is biological in its origins, yet one that feels psychological in the experience of it; an illness that is unique in conferring advantage and pleasure, yet one that brings in its wake almost unendurable suffering and, not infrequently, suicide.

I am fortunate that I have not died from my illness, fortunate in having received the best medical care available, and fortunate in having the friends, colleagues, and family that I do. Because of this, I have in turn tried, as best I could, to use my own experiences of the disease to inform my research, teaching, clinical practice, and advocacy work.

Through writing and teaching I have hoped to persuade my colleagues of the paradoxical core of this quicksilver illness that can both kill and create; and, along with many others, have tried to change public attitudes about psychiatric illnesses in general and manic-depressive illness in particular. It has been difficult at times to weave together the scientific discipline of my intellectual field with the more compelling realities of my own emotional experiences. And yet it has been from this binding of raw emotion to the more distanced eye of clinical science that I feel I have obtained the freedom to live the kind of life I want, and the human experiences necessary to try and make a difference in public awareness and clinical practice.

I have had many concerns about writing a book that so explicitly describes my own attacks of mania, depression, and psychosis, as well as my problems acknowledging the need for ongoing medication. Clinicians have been, for obvious reasons of licensing and hospital privileges, reluctant to make their psychiatric problems known to others. These concerns are often well warranted.

I have no idea what the long-term effects of discussing such issues so openly will be on my personal and professional life, but, whatever the consequences, they are bound to be better than continuing to be silent. I am tired of hiding, tired of misspent and knotted energies, tired of the hypocrisy, and tired of acting as though I have something to hide.

One is what one is, and the dishonesty of hiding behind a degree, or a title, or any manner and collection of words, is still exactly that: dishonest. Necessary, perhaps, but dishonest. I continue to have concerns about my decision to be public about my illness, but one of the advantages of having had manic-depressive illness for more than thirty years is that very little seems insurmountably difficult. Much like crossing the Bay Bridge when there is a storm over the Chesapeake, one may be terrified to go forward, but there is no question of going back. I find myself somewhat inevitably taking a certain solace in Robert Lowell’s essential question, Yet why not say what happened?

Part One

THE WILD BLUE YONDER

Into the Sun

I was standing with my head back, one pigtail caught between my teeth, listening to the jet overhead. The noise was loud, unusually so, which meant that it was close. My elementary school was near Andrews Air Force Base, just outside Washington; many of us were pilots’ kids, so the sound was a matter of routine. Being routine, however, didn’t take away from the magic, and I instinctively looked up from the playground to wave. I knew, of course, that the pilot couldn’t see me, I always knew that, just as I knew that even if he could see me the odds were that it wasn’t actually my father. But it was one of those things one did, and anyway I loved any and all excuses just to stare up into the skies. My father, a career Air Force officer, was first and foremost a scientist and only secondarily a pilot. But he loved to fly, and, because he was a meteorologist, both his mind and his soul ended up being in the skies. Like my father, I looked up rather more than I looked out.

When I would say to him that the Navy and the Army were so much older than the Air Force, had so much more tradition and legend, he would say, Yes, that’s true, but the Air Force is the future. Then he would always add: And we can fly. This statement of creed would occasionally be followed by an enthusiastic rendering of the Air Force song, fragments of which remain with me to this day, nested together, somewhat improbably, with phrases from Christmas carols, early poems, and bits and pieces of the Book of Common Prayer: all having great mood and meaning from childhood, and all still retaining the power to quicken the pulses.

So I would listen and believe and, when I would hear the words “Off we go into the wild blue yonder,” I would think that “wild” and “yonder” were among the most wonderful words I had ever heard; likewise, I would feel the total exhilaration of the phrase “Climbing high, into the sun” and know instinctively that I was a part of those who loved the vastness of the sky.

The noise of the jet had become louder, and I saw the other children in my second grade class suddenly dart their heads upward. The plane was coming in very low, then it streaked past us, scarcely missing the playground. As we stood there clumped together and absolutely terrified, it flew into the trees, exploding directly in front of us. The ferocity of the crash could be felt and heard in the plane’s awful impact; it also could be seen in the frightening yet terrible lingering loveliness of the flames that followed. Within minutes, it seemed, mothers were pouring onto the playground to reassure children that it was not their fathers; fortunately for my brother and sister and myself, it was not ours either. Over the next few days it became clear, from the release of the young pilot’s final message to the control tower before he died, that he knew he could save his own life by bailing out. He also knew, however, that by doing so he risked that his unaccompanied plane would fall onto the playground and kill those of us who were there.

The dead pilot became a hero, transformed into a scorchingly vivid, completely impossible ideal for what was meant by the concept of duty. It was an impossible ideal, but all the more compelling and haunting because of its very unobtainability. The memory of the crash came back to me many times over the years, as a reminder both of how one aspires after and needs such ideals, and of how killingly difficult it is to achieve them. I never again looked at the sky and saw only vastness and beauty. From that afternoon on I saw that death was also and always there.

Although, like all military families, we moved a lot (by the fifth grade my older brother, sister, and I had attended four different elementary schools, and we had lived in Florida, Puerto Rico, California, Tokyo, and Washington, twice), our parents, especially my mother, kept life as secure, warm, and constant as possible. My brother was the eldest and the steadiest of the three of us children and my staunch ally, despite the three-year difference in our ages. I idolized him growing up and often trailed along after him, trying very hard to be inconspicuous, when he and his friends would wander off to play baseball or cruise the neighborhood. He was smart, fair, and self-confident, and I always felt that there was a bit of extra protection coming my way whenever he was around. My relationship with my sister, who was only thirteen months older than me, was more complicated. She was the truly beautiful one in the family, with dark hair and wonderful eyes, who from the earliest times was almost painfully aware of everything around her. She had a charismatic way, a fierce temper, very black and passing moods, and little tolerance for the conservative military lifestyle that she felt imprisoned us all. She led her own life, defiant, and broke out with abandon whenever and wherever she could. She hated high school and, when we were living in Washington, frequently skipped classes to go to the Smithsonian or the Army Medical Museum or just to smoke and drink beer with her friends.

She resented me, feeling that I was, as she mockingly put it, “the fair-haired one”, a sister, she thought, to whom friends and schoolwork came too easily, passing far too effortlessly through life, protected from reality by an absurdly optimistic view of people and life. Sandwiched between my brother, who was a natural athlete and who never seemed to see less-than-perfect marks on his college and graduate admission examinations, and me, who basically loved school and was vigorously involved in sports and friends and class activities, she stood out as the member of the family who fought back and rebelled against what she saw as a harsh and difficult world. She hated military life, hated the constant upheaval and the need to make new friends, and felt the family politeness was hypocrisy.

Perhaps because my own violent struggles with black moods did not occur until I was older, I was given a longer time to inhabit a more benign, less threatening, and, indeed to me, a quite wonderful world of high adventure. This world, I think, was one my sister had never known. The long and important years of childhood and early adolescence were, for the most part, very happy ones for me, and they afforded me a solid base of warmth, friendship, and confidence. They were to be an extremely powerful amulet, a potent and positive countervailing force against future unhappiness. My sister had no such years, no such amulets. Not surprisingly, perhaps, when both she and I had to deal with our respective demons, my sister saw the darkness as being within and part of herself, the family, and the world. I, instead, saw it as a stranger; however lodged within my mind and soul the darkness became, it almost always seemed an outside force that was at war with my natural self.

My sister, like my father, could be vastly charming: fresh, original, and devastatingly witty, she also was blessed with an extraordinary sense of aesthetic design. She was not an easy or untroubled person, and as she grew older her troubles grew with her, but she had an enormous artistic imagination and soul. She also could break your heart and then provoke your temper beyond any reasonable level of endurance. Still, I always felt a bit like pieces of earth to my sister’s fire and flames.

For his part, my father, when involved, was often magically involved: ebullient, funny, curious about almost everything, and able to describe with delight and originality the beauties and phenomena of the natural world. A snowflake was never just a snowflake, nor a cloud just a cloud. They became events and characters, and part of a lively and oddly ordered universe. When times were good and his moods were at high tide, his infectious enthusiasm would touch everything. Music would fill the house, wonderful new pieces of jewelry would appear, a moonstone ring, a delicate bracelet of cabochon rubies, a pendant fashioned from a moody sea-green stone set in a swirl of gold, and we’d all settle into our listening mode, for we knew that soon we would be hearing a very great deal about whatever new enthusiasm had taken him over. Sometimes it would be a discourse based on a passionate conviction that the future and salvation of the world was to be found in windmills; sometimes it was that the three of us children simply had to take Russian lessons because Russian poetry was so inexpressibly beautiful in the original.

*

from

An Unquiet Mind. A Memoir of Moods and Madness

by Kay Redfield Jamison

get it at Amazon.com

The Great God of Depression. How mental illness stopped being a terrible dark secret – Pagan Kennedy * DARKNESS VISIBLE. A MEMOIR of MADNESS – William Styron.

The pain of severe depression is quite unimaginable to those who have not suffered it, and it kills in many instances because its anguish can no longer be borne.

The most honest authorities face up squarely to the fact that serious depression is not readily treatable. Failure of alleviation is one of the most distressing factors of the disorder as it reveals itself to the victim, and one that helps situate it squarely in the category of grave diseases.

One by one, the normal brain circuits begin to drown, causing some of the functions of the body and nearly all of those of instinct and intellect to slowly disconnect.

Inadvertently I had helped unlock a closet from which many souls were eager to come out. It is possible to emerge from even the deepest abyss of despair and “once again behold the stars.”

Nearly 30 years ago, the author William Styron outed himself as mentally ill. “My days were pervaded by a gray drizzle of unrelenting horror,” he wrote in a New York Times op-ed article, describing the deep depression that had landed him in the psych ward. He compared the agony of mental illness to that of a heart attack. Pain is pain, whether it’s in the mind or the body. So why, he asked, were depressed people treated as pariahs?

A confession of mental illness might not seem like a big deal now, but it was back then. In the 1980s, “if you were depressed, it was a terrible dark secret that you hid from the world,” according to Andrew Solomon, a historian of mental illness and author of “The Noonday Demon.” “People with depression were seen as pathetic and even dangerous. You didn’t let them near your kids.”

From William Styron’s Op-Ed on Depression. “In the popular mind, suicide is usually the work of a coward or sometimes, paradoxically, a deed of great courage, but it is neither; the torment that precipitates the act makes it often one of blind necessity.”

The response to Mr. Styron’s op-ed was immediate. Letters flooded into The New York Times. The readers thanked him, blurted out their stories and begged him for more. “Inadvertently I had helped unlock a closet from which many souls were eager to come out,” Mr. Styron wrote later.

“It was like the #MeToo movement,” Alexandra Styron, the author’s daughter, told me. “Somebody comes out and says: ‘This happened. This is real. This is what it feels like.’ And it just unleashed the floodgates.”

Readers were electrified by Mr. Styron’s confession in part because he inhabited a storybook world of glamour. After his novel “Sophie’s Choice” was adapted into a blockbuster movie in 1982, Mr. Styron rocketed from mere literary success to Hollywood fame. Meryl Streep, who won an Oscar for playing Sophie, became a lifelong friend, adding to Mr. Styron’s roster of illustrious buddies, from “Jimmy” Baldwin to Arthur Miller. He appeared at gala events with his silver hair upswept in a genius-y pompadour and his face ruddy from summers on Martha’s Vineyard. And yet he had been so depressed that he had eyed the knives in his kitchen with suicide-lust.

William Styron

James L.W. West, Mr. Styron’s friend and biographer, told me that Mr. Styron had never wanted to become “the guru of depression.” But after his article, he felt he had a duty to take on that role.

His famous memoir of depression, “Darkness Visible,” came out in October 1990. It was Mr. Styron’s curiosity about his own mind, and his determination to use himself as a case study to understand a mysterious disease, that gave the book its political power. “Darkness Visible” demonstrated that patients could be the owners and describers of their mental disorders, upending centuries of medical tradition in which the mentally ill were discredited and shamed. The brain scientist Alice Flaherty, who was Mr. Styron’s close friend and doctor, has called him “the great god of depression” because his influence on her field was so profound. His book became required reading in some medical schools, where physicians were finally being trained to listen to their patients.

Mr. Styron also helped to popularize a new way of looking at the brain. In his telling, suicidal depression is a physical ailment, as unconnected to the patient’s moral character as cancer. The book includes a cursory discussion of the chemistry of the brain: neurotransmitters, serotonin and so forth. For many readers, it was a first introduction to scientific ideas that are now widely accepted.

For people with severe mood disorders, “Darkness Visible” became a guidebook. “I got depressed and everyone said to me: ‘You have to read the Bill Styron book. You have to read the Bill Styron book. Have you read the Bill Styron book? Let me give you a copy of the Bill Styron book,’” Mr. Solomon told me. “On the one hand an absolutely harrowing read, and on the other hand one very much rooted in hope.”

The book benefited from perfect timing. It appeared contemporaneously with the introduction of Prozac and other mood disorder medications with fewer side effects than older psychiatric drugs. Relentlessly advertised on TV and in magazines, they seemed to promise protection. And though Mr. Styron himself probably did not take Prozac and was rather skeptical about drugs, his book became the bible of that era.

He also inspired dozens of writers, including Mr. Solomon and Dr. Flaherty, to chronicle their own struggles. In the 1990s, bookstores were crowded with mental-illness memoirs: Kay Redfield Jamison’s “An Unquiet Mind,” Susanna Kaysen’s “Girl, Interrupted” and Elizabeth Wurtzel’s “Prozac Nation,” to name a few. You read; you wrote; you survived.

It was an optimistic time. In 1999, with “Darkness Visible” in its 25th printing, Mr. Styron told Diane Rehm in an NPR interview: “I’m in very good shape, if I may be so bold as to say that.” He continued, “It’s as if I had purged myself of this pack of demons.”

It wouldn’t last. In the summer of 2000, he crashed again. In the last six years of his life, he would check into mental hospitals and endure two rounds of electroshock therapy.

Mr. Styron’s story mirrors the larger trends in American mental health over the past few decades. During the exuberance of the 1990s, it seemed possible that drugs would one day wipe out depression, making suicide a rare occurrence. But that turned out to be an illusion. In fact, the American suicide rate has continued to climb since the beginning of the 21st century.

We don’t know why this is happening, though we do have a few clues. Easy access to guns is probably contributing to the epidemic: Studies show that when people are able to reach for a firearm, a momentary urge to self-destruct is more likely to turn fatal. Oddly enough, climate change may also be to blame: A new study shows that rising temperatures can make people more prone to suicide.

With suicidal depression so widespread, we find ourselves needing new ways to talk about it, name its depredations and help families cope with it. Mr. Styron’s mission was to invent this new language of survival, but he did so at high cost to his own mental health.

When he revealed his history of depression, he inadvertently set a trap for himself. He became an icon of recovery. His widow, Rose Styron, told me that readers would call the house at all hours when they felt suicidal, and Mr. Styron would counsel them. He always took those calls, even when they woke him at 3 in the morning.

When he plunged into depression again in 2000, Mr. Styron worried about disappointing his fans. “When he crashed, he felt so guilty because he thought he’d let down all the people he had encouraged in ‘Darkness Visible,’” Ms. Styron told me. And he became painfully aware that if he ever did commit suicide, that private act would ripple out all over the world. The consequences would be devastating for his readers, some of whom might even decide to imitate him.

And so, one dark day in the summer of 2000, he wrote up a statement to be released in the event of his suicide. “I hope that readers of ‘Darkness Visible’ past, present and future will not be discouraged by the manner of my dying,” his message began. It was an attempt to inoculate his fans against the downstream effects of his own self-destruction.

Mr. Styron’s family described his sense that, by succumbing to depression a second time, he had become a fraud.

DARKNESS VISIBLE.

A MEMOIR of MADNESS

William Styron

For the thing which I greatly feared is come upon me, and that which I was afraid of is come unto me. I was not in safety, neither had I rest, neither was I quiet; yet trouble came. – Job

One

IN PARIS ON A CHILLY EVENING LATE IN OCTOBER OF 1985 I first became fully aware that the struggle with the disorder in my mind, a struggle which had engaged me for several months, might have a fatal outcome. The moment of revelation came as the car in which I was riding moved down a rain-slick street not far from the Champs Elysées and slid past a dully glowing neon sign that read HOTEL WASHINGTON. I had not seen that hotel in nearly thirty-five years, since the spring of 1952, when for several nights it had become my initial Parisian roosting place.

In the first few months of my Wanderjahr, I had come down to Paris by train from Copenhagen, and landed at the Hotel Washington through the whimsical determination of a New York travel agent. In those days the hotel was one of the many damp, plain hostelries made for tourists, chiefly American, of very modest means who, if they were like me, colliding nervously for the first time with the French and their droll kinks, would always remember how the exotic bidet, positioned solidly in the drab bedroom, along with the toilet far down the ill-lit hallway, virtually defined the chasm between Gallic and Anglo-Saxon cultures.

But I stayed at the Washington for only a short time. Within days I had been urged out of the place by some newly found young American friends who got me installed in an even seedier but more colorful hotel in Montparnasse, hard by Le Dome and other suitably literary hangouts. (In my mid-twenties, I had just published a first novel and was a celebrity, though one of very low rank since few of the Americans in Paris had heard of my book, let alone read it.) And over the years the Hotel Washington gradually disappeared from my consciousness.

It reappeared, however, that October night when I passed the gray stone facade in a drizzle, and the recollection of my arrival so many years before started flooding back, causing me to feel that I had come fatally full circle. I recall saying to myself that when I left Paris for New York the next morning it would be a matter of forever. I was shaken by the certainty with which I accepted the idea that I would never see France again, just as I would never recapture a lucidity that was slipping away from me with terrifying speed.

Only days before I had concluded that I was suffering from a serious depressive illness, and was floundering helplessly in my efforts to deal with it. I wasn’t cheered by the festive occasion that had brought me to France. Of the many dreadful manifestations of the disease, both physical and psychological, a sense of self-hatred, or, put less categorically, a failure of self-esteem, is one of the most universally experienced symptoms, and I had suffered more and more from a general feeling of worthlessness as the malady had progressed.

My dank joylessness was therefore all the more ironic because I had flown on a rushed four-day trip to Paris in order to accept an award which should have sparklingly restored my ego. Earlier that summer I received word that I had been chosen to receive the Prix Mondial Cino del Duca, given annually to an artist or scientist whose work reflects themes or principles of a certain “humanism.” The prize was established in memory of Cino del Duca, an immigrant from Italy who amassed a fortune just before and after World War II by printing and distributing cheap magazines, principally comic books, though later branching out into publications of quality; he became proprietor of the newspaper Paris-Jour.

He also produced movies and was a prominent racehorse owner, enjoying the pleasure of having many winners in France and abroad. Aiming for nobler cultural satisfactions, he evolved into a renowned philanthropist and along the way established a book publishing firm that began to produce works of literary merit (by chance, my first novel, Lie Down in Darkness, was one of del Duca’s offerings, in a translation entitled Un Lit de Ténèbres); by the time of his death in 1967 this house, Editions Mondiales, became an important entity of a multifold empire that was rich yet prestigious enough for there to be scant memory of its comic book origins when del Duca’s widow, Simone, created a foundation whose chief function was the annual bestowal of the eponymous award.

The Prix Mondial Cino del Duca has become greatly respected in France, a nation pleasantly besotted with cultural prize giving, not only for its eclecticism and the distinction shown in the choice of its recipients but for the openhandedness of the prize itself, which that year amounted to approximately $25,000. Among the winners during the past twenty years have been Konrad Lorenz, Alejo Carpentier, Jean Anouilh, Ignazio Silone, Andrei Sakharov, Jorge Luis Borges and one American, Lewis Mumford. (No women as yet, feminists take note.)

As an American, I found it especially hard not to feel honored by inclusion in their company. While the giving and receiving of prizes usually induce from all sources an unhealthy uprising of false modesty, backbiting, self-torture and envy, my own view is that certain awards, though not necessary, can be very nice to receive. The Prix del Duca was to me so straightforwardly nice that any extensive self-examination seemed silly, and so I accepted gratefully, writing in reply that I would honor the reasonable requirement that I be present for the ceremony. At that time I looked forward to a leisurely trip, not a hasty turnaround. Had I been able to foresee my state of mind as the date of the award approached, I would not have accepted at all.

Depression is a disorder of mood, so mysteriously painful and elusive in the way it becomes known to the self, to the mediating intellect, as to verge close to being beyond description.

It thus remains nearly incomprehensible to those who have not experienced it in its extreme mode, although the gloom, “the blues” which people go through occasionally and associate with the general hassle of everyday existence are of such prevalence that they do give many individuals a hint of the illness in its catastrophic form. But at the time of which I write I had descended far past those familiar, manageable doldrums. In Paris, I am able to see now, I was at a critical stage in the development of the disease, situated at an ominous way station between its unfocused stirrings earlier that summer and the near violent denouement of December, which sent me into the hospital. I will later attempt to describe the evolution of this malady, from its earliest origins to my eventual hospitalization and recovery, but the Paris trip has retained a notable meaning for me.

On the day of the award ceremony, which was to take place at noon and be followed by a formal luncheon, I woke up at midmorning in my room at the Hôtel Pont Royal commenting to myself that I felt reasonably sound, and I passed the good word along to my wife, Rose. Aided by the minor tranquilizer Halcion, I had managed to defeat my insomnia and get a few hours’ sleep. Thus I was in fair spirits.

But such wan cheer was an habitual pretense which I knew meant very little, for I was certain to feel ghastly before nightfall. I had come to a point where I was carefully monitoring each phase of my deteriorating condition. My acceptance of the illness followed several months of denial during which, at first, I had ascribed the malaise and restlessness and sudden fits of anxiety to withdrawal from alcohol; I had abruptly abandoned whiskey and all other intoxicants that June.

During the course of my worsening emotional climate I had read a certain amount on the subject of depression, both in books tailored for the layman and in weightier professional works including the psychiatrists’ bible, DSM (The Diagnostic and Statistical Manual of the American Psychiatric Association). Throughout much of my life I have been compelled, perhaps unwisely, to become an autodidact in medicine, and have accumulated a better than average amateur’s knowledge about medical matters (to which many of my friends, surely unwisely, have often deferred), and so it came as an astonishment to me that I was close to a total ignoramus about depression, which can be as serious a medical affair as diabetes or cancer. Most likely, as an incipient depressive, I had always subconsciously rejected or ignored the proper knowledge; it cut too close to the psychic bone, and I shoved it aside as an unwelcome addition to my store of information.

At any rate, during the few hours when the depressive state itself eased off long enough to permit the luxury of concentration, I had recently filled this vacuum with fairly extensive reading and I had absorbed many fascinating and troubling facts, which, however, I could not put to practical use.

The most honest authorities face up squarely to the fact that serious depression is not readily treatable. Unlike, let us say, diabetes, where immediate measures taken to rearrange the body’s adaptation to glucose can dramatically reverse a dangerous process and bring it under control, depression in its major stages possesses no quickly available remedy: failure of alleviation is one of the most distressing factors of the disorder as it reveals itself to the victim, and one that helps situate it squarely in the category of grave diseases.

Except in those maladies strictly designated as malignant or degenerative, we expect some kind of treatment and eventual amelioration, by pills or physical therapy or diet or surgery, with a logical progression from the initial relief of symptoms to final cure. Frighteningly, the layman sufferer from major depression, taking a peek into some of the many books currently on the market, will find much in the way of theory and symptomatology and very little that legitimately suggests the possibility of quick rescue. Those that do claim an easy way out are glib and most likely fraudulent. There are decent popular works which intelligently point the way toward treatment and cure, demonstrating how certain therapies, psychotherapy or pharmacology, or a combination of these, can indeed restore people to health in all but the most persistent and devastating cases; but the wisest books among them underscore the hard truth that serious depressions do not disappear overnight.

All of this emphasizes an essential though difficult reality which I think needs stating at the outset of my own chronicle: the disease of depression remains a great mystery. It has yielded its secrets to science far more reluctantly than many of the other major ills besetting us. The intense and sometimes comically strident factionalism that exists in present day psychiatry, the schism between the believers in psychotherapy and the adherents of pharmacology, resembles the medical quarrels of the eighteenth century (to bleed or not to bleed) and almost defines in itself the inexplicable nature of depression and the difficulty of its treatment. As a clinician in the field told me honestly and, I think, with a striking deftness of analogy: “If you compare our knowledge with Columbus’s discovery of America, America is yet unknown; we are still down on that little island in the Bahamas.”

In my reading I had learned, for example, that in at least one interesting respect my own case was atypical. Most people who begin to suffer from the illness are laid low in the morning, with such malefic effect that they are unable to get out of bed. They feel better only as the day wears on. But my situation was just the reverse. While I was able to rise and function almost normally during the earlier part of the day, I began to sense the onset of the symptoms at midafternoon or a little later, gloom crowding in on me, a sense of dread and alienation and, above all, stifling anxiety. I suspect that it is basically a matter of indifference whether one suffers the most in the morning or the evening: if these states of excruciating near paralysis are similar, as they probably are, the question of timing would seem to be academic. But it was no doubt the turnabout of the usual daily onset of symptoms that allowed me that morning in Paris to proceed without mishap, feeling more or less self-possessed, to the gloriously ornate palace on the Right Bank that houses the Fondation Cino del Duca. There, in a rococo salon, I was presented with the award before a small crowd of French cultural figures, and made my speech of acceptance with what I felt was passable aplomb, stating that while I was donating the bulk of my prize money to various organizations fostering French-American goodwill, including the American Hospital in Neuilly, there was a limit to altruism (this spoken jokingly) and so I hoped it would not be taken amiss if I held back a small portion for myself.

What I did not say, and which was no joke, was that the amount I was withholding was to pay for two tickets the next day on the Concorde, so that I might return speedily with Rose to the United States, where just a few days before I had made an appointment to see a psychiatrist. For reasons that I’m sure had to do with a reluctance to accept the reality that my mind was dissolving, I had avoided seeking psychiatric aid during the past weeks, as my distress intensified. But I knew I couldn’t delay the confrontation indefinitely, and when I did finally make contact by telephone with a highly recommended therapist, he encouraged me to make the Paris trip, telling me that he would see me as soon as I returned. I very much needed to get back, and fast.

Despite the evidence that I was in serious difficulty, I wanted to maintain the rosy view. A lot of the literature available concerning depression is, as I say, breezily optimistic, spreading assurances that nearly all depressive states will be stabilized or reversed if only the suitable antidepressant can be found; the reader is of course easily swayed by promises of quick remedy. In Paris, even as I delivered my remarks, I had a need for the day to be over, felt a consuming urgency to fly to America and the office of the doctor, who would whisk my malaise away with his miraculous medications. I recollect that moment clearly now, and am hardly able to believe that I possessed such ingenuous hope, or that I could have been so unaware of the trouble and peril that lay ahead.

Simone del Duca, a large dark-haired woman of queenly manner, was understandably incredulous at first, and then enraged, when after the presentation ceremony I told her that I could not join her at lunch upstairs in the great mansion, along with a dozen or so members of the Académie Française, who had chosen me for the prize. My refusal was both emphatic and simpleminded; I told her point-blank that I had arranged instead to have lunch at a restaurant with my French publisher, Françoise Gallimard. Of course this decision on my part was outrageous; it had been announced months before to me and everyone else concerned that a luncheon, moreover, a luncheon in my honor, was part of the day’s pageantry. But my behavior was really the result of the illness, which had progressed far enough to produce some of its most famous and sinister hallmarks: confusion, failure of mental focus and lapse of memory. At a later stage my entire mind would be dominated by anarchic disconnections; as I have said, there was now something that resembled bifurcation of mood: lucidity of sorts in the early hours of the day, gathering murk in the afternoon and evening. It must have been during the previous evening’s murky distractedness that I made the luncheon date with Françoise Gallimard, forgetting my del Duca obligations. That decision continued to completely master my thinking, creating in me such obstinate determination that now I was able to blandly insult the worthy Simone del Duca. “Alors!” she exclaimed to me, and her face flushed angrily as she whirled in a stately volte-face, “au revoir!”

Suddenly I was flabbergasted, stunned with horror at what I had done. I fantasized a table at which sat the hostess and the Académie Française, the guest of honor at La Coupole. I implored Madame’s assistant, a bespectacled woman with a clipboard and an ashen, mortified expression, to try to reinstate me: it was all a terrible mistake, a mixup, a malentendu. And then I blurted some words that a lifetime of general equilibrium, and a smug belief in the impregnability of my psychic health, had prevented me from believing I could ever utter; I was chilled as I heard myself speak them to this perfect stranger.

“I’m sick,” I said, “un problème psychiatrique.”

Madame del Duca was magnanimous in accepting my apology and the lunch went off without further strain, although I couldn’t completely rid myself of the suspicion, as we chatted somewhat stiffly, that my benefactress was still disturbed by my conduct and thought me a weird number. The lunch was a long one, and when it was over I felt myself entering the afternoon shadows with their encroaching anxiety and dread. A television crew from one of the national channels was waiting (I had forgotten about them, too), ready to take me to the newly opened Picasso Museum, where I was supposed to be filmed looking at the exhibits and exchanging comments with Rose.

This turned out to be, as I knew it would, not a captivating promenade but a demanding struggle, a major ordeal. By the time we arrived at the museum, having dealt with heavy traffic, it was past four o’clock and my brain had begun to endure its familiar siege: panic and dislocation, and a sense that my thought processes were being engulfed by a toxic and unnameable tide that obliterated any enjoyable response to the living world. This is to say more specifically that instead of pleasure, certainly instead of the pleasure I should be having in this sumptuous showcase of bright genius, I was feeling in my mind a sensation close to, but indescribably different from, actual pain.

This leads me to touch again on the elusive nature of such distress. That the word “indescribable” should present itself is not fortuitous, since it has to be emphasized that if the pain were readily describable most of the countless sufferers from this ancient affliction would have been able to confidently depict for their friends and loved ones (even their physicians) some of the actual dimensions of their torment, and perhaps elicit a comprehension that has been generally lacking; such incomprehension has usually been due not to a failure of sympathy but to the basic inability of healthy people to imagine a form of torment so alien to everyday experience.

For myself, the pain is most closely connected to drowning or suffocation, but even these images are off the mark. William James, who battled depression for many years, gave up the search for an adequate portrayal, implying its near-impossibility when he wrote in The Varieties of Religious Experience:

“It is a positive and active anguish, a sort of psychical neuralgia wholly unknown to normal life.”

The pain persisted during my museum tour and reached a crescendo in the next few hours when, back at the hotel, I fell onto the bed and lay gazing at the ceiling, nearly immobilized and in a trance of supreme discomfort. Rational thought was usually absent from my mind at such times, hence trance.

I can think of no more apposite word for this state of being, a condition of helpless stupor in which cognition was replaced by that “positive and active anguish.”

And one of the most unendurable aspects of such an interlude was the inability to sleep. It had been my custom of a near lifetime, like that of vast numbers of people, to settle myself into a soothing nap in the late afternoon, but the disruption of normal sleep patterns is a notoriously devastating feature of depression; to the injurious sleeplessness with which I had been afflicted each night was added the insult of this afternoon insomnia, diminutive by comparison but all the more horrendous because it struck during the hours of the most intense misery. It had become clear that I would never be granted even a few minutes’ relief from my full-time exhaustion. I clearly recall thinking, as I lay there while Rose sat nearby reading, that my afternoons and evenings were becoming almost measurably worse, and that this episode was the worst to date. But I somehow managed to reassemble myself for dinner with (who else?) Françoise Gallimard, co-victim along with Simone del Duca of the frightful lunchtime contretemps.

The night was blustery and raw, with a chill wet wind blowing down the avenues, and when Rose and I met Françoise and her son and a friend at La Lorraine, a glittering brasserie not far from L’Étoile, rain was descending from the heavens in torrents. Someone in the group, sensing my state of mind, apologized for the evil night, but I recall thinking that even if this were one of those warmly scented and passionate evenings for which Paris is celebrated I would respond like the zombie I had become. The weather of depression is unmodulated, its light a brownout.

And zombielike, halfway through the dinner, I lost the del Duca prize check for $25,000. Having tucked the check in the inside breast pocket of my jacket, I let my hand stray idly to that place and realized that it was gone. Did I “intend” to lose the money? Recently I had been deeply bothered that I was not deserving of the prize. I believe in the reality of the accidents we subconsciously perpetrate on ourselves, and so how easy it was for this loss to be not loss but a form of repudiation, offshoot of that self-loathing (depression’s premier badge) by which I was persuaded that I could not be worthy of the prize, that I was in fact not worthy of any of the recognition that had come my way in the past few years. Whatever the reason for its disappearance, the check was gone, and its loss dovetailed well with the other failures of the dinner: my failure to have an appetite for the grand plateau de fruits de mer placed before me, failure of even forced laughter and, at last, virtually total failure of speech.

At this point the ferocious inwardness of the pain produced an immense distraction that prevented my articulating words beyond a hoarse murmur; I sensed myself turning walleyed, monosyllabic, and also I sensed my French friends becoming uneasily aware of my predicament. It was a scene from a bad operetta by now: all of us near the floor, searching for the vanished money. Just as I signaled that it was time to go, Françoise’s son discovered the check, which had somehow slipped out of my pocket and fluttered under an adjoining table, and we went forth into the rainy night. Then, while I was riding in the car, I thought of Albert Camus and Romain Gary.

Two

WHEN I WAS A YOUNG WRITER THERE HAD BEEN A stage where Camus, almost more than any other contemporary literary figure, radically set the tone for my own view of life and history. I read his novel The Stranger somewhat later than I should have (I was in my early thirties), but after finishing it I received the stab of recognition that proceeds from reading the work of a writer who has wedded moral passion to a style of great beauty and whose unblinking vision is capable of frightening the soul to its marrow.

The cosmic loneliness of Meursault, the hero of that novel, so haunted me that when I set out to write The Confessions of Nat Turner I was impelled to use Camus’s device of having the story flow from the point of view of a narrator isolated in his jail cell during the hours before his execution. For me there was a spiritual connection between Meursault’s frigid solitude and the plight of Nat Turner, his rebel predecessor in history by a hundred years, likewise condemned and abandoned by man and God.

Camus’s essay “Reflections on the Guillotine” is a virtually unique document, freighted with terrible and fiery logic; it is difficult to conceive of the most vengeful supporter of the death penalty retaining the same attitude after exposure to scathing truths expressed with such ardor and precision. I know my thinking was forever altered by that work, not only turning me around completely, convincing me of the essential barbarism of capital punishment, but establishing substantial claims on my conscience in regard to matters of responsibility at large. Camus was a great cleanser of my intellect, ridding me of countless sluggish ideas, and through some of the most unsettling pessimism I had ever encountered causing me to be aroused anew by life’s enigmatic promise.

The disappointment I always felt at never meeting Camus was compounded by that failure having been such a near miss. I had planned to see him in 1960, when I was traveling to France and had been told in a letter by the writer Romain Gary that he was going to arrange a dinner in Paris where I would meet Camus. The enormously gifted Gary, whom I knew slightly at the time and who later became a cherished friend, had informed me that Camus, whom he saw frequently, had read my Un Lit de Ténèbres and had admired it; I was of course greatly flattered and felt that a get-together would be a splendid happening. But before I arrived in France there came the appalling news: Camus had been in an automobile crash, and was dead at the cruelly young age of forty-six. I have almost never felt so intensely the loss of someone I didn’t know. I pondered his death endlessly. Although Camus had not been driving he supposedly knew the driver, who was the son of his publisher, to be a speed demon; so there was an element of recklessness in the accident that bore overtones of the near-suicidal, at least of a death flirtation, and it was inevitable that conjectures concerning the event should revert back to the theme of suicide in the writer’s work.

One of the century’s most famous intellectual pronouncements comes at the beginning of The Myth of Sisyphus: “There is but one truly serious philosophical problem, and that is suicide. Judging whether life is or is not worth living amounts to answering the fundamental question of philosophy.” Reading this for the first time I was puzzled and continued to be throughout much of the essay, since despite the work’s persuasive logic and eloquence there was a lot that eluded me, and I always came back to grapple vainly with the initial hypothesis, unable to deal with the premise that anyone should come close to wishing to kill himself in the first place.

A later short novel, The Fall, I admired with reservations; the guilt and self-condemnation of the lawyer-narrator, gloomily spinning out his monologue in an Amsterdam bar, seemed a touch clamorous and excessive, but at the time of my reading I was unable to perceive that the lawyer was behaving very much like a man in the throes of clinical depression. Such was my innocence of the very existence of this disease. Camus, Romain told me, occasionally hinted at his own deep despondency and had spoken of suicide. Sometimes he spoke in jest, but the jest had the quality of sour wine, upsetting Romain. Yet apparently he made no attempts and so perhaps it was not coincidental that, despite its abiding tone of melancholy, a sense of the triumph of life over death is at the core of The Myth of Sisyphus with its austere message: in the absence of hope we must still struggle to survive, and so we do, by the skin of our teeth.

It was only after the passing of some years that it seemed credible to me that Camus’s statement about suicide, and his general preoccupation with the subject, might have sprung at least as strongly from some persistent disturbance of mood as from his concerns with ethics and epistemology. Gary again discussed at length his assumptions about Camus’s depression during August of 1978, when I had lent him my guest cottage in Connecticut, and I came down from my summer home on Martha’s Vineyard to pay him a weekend visit. As we talked I felt that some of Romain’s suppositions about the seriousness of Camus’s recurring despair gained weight from the fact that he, too, had begun to suffer from depression, and he freely admitted as much. It was not incapacitating, he insisted, and he had it under control, but he felt it from time to time, this leaden and poisonous mood the color of verdigris, so incongruous in the midst of the lush New England summer. A Russian Jew born in Lithuania, Romain had always seemed possessed of an Eastern European melancholy, so it was hard to tell the difference. Nonetheless, he was hurting. He said that he was able to perceive a flicker of the desperate state of mind which had been described to him by Camus.

Gary’s situation was hardly lightened by the presence of Jean Seberg, his Iowa-born actress wife, from whom he had been divorced and, I thought, long estranged. I learned that she was there because their son, Diego, was at a nearby tennis camp. Their presumed estrangement made me surprised to see her living with Romain, surprised too, no, shocked and saddened, by her appearance: all her once fragile and luminous blond beauty had disappeared into a puffy mask. She moved like a sleepwalker, said little, and had the blank gaze of someone tranquilized (or drugged, or both) nearly to the point of catalepsy. I understood how devoted they still were, and was touched by his solicitude, both tender and paternal. Romain told me that Jean was being treated for the disorder that afflicted him, and mentioned something about antidepressant medications, but none of this registered very strongly, and also meant little.

This memory of my relative indifference is important because such indifference demonstrates powerfully the outsider’s inability to grasp the essence of the illness. Camus’s depression and now Romain Gary’s, and certainly Jean’s, were abstract ailments to me, in spite of my sympathy, and I hadn’t an inkling of its true contours or the nature of the pain so many victims experience as the mind continues in its insidious meltdown.

In Paris that October night I knew that I, too, was in the process of meltdown. And on the way to the hotel in the car I had a clear revelation. A disruption of the circadian cycle, the metabolic and glandular rhythms that are central to our workaday life, seems to be involved in many, if not most, cases of depression; this is why brutal insomnia so often occurs and is most likely why each day’s pattern of distress exhibits fairly predictable alternating periods of intensity and relief. The evening’s relief for me, an incomplete but noticeable letup, like the change from a torrential downpour to a steady shower, came in the hours after dinner time and before midnight, when the pain lifted a little and my mind would become lucid enough to focus on matters beyond the immediate upheaval convulsing my system. Naturally I looked forward to this period, for sometimes I felt close to being reasonably sane, and that night in the car I was aware of a semblance of clarity returning, along with the ability to think rational thoughts. Having been able to reminisce about Camus and Romain Gary, however, I found that my continuing thoughts were not very consoling.

The memory of Jean Seberg gripped me with sadness. A little over a year after our encounter in Connecticut she took an overdose of pills and was found dead in a car parked in a cul-de-sac off a Paris avenue, where her body had lain for many days. The following year I sat with Romain at the Brasserie Lipp during a long lunch while he told me that, despite their difficulties, his loss of Jean had so deepened his depression that from time to time he had been rendered nearly helpless. But even then I was unable to comprehend the nature of his anguish. I remembered that his hands trembled and, though he could hardly be called superannuated (he was in his mid-sixties), his voice had the wheezy sound of very old age that I now realized was, or could be, the voice of depression; in the vortex of my severest pain I had begun to develop that ancient voice myself. I never saw Romain again. Claude Gallimard, Françoise’s father, had recollected to me how, in 1980, only a few hours after another lunch where the talk between the two old friends had been composed and casual, even lighthearted, certainly anything but somber, Romain Gary, twice winner of the Prix Goncourt (one of these awards pseudonymous, the result of his having gleefully tricked the critics), hero of the Republic, valorous recipient of the Croix de Guerre, diplomat, bon vivant, womanizer par excellence, went home to his apartment on the rue du Bac and put a bullet through his brain.

It was at some point during the course of these musings that the sign HOTEL WASHINGTON swam across my vision, bringing back memories of my long ago arrival in the city, along with the fierce and sudden realization that I would never see Paris again. This certitude astonished me and filled me with a new fright, for while thoughts of death had long been common during my siege, blowing through my mind like icy gusts of wind, they were the formless shapes of doom that I suppose are dreamed of by people in the grip of any severe affliction. The difference now was in the sure understanding that tomorrow, when the pain descended once more, or the tomorrow after that, certainly on some not too distant tomorrow, I would be forced to judge that life was not worth living and thereby answer, for myself at least, the fundamental question of philosophy.

Three

TO MANY OF US WHO KNEW ABBIE HOFFMAN EVEN slightly, as I did, his death in the spring of 1989 was a sorrowful happening. Just past the age of fifty, he had been too young and apparently too vital for such an ending; a feeling of chagrin and dreadfulness attends the news of nearly anyone’s suicide, and Abbie’s death seemed to me especially cruel.

I had first met him during the wild days and nights of the 1968 Democratic Convention in Chicago, where I had gone to write a piece for The New York Review of Books, and I later was one of those who testified on behalf of him and his fellow defendants at the trial, also in Chicago, in 1970. Amid the pious follies and morbid perversions of American life, his antic style was exhilarating, and it was hard not to admire the hellraising and the brio, the anarchic individualism.

I wish I had seen more of him in recent years; his sudden death left me with a particular emptiness, as suicides usually do to everyone. But the event was given a further dimension of poignancy by what one must begin to regard as a predictable reaction from many: the denial, the refusal to accept the fact of the suicide itself, as if the voluntary act, as opposed to an accident, or death from natural causes, were tinged with a delinquency that somehow lessened the man and his character.

Abbie’s brother appeared on television, grief-ravaged and distraught; one could not help feeling compassion as he sought to deflect the idea of suicide, insisting that Abbie, after all, had always been careless with pills and would never have left his family bereft. However, the coroner confirmed that Hoffman had taken the equivalent of 150 phenobarbitals.

It’s quite natural that the people closest to suicide victims so frequently and feverishly hasten to disclaim the truth; the sense of implication, of personal guilt, the idea that one might have prevented the act if one had taken certain precautions, had somehow behaved differently, is perhaps inevitable. Even so, the sufferer, whether he has actually killed himself or attempted to do so, or merely expressed threats, is often, through denial on the part of others, unjustly made to appear a wrongdoer.

A similar case is that of Randall Jarrell, one of the fine poets and critics of his generation, who on a night in 1965, near Chapel Hill, North Carolina, was struck by a car and killed. Jarrell’s presence on that particular stretch of road, at an odd hour of the evening, was puzzling, and since some of the indications were that he had deliberately let the car strike him, the early conclusion was that his death was suicide. Newsweek, among other publications, said as much, but Jarrell’s widow protested in a letter to that magazine; there was a hue and cry from many of his friends and supporters, and a coroner’s jury eventually ruled the death to be accidental. Jarrell had been suffering from extreme depression and had been hospitalized; only a few months before his misadventure on the highway and while in the hospital, he had slashed his wrists.

Anyone who is acquainted with some of the jagged contours of Jarrell’s life, including his violent fluctuations of mood, his fits of black despondency, and who, in addition, has acquired a basic knowledge of the danger signals of depression, would seriously question the verdict of the coroner’s jury. But the stigma of self-inflicted death is for some people a hateful blot that demands erasure at all costs. (More than two decades after his death, in the Summer 1986 issue of The American Scholar, a onetime student of Jarrell’s, reviewing a collection of the poet’s letters, made the review less a literary or biographical appraisal than an occasion for continuing to try to exorcise the vile phantom of suicide.)

Randall Jarrell almost certainly killed himself. He did so not because he was a coward, nor out of any moral feebleness, but because he was afflicted with a depression that was so devastating that he could no longer endure the pain of it.

This general unawareness of what depression is really like was apparent most recently in the matter of Primo Levi, the remarkable Italian writer and survivor of Auschwitz who, at the age of sixty-seven, hurled himself down a stairwell in Turin in 1987. Since my own involvement with the illness, I had been more than ordinarily interested in Levi’s death, and so, late in 1988, when I read an account in The New York Times about a symposium on the writer and his work held at New York University, I was fascinated but, finally, appalled. For, according to the article, many of the participants, worldly writers and scholars, seemed mystified by Levi’s suicide, mystified and disappointed. It was as if this man whom they had all so greatly admired, and who had endured so much at the hands of the Nazis, a man of exemplary resilience and courage, had by his suicide demonstrated a frailty, a crumbling of character they were loath to accept. In the face of a terrible absolute self-destruction, their reaction was helplessness and (the reader could not avoid it) a touch of shame.

My annoyance over all this was so intense that I was prompted to write a short piece for the op-ed page of the Times. The argument I put forth was fairly straightforward:

The pain of severe depression is quite unimaginable to those who have not suffered it, and it kills in many instances because its anguish can no longer be borne.

The prevention of many suicides will continue to be hindered until there is a general awareness of the nature of this pain. Through the healing process of time, and through medical intervention or hospitalization in many cases, most people survive depression, which may be its only blessing; but to the tragic legion who are compelled to destroy themselves there should be no more reproof attached than to the victims of terminal cancer.

I had set down my thoughts in this Times piece rather hurriedly and spontaneously, but the response was equally spontaneous, and enormous. It had taken, I speculated, no particular originality or boldness on my part to speak out frankly about suicide and the impulse toward it, but I had apparently underestimated the number of people for whom the subject had been taboo, a matter of secrecy and shame. The overwhelming reaction made me feel that inadvertently I had helped unlock a closet from which many souls were eager to come out and proclaim that they, too, had experienced the feelings I had described. It is the only time in my life I have felt it worthwhile to have invaded my own privacy, and to make that privacy public. And I thought that, given such momentum, and with my experience in Paris as a detailed example of what occurs during depression, it would be useful to try to chronicle some of my own experiences with the illness and in the process perhaps establish a frame of reference out of which one or more valuable conclusions might be drawn.

Such conclusions, it has to be emphasized, must still be based on the events that happened to one man. In setting these reflections down I don’t intend my ordeal to stand as a representation of what happens, or might happen, to others. Depression is much too complex in its cause, its symptoms and its treatment for unqualified conclusions to be drawn from the experience of a single individual. Although as an illness depression manifests certain unvarying characteristics, it also allows for many idiosyncrasies; I’ve been amazed at some of the freakish phenomena, not reported by other patients, that it has wrought amid the twistings of my mind’s labyrinth.

Depression afflicts millions directly, and millions more who are relatives or friends of victims. It has been estimated that as many as one in ten Americans will suffer from the illness. As assertively democratic as a Norman Rockwell poster, it strikes indiscriminately at all ages, races, creeds and classes, though women are at considerably higher risk than men. The occupational list (dressmakers, barge captains, sushi chefs, cabinet members) of its patients is too long and tedious to give here; it is enough to say that very few people escape being a potential victim of the disease, at least in its milder form. Despite depression’s eclectic reach, it has been demonstrated with fair convincingness that artistic types (especially poets) are particularly vulnerable to the disorder, which, in its graver, clinical manifestation takes upward of twenty percent of its victims by way of suicide.

Just a few of these fallen artists, all modern, make up a sad but scintillant roll call: Hart Crane, Vincent van Gogh, Virginia Woolf, Arshile Gorky, Cesare Pavese, Romain Gary, Vachel Lindsay, Sylvia Plath, Henry de Montherlant, Mark Rothko, John Berryman, Jack London, Ernest Hemingway, William Inge, Diane Arbus, Tadeusz Borowski, Paul Celan, Anne Sexton, Sergei Esenin, Vladimir Mayakovsky, the list goes on. (The Russian poet Mayakovsky was harshly critical of his great contemporary Esenin’s suicide a few years before, which should stand as a caveat for all who are judgmental about self-destruction.)

When one thinks of these doomed and splendidly creative men and women, one is drawn to contemplate their childhoods, where, to the best of anyone’s knowledge, the seeds of the illness take strong root; could any of them have had a hint, then, of the psyche’s perishability, its exquisite fragility? And why were they destroyed, while others, similarly stricken, struggled through?

Four

WHEN I WAS FIRST AWARE THAT I HAD BEEN LAID low by the disease, I felt a need, among other things, to register a strong protest against the word “depression.” Depression, most people know, used to be termed “melancholia,” a word which appears in English as early as the year 1303 and crops up more than once in Chaucer, who in his usage seemed to be aware of its pathological nuances. “Melancholia” would still appear to be a far more apt and evocative word for the blacker forms of the disorder, but it was usurped by a noun with a bland tonality and lacking any magisterial presence, used indifferently to describe an economic decline or a rut in the ground, a true wimp of a word for such a major illness. It may be that the scientist generally held responsible for its currency in modern times, a Johns Hopkins Medical School faculty member justly venerated, the Swiss-born psychiatrist Adolf Meyer, had a tin ear for the finer rhythms of English and therefore was unaware of the semantic damage he had inflicted by offering “depression” as a descriptive noun for such a dreadful and raging disease. Nonetheless, for over seventy-five years the word has slithered innocuously through the language like a slug, leaving little trace of its intrinsic malevolence and preventing, by its very insipidity, a general awareness of the horrible intensity of the disease when out of control.

As one who has suffered from the malady in extremis yet returned to tell the tale, I would lobby for a truly arresting designation. “Brainstorm,” for instance, has unfortunately been preempted to describe, somewhat jocularly, intellectual inspiration. But something along these lines is needed. Told that someone’s mood disorder has evolved into a storm, a veritable howling tempest in the brain, which is indeed what a clinical depression resembles like nothing else, even the uninformed layman might display sympathy rather than the standard reaction that “depression” evokes, something akin to “So what?” or “You’ll pull out of it” or “We all have bad days.” The phrase “nervous breakdown” seems to be on its way out, certainly deservedly so, owing to its insinuation of a vague spinelessness, but we still seem destined to be saddled with “depression” until a better, sturdier name is created.

The depression that engulfed me was not of the manic type, the one accompanied by euphoric highs, which would have most probably presented itself earlier in my life. I was sixty when the illness struck for the first time, in the “unipolar” form, which leads straight down. I shall never learn what “caused” my depression, as no one will ever learn about their own. To be able to do so will likely forever prove to be an impossibility, so complex are the intermingled factors of abnormal chemistry, behavior and genetics. Plainly, multiple components are involved, perhaps three or four, most probably more, in fathomless permutations.

That is why the greatest fallacy about suicide lies in the belief that there is a single immediate answer, or perhaps combined answers, as to why the deed was done.

The inevitable question “Why did he, or she do it?” usually leads to odd speculations, for the most part fallacies themselves. Reasons were quickly advanced for Abbie Hoffman’s death: his reaction to an auto accident he had suffered, the failure of his most recent book, his mother’s serious illness. With Randall Jarrell it was a declining career cruelly epitomized by a vicious book review and his consequent anguish. Primo Levi, it was rumored, had been burdened by caring for his paralytic mother, which was more onerous to his spirit than even his experience at Auschwitz.

Any one of these factors may have lodged like a thorn in the sides of the three men, and been a torment. Such aggravations may be crucial and cannot be ignored. But most people quietly endure the equivalent of injuries, declining careers, nasty book reviews, family illnesses. A vast majority of the survivors of Auschwitz have borne up fairly well. Bloody and bowed by the outrages of life, most human beings still stagger on down the road, unscathed by real depression.

To discover why some people plunge into the downward spiral of depression, one must search beyond the manifest crisis, and then still fail to come up with anything beyond wise conjecture.

The storm which swept me into a hospital in December began as a cloud no bigger than a wine goblet the previous June. And the cloud, the manifest crisis, involved alcohol, a substance I had been abusing for forty years. Like a great many American writers, whose sometimes lethal addiction to alcohol has become so legendary as to provide in itself a stream of studies and books, I used alcohol as the magical conduit to fantasy and euphoria, and to the enhancement of the imagination. There is no need to either rue or apologize for my use of this soothing, often sublime agent, which had contributed greatly to my writing; although I never set down a line while under its influence, I did use it, often in conjunction with music, as a means to let my mind conceive visions that the unaltered, sober brain has no access to. Alcohol was an invaluable senior partner of my intellect, besides being a friend whose ministrations I sought daily, sought also, I now see, as a means to calm the anxiety and incipient dread that I had hidden away for so long somewhere in the dungeons of my spirit.

The trouble was, at the beginning of this particular summer, that I was betrayed. It struck me quite suddenly, almost overnight: I could no longer drink. It was as if my body had risen up in protest, along with my mind, and had conspired to reject this daily mood bath which it had so long welcomed and, who knows? perhaps even come to need. Many drinkers have experienced this intolerance as they have grown older. I suspect that the crisis was at least partly metabolic, the liver rebelling, as if to say, “No more, no more”, but at any rate I discovered that alcohol in minuscule amounts, even a mouthful of wine, caused me nausea, a desperate and unpleasant wooziness, a sinking sensation and ultimately a distinct revulsion. The comforting friend had abandoned me not gradually and reluctantly, as a true friend might do, but like a shot, and I was left high and certainly dry, and unhelmed.

Neither by will nor by choice had I become an abstainer; the situation was puzzling to me, but it was also traumatic, and I date the onset of my depressive mood from the beginning of this deprivation. Logically, one would be overjoyed that the body had so summarily dismissed a substance that was undermining its health; it was as if my system had generated a form of Antabuse, which should have allowed me to happily go my way, satisfied that a trick of nature had shut me off from a harmful dependence. But, instead, I began to experience a vaguely troubling malaise, a sense of something having gone cockeyed in the domestic universe I’d dwelt in so long, so comfortably. While depression is by no means unknown when people stop drinking, it is usually on a scale that is not menacing. But it should be kept in mind how idiosyncratic the faces of depression can be.

It was not really alarming at first, since the change was subtle, but I did notice that my surroundings took on a different tone at certain times: the shadows of nightfall seemed more somber, my mornings were less buoyant, walks in the woods became less zestful, and there was a moment during my working hours in the late afternoon when a kind of panic and anxiety overtook me, just for a few minutes, accompanied by a visceral queasiness; such a seizure was at least slightly alarming, after all. As I set down these recollections, I realize that it should have been plain to me that I was already in the grip of the beginning of a mood disorder, but I was ignorant of such a condition at that time.

When I reflected on this curious alteration of my consciousness, and I was baffled enough from time to time to do so, I assumed that it all had to do somehow with my enforced withdrawal from alcohol. And, of course, to a certain extent this was true. But it is my conviction now that alcohol played a perverse trick on me when we said farewell to each other: although, as everyone should know, it is a major depressant, it had never truly depressed me during my drinking career, acting instead as a shield against anxiety.

Suddenly vanished, the great ally which for so long had kept my demons at bay was no longer there to prevent those demons from beginning to swarm through the subconscious, and I was emotionally naked, vulnerable as I had never been before.

Doubtless depression had hovered near me for years, waiting to swoop down. Now I was in the first stage, premonitory, like a flicker of sheet lightning barely perceived, of depression’s black tempest.

I was on Martha’s Vineyard, where I’ve spent a good part of each year since the 1960s, during that exceptionally beautiful summer. But I had begun to respond indifferently to the island’s pleasures. I felt a kind of numbness, an enervation, but more particularly an odd fragility, as if my body had actually become frail, hypersensitive and somehow disjointed and clumsy, lacking normal coordination. And soon I was in the throes of a pervasive hypochondria. Nothing felt quite right with my corporeal self; there were twitches and pains, sometimes intermittent, often seemingly constant, that seemed to presage all sorts of dire infirmities. (Given these signs, one can understand how, as far back as the seventeenth century, in the notes of contemporary physicians, and in the perceptions of John Dryden and others, a connection is made between melancholia and hypochondria; the words are often interchangeable, and were so used until the nineteenth century by writers as various as Sir Walter Scott and the Brontës, who also linked melancholy to a preoccupation with bodily ills.) It is easy to see how this condition is part of the psyche’s apparatus of defense: unwilling to accept its own gathering deterioration, the mind announces to its indwelling consciousness that it is the body with its perhaps correctable defects, not the precious and irreplaceable mind, that is going haywire.

In my case, the overall effect was immensely disturbing, augmenting the anxiety that was by now never quite absent from my waking hours and fueling still another strange behavior pattern, a fidgety restlessness that kept me on the move, somewhat to the perplexity of my family and friends. Once, in late summer, on an airplane trip to New York, I made the reckless mistake of downing a scotch and soda, my first alcohol in months, which promptly sent me into a tailspin, causing me such a horrified sense of disease and interior doom that the very next day I rushed to a Manhattan internist, who inaugurated a long series of tests. Normally I would have been satisfied, indeed elated, when, after three weeks of high-tech and extremely expensive evaluation, the doctor pronounced me totally fit; and I was happy, for a day or two, until there once again began the rhythmic daily erosion of my mood, anxiety, agitation, unfocused dread.

By now I had moved back to my house in Connecticut. It was October, and one of the unforgettable features of this stage of my disorder was the way in which my old farmhouse, my beloved home for thirty years, took on for me at that point when my spirits regularly sank to their nadir an almost palpable quality of ominousness. The fading evening light, akin to that famous “slant of light” of Emily Dickinson’s, which spoke to her of death, of chill extinction, had none of its familiar autumnal loveliness, but ensnared me in a suffocating gloom. I wondered how this friendly place, teeming with such memories of (again in her words) “Lads and Girls,” of “laughter and ability and sighing, and Frocks and Curls,” could almost perceptibly seem so hostile and forbidding. Physically, I was not alone. As always Rose was present and listened with unflagging patience to my complaints. But I felt an immense and aching solitude. I could no longer concentrate during those afternoon hours, which for years had been my working time, and the act of writing itself, becoming more and more difficult and exhausting, stalled, then finally ceased.

There were also dreadful, pouncing seizures of anxiety. One bright day on a walk through the woods with my dog I heard a flock of Canada geese honking high above trees ablaze with foliage; ordinarily a sight and sound that would have exhilarated me, the flight of birds caused me to stop, riveted with fear, and I stood stranded there, helpless, shivering, aware for the first time that I had been stricken by no mere pangs of withdrawal but by a serious illness whose name and actuality I was able finally to acknowledge. Going home, I couldn’t rid my mind of the line of Baudelaire’s, dredged up from the distant past, that for several days had been skittering around at the edge of my consciousness: “I have felt the wind of the wing of madness.”

Our perhaps understandable modern need to dull the sawtooth edges of so many of the afflictions we are heir to has led us to banish the harsh old-fashioned words: madhouse, asylum, insanity, melancholia, lunatic, madness.

But never let it be doubted that depression, in its extreme form, is madness. The madness results from an aberrant biochemical process. It has been established with reasonable certainty (after strong resistance from many psychiatrists, and not all that long ago) that such madness is chemically induced amid the neurotransmitters of the brain, probably as the result of systemic stress, which for unknown reasons causes a depletion of the chemicals norepinephrine and serotonin, and the increase of a hormone, cortisol.

With all of this upheaval in the brain tissues, the alternate drenching and deprivation, it is no wonder that the mind begins to feel aggrieved, stricken, and the muddied thought processes register the distress of an organ in convulsion. Sometimes, though not very often, such a disturbed mind will turn to violent thoughts regarding others. But with their minds turned agonizingly inward, people with depression are usually dangerous only to themselves. The madness of depression is, generally speaking, the antithesis of violence. It is a storm indeed, but a storm of murk. Soon evident are the slowed-down responses, near paralysis, psychic energy throttled back close to zero. Ultimately, the body is affected and feels sapped, drained.

That fall, as the disorder gradually took full possession of my system, I began to conceive that my mind itself was like one of those outmoded small town telephone exchanges, being gradually inundated by flood waters: one by one, the normal circuits began to drown, causing some of the functions of the body and nearly all of those of instinct and intellect to slowly disconnect.

There is a well-known checklist of some of these functions and their failures. Mine conked out fairly close to schedule, many of them following the pattern of depressive seizures. I particularly remember the lamentable near disappearance of my voice. It underwent a strange transformation, becoming at times quite faint, wheezy and spasmodic; a friend observed later that it was the voice of a ninety-year-old. The libido also made an early exit, as it does in most major illnesses; it is the superfluous need of a body in beleaguered emergency. Many people lose all appetite; mine was relatively normal, but I found myself eating only for subsistence: food, like everything else within the scope of sensation, was utterly without savor. Most distressing of all the instinctual disruptions was that of sleep, along with a complete absence of dreams.

Exhaustion combined with sleeplessness is a rare torture. The two or three hours of sleep I was able to get at night were always at the behest of Halcion, a matter which deserves particular notice. For some time now many experts in psychopharmacology have warned that the benzodiazepine family of tranquilizers, of which Halcion is one (Valium and Ativan are others), is capable of depressing mood and even precipitating a major depression. Over two years before my siege, an insouciant doctor had prescribed Ativan as a bedtime aid, telling me airily that I could take it as casually as aspirin. The Physicians’ Desk Reference, the pharmacological bible, reveals that the medicine I had been ingesting was (a) three times the normally prescribed strength, (b) not advisable as a medication for more than a month or so, and (c) to be used with special caution by people of my age. At the time of which I am speaking I was no longer taking Ativan but had become addicted to Halcion and was consuming large doses. It seems reasonable to think that this was still another contributory factor to the trouble that had come upon me. Certainly, it should be a caution to others.

At any rate, my few hours of sleep were usually terminated at three or four in the morning, when I stared up into yawning darkness, wondering and writhing at the devastation taking place in my mind, and awaiting the dawn, which usually permitted me a feverish, dreamless nap. I’m fairly certain that it was during one of these insomniac trances that there came over me the knowledge, a weird and shocking revelation, like that of some long beshrouded metaphysical truth, that this condition would cost me my life if it continued on such a course. This must have been just before my trip to Paris.

Death, as I have said, was now a daily presence, blowing over me in cold gusts. I had not conceived precisely how my end would come. In short, I was still keeping the idea of suicide at bay. But plainly the possibility was around the corner, and I would soon meet it face to face.

What I had begun to discover is that, mysteriously and in ways that are totally remote from normal experience, the gray drizzle of horror induced by depression takes on the quality of physical pain. But it is not an immediately identifiable pain, like that of a broken limb. It may be more accurate to say that despair, owing to some evil trick played upon the sick brain by the inhabiting psyche, comes to resemble the diabolical discomfort of being imprisoned in a fiercely overheated room. And because no breeze stirs this caldron, because there is no escape from this smothering confinement, it is entirely natural that the victim begins to think ceaselessly of oblivion.

*

Five

. . .

*

from

DARKNESS VISIBLE. A MEMOIR of MADNESS

by William Styron

get it at Amazon.com

A State Built on Murder. RISE AND KILL FIRST: The Secret History Of Israel’s Targeted Assassinations – Ronen Bergman.

OF ALL THE MEANS that democracies use to protect their security, there is none more fraught and controversial than “killing the driver”, assassination.

Israel’s reliance on assassination as a military tool did not happen by chance, but rather stems from the revolutionary and activist roots of the Zionist movement.

The six million would be avenged and let it be known that “no Nazi will place a foot on the soil of the Land of Israel.”

Since World War II, Israel has assassinated more people than any other country in the Western world. It has developed the most robust, streamlined assassination machine in history.

The members of Hashomer who led the Haganah at the outset were even willing to commit acts of violence against fellow Jews.

“You need to know how to forgive. You need to know how to forgive the enemy. However, we have no authority to forgive people like bin Laden. That, only God can do. Our job is to arrange a meeting between them. In my laboratory, I opened a matchmaker’s office, a bureau that arranged such meetings. I orchestrated more than thirty such meetings.” Natan Rotberg

Is it legitimate, both ethically and judicially, for a country to employ the gravest of all crimes in any code of ethics or law, the premeditated taking of a human life, in order to protect its own citizens?

MEIR DAGAN, CHIEF OF the Israeli Mossad, legendary spy and assassin, walked into the room, leaning on his cane.

He’d been using it ever since he was wounded by a mine laid by Palestinian terrorists he was fighting in the Gaza Strip as a young special-ops officer in the 1970s. Dagan, who knew a thing or two about the power of myths and symbols, was careful not to deny the rumors that there was a blade concealed in the cane, which he could bare with a push of a button.

Dagan was a short man, so dark-skinned that people were always surprised to hear that he was of Polish origin, and he had a potbelly with a presence of its own. On this occasion he was wearing a simple open-necked shirt, light black pants, and black shoes, and it looked as if he’d not paid any special attention to his appearance. There was something about him that expressed a direct, terse self-confidence, and a quiet, sometimes menacing charisma.

The conference room that Dagan entered that afternoon, on January 8, 2011, was in the Mossad Academy, north of Tel Aviv. For the first time ever, the head of the espionage agency was meeting with journalists in the heart of one of Israel’s most closely guarded and secret installations.

Dagan had no love for the media. “I’ve reached the conclusion that it is an insatiable monster,” he would tell me later, “so there’s no point in maintaining a relationship with it.” Nevertheless, three days before the meeting, I and a number of other correspondents had received a confidential invitation. I was surprised. For an entire decade I had been leveling some harsh criticism at the Mossad, and in particular at Dagan, making him very angry.

The Mossad did everything it could to give the affair a cloak-and-dagger atmosphere. We were told to come to the parking lot of Cinema City, a movie theater complex not far from Mossad HQ, and to leave everything in our cars except notebooks and writing implements. “You will be carefully searched, and we want to avoid any unpleasantness,” our escorts told us. From there we were driven in a bus with dark tinted windows to the Mossad headquarters complex. We passed through a number of electric gates and electronic signs warning those entering what was permitted and what forbidden inside the perimeter. Then came a thorough scanning with metal detectors to make sure we hadn’t brought any video or audio recording equipment. We entered the conference room, and Dagan came in a few minutes after us, walking around and shaking hands. When he got to me, he gripped my hand for a moment and said with a smile, “You really are some kind of a bandit.”

Then he sat down. He was flanked by the spokesman of Prime Minister Benjamin Netanyahu and the chief military censor, a female brigadier general. (The Mossad is a unit of the prime minister’s office, and, under national law, reporting on any of its activities is subject to censorship.) Both of these officials believed that Dagan had called the meeting merely to bid a formal farewell to the people who had covered his tenure, and that he would say nothing substantive.

They were wrong. The surprise was evident on the face of the prime minister’s spokesperson, whose eyes got wider and wider as Dagan continued speaking.

“There are advantages to having a back injury,” Dagan said, opening his address. “You get a doctor’s certificate confirming that you’re not spineless.” Very quickly, we realized that this was no mere wisecrack, as Dagan launched into a vehement attack on the prime minister of Israel. Benjamin Netanyahu, Dagan claimed, was behaving irresponsibly and, for his own egotistical reasons, leading the country into disaster. “That someone is elected does not mean that he is smart” was one of his jibes.

This was the last day of Dagan’s term as the Mossad’s director. Netanyahu was showing him the door, and Dagan, whose life’s dream had been to hold the position of Israel’s top spy, was not going to stand by with folded arms. The acute crisis of confidence between the two men had flared up around two issues, and both of them were intimately connected to Meir Dagan’s weapon of choice: assassination.

“That someone is elected does not mean that he is smart” Meir Dagan

Eight years earlier, Ariel Sharon had appointed Dagan to the Mossad post and put him in charge of disrupting the Iranian nuclear weapons project, which both men saw as an existential threat to Israel. Dagan acted in a number of ways to fulfill this task. The most difficult way, but also the most effective, Dagan believed, was to identify Iran’s key nuclear and missile scientists, locate them, and kill them. The Mossad pinpointed fifteen such targets, of whom it eliminated six, mostly when they were on their way to work in the morning, by means of bombs with short time fuses, attached to their cars by a motorcyclist. In addition, a general of Iran’s Islamic Revolutionary Guard Corps, who was in charge of the missile project, was blown up in his headquarters together with seventeen of his men.

These operations and many others initiated by the Mossad, some in collaboration with the United States, were all successful, but Netanyahu and his defense minister, Ehud Barak, had begun to feel that their utility was declining. They decided that clandestine measures could no longer effectively delay the Iranian nuclear project, and that only a massive aerial bombardment of the Iranians’ nuclear facilities would successfully halt their progress toward acquiring such weapons.

Dagan strongly opposed this idea. Indeed, it flew in the face of everything he believed in: that open warfare should be waged only when “the sword is on our throat,” or as a last resort, in situations in which there was no other choice. Everything else could and should be handled through clandestine means.

“Assassinations,” he said, “have an effect on morale, as well as a practical effect. I don’t think there were many who could have replaced Napoleon, or a president like Roosevelt or a prime minister like Churchill. The personal aspect certainly plays a role. It’s true that anyone can be replaced, but there’s a difference between a replacement with guts and some lifeless character.”

Furthermore, the use of assassination, in Dagan’s view, “is a lot more moral” than waging all-out war. Neutralizing a few major figures is enough to make the latter option unnecessary and save the lives of untold numbers of soldiers and civilians on both sides. A large-scale attack against Iran would lead to a large-scale conflict across the Middle East, and even then it likely would not cause enough damage to the Iranian installations.

Finally, from Dagan’s point of view, if Israel started a war with Iran, it would be an indictment of his entire career. History books would show that he had not fulfilled the task that Sharon had given him: to put an end to Iranian nuclear acquisition using covert means, without recourse to an open assault.

Dagan’s opposition, and similar heavy pressure from the top military and intelligence chiefs, forced the repeated postponement of the attack on Iran. Dagan even briefed CIA Director Leon Panetta about the Israeli plan (the prime minister alleges he did so without permission), and soon President Obama was also warning Netanyahu not to attack.

The tension between the two men escalated even higher in 2010, seven years into Dagan’s tenure. Dagan had dispatched a hit team of twenty-seven Mossad operatives to Dubai to eliminate a senior official of the Palestinian terror group Hamas. They did the job: the assassins injected him with a paralyzing drug in his hotel room and made their getaway from the country before the body was discovered. But shortly after their departure, a series of gross errors caught up with them: they had failed to take into account Dubai’s innumerable CCTV cameras, had used the same phony passports to follow the target that they had previously used to enter Dubai, and had relied on a phone setup that the local police had no trouble cracking. Soon the whole world was watching video footage of their faces and a complete record of their movements. The discovery that this was a Mossad operation caused serious operational damage to the agency, as well as profound embarrassment to the State of Israel, which had once again been caught using fake passports of friendly Western countries for its agents. “But you told me it would be easy and simple, that the risk of things going wrong was close to zero,” Netanyahu fumed at Dagan, and he ordered him to suspend many of the pending assassination plans and other operations until further notice.

The confrontation between Dagan and Netanyahu became more and more acute until Netanyahu (according to his version) decided not to extend Dagan’s tenure, or (in Dagan’s words) “I simply got sick of him and I decided to retire.”

At that briefing in the Mossad Academy and in a number of later interviews for this book, Dagan displayed robust confidence that the Mossad, under his leadership, would have been able to stop the Iranians from making nuclear weapons by means of assassinations and other pinpoint measures, for instance by working with the United States to keep the Iranians from being able to import critical parts for their nuclear project that they could not manufacture themselves. “If we manage to prevent Iran from obtaining some of the components, this would seriously damage their project. In a car there are 25,000 parts on average. Imagine if one hundred of them are missing. It would be very hard to make it go.” “On the other hand,” Dagan added with a smile, returning to his favorite modus operandi, “sometimes it’s most effective to kill the driver, and that’s that.”

OF ALL THE MEANS that democracies use to protect their security, there is none more fraught and controversial than “killing the driver”: assassination.

Some, euphemistically, call it “liquidation.” The American intelligence community calls it, for legal reasons, “targeted killings.” In practice, these terms amount to the same thing: killing a specific individual in order to achieve a specific goal, whether saving the lives of the people the target intends to kill, averting a dangerous act that he is about to perpetrate, or, sometimes, removing a leader in order to change the course of history.

The use of assassinations by a state touches on two very difficult dilemmas. First, is it effective? Can the elimination of an individual, or a number of individuals, make the world a safer place? Second, is it morally and legally justified? Is it legitimate, both ethically and judicially, for a country to employ the gravest of all crimes in any code of ethics or law, the premeditated taking of a human life, in order to protect its own citizens?

This book deals mainly with the assassinations and targeted killings carried out by the Mossad and by other arms of the Israeli government, in both peacetime and wartime, as well as, in the early chapters, by the underground militias in the pre-state era, organizations that were to become the army and intelligence services of the state, once it was established.

Since World War II, Israel has assassinated more people than any other country in the Western world. On innumerable occasions, its leaders have weighed what would be the best way to defend its national security and, out of all the options, have time and again decided on clandestine operations, with assassination the method of choice. This, they believed, would solve difficult problems faced by the state, and sometimes change the course of history. In many cases, Israel’s leaders have even determined that in order to kill the designated target, it is moral and legal to endanger the lives of innocent civilians who may happen to find themselves in the line of fire. Harming such people, they believe, is a necessary evil.

The numbers speak for themselves. Up until the start of the Second Palestinian Intifada, in September 2000, when Israel first began to respond to suicide bombings with the daily use of armed drones to perform assassinations, the state had conducted some 500 targeted killing operations. In these, at least 1,000 people were killed, both civilians and combatants. During the Second Intifada, Israel carried out some 1,000 more operations, of which 168 succeeded. Since then, up until the writing of this book, Israel has executed some 800 targeted killing operations, almost all of which were part of the rounds of warfare against Hamas in the Gaza Strip in 2008, 2012, and 2014 or Mossad operations across the Middle East against Palestinian, Syrian, and Iranian targets. By contrast, during the presidency of George W. Bush, the United States of America carried out 48 targeted killing operations, according to one estimate, and under President Barack Obama there were 353 such attacks.

Israel’s reliance on assassination as a military tool did not happen by chance, but rather stems from the revolutionary and activist roots of the Zionist movement, from the trauma of the Holocaust, and from the sense among Israel’s leaders and citizens that the country and its people are perpetually in danger of annihilation and that, as in the Holocaust, no one will come to their aid when that happens.

Because of Israel’s tiny dimensions, the attempts by the Arab states to destroy it even before it was established, their continued threats to do so, and the perpetual menace of Arab terrorism, the country evolved a highly effective military and, arguably, the best intelligence community in the world. They, in turn, have developed the most robust, streamlined assassination machine in history.

The following pages will detail the secrets of that machine, the fruit of a mixed marriage between guerrilla warfare and the military might of a technological powerhouse: its operatives, leaders, methods, deliberations, successes, and failures, as well as the moral costs. They will illustrate how two separate legal systems have arisen in Israel, one for ordinary citizens and one for the intelligence community and defense establishment. The latter system has allowed, with a nod and a wink from the government, highly problematic acts of assassination, with no parliamentary or public scrutiny, resulting in the loss of many innocent lives.

On the other hand, the assassination weapon, based on intelligence that is “nothing less than exquisite,” to quote the former head of the NSA and the CIA, General Michael Hayden, is what made Israel’s war on terror the most effective ever waged by a Western country. On numerous occasions, it was targeted killing that saved Israel from very grave crises.

The Mossad and Israel’s other intelligence arms have done away with individuals who were identified as direct threats to national security, and killing them has also sent a bigger message: If you are an enemy of Israel, we will find and kill you, wherever you are. This message has indeed been heard around the world. Occasional blunders have only enhanced the Mossad’s aggressive and merciless reputation, which is not a bad thing when the goal of deterrence is as important as the goal of preempting specific hostile acts.

The assassinations were not all carried out by small, closed groups. The more complex they became, the more people took part, sometimes as many as hundreds, the majority of them below the age of twenty-five. Sometimes these young people come with their commanders to meet the prime minister, the only one authorized to green-light an assassination, in order to explain the operation and get final approval. Such forums, in which most of the participants advocating for someone’s death are under the age of thirty, are probably unique to Israel. Some of the low-ranking officers involved in these meetings have advanced over the years to become national leaders and even prime ministers themselves. What marks have remained imprinted on them from the times they took part in hit operations?

The United States has taken the intelligence gathering and assassination techniques developed in Israel as a model, and after 9/11 and President Bush’s decision to launch a campaign of targeted killings against Al Qaeda, it transplanted some of these methods into its own intelligence and war on terror systems. The command and control systems, the war rooms, the methods of information gathering, and the technology of the pilotless aircraft, or drones, that now serve the Americans and their allies were all in large part developed in Israel.

Nowadays, when the same kind of extrajudicial killing that Israel has used for decades is being used daily by America against its enemies, it is appropriate not only to admire the impressive operational capabilities that Israel has built, but also to study the high moral price that has been paid, and still is being paid, for the use of such power.

Chapter One

IN BLOOD AND FIRE

ON SEPTEMBER 29, 1944, David Shomron hid in the gloom of St. George Street, not far from the Romanian Church in Jerusalem. The church building was used as officers’ lodgings by the British authorities governing Palestine, and Shomron was waiting for one of those officers, a man named Tom Wilkin, to leave.

Wilkin was the commander of the Jewish unit at the Criminal Investigation Department (CID) of the British Mandate for Palestine, and he was very good at his job, especially the part that involved infiltrating and disrupting the fractious Jewish underground. Aggressive, yet also exceptionally patient and calculating, Wilkin spoke fluent Hebrew, and after thirteen years of service in Palestine, he had an extensive network of informants. Thanks to the intelligence they provided, underground fighters were arrested, their weapons caches were seized, and their planned operations, aimed at forcing the British to leave Palestine, were foiled.

Which was why Shomron was going to kill him.

Shomron and his partner that night, Yaakov Banai (code named Mazal, “Luck”), were operatives with Lehi, the most radical of the Zionist underground movements fighting the British in the early 1940s. Though Lehi was the acronym for the Hebrew phrase “fighters for the freedom of Israel,” the British considered it a terrorist organization, referring to it dismissively as the Stern Gang, after its founder, the romantic ultra-nationalist Avraham Stern. Stern and his tiny band of followers employed a targeted mayhem of assassinations and bombings, a campaign of “personal terror,” as Lehi’s operations chief (and later Israeli prime minister), Yitzhak Shamir, called it.

Wilkin knew he was a target. Lehi had already tried to kill him and his boss, Geoffrey Morton, nearly three years earlier, in its first, clumsy operation. On January 20, 1942, assassins planted bombs on the roof and inside the building at 8 Yael Street, in Tel Aviv. Instead, they ended up killing three police officers, two Jews and an Englishman, who arrived before Wilkin and Morton and tripped the charges. Later, Morton fled Palestine after being wounded in another attempt on his life, that one in retribution for Morton having shot Stern dead.

None of those details, the back-and-forth of who killed whom and in what order, mattered to Shomron. What mattered was that the British occupied the land the Zionists saw as rightfully theirs, and that Shamir had issued a death sentence against Wilkin.

For Shomron and his comrades, Wilkin was not a person but rather a target, prominent and high value. “We were too busy and hungry to think about the British and their families,” Shomron said decades later.

After discovering that Wilkin was residing in the Romanian Church annex, the assassins set out on their mission. Shomron and Banai had revolvers and hand grenades in their pockets. Additional Lehi operatives were in the vicinity, smartly dressed in suits and hats to look like Englishmen.

Wilkin left the officers’ lodgings in the church and headed for the CID’s facility in the Russian Compound, where underground suspects were held and interrogated. As always, he was wary, scanning the street as he walked and keeping one hand in his pocket all the time. As he passed the corner of St. George and Mea Shearim Streets, a youngster sitting outside the neighborhood grocery store got up and dropped his hat. This was the signal, and the two assassins began walking toward Wilkin, identifying him according to the photographs they’d studied. Shomron and Banai let him pass, gripping their revolvers with sweating palms.

Then they turned around and drew.

“Before we did it, Mazal [Banai] said, ‘Let me shoot first,’” Shomron recalled. “But when we saw him, I guess I couldn’t restrain myself. I shot first.”

Between them, Banai and Shomron fired fourteen times. Eleven of those bullets hit Wilkin. “He managed to turn around and draw his pistol,” Shomron said, “but then he fell face first. A spurt of blood came out of his forehead, like a fountain. It was not such a pretty picture.”

Shomron and Banai darted back into the shadows and made off in a taxi in which another Lehi man was waiting for them.

“The only thing that hurt me was that we forgot to take the briefcase in which he had all his documents,” Shomron said. Other than that, “I didn’t feel anything, not even a little twinge of guilt. We believed the more coffins that reached London, the closer the day of freedom would be.”

THE IDEA THAT THE return of the People of Israel to the Land of Israel could be achieved only by force was not born with Stern and his Lehi comrades.

The roots of that strategy can be traced to eight men who gathered in a stifling one-room apartment overlooking an orange grove in Jaffa on September 29, 1907, exactly thirty-seven years before a fountain of blood spurted from Wilkin’s head, when Palestine was still part of the Turkish Ottoman Empire. The flat was rented by Yitzhak Ben-Zvi, a young Russian who’d immigrated to Ottoman Palestine earlier that year. Like the others in his apartment that night, all emigrants from the Russian Empire, sitting on a straw mat spread on the floor of the candlelit room, he was a committed Zionist, albeit part of a splinter sect that had once threatened to end the movement.

Zionism as a political ideology had been founded in 1896 when Viennese Jewish journalist Theodor Herzl published Der Judenstaat (The Jewish State). He had been deeply affected while covering the trial in Paris of Alfred Dreyfus, a Jewish army officer unjustly accused and convicted of treason.

In his book, Herzl argued that anti-Semitism was so deeply ingrained in European culture that the Jewish people could achieve true freedom and safety only in a nation-state of their own. The Jewish elite of Western Europe, who’d managed to carve out comfortable lives for themselves, mostly rejected Herzl. But his ideas resonated with the poor and working-class Jews of Eastern Europe, who suffered repeated pogroms and continual oppression, to which some responded by aligning themselves with leftist uprisings.

Herzl himself saw Palestine, the Jews’ ancestral homeland, as the ideal location for a future Jewish state, but he maintained that any settlement there would have to be handled deliberately and delicately, through proper diplomatic channels and with international sanction, if a Jewish nation was to survive in peace. Herzl’s view came to be known as political Zionism.

Ben-Zvi and his seven comrades, on the other hand, were, like most other Russian Jews, practical Zionists. Rather than wait for the rest of the world to give them a home, they believed in creating one themselves, in going to Palestine, working the land, making the desert bloom. They would take what they believed to be rightfully theirs, and they would defend what they had taken.

This put the practical Zionists in immediate conflict with most of the Jews already living in Palestine. As a tiny minority in an Arab land, many of them peddlers and religious scholars and functionaries under the Ottoman regime, they preferred to keep a low profile. Through subservience and compromise and bribery, these established Palestinian Jews had managed to buy themselves relative peace and a measure of security.

But Ben-Zvi and the other newcomers were appalled at the conditions their fellow Jews tolerated. Many were living in abject poverty and had no means of defending themselves, utterly at the mercy of the Arab majority and the venal officials of the corrupt Ottoman Empire. Arab mobs attacked and plundered Jewish settlements, rarely with any consequences. Worse, as Ben-Zvi and the others saw it, those same settlements had consigned their defense to Arab guards, who in turn would sometimes collaborate with attacking mobs.

Ben-Zvi and his friends found this situation to be unsustainable and intolerable. Some were former members of Russian left-wing revolutionary movements inspired by the People’s Will (Narodnaya Volya), an aggressive anti-tsarist guerrilla movement that employed terrorist tactics, including assassinations.

Disappointed by the abortive 1905 revolution in Russia, which in the end produced only minimal constitutional reforms, some of these socialist revolutionaries, social democrats, and liberals moved to Ottoman Palestine to reestablish a Jewish state.

They all were desperately poor, barely scraping by, earning pennies at teaching jobs or manual labor in the fields and orange groves, often going hungry. But they were proud Zionists. If they were going to create a nation, they first had to defend themselves. So they slipped through the streets of Jaffa in pairs and alone, making their way to the secret meeting in Ben-Zvi’s apartment.

That night, those eight people formed the first Hebrew fighting force of the modern age. They decreed that, from then forward, everything would be different from the image of the weak and persecuted Jew all across the globe. Only Jews would defend Jews in Palestine.

They named their fledgling army Bar-Giora, after one of the leaders of the Great Jewish Revolt against the Roman Empire, in the first century. On their banner, they paid homage to that ancient rebellion and predicted their future. “In blood and fire Judea fell,” it read. “In blood and fire Judea will rise.”

Judea would indeed rise. Ben-Zvi would one day be the Jewish nation’s second president. Yet first there would be much fire, and much blood.

BAR-GIORA WAS NOT, AT first, a popular movement. But more Jews arrived in Palestine from Russia and Eastern Europe every year, 35,000 between 1905 and 1914, bringing with them that same determined philosophy of practical Zionism.

With more like-minded Jews flooding into the Yishuv, as the Jewish community in Palestine was called, Bar-Giora in 1909 was reconstituted into the larger and more aggressive Hashomer (Hebrew for “the Guard”). By 1912, Hashomer was defending fourteen settlements. Yet it was also developing offensive, albeit clandestine, capabilities, preparing for what practical Zionists saw as an inevitable eventual war to take control of Palestine. Hashomer therefore saw itself as the nucleus for a future Jewish army and intelligence service.

Mounted on their horses, Hashomer vigilantes raided a few Arab settlements to punish residents who had harmed Jews, sometimes beating them up, sometimes executing them. In one case, a special clandestine assembly of Hashomer members decided to eliminate a Bedouin policeman, Aref al-Arsan, who had assisted the Turks and tortured Jewish prisoners. He was shot dead by Hashomer in June 1916.

Hashomer did not recoil from using force to assert its authority over other Jews, either. During World War I, Hashomer was violently opposed to NILI, a Jewish spy network working for the British in Ottoman Palestine. Hashomer feared that the Turks would discover the spies and wreak vengeance against the entire Jewish community. When they failed to get NILI to cease operations or to hand over a stash of gold coins they’d received from the British, they made an attempt on the life of Yosef Lishansky, one of its members, managing only to wound him.

In 1920, Hashomer evolved again, now into the Haganah (Hebrew for “Defense”). Though not strictly legal, the Haganah was tolerated by the British authorities, who had been ruling the country for about three years, as the paramilitary defensive arm of the Yishuv. The Histadrut, the socialist labor union of the Jews in Palestine, founded in the same year, and the Jewish Agency, the Yishuv’s autonomous governing authority, established a few years later, both headed by David Ben-Gurion, maintained command over the secret organization.

David Ben-Gurion

Ben-Gurion was born David Yosef Grün in Płońsk, Poland, in 1886. From an early age, he followed in his father’s footsteps as a Zionist activist. In 1906, he migrated to Palestine and, thanks to his charisma and determination, soon became one of the leaders of the Yishuv, despite his youth. He then changed his name to Ben-Gurion, after another of the leaders of the revolt against the Romans.

Haganah in its early years was influenced by the spirit and aggressive attitude of Hashomer. On May 1, 1921, an Arab mob massacred fourteen Jews in an immigrants’ hostel in Jaffa. After learning that an Arab police officer by the name of Tewfik Bey had helped the mob get into the hostel, Haganah sent a hit squad to dispose of him, and on January 17, 1923, he was shot dead in the middle of a Tel Aviv street. “As a matter of honor,” he was shot from the front and not in the back, according to one of those involved, and the intention was “to show the Arabs that their deeds are not forgotten and their day will come, even if belatedly.”

The members of Hashomer who led the Haganah at the outset were even willing to commit acts of violence against fellow Jews. Jacob de Haan was a Dutch-born Haredi, an ultra-Orthodox Jew, living in Jerusalem in the early 1920s. He was a propagandist for the Haredi belief that only the Messiah could establish a Jewish state, that God alone would decide when to return the Jews to their ancestral homeland, and that humans trying to expedite the process were committing a grave sin. In other words, de Haan was a staunch anti-Zionist, and he was surprisingly adept at swaying international opinion. To Yitzhak Ben-Zvi, by now a prominent Haganah leader, that made de Haan dangerous. So he ordered his death.

On June 30, 1924, just a day before de Haan was to travel to London to ask the British government to reconsider its promise to establish a Jewish nation in Palestine, two assassins shot him three times as he emerged from a synagogue on Jaffa Road in Jerusalem.

Ben-Gurion, however, took a dim view of such acts. He realized that in order to win even partial recognition from the British for Zionist aims, he would have to enforce orderly and more moderate norms on the semi-underground militia under his command. Hashomer’s brave and lethal lone riders were replaced after the de Haan murder by an organized, hierarchical armed force. Ben-Gurion ordered Haganah to desist from using targeted killings. “As to personal terror, Ben-Gurion’s line was consistently and steadily against it,” Haganah commander Yisrael Galili testified later, and he recounted a number of instances in which Ben-Gurion had refused to approve proposals for hits against individual Arabs. These included the Palestinian leader Hajj Amin al-Husseini and other members of the Arab Higher Committee, and British personnel, such as a senior official in the Mandate’s lands authority who was obstructing Jewish settlement projects.

Not everyone was eager to acquiesce to Ben-Gurion. Avraham Tehomi, the man who shot de Haan, despised the moderate line Ben-Gurion took against the British and the Arabs, and, together with some other leading figures, he quit Haganah and in 1931 formed the Irgun Zvai Leumi, the “National Military Organization” whose Hebrew acronym is Etzel, usually referred to in English as IZL or the Irgun. This radical right-wing group was commanded in the 1940s by Menachem Begin, who in 1977 was to become prime minister of Israel. Inside the Irgun, too, there were clashes, personal and ideological. Opponents of Begin’s agreement to cooperate with Britain in its war against the Nazis broke away and formed Lehi. For these men, any cooperation with Britain was anathema.

These two dissident groups both advocated, to different degrees, the use of targeted killings against the Arab and British enemy, and against Jews they considered dangerous to their cause. Ben-Gurion remained adamant that targeted killings would not be used as a weapon and even took aggressive measures against those who did not obey his orders.

But then World War II ended, and everything, even the views of the obstinate Ben-Gurion, changed.

DURING WORLD WAR II, some 38,000 Jews from Palestine volunteered to help and serve in the British Army in Europe. The British formed the so-called Jewish Brigade, albeit somewhat reluctantly and only after being pressured by the Yishuv’s civilian leadership.

Unsure exactly what to do with the Brigade, the British first sent it to train in Egypt. It was there, in mid-1944, that its members first heard of the Nazi campaign of Jewish annihilation. When they were finally sent to Europe to fight in Italy and Austria, they witnessed the horrors of the Holocaust firsthand and were among the first to send detailed reports to Ben-Gurion and other leaders of the Yishuv.

One of those soldiers was Mordechai Gichon, who later would be one of the founders of Israeli military intelligence. Born in Berlin in 1922, Gichon had a father who was Russian and a mother who was the scion of a famous German-Jewish family, niece of Rabbi Leo Baeck, a leader of Germany’s Liberal (Reform) Jews. Gichon’s family moved to Palestine in 1933, after Mordechai had been required in his German school to give the Nazi salute and sing the party anthem.

He returned as a soldier to a Europe in ruins, his people nearly destroyed, their communities smoldering ruins. “The Jewish people had been humiliated, trampled, murdered,” he said. “Now was the time to strike back, to take revenge. In my dreams, when I enlisted, revenge took the form of me arresting my best friend from Germany, whose name was Detlef, the son of a police major. That’s how I would restore lost Jewish honor.”

It was that sense of lost honor, of a people’s humiliation, as much as rage at the Nazis, that drove men like Gichon. He first met the Jewish refugees on the border between Austria and Italy. The men of the Brigade fed them, took off their own uniforms to clothe them against the cold, tried to draw out of them details of the atrocities they had undergone. He remembers an encounter in June 1945 in which a female refugee came up to him.

“She broke away from her group and spoke to me in German,” he said. “She said, ‘You, the soldiers of the Brigade, are the sons of Bar Kokhba,’” the great hero of the Second Jewish Revolt against the Romans, in A.D. 132-135. “She said, ‘I will always remember your insignia and what you did for us.’”

Gichon was flattered by the Bar Kokhba analogy, but in response to her praise and gratitude he felt only pity and shame. If the Jews in the Brigade were the sons of Bar Kokhba, who were these Jews? The soldiers from the Land of Israel, standing erect, tough, and strong, saw the Holocaust survivors as victims who needed help, but also as part of the European Jewry who had allowed themselves to be massacred. They embodied the cowardly, feeble stereotype of the Jews of the Diaspora (the Exile, in traditional Jewish and Zionist parlance), who surrendered rather than fought back, who did not know how to shoot or wield a weapon. It was that image, in its most extreme version the Jew as a Muselmann (prisoners’ slang for the emaciated, zombie-like inmates hovering near death in the Nazi camps), that the new Jews of the Yishuv rejected.

“My brain could not grasp, not then and not today, how it could have been that there were tens of thousands of Jews in a camp with only a few German guards, but they did not rise up, they simply went like lambs to the slaughter,” Gichon said more than sixty years later. “Why didn’t they tear the Germans to shreds? I’ve always said that no such thing could happen in the Land of Israel. Had those communities had leaders worthy of the name, the entire business would have looked completely different.”

In the years following the war, the Zionists of the Yishuv would prove, both to the world and, more important, to themselves, that Jews would never again go to such slaughter, and that Jewish blood would not come cheaply.

The six million would be avenged.

“We thought we could not rest until we had exacted blood for blood, death for death,” said Hanoch Bartov, a highly regarded Israeli novelist who enlisted in the Brigade a month before his seventeenth birthday.

Such vengeance, though, atrocity for atrocity, would violate the rules of war and likely be disastrous for the Zionist cause. Ben-Gurion, practical as always, publicly said as much: “Revenge now is an act of no national value. It cannot restore life to the millions who were murdered.”

Still, the Haganah’s leaders privately understood the need for some sort of retribution, both to satisfy the troops who had been exposed to the atrocities and also to achieve some degree of historical justice and deter future attempts to slaughter Jews. Thus, they sanctioned some types of reprisals against the Nazis and their accomplices. Immediately after the war, a secret unit, authorized and controlled by the Haganah high command and unknown to the British commanders, was set up within the Brigade. It was called Gmul, Hebrew for “Recompense.” The unit’s mission was “revenge, but not a robber’s revenge,” as a secret memo at the time put it. “Revenge against those SS men who themselves took part in the slaughter.”

“We looked for big fish,” Mordechai Gichon said, breaking a vow of silence among the Gmul commanders that he’d kept for more than sixty years. “The senior Nazis who had managed to shed their uniforms and return to their homes.”

The Gmul agents worked undercover even as they performed their regular Brigade duties. Gichon himself assumed two fake identities, one as a German civilian, the other as a British major, as he hunted Nazis. In expeditions under his German cover, Gichon recovered the Gestapo archives in Tarvisio, Villach, and Klagenfurt, to which fleeing Nazis had set fire but only a small part of which actually burned. Operating as the British major, he gleaned more names from Yugoslavian Communists who were still afraid to carry out revenge attacks themselves. A few Jews in American intelligence also were willing to help by handing over information they had on escaped Nazis, which they thought the Palestinian Jews would use to better effect than the American military.

Coercion worked, too. In June 1945, Gmul agents found a Polish-born German couple who lived in Tarvisio. The wife had been involved in transferring stolen Jewish property from Austria and Italy to Germany, and her husband had helped run the regional Gestapo office. The Palestinian Jewish soldiers offered them a stark choice: cooperate or die.

“The guy broke and said he was willing to cooperate,” said Yisrael Karmi, who interrogated the couple and later, after Israel was born, would become the commander of the Israeli Army’s military police. “I assigned him to prepare lists of all the senior officials that he knew and who had worked with the Gestapo and the SS. Name, date of birth, education, and positions.”

The result was a dramatic intelligence breakthrough, a list of dozens of names. Gmul’s men tracked down each missing Nazi, finding some wounded in a local hospital, where they were being treated under stolen aliases, and then pressured those men to provide more information. They promised each German he would not be harmed if he cooperated, so most did. Then, when they were no longer useful, Gmul agents shot them and dumped the bodies. There was no sense in leaving them alive to tip off the British command to Gmul’s clandestine mission.

Once a particular name had been verified, the second phase began: locating the target and gathering information for the final killing mission.

Gichon, who’d been born in Germany, often was assigned that job. “No one suspected me,” he said. “My vocal cords were of Berlin stock. I’d go to the corner grocery store or pub or even just knock on a door to convey greetings from someone. Most of the time, the people would respond to their real names, or recoil into vague silence, which was as good as a confirmation.” Once the identification was confirmed, Gichon would track the German’s movements and provide a detailed sketch of the house where he lived or the area that had been chosen for the abduction.

The killers themselves worked in teams of no more than five men. When meeting their target, they generally wore British military police uniforms, and they typically told their target they had come to take a man named so-and-so for interrogation. Most of the time, the German came without objection. As one of the unit’s soldiers, Shalom Giladi, related in his testimony to the Haganah Archive, the Nazi was sometimes killed instantly, and other times transported to some remote spot before being killed. “In time we developed quiet, rapid, and efficient methods of taking care of the SS men who fell into our hands,” he said.

“As anyone who has ever gotten into a pickup truck knows, a person hoisting himself up into one braces his foot on the rear running board, leans forward under the canvas canopy, and sort of rolls in. The man lying in wait inside the truck would take advantage of this natural tilt of the body.

The minute the German’s head protruded into the gloom, the ambusher would bend over him and wrap his arms under his chin, around his throat, in a kind of reverse choke hold, and, carrying that into a throttle embrace, the ambusher would fall back flat on the mattress, which absorbed every sound. The backward fall, while gripping the German’s head, would suffocate the German and break his neck instantly.

One day, a female SS officer escaped from an English detention camp next to our base. After the British discovered that the officer had escaped, they sent out photographs of her taken during her imprisonment, front and side view, to all the military police stations. We went through the refugee camp and identified her. When we addressed her in German, she played the fool and said she only knew Hungarian. That wasn’t a problem. A Hungarian kid went up to her and said: “A ship carrying illegal immigrants from Hungary is about to sail for Palestine. Pack up your belongings quietly and come with us.” She had no choice but to take the bait and went with us in the truck. During this operation, I sat with Zaro [Meir Zorea, later an IDF general] in the back while Karmi drove. The order Karmi gave us was: “When I get some distance to a suitable deserted place, I’ll honk the horn. That will be the sign to get rid of her.”

That’s what happened. Her last scream in German was: “Was ist los?” (“What’s going on?”). To make sure she was dead, Karmi shot her and we gave her body and the surroundings the appearance of a violent rape.

In most cases we brought the Nazis to a small line of fortifications in the mountains. There were fortified caves there, abandoned. Most of those facing their executions would lose their Nazi arrogance when they heard that we were Jews. “Have mercy on my wife and children!” We would ask him how many such screams the Nazis had heard in the extermination camps from their Jewish victims.”

The operation lasted only three months, from May to July, during which time Gmul killed somewhere between one hundred and two hundred people. Several historians who’ve researched Gmul’s operations maintain that the methods used to identify targets were insufficient, and that many innocents were killed. On many occasions, those critics argue, Gmul teams were exploited by their sources to carry out personal vendettas; in other cases, operatives simply identified the wrong person.

Gmul was closed down when the British, who’d heard complaints about disappearances from German families, grasped what was going on. They decided not to investigate further, but to transfer the Jewish Brigade to Belgium and the Netherlands, away from the Germans, and Haganah command issued a firm order to cease revenge operations. The Brigade’s new priorities, according to the Haganah, not the British, were to look after Holocaust survivors, to help organize the immigration of refugees to Palestine in the face of British opposition, and to appropriate weapons for the Yishuv.

YET, THOUGH THEY ORDERED Gmul to stop killing Germans in Europe, the Haganah’s leaders did not forsake retribution. The vengeance that had been halted in Europe, they decided, would be carried on in Palestine itself.

Members of the German Tempelgesellschaft (the Templer sect) had been expelled from Palestine by the British at the beginning of the war because of their nationality and Nazi sympathies. Many joined the German war effort and took an active part in the persecution and annihilation of the Jews. When the war ended, some of them returned to their former homes, in Sarona, in the heart of Tel Aviv, and other locations.

The leader of the Templers in Palestine was a man named Gotthilf Wagner, a wealthy industrialist who assisted the Wehrmacht and the Gestapo during the war. A Holocaust survivor by the name of Shalom Friedman, who was posing as a Hungarian priest, related that in 1944 he met Wagner, who “boasted that he was at Auschwitz and Buchenwald twice. When he was in Auschwitz, they brought out a large group of Jews, the youngest ones, and poured flammable liquid over them. ‘I asked them if they knew there was a hell on earth, and when they ignited them I told them that this was the fate awaiting their brethren in Palestine.’” After the war, Wagner organized the attempts to allow the Templers to return to Palestine.

Rafi Eitan, the son of Jewish pioneers from Russia, was seventeen at the time. “Here come exultant Germans, who had been members of the Nazi Party, who enlisted to the Wehrmacht and SS, and they want to return to their property when all the Jewish property outside was destroyed,” he said.

Eitan was a member of a seventeen-man force from the Haganah’s “special company” sent to liquidate Wagner, under a direct order from the Haganah high command. The Haganah chief of staff, Yitzhak Sadeh, realized that this was not a regular military operation and summoned the two men who had been selected to squeeze the trigger. To encourage them, he told them about a man he had shot with his pistol in Russia as revenge for a pogrom.

On March 22, 1946, after painstaking intelligence gathering, the hit squad lay in wait for Wagner in Tel Aviv. They forced him off the road onto a sandy lot at 123 Levinsky Street and shot him. Haganah’s underground radio station, Kol Yisrael (the Voice of Israel), announced the following day, “The well-known Nazi Gotthilf Wagner, head of the German community in Palestine, was executed yesterday by the Hebrew underground. Let it be known that no Nazi will place a foot on the soil of the Land of Israel.”

Shortly thereafter, Haganah assassinated two other Templers in the Galilee and two more in Haifa, where the sect had also established communities.

“It had an immediate effect,” Eitan said. “The Templers disappeared from the country, leaving everything behind, and were never seen again.” The Templers’ neighborhood in Tel Aviv, Sarona, would become the headquarters of Israel’s armed forces and intelligence services. And Eitan, an assassin at seventeen, would help found the Mossad’s targeted killing unit.

The killing of the Templers was not merely a continuation of the acts of revenge against the Nazis in Europe, but signified a major change in policy. The lessons that the new Jews of Palestine learned from the Holocaust were that the Jewish people would always be under the threat of destruction, that others could not be relied upon to protect the Jews, and that the only way to do so was to have an independent state.

A people living with this sense of perpetual danger of annihilation is going to take any and all measures, however extreme, to obtain security, and will relate to international laws and norms in a marginal manner, if at all.

From now on, Ben-Gurion and the Haganah would adopt targeted killings, guerrilla warfare, and terrorist attacks as additional tools, above and beyond the propaganda and political measures that had always been used, in the effort to achieve the goal of a state and to preserve it. What had only a few years before been a means used only by the outcast extremists of Lehi and the Irgun was now seen by the mainstream as a viable weapon.

At first, Haganah units began assassinating Arabs who had murdered Jewish civilians. Then the militia’s high command ordered a “special company” to begin “personal terror operations,” a term used at the time for the targeted killings of officers of the British CID who had persecuted the Jewish underground and acted against the Jewish immigration to the Land of Israel. They were ordered to “blow up British intelligence centers that acted against Jewish acquisition of weapons” and “to take retaliatory action in cases where British military courts sentence Haganah members to death.”

Ben-Gurion foresaw that a Jewish state would soon be established in Palestine and that the new nation would immediately be forced to fight a war against Arabs in Palestine and repel invasions by the armies of neighboring Arab states.

The Haganah command thus also began secretly preparing for this all-out war, and as part of the preparations an order code-named Zarzir (Starling) was issued, providing for the assassination of the heads of the Arab population of Palestine.

WHILE THE HAGANAH SLOWLY stepped up the use of targeted killings, the radical undergrounds had their killing campaign in full motion, trying to push the British out of Palestine.

Yitzhak Shamir, now in command of Lehi, resolved to eliminate not only key figures of the British Mandate locally, killing CID personnel and making numerous attempts to do the same to the Jerusalem police chief, Michael Joseph McConnell, and the high commissioner, Sir Harold MacMichael, but also Englishmen in other countries who posed a threat to his political objective. Walter Edward Guinness, more formally known as Lord Moyne, for example, was the British resident minister of state in Cairo, which was also under British rule. The Jews in Palestine considered Moyne a flagrant anti-Semite who had assiduously used his position to restrict the Yishuv’s power by significantly reducing immigration quotas for Holocaust survivors.

Shamir ordered Moyne killed. He sent two Lehi operatives, Eliyahu Hakim and Eliyahu Bet-Zuri, to Cairo, where they waited at the door to Moyne’s house. When Moyne pulled up, his secretary in the car with him, Hakim and Bet-Zuri sprinted to the car. One of them shoved a pistol through the window, aimed it at Moyne’s head, and fired three times. Moyne gripped his throat. “Oh, they’ve shot us!” he cried, and then slumped forward in his seat. Still, it was an amateurish operation. Shamir had counseled his young killers to arrange to escape in a car, but instead they fled on slow-moving bicycles. Egyptian police quickly apprehended them, and Hakim and Bet-Zuri were tried, convicted, and, six months later, hanged.

The assassination had a decisive effect on British officials, though not the one Shamir had envisioned. As Israel would learn repeatedly in future years, it is very hard to predict how history will proceed after someone is shot in the head.

After the unmitigated evil of the Holocaust, the attempted extermination of an entire people in Europe, there was growing sympathy in the West for the Zionist cause.

According to some accounts, up until the first week of November 1944, Britain’s prime minister, Winston Churchill, had been pushing his cabinet to support the creation of a Jewish state in Palestine. He rallied several influential figures to back the initiative, including Lord Moyne. It is not a stretch to assume, then, that Churchill might well have arrived at the Yalta summit with Franklin Roosevelt and Joseph Stalin with a clear, positive policy regarding the future of a Jewish state, had Lehi not intervened. Instead, after the Cairo killing, Churchill labeled the attackers “a new group of gangsters” and announced that he was reconsidering his position.

And the killing continued. On July 22, 1946, members of Menachem Begin’s Irgun planted 350 kilograms of explosives in the south wing of the King David Hotel, in Jerusalem, where the British Mandate’s administration and army and intelligence offices were housed. A warning call from the Irgun apparently was dismissed as a hoax; the building was not evacuated before a massive explosion ripped through it. Ninety-one people were killed, and forty-five wounded.

This was not the targeted killing of a despised British official or a guerrilla attack on a police station. Instead, it was plainly an act of terror, aimed at a target with numerous civilians inside. Most damningly, many Jews were among the casualties.

The King David Hotel bombing sparked a fierce dispute in the Yishuv. Ben-Gurion immediately denounced the Irgun and called it “an enemy of the Jewish people.”

But the extremists were not deterred.

Three months after the King David attack, on October 31, a Lehi cell, again acting on its own, without Ben-Gurion’s approval or knowledge, bombed the British embassy in Rome. The embassy building was severely damaged, but because the operation took place at night, only a security guard and two Italian pedestrians were injured.

Almost immediately after that, Lehi mailed letter bombs to every senior British cabinet member in London. On one level, this effort was a spectacular failure: not a single letter exploded. But on another, Lehi had made its point, and its reach, clear. The files of MI5, Britain’s security service, showed that Zionist terrorism was considered the most serious threat to British national security at the time, even more serious than that posed by the Soviet Union. Irgun cells in Britain were established, according to one MI5 memo, “to beat the dog in its own kennel.” British intelligence sources warned of a wave of attacks on “selected VIPs,” among them Foreign Secretary Ernest Bevin and even Prime Minister Clement Attlee himself. At the end of 1947, a report to the British high commissioner tallied the casualties of the previous two years: 176 British Mandate personnel and civilians killed.

“Only these actions, these executions, caused the British to leave,” David Shomron said, decades after he shot Tom Wilkin dead on a Jerusalem street. “If Avraham Stern had not begun the war, the State of Israel would not have come into being.”

Avraham Stern, leader and founder of Lehi

One may argue with these statements. For economic reasons, and in the face of growing demands for independence from native populations, the shrinking British Empire ceded control of the majority of its colonies, including many countries where terror tactics had not been employed. India, for instance, gained its independence at around the same time.

Nevertheless, Shomron and his ilk were firmly convinced that their own bravery and their extreme methods had brought about the departure of the British.

And it was the men who fought that bloody underground war, guerrillas, assassins, terrorists, who would play a central role in the building of the new state of Israel’s armed forces and intelligence community.

Chapter Two

A SECRET WORLD IS BORN

ON NOVEMBER 29, 1947, the United Nations General Assembly voted to divide Palestine, carving out a sovereign Jewish homeland. The partition wouldn’t go into effect until six months later, but Arab attacks began the very next day. Hassan Salameh, the commander of the Palestinian forces in the southern part of the country, and his fighters ambushed two Israeli buses near the town of Petah-Tikva, murdering eight passengers and injuring many others. Civil war between Palestinian Arabs and Jews had begun. The day after the bus attacks, Salameh stood in the central square of the Arab port city of Jaffa. “Palestine will turn into a bloodbath,” he promised his countrymen. He kept that promise: During the next two weeks, 48 Jews were killed and 155 wounded.

Salameh, who led a force of five hundred guerrillas and even directly attacked Tel Aviv, became a hero in the Arab world, lionized in the press. The Egyptian magazine Al-Musawar published an enormous photograph of Salameh briefing his forces in its January 12, 1948, issue, under the banner headline THE HERO HASSAN SALAMEH, COMMANDER OF THE SOUTHERN FRONT.

Ben-Gurion had prepared for such assaults. To his thinking, Palestine’s Arabs were the enemy, and the British, who would continue to rule until the partition took formal effect in May 1948, were their abettors. The Jews could depend only on themselves and their rudimentary defenses. Most of the Haganah troops were poorly trained and poorly equipped, their arms hidden in secret caches to avoid confiscation by the British. They were men and women who had served in the British Army, bolstered by new immigrants who had survived the Holocaust (some of them Red Army veterans), but they were vastly outnumbered by the combined forces of the Arab states. Ben-Gurion was aware of the estimations of the CIA and other intelligence services that the Jews would collapse under Arab attack. Some of his own people weren’t confident of victory. But Ben-Gurion, at least outwardly, displayed confidence in the Haganah’s ability to win.

To bridge the numerical gap, the Haganah’s plan, then, was to use selective force, picking targets for maximum effectiveness. As part of this conception, a month into the civil war, its high command launched Operation Starling, which named twenty-three leaders of the Palestinian Arabs who were to be targeted.

The mission, according to Haganah’s commander in chief, Yaakov Dori, was threefold: “Elimination or capture of the leaders of the Arab political parties; strikes against political centers; strikes against Arab economic and manufacturing centers.”

Hassan Salameh was at the top of the list of targets. Under the leadership of Hajj Amin al-Husseini, the grand mufti of Jerusalem and spiritual leader of the Palestinian Arabs, Salameh had helped lead the Arab Revolt of 1936, in which Arab guerrillas for three years attacked both British and Jewish targets.

Both al-Husseini and Salameh fled Palestine after they were put on the British Mandate’s most wanted list. In 1942, they joined forces with the SS and the Abwehr, the Nazis’ military intelligence agency, to plot Operation Atlas. It was a grandiose plan in which German and Arab commandos would parachute into Palestine and poison Tel Aviv’s water supply in order to kill as many Jews as possible, rousing the country’s Arabs to fight a holy war against the British occupiers. It failed miserably when the British, having cracked the Nazis’ Enigma code, captured Salameh and four others after they dropped into a desert ravine near Jericho on October 6, 1944.

After World War II, the British released al-Husseini and Salameh. The Jewish Agency’s Political Department, which oversaw much of the Yishuv’s covert activity in Europe, tried several times between 1945 and 1948 to locate al-Husseini and kill him. The motive was partly revenge for the mufti’s alliance with Hitler, but it was also defensive: Al-Husseini might have been out of the country, but he was still actively involved in organizing attacks on Jewish settlements in northern Palestine and in attempts to assassinate Jewish leaders. Due to a lack of intelligence and trained operational personnel, all those attempts failed.

The hunt for Salameh, the first Haganah operation to integrate human and electronic intelligence, began promisingly. A unit belonging to SHAI, the Haganah’s intelligence branch, and commanded by Isser Harel, tapped into the central telephone trunk line that connected Jaffa with the rest of the country. Harel had a toolshed built on the grounds of the nearby Mikveh Israel agricultural school and filled it with pruning shears and lawn mowers. But hidden in a pit under the floor was a listening device clipped to the copper wires of Jaffa’s phone system. “I’ll never forget the face of the Arabic-speaking SHAI operative who put on a set of headphones and listened to the first conversation,” Harel later wrote in his memoir. “His mouth gaped in astonishment and he waved his hand emotionally to silence the others who were tensely waiting. The lines were bursting with conversations that political leaders and the chiefs of armed contingents were conducting with their colleagues.” One of the speakers was Salameh. In one of the intercepted calls, SHAI learned he would be traveling to Jaffa. Haganah agents planned to ambush him by felling a tree to block the road on which his car would be traveling.

But the ambush failed, and it was not the last failure. Salameh survived multiple assassination attempts before falling in combat in June 1948, his killer unaware of his identity. Almost all of the other Operation Starling targeted killing bids also failed, because of faulty intelligence or flawed performances by the unskilled and inexperienced hit men.

Isser Harel

THE ONLY OPERATIONS THAT did succeed were all carried out by two of the Haganah’s elite units, both of which belonged to the Palmach, the militia’s only well-trained and fairly well-armed corps. One of these units was the Palyam, the “marine company,” and the other was “the Arab Platoon,” a clandestine commando unit whose members operated disguised as Arabs.

Palyam, the naval company, was ordered to take over the port in Haifa, Palestine’s most important maritime gateway, as soon as the British departed. Its task was to steal as much as possible of the weaponry and equipment the British were beginning to ship out, and to prevent the Arabs from doing likewise.

“We focused on the Arab arms acquirers in Haifa and the north. We searched for them and killed them,” recalled Avraham Dar, one of the Palyam men.

Dar, who was a native English speaker, and two other Palyam men posed as British soldiers wanting to sell stolen gear to the Palestinians for a large amount of cash. A rendezvous was set up for the exchange near an abandoned flour mill on the outskirts of an Arab village. The three Jews, wearing British uniforms, were at the meeting place when the Palestinians arrived. Four others who were hiding nearby waited for the signal and then fell upon the Arabs, killing them with metal pipes. “We feared that gunshots would wake the neighbors, and we decided on a silent operation,” said Dar.

The Arab Platoon was established when the Haganah decided it needed a nucleus of trained fighters who could operate deep inside enemy lines, gathering information and carrying out sabotage and targeted killing missions. The training of its men, most of them immigrants from Arab lands, included commando tactics and explosives, but also intensive study of Islam and Arab customs. They were nicknamed Mistaravim, the name by which Jewish communities went in some Arab countries, where they practiced the Jewish religion but were similar to the Arabs in all other respects: dress, language, social customs, and so on.

Cooperation between the two units produced an attempt on the life of Sheikh Nimr al-Khatib, a head of the Islamic organizations of Palestine, one of the original targets of Operation Starling, because of his considerable influence over the Palestinian street. The Mistaravim could move around without being stopped by either the British or the Arabs. In February 1948, they ambushed al-Khatib when he returned from a trip to Damascus with a carload of ammunition. He was badly wounded, left Palestine, and removed himself from any active political roles.

A few days later, Avraham Dar heard from one of his port worker informants that a group of Arabs in a café had been talking about their plan to detonate a vehicle packed with explosives in a crowded Jewish section of Haifa. The British ambulance that they had acquired for this purpose was being readied in a garage in Nazareth Road, in the Arab part of the city. The Mistaravim prepared a bomb of their own in a truck that they drove into the Arab district, posing as workers engaged in fixing a burst pipe, and parked next to the wall of the garage. “What are you doing here? No parking here! Move the truck!” the men in the garage yelled at them in Arabic.

“Right away, we’re just getting a drink, and we need to take a leak,” the Mistaravim replied in Arabic, adding a few juicy curses. They walked away to a waiting car, and minutes later their bomb went off, detonating the one in the ambulance as well, and killing the five Palestinians working on it.

ON MAY 14, 1948, Ben-Gurion declared the establishment of the new state of Israel and became its first prime minister and minister of defense. He knew what to expect next.

Years earlier, Ben-Gurion had ordered the formation of a deep network of sources in the Arab countries. Now, three days before the establishment of Israel, Reuven Shiloah, director of the Political Department of the Jewish Agency, the agency’s intelligence division, had informed him that “the Arab states have decided finally to launch a simultaneous attack on May 15. They are relying on the lack of heavy armaments and a Hebrew air force.” Shiloah provided many details about the attack plan.

The information was accurate.

At midnight, after the state was declared, seven armies attacked. They far outnumbered and were infinitely better equipped than the Jewish forces, and they achieved significant gains early on, conquering settlements and inflicting casualties.

The secretary general of the Arab League, Abdul Rahman Azzam Pasha, declared, “This will be a war of great destruction and slaughter that will be remembered like the massacres carried out by the Mongols and the Crusaders.”

But the Jews, now officially “Israelis”, rapidly regrouped and even went on the offensive. After a month, a truce was mediated by the United Nations special envoy, Count Folke Bernadotte. Both sides were exhausted and in need of rest and resupply.

When fighting resumed, the tables were turned and, with excellent intelligence and battle management, along with the help of many Holocaust survivors who had only just arrived from Europe, the Israelis drove the Arab forces back and eventually conquered far more territory than had been allocated to the Jewish state in the UN partition plan.

Though Israel had repelled superior armies, Ben-Gurion was not sanguine about the embryonic Israel Defense Forces’ short-term victory. The Arabs might have lost the first battles, but they, both those who lived in Palestine and those in the Arab states surrounding Israel, refused to accept the legitimacy of the new nation. They vowed to destroy Israel and return the refugees to their homes.

Ben-Gurion knew the IDF couldn’t hope to defend Israel’s long, convoluted borders through sheer manpower. From the remnants of the Haganah’s SHAI intelligence operations, he had to begin building a proper espionage system fit for a legitimate state.

On June 7, Ben-Gurion summoned his top aides, headed by Shiloah, to his office in the former Templer colony in Tel Aviv. “Intelligence is one of the military and political tools that we urgently need for this war,” Shiloah wrote in a memo to Ben-Gurion. “It will have to become a permanent tool, including in our peacetime political apparatus.”

Ben-Gurion did not need to be persuaded. After all, a large part of the surprising, against-all-odds establishment of the state, and its defense, was owed to the effective use of accurate intelligence.

That day, he ordered the establishment of three agencies. The first was the Intelligence Department of the Israel Defense Forces General Staff, later commonly referred to by its Hebrew acronym, AMAN. Second was the Shin Bet (acronym for the General Security Service), responsible for internal security and created as a sort of hybrid between the American FBI and the British MI5. (The organization later changed its name to the Israeli Security Agency, but most Israelis still refer to it by its acronym, Shabak, or, more commonly, as in this book, as Shin Bet.) And a third, the Political Department, now belonging to the new Foreign Ministry, instead of the Jewish Agency, would engage in foreign espionage and intelligence collection. Abandoned Templer homes in the Sarona neighborhood, near the Defense Ministry, were assigned to each outfit, putting Ben-Gurion’s office at the center of an ostensibly organized force of security services.

But nothing in those first months and years was so tidy. Remnants of Haganah agencies were absorbed into various security services or spy rings, then shuffled and reabsorbed into another. Add to that the myriad turf battles and clashing egos of what were essentially revolutionaries, and much was chaos in the espionage underground. “They were hard years,” said Isser Harel, one of the founding fathers of Israeli intelligence. “We had to establish a country and defend it. But the structure of the services and the division of labor was determined without any systematic judgment, without discussions with all the relevant people, in an almost dilettantish and conspiratorial way.”

Under normal conditions, administrators would establish clear boundaries and procedures, and field agents would patiently cultivate sources of information over a period of years. But Israel did not have this luxury. Its intelligence operations had to be built on the fly and under siege, while the young country was fighting for its very existence.

THE FIRST CHALLENGE THAT Ben-Gurion’s spies faced was an internal one: There were Jews who blatantly defied his authority, among them the remnants of the right-wing underground movements. An extreme example of this defiance was the Altalena affair, in June 1948. A ship by that name, dispatched from Europe by the Irgun, was due to arrive, carrying immigrants and arms. But the organization refused to hand all the weapons over to the army of the new state, insisting that some of them be given to still intact units of its own. Ben-Gurion, who had been informed of the plans by agents inside Irgun, ordered that the ship be taken over by force. In the ensuing fight, it was sunk, and sixteen Irgun fighters and three IDF soldiers were killed. Shortly afterward, security forces rounded up two hundred Irgun members all over the country, effectively ending its existence.

Yitzhak Shamir and the Lehi operatives under his command also refused to accept the more moderate Ben-Gurion’s authority. Over the summer, during the truce, UN envoy Bernadotte crafted a tentative peace plan that would have ended the fighting. But the plan was unacceptable to Lehi and Shamir, who accused Bernadotte of collaborating with the Nazis during World War II and of drafting a proposal under which the country would not survive: it would give most of the Negev and Jerusalem to the Arabs, put the Haifa port and Lydda airport under international control, and oblige the Jewish state to take back 300,000 Arab refugees.

Lehi issued several public warnings, in the form of notices posted in the streets of cities: ADVICE TO THE AGENT BERNADOTTE: CLEAR OUT OF OUR COUNTRY. The underground radio was even more outspoken, declaring, “The Count will end up like the Lord” (a reference to the assassinated Lord Moyne). Bernadotte ignored the warnings, and even ordered UN observers not to carry arms, saying, “The United Nations flag protects us.”

Convinced that the envoy’s plan would be accepted, Shamir ordered his assassination. On September 17, four months after statehood was declared, and the day after Bernadotte submitted his plan to the UN Security Council, he was traveling with his entourage in a convoy of three white DeSoto sedans from UN headquarters to the Rehavia neighborhood of Jewish Jerusalem, when a jeep blocked their way. Three young men wearing peaked caps jumped out. Two of them shot the tires of the UN vehicles, and the third, Yehoshua Cohen, opened the door of the car Bernadotte was traveling in and opened fire with his Schmeisser MP40 submachine gun. The first burst hit the man sitting next to Bernadotte, a French colonel by the name of André Serot, but the next, more accurate, hit the count in the chest. Both men were killed. The whole attack was over in seconds, “like thunder and lightning, the time it takes to fire fifty rounds,” is the way the Israeli liaison officer, Captain Moshe Hillman, who was in the car with the victims, described it. The perpetrators were never caught.

The assassination infuriated and profoundly embarrassed the Jewish leadership. The Security Council condemned it as “a cowardly act which appears to have been committed by a criminal group of terrorists in Jerusalem,” and The New York Times wrote the following day, “No Arab armies could have done so much harm to the Jewish state in so short a time.”

Ben-Gurion saw Lehi’s rogue operation as a serious challenge to his authority, one that could lead to a coup or even a civil war. He reacted immediately, outlawing both the Irgun and Lehi. He ordered Shin Bet chief Isser Harel to round up Lehi members. Topping the wanted list was Yitzhak Shamir. He wasn’t captured, but many others were, and they were locked up under heavy guard. Lehi ceased to exist as an organization.

Ben-Gurion was grateful to Harel for his vigorous action against the underground and made him the number-one intelligence official in the country.

A short, solid, and driven man, Isser Harel was influenced by the Russian Bolshevik revolutionary movement and its use of sabotage, guerrilla warfare, and assassination, but he abhorred communism. Under his direction, the Shin Bet kept constant surveillance and conducted political espionage against Ben-Gurion’s political opponents, the left-wing socialist and Communist parties, and the right-wing Herut party formed by veterans of Irgun and Lehi.

Meanwhile, Ben-Gurion and his foreign minister, Moshe Sharett, were at loggerheads over what policy should be adopted toward the Arabs. Sharett was the most prominent of Israel’s early leaders who believed diplomacy was the best way to achieve regional peace and thus secure the country. Even before independence, he made secret overtures to Jordan’s King Abdullah and Lebanon’s prime minister, Riad al-Solh, who would be instrumental in forming the coalition of invading Arabs, and who already had been largely responsible for the Palestinian militias that exacted heavy losses on the pre-state Yishuv. Despite al-Solh’s virulently anti-Jewish rhetoric and anti-Israel actions, he secretly met with Eliyahu Sasson, one of Sharett’s deputies, several times in Paris in late 1948 to discuss a peace agreement. “If we want to establish contacts with the Arabs to end the war,” said Sasson when Sharett, enthusiastic about his secret contacts, took him to report to the cabinet, “we have to be in contact with those people who are now in power. With those who have declared war on us and who are having trouble continuing.”

Those diplomatic overtures obviously were not effective, and Ben-Gurion, on December 12, 1948, ordered military intelligence agents to assassinate al-Solh.

“Sharett was vehemently opposed to the idea,” recalled Asher (Arthur) Ben-Natan, a leading figure in the Foreign Ministry’s Political Department, the arm responsible for covert activities abroad. “And when our department was asked to help military intelligence execute the order, through our contacts in Beirut, he countermanded the order, effectively killing it.”

This incident, plus a number of other clashes between Harel and Sharett, made Ben-Gurion’s blood boil. He considered diplomacy a weak substitute for a strong military and robust intelligence, and he viewed Sharett, personally, as a competitor who threatened the prime minister’s control. In December 1949, Ben-Gurion removed the Political Department from the control of the Foreign Ministry and placed it under his direct command. He later gave the agency a new name: the Institute for Intelligence and Special Operations. More commonly, though, it was known simply by the Hebrew word for “the Institute”: the Mossad.

With the establishment of the Mossad, Israeli intelligence services coalesced into the three-pronged community that survives in more or less the same form today: AMAN, the military intelligence arm that supplies information to the IDF; the Shin Bet, responsible for internal intelligence, counterterror, and counterespionage; and the Mossad, which deals with covert activities beyond the country’s borders.

More important, it was a victory for those who saw the future of the Israeli state as more dependent upon a strong army and intelligence community than upon diplomacy. That victory was embodied in real estate: The former Templer homes in Tel Aviv that the Political Department had occupied were handed over to the Mossad. It was also a personal victory for Isser Harel. Already in charge of the Shin Bet, he was installed as the chief of the Mossad as well, making him one of the most powerful and secretive figures in early Israeli history.

From that point on, Israeli foreign and security policy would be determined by jousting between Tel Aviv, where the military high command, the intelligence headquarters, and the Defense Ministry were located, and where Ben-Gurion spent most of his time, and Jerusalem, where the Foreign Ministry was housed in a cluster of prefabricated huts. Tel Aviv always had the upper hand.

Ben-Gurion kept all of the agencies under his direct control. The Mossad and the Shin Bet were under him in his capacity as prime minister, and military intelligence fell under his purview because he was also minister of defense. It was an enormous concentration of covert, and political, power. Yet from the beginning, it was kept officially hidden from the Israeli public. Ben-Gurion forbade anyone from acknowledging, let alone revealing, that this sprawling web of official institutions even existed. In fact, mentioning the name Shin Bet or Mossad in public was prohibited until the 1960s. Because their existence could not be acknowledged, Ben-Gurion prevented the creation of a legal basis for those same agencies’ operations. No law laid out their goals, roles, missions, powers, or budgets or the relations between them.

In other words, Israeli intelligence from the outset occupied a shadow realm, one adjacent to yet separate from the country’s democratic institutions. The activities of the intelligence community, most of it (Shin Bet and the Mossad) under the direct command of the prime minister, took place without any effective supervision by Israel’s parliament, the Knesset, or by any other independent external body.

In this shadow realm, “state security” was used to justify a large number of actions and operations that, in the visible world, would have been subject to criminal prosecution and long prison terms: constant surveillance of citizens because of their ethnic or political affiliations; interrogation methods that included prolonged detention without judicial sanction, and torture; perjury in the courts and concealment of the truth from counsel and judges.

The most notable example was targeted killing. In Israeli law, there is no death penalty, but Ben-Gurion circumvented this by giving himself the authority to order extrajudicial executions.

The justification for maintaining that shadow realm was that anything other than complete secrecy could lead to situations that would threaten the very existence of Israel. Israel had inherited from the British Mandate a legal system that included state of emergency provisions to enforce order and suppress rebellions. Among those provisions was a requirement that all print and broadcast media submit any reports on intelligence and army activities to a military censor, who vetoed much of the material. The state of emergency has not been rescinded as of the time of this writing. But as a sop to the hungry media, Ben-Gurion was shrewd enough to establish an Editors Committee, which was composed of the editors in chief of the print and radio news outlets. From time to time, Ben-Gurion himself, or someone representing him, would appear before the committee to share covert tidbits while explaining why those tidbits could never, under any circumstances, be released to the public. The editors were thrilled because they had gained for themselves entrée to the twilight realm and its mysteries. In gratitude, they imposed on themselves a level of self-censorship that went beyond even that imposed by the actual censor.

IN JULY 1952, AN exhibit of paintings by the Franco-German artist Charles Duvall opened at the National Museum in Cairo. Duvall, a tall young man with a cigarette permanently dangling from his lip, had moved to Egypt from Paris two years earlier, announcing that he’d “fallen in love with the land of the Nile.” The Cairo press published a number of fawning pieces about Duvall and his work, strongly influenced, the critics said, by Picasso, and he soon became a fixture in high society. Indeed, the Egyptian minister of culture attended the opening of Duvall’s show and even purchased two of the paintings, which he left on loan to the museum, where they would hang for the next twenty-three years.

Five months later, when his show had closed, Duvall said that his mother had fallen ill and he had to rush back to Paris to care for her. After his return to France, he sent a few letters to old friends in Egypt, and then he was never heard from again.

Shlomo Cohen-Abarbanel

Duvall’s real name was Shlomo Cohen-Abarbanel, and he was an Israeli spy. He was the youngest of four sons born to a prominent rabbi in Hamburg, Germany. In the winter of 1933, as the Nazis rose to power and began enforcing race laws, the family fled to France and then Palestine. Fourteen years later, in 1947, Cohen-Abarbanel, whose artistic abilities had been apparent since he was a toddler, returned to Paris at the age of twenty-seven to study painting. A short time later, Haganah intelligence personnel heard about his talents and recruited him to forge passports and papers to be used by European and North African Jews being smuggled into Palestine in violation of British immigration laws. It was the beginning of a long career in espionage. Portraying himself as a bohemian artist, Cohen-Abarbanel operated networks of agents in Egypt and recruited new agents throughout the Arab world. He collected information about Nazi war criminals who had taken refuge in the Middle East, and he reported to his superiors on the initial attempts of German rocket scientists to sell their services to Arab armies. When he returned to Israel in 1952, he pushed his superiors in the young intelligence agency, the Mossad, to invest more resources into finding and killing Nazis.

A short time after taking command of the Mossad, Isser Harel asked Cohen-Abarbanel to design an official emblem for the agency. The artist shut himself in his room and emerged with a design, which he’d drawn by hand. At its center was a seven-branched menorah, the sacred lamp that stood in the Temple in Jerusalem that the Romans destroyed in A.D. 70. The seal also bore a legend, verse 6 from chapter 24 of the Book of Proverbs, authored, according to Jewish tradition, by King Solomon himself: “For by subterfuge you will make war.” This was later changed to another line from Proverbs (chapter 11, verse 14), which reads, “Where there is no subterfuge, the nation falls, but in the multitude of counselors there is safety.” Cohen-Abarbanel’s meaning could not have been clearer: using covert stratagems, the Mossad would be the supreme shield of the new Jewish commonwealth, ensuring that never again would Jews be dishonored, that never again would Judea fall.

The Mossad’s charter, written by Harel, was equally broad and ambitious. The organization’s purpose, according to its official orders, was “secret collection of information (strategic, political, operational) outside the country’s borders; carrying out special operations outside Israel’s borders; thwarting the development and acquisition of unconventional weapons by hostile states; prevention of terror attacks against Israeli and Jewish targets outside Israel; development and maintenance of intelligence and political ties with countries that do not maintain diplomatic relations with Israel; bringing to Israel Jews from countries that refused to allow them to leave, and creating frameworks for the defense of the Jews still in those countries.” In other words, it was charged with not only protecting Israel and its citizens but also standing as a sentinel for world Jewry.

ISRAEL’S YOUNG INTELLIGENCE SERVICES had to offer a response to a series of challenges presented by the ring of twenty-one hostile Arab nations that surrounded Israel and threatened to destroy it. There were those in the top echelons of the defense establishment who believed that these challenges would best be met by the use of pinpointed special operations far beyond enemy lines.

To this end, AMAN set up a unit called Intelligence Service 13 (which in Jewish tradition is considered a lucky number). Avraham Dar, now one of its prominent officers, went to Egypt in 1951 to set up a network of agents culled from local Zionist activists. On various pretexts, the recruits traveled to Europe, and then to Israel for training in espionage and sabotage. Outlining the goal of his network, Dar explained that “the central problem that made Egypt so antagonistic to Israel was the way King Farouk ran the government. If we could get rid of that obstacle many problems would be solved. In other words”, and here Dar turned to a Spanish proverb, “no dog, no rabies.”

King Farouk, Queen Farida and their daughters

Getting rid of “the dog” proved to be unnecessary: Farouk was soon overthrown in a coup. And AMAN’s assumption that things would be better when he was gone turned out to be totally groundless. However, the idea that this already established Egyptian network could be employed to change the course of history in the region was simply too tempting for Israel’s leaders to let go. AMAN decided to use these local agents against the Free Officers Movement, which had just recently ousted Farouk, “aiming to undermine Western confidence in the Egyptian regime by causing public insecurity and provoking demonstrations, arrests, and retaliatory actions, with Israel’s role remaining unexposed.” But the whole operation ended in catastrophe.

Despite intensive training, AMAN’s recruits were amateurish and sloppy, and all of their sabotage operations ended in failure. Eventually, eleven operatives were ferreted out by Egyptian authorities. Some were executed after short trials, and one killed himself after suffering gruesome torture. The lucky ones were sentenced to long prison terms and hard labor.

The ensuing turmoil gave rise to a major political dispute that raged in Israel for many years, over whether AMAN had received the approval of the political establishment for these abortive operations.

The main lesson drawn by Israel was that local Jews should never be recruited in hostile “target” countries. Their capture was almost certain to end in death, and send ripples throughout the entire Jewish community. Despite the temptation to use people who were already on the ground and didn’t need to establish a cover story, Israel almost never again did.

However, the underlying conviction that Israel could act boldly and change history through special operations behind enemy lines remained, and was in fact cemented in place as the core principle of Israel’s security doctrine. Indeed, this philosophy, that special ops behind enemy lines should be at least one of the country’s primary methods of national defense, would predominate among Israel’s political and intelligence establishment all the way up to the present day.

And while many of the world’s established nations kept a separation between the intelligence outfits that gathered information and the operations units that utilized that information to conduct clandestine missions, from the very beginning Israel’s special forces were an integral part of its intelligence agencies. In America, for instance, special-operations units Delta Force and SEAL Team Six are components of the Joint Special Operations Command, not the CIA or military intelligence. In Israel, however, special operations units were under the direct control of the intelligence agencies Mossad and AMAN.

The goal was to continually translate gathered intelligence into operations. While other nations at the time were also gathering intelligence during peacetime, they did so only to be prepared in case war broke out, or to authorize the occasional special-ops attack. Israel, on the other hand, would constantly use its intelligence to develop special-ops attacks behind enemy lines, in the hope of avoiding all-out warfare entirely.

THE FASHIONING OF AN emblem, a charter, and a military philosophy was one thing. Implementation, as Harel was soon to learn, was another thing altogether, especially when it came to aggressive action.

The Mossad’s first major operation ended badly. In November 1954, a captain in the Israeli Navy named Alexander Yisraeli, a philandering grifter deeply in debt, slipped out of the country on a bogus passport and tried to sell top-secret documents to the Egyptian embassy in Rome. A Mossad agent working in that embassy tipped off his superiors in Tel Aviv, who immediately began to develop a plan to kidnap Yisraeli and return him to Israel for trial as a traitor.

For Harel, this was a critical test, both for the security of the nation and his career. In those formative years, the heads of all the agencies jockeyed for power and prestige, and one significant failure could prove professionally fatal. He assembled a top-notch team of Mossad and Shin Bet operatives to grab Yisraeli in Europe. He put his second cousin, Rafi Eitan, who as a teenager had assassinated two German Templers, in charge.

Eitan says that “there were some who proposed finding Yisraeli and killing him as quickly as possible. But Harel squelched this immediately. ‘We don’t kill Jews,’ he said, and declared this was to be an abduction operation.” Harel himself said, “It never occurred to me to issue an order to kill one of our own. I wanted him to be brought to Israel and put on trial for treason.”

This is an important point. There is a tradition of mutual responsibility in Judaism, and a deep connection among all Jews, as if they are one big family. These values are seen as having kept the Jewish people alive as a nation throughout the two thousand years of exile, and for a Jew to harm another Jew is considered intolerable. Back in the days of the Palestinian underground, when it was effectively impossible to hold trials, eliminating Jewish traitors was deemed legitimate to a certain extent, but not after the state was established. “We do not kill Jews”, even if they were believed to be a grave danger to national security, became an iron law of the Israeli intelligence community.

The plan unfolded perfectly at first. Eitan and three others pinched Yisraeli after he’d been stopped by a female Mossad asset at a Paris intersection. The captive was taken to a safe house, where a Mossad doctor injected him with a sedative and placed him in a crate typically used to transfer arms, before putting him on a long, multi-stop flight on an Israeli Air Force cargo plane. At every stop, Yisraeli was injected again until, just as the plane touched down in Athens, he suffered a massive seizure and died. Following Harel’s orders, one of Eitan’s men ended up dumping the body from the back of the plane into the sea.

Harel’s people fed the Israeli press false information that Yisraeli, who left behind a pregnant wife, had stolen money and settled somewhere in South America. Harel, who was very embarrassed that an operation of his had ended in the death of a Jew, ordered that all the records on the case be secreted deep in one of the Mossad’s safes. But Harel’s rivals kept a copy of some of the documents, to be used against him someday if so required.

Harel also came to the conclusion that there was an urgent need for the formation of a special unit specifically designed to carry out sabotage and targeted killing missions. He began searching for “trained fighters, tough and loyal, who would not hesitate to squeeze the trigger when necessary.” He found them in the last place he would have been expected to look: the veterans of the Irgun and Lehi, against whom he had once fought a bitter struggle.

Ben-Gurion had forbidden the employment of any former members of the right-wing underground in government departments, and many of them were jobless, frustrated, and hungry for action. The Shin Bet believed that some of them were dangerous and were liable to start underground movements against the regime.

Harel aimed to kill two birds: to set up his special-ops unit, and to get the underground fighters into action under his command, outside the borders of the state.


Irgun parade in 1948

David Shomron, Yitzhak Shamir, and those of their comrades in the Irgun and Lehi who were deemed tough and daring enough were invited to Harel’s home in north Tel Aviv and sworn in. This was the establishment of Mifratz, Hebrew for “Gulf” or “Bay,” the Mossad’s first hit team.

Chapter Three

THE BUREAU FOR ARRANGING MEETINGS WITH GOD

ISRAEL’S WAR OF INDEPENDENCE officially ended with armistice agreements in 1949. The unofficial fighting never stopped. Throughout the early 1950s, the country was constantly infiltrated by Arabs from the parts of Palestine that remained in Arab hands after the war, namely, the Gaza Strip, in the south, which was administered by Egypt, and the West Bank, in the east, which Jordan had annexed. The IDF estimated that in 1952, about sixteen thousand infiltrations occurred (eleven thousand from Jordan and the rest from Egypt). Some of those infiltrators were refugees who had fled during the War of Independence, either voluntarily or involuntarily, and were trying to return to their villages and salvage what was left of their property. But many others were militants whose objective was to kill Jews and spread terror. They called themselves fedayeen, “those who self-sacrifice.”

The Egyptians, despite having signed an armistice, quickly realized that the fedayeen could fight a proxy war on their behalf. With proper training and supervision, those Palestinian militants could wreak substantial havoc on Israel while giving Egypt the cover of plausible deniability.

A young captain in Egyptian military intelligence, Mustafa Hafez, was put in charge of organizing the fedayeen. Beginning in mid-1953, Hafez (along with Salah Mustafa, the Egyptian military attaché in Jordan’s capital, Amman) started recruiting and training guerrilla squads to be dispatched into Israel’s south. For years, those squads, six hundred fedayeen in total, sneaked across the border from Gaza and laid waste to anything they could. They blew up water pipes, set fire to fields, bombed train tracks, mined roads; they murdered farmers in their fields and yeshiva students at study, altogether some one thousand civilians between 1951 and 1955. They spread panic and fear to the point that Israelis refrained from driving at night on main roads in the south.

Mustafa Hafez

The proxy squads were considered a huge success. The Israelis couldn’t hold Egypt or Jordan directly responsible. They would respond instead by recruiting their own proxies, turning Arabs into informers, collecting intelligence on fedayeen targets, and then assassinating them. Those tasks were assigned, for the most part, to an IDF intelligence team known as Unit 504.

Some of the men of Unit 504 had been raised in Arab neighborhoods of Palestine and thus were intimately familiar with the language and customs of the locals. Unit 504 was under the command of Rehavia Vardi. Polish-born, Vardi had served as a senior Haganah intelligence officer prior to the establishment of the state, and he was known for his sharp wit and blunt statements. “Every Arab,” he said, “can be recruited on the basis of one of the three Ps: praise, payment or pussy.” Whether through those three Ps or other means, Vardi and his men recruited four hundred to five hundred agents, who passed on invaluable information in the period between 1948 and 1956. Those recruits, in turn, provided Unit 504 with information on a number of senior fedayeen dispatchers. Several were identified, located, and targeted, and in ten to fifteen of those cases, the Israelis persuaded their Arab agents to place a bomb near that target.

That was when they would call Unit 188. That was when they required the services of Natan Rotberg.

“IT WAS ALL VERY, very secret,” Rotberg said. “We were not allowed to mention the names of units; we were not allowed to tell anyone where we were going or where we were serving or, it goes without saying, what we were doing.”

Rotberg, a thick-necked and good-natured kibbutznik with a bushy mustache, was one of a small group, only a few hundred men, who took part in forming the original triumvirate of AMAN, Shin Bet, and the Mossad. In 1951, when Rotberg was assigned to a marine commando unit called Shayetet 13 (Flotilla 13), Israeli intelligence set up a secret facility north of Tel Aviv to teach “special demolitions” and manufacture sophisticated bombs. Rotberg, Flotilla 13’s explosives officer, was appointed to run it.

Rotberg had a large vat installed in which he mixed TNT and pentaerythritol tetranitrate and other chemicals into deadly concoctions. But though his mixtures were designed to kill people, he claimed that he did not act with hatred in his heart. “You need to know how to forgive,” he said. “You need to know how to forgive the enemy. However, we have no authority to forgive people like bin Laden. That, only God can do. Our job is to arrange a meeting between them. In my laboratory, I opened a matchmaker’s office, a bureau that arranged such meetings. I orchestrated more than thirty such meetings.”

When Rehavia Vardi and his men had identified a target, they would go to Rotberg for the bomb. “At first we worked with double-bottomed wicker baskets,” Rotberg said. “I would cushion the bottom part of the basket with impermeable paper and pour the concoction in from the vat. Then we’d put on a cover and, above that, fill it up with fruits and vegetables. For the triggering mechanism, we used pencils into which we inserted ampoules filled with acid that ate away at the cover until it reached the detonator, activated it, and set off the charge. The problem with the acid was that weather conditions affected the time it took to eat away the cover, producing nonuniform timing. A bomb in the Gaza Strip would go off at a different time than one in the West Bank, where it is generally colder. We then switched to clocks, which are much more accurate.”

But Rotberg’s bombs were hardly enough to solve the fedayeen problem. According to several sources, explosives killed only seven targets between mid-1951 and mid-1953, while in the process killing six civilians.

The attacks continued unabated, terrorizing Israeli civilians, humiliating the Israel Defense Forces. Vardi and his men, talented as they were at recruiting agents, managed to glean only sparse information about the identities of the fedayeen handlers, and even when the unit did ferret out specific targets, the IDF was unable to find or kill them. “We had our limitations,” says Yigal Simon, a Unit 504 veteran and later on its commander. “We didn’t always have intelligence, we couldn’t send our agents everywhere, and they didn’t appreciate us enough in the IDF. It was important to the high command to show that the IDF, Jewish hands, could execute these actions.”…

*

from

RISE AND KILL FIRST: The Secret History Of Israel’s Targeted Assassinations

by Ronen Bergman

get it at Amazon.com

My son, Osama bin Laden: the al-Qaida leader’s mother speaks for the first time – Martin Chulov.

“He was a very good child until he met some people who pretty much brainwashed him in his early 20s. You can call it a cult. They got money for their cause. I would always tell him to stay away from them, and he would never admit to me what he was doing, because he loved me so much.”

Nearly 17 years since 9/11, Osama bin Laden’s family remains an influential part of Saudi society as well as a reminder of the darkest moment in the kingdom’s history. Can they escape his legacy?

On the corner couch of a spacious room, a woman wearing a brightly patterned robe sits expectantly. The red hijab that covers her hair is reflected in a glass-fronted cabinet; inside, a framed photograph of her firstborn son takes pride of place between family heirlooms and valuables. A smiling, bearded figure wearing a military jacket, he features in photographs around the room: propped against the wall at her feet, resting on a mantlepiece. A supper of Saudi meze and a lemon cheesecake has been spread out on a large wooden dining table.

Alia Ghanem is Osama bin Laden’s mother, and she commands the attention of everyone in the room. On chairs nearby sit two of her surviving sons, Ahmad and Hassan, and her second husband, Mohammed al-Attas, the man who raised all three brothers. Everyone in the family has their own story to tell about the man linked to the rise of global terrorism; but it is Ghanem who holds court today, describing a man who is, to her, still a beloved son who somehow lost his way. “My life was very difficult because he was so far away from me,” she says, speaking confidently. “He was a very good kid and he loved me so much.” Now in her mid-70s and in variable health, Ghanem points at al-Attas, a lean, fit man dressed, like his two sons, in an immaculately pressed white thobe, a gown worn by men across the Arabian peninsula. “He raised Osama from the age of three. He was a good man, and he was good to Osama.”

The family have gathered in a corner of the mansion they now share in Jeddah, the Saudi Arabian city that has been home to the Bin Laden clan for generations. They remain one of the kingdom’s wealthiest families: their dynastic construction empire built much of modern Saudi Arabia, and is deeply woven into the country’s establishment. The Bin Laden home reflects their fortune and influence, a large spiral staircase at its centre leading to cavernous rooms. Ramadan has come and gone, and the bowls of dates and chocolates that mark the three-day festival that follows it sit on tabletops throughout the house. Large manors line the rest of the street; this is well-to-do Jeddah, and while no guard stands watch outside, the Bin Ladens are the neighbourhood’s best-known residents.

For years, Ghanem has refused to talk about Osama, as has his wider family throughout his two-decade reign as al-Qaida leader, a period that saw the strikes on New York and Washington DC, and ended more than nine years later with his death in Pakistan.

Now, Saudi Arabia’s new leadership, spearheaded by the ambitious 32-year-old heir to the throne, Crown Prince Mohammed bin Salman, has agreed to my request to speak to the family. (As one of the country’s most influential families, their movements and engagements remain closely monitored.) Osama’s legacy is as grave a blight on the kingdom as it is on his family, and senior officials believe that, by allowing the Bin Ladens to tell their story, they can demonstrate that an outcast, not an agent, was responsible for 9/11. Saudi Arabia’s critics have long alleged that Osama had state support, and the families of a number of 9/11 victims have launched (so far unsuccessful) legal actions against the kingdom. Fifteen of the 19 hijackers came from Saudi Arabia.

Unsurprisingly, Osama bin Laden’s family are cautious in our initial negotiations; they are not sure whether opening old wounds will prove cathartic or harmful. But after several days of discussion, they are willing to talk. When we meet on a hot day in early June, a minder from the Saudi government sits in the room, though she makes no attempt to influence the conversation. (We are also joined by a translator.)

Sitting between Osama’s half-brothers, Ghanem recalls her firstborn as a shy boy who was academically capable. He became a strong, driven, pious figure in his early 20s, she says, while studying economics at King Abdulaziz University in Jeddah, where he was also radicalised. “The people at university changed him,” Ghanem says. “He became a different man.” One of the men he met there was Abdullah Azzam, a member of the Muslim Brotherhood who was later exiled from Saudi Arabia and became Osama’s spiritual adviser. “He was a very good child until he met some people who pretty much brainwashed him in his early 20s. You can call it a cult. They got money for their cause. I would always tell him to stay away from them, and he would never admit to me what he was doing, because he loved me so much.”

In the early 1980s, Osama travelled to Afghanistan to fight the Russian occupation. “Everyone who met him in the early days respected him,” says Hassan, picking up the story. “At the start, we were very proud of him. Even the Saudi government would treat him in a very noble, respectful way. And then came Osama the mujahid.”

A long uncomfortable silence follows, as Hassan struggles to explain the transformation from zealot to global jihadist. “I am very proud of him in the sense that he was my oldest brother,” he eventually continues. “He taught me a lot. But I don’t think I’m very proud of him as a man. He reached superstardom on a global stage, and it was all for nothing.”

Ghanem listens intently, becoming more animated when the conversation returns to Osama’s formative years. “He was very straight. Very good at school. He really liked to study. He spent all his money on Afghanistan; he would sneak off under the guise of family business.” Did she ever suspect he might become a jihadist? “It never crossed my mind.” How did it feel when she realised he had? “We were extremely upset. I did not want any of this to happen. Why would he throw it all away like that?”

The family say they last saw Osama in Afghanistan in 1999, a year in which they visited him twice at his base just outside Kandahar. “It was a place near the airport that they had captured from the Russians,” Ghanem says. “He was very happy to receive us. He was showing us around every day we were there. He killed an animal and we had a feast, and he invited everyone.”

Ghanem begins to relax, and talks about her childhood in the coastal Syrian city of Latakia, where she grew up in a family of Alawites, an offshoot of Shia Islam. Syrian cuisine is superior to Saudi, she says, and so is the weather by the Mediterranean, where the warm, wet summer air was a stark contrast to the acetylene heat of Jeddah in June. Ghanem moved to Saudi Arabia in the mid-1950s, and Osama was born in Riyadh in 1957. She divorced his father three years later, and married al-Attas, then an administrator in the fledgling Bin Laden empire, in the early 1960s. Osama’s father went on to have 54 children with at least 11 wives.

When Ghanem leaves to rest in a nearby room, Osama’s half-brothers continue the conversation. It’s important, they say, to remember that a mother is rarely an objective witness. “It has been 17 years now since 9/11, and she remains in denial about Osama,” Ahmad says. “She loved him so much and refuses to blame him. Instead, she blames those around him. She only knows the good boy side, the side we all saw. She never got to know the jihadist side.”

“I was shocked, stunned,” he says now of the early reports from New York. “It was a very strange feeling. We knew from the beginning that it was Osama, within the first 48 hours. From the youngest to the eldest, we all felt ashamed of him. We knew all of us were going to face horrible consequences. Our family abroad all came back to Saudi.” They had been scattered across Syria, Lebanon, Egypt and Europe. “In Saudi, there was a travel ban. They tried as much as they could to maintain control over the family.” The family say they were all questioned by the authorities and, for a time, prevented from leaving the country. Nearly two decades on, the Bin Ladens can move relatively freely within and outside the kingdom.

Osama, age 14, in Oxford

Osama bin Laden’s formative years in Jeddah came in the relatively freewheeling 1970s, before the Iranian Revolution of 1979, which aimed to export Shia zeal into the Sunni Arab world. From then on, Saudi’s rulers enforced a rigid interpretation of Sunni Islam, one that had been widely practised across the Arabian peninsula since the 18th century, the era of cleric Muhammed ibn Abdul Wahhab. In 1744, Abdul Wahhab had made a pact with the then ruler Mohammed bin Saud, allowing his family to run affairs of state while hardline clerics defined the national character.

The modern-day kingdom, proclaimed in 1932, left both sides, the clerics and the rulers, too powerful to take the other on, locking the state and its citizens into a society defined by archconservative views: the strict segregation of non-related men and women; uncompromising gender roles; an intolerance of other faiths; and an unfailing adherence to doctrinal teachings, all rubber-stamped by the House of Saud.

Many believe this alliance directly contributed to the rise of global terrorism. Al-Qaida’s worldview, and that of its offshoot, Islamic State (Isis), were largely shaped by Wahhabi scriptures; and Saudi clerics were widely accused of encouraging a jihadist movement that grew throughout the 1990s, with Osama bin Laden at its centre.

In 2018, Saudi’s new leadership wants to draw a line under this era and introduce what bin Salman calls “moderate Islam”. This he sees as essential to the survival of a state where a large, restless and often disaffected young population has, for nearly four decades, had little access to entertainment, a social life or individual freedoms. Saudi’s new rulers believe such rigid societal norms, enforced by clerics, could prove fodder for extremists who tap into such feelings of frustration.

Reform is beginning to creep through many aspects of Saudi society; among the most visible was June’s lifting of the ban on women drivers. There have been changes to the labour markets and a bloated public sector; cinemas have opened, and an anti-corruption drive launched across the private sector and some quarters of government. The government also claims to have stopped all funding to Wahhabi institutions outside the kingdom, which had been supported with missionary zeal for nearly four decades.

Such radical shock therapy is slowly being absorbed across the country, where communities conditioned to decades of uncompromising doctrine don’t always know what to make of it. Contradictions abound: some officials and institutions eschew conservatism, while others wholeheartedly embrace it. Meanwhile, political freedoms remain off-limits; power has become more centralised and dissent is routinely crushed.

Bin Laden’s legacy remains one of the kingdom’s most pressing issues. I meet Prince Turki al-Faisal, who was the head of Saudi intelligence for 24 years, between 1977 and 1 September 2001 (10 days before the 9/11 attacks), at his villa in Jeddah. An erudite man now in his mid-70s, Turki wears green cufflinks bearing the Saudi flag on the sleeves of his thobe. “There are two Osama bin Ladens,” he tells me. “One before the end of the Soviet occupation of Afghanistan, and one after it. Before, he was very much an idealistic mujahid. He was not a fighter. By his own admission, he fainted during a battle, and when he woke up, the Soviet assault on his position had been defeated.”

As Bin Laden moved from Afghanistan to Sudan, and as his links to Saudi Arabia soured, it was Turki who spoke with him on behalf of the kingdom. In the wake of 9/11, these direct dealings came under intense scrutiny. Then, and 17 years later, relatives of some of the 2,976 killed and more than 6,000 wounded in New York and Washington DC refuse to believe that a country that had exported such an archconservative form of the faith could have nothing to do with the consequences.

Certainly, Bin Laden travelled to Afghanistan with the knowledge and backing of the Saudi state, which opposed the Soviet occupation; along with America, the Saudis armed and supported those groups who fought it. The young mujahid had taken a small part of the family fortune with him, which he used to buy influence. When he returned to Jeddah, emboldened by battle and the Soviet defeat, he was a different man, Turki says. “He developed a more political attitude from 1990. He wanted to evict the communists and South Yemeni Marxists from Yemen. I received him, and told him it was better that he did not get involved. The mosques of Jeddah were using the Afghan example.” By this, Turki means the narrowly defined reading of the faith espoused by the Taliban. “He was inciting them, Saudi worshippers. He was told to stop.”

“He had a poker face,” Turki continues. “He never grimaced, or smiled. In 1992, 1993, there was a huge meeting in Peshawar organised by Nawaz Sharif’s government.” Bin Laden had by this point been given refuge by Afghan tribal leaders. “There was a call for Muslim solidarity, to coerce those leaders of the Muslim world to stop going at each other’s throats. I also saw him there. Our eyes met, but we didn’t talk. He didn’t go back to the kingdom. He went to Sudan, where he built a honey business and financed a road.”

Bin Laden’s advocacy increased in exile. “He used to fax statements to everybody. He was very critical. There were efforts by the family to dissuade him, emissaries and such, but they were unsuccessful. It was probably his feeling that he was not taken seriously by the government.”

By 1996, Bin Laden was back in Afghanistan. Turki says the kingdom knew it had a problem and wanted him returned. He flew to Kandahar to meet with the then head of the Taliban, Mullah Omar. “He said, ‘I am not averse to handing him over, but he was very helpful to the Afghan people.’ He said Bin Laden was granted refuge according to Islamic dictates.” Two years later, in September 1998, Turki flew again to Afghanistan, this time to be robustly rebuffed. “At that meeting, he was a changed man,” he says of Omar. “Much more reserved, sweating profusely. Instead of taking a reasonable tone, he said, ‘How can you persecute this worthy man who dedicated his life to helping Muslims?’” Turki says he warned Omar that what he was doing would harm the people of Afghanistan, and left.

Taliban leader, Mullah Omar

The family visit to Kandahar took place the following year, and came after a US missile strike on one of Bin Laden’s compounds, a response to al-Qaida attacks on US embassies in Tanzania and Kenya. It seems an entourage of immediate family had little trouble finding their man, where the Saudi and western intelligence networks could not.

According to officials in Riyadh, London and Washington DC, Bin Laden had by then become the world’s number one counterterrorism target, a man who was bent on using Saudi citizens to drive a wedge between eastern and western civilisations. “There is no doubt that he deliberately chose Saudi citizens for the 9/11 plot,” a British intelligence officer tells me. “He was convinced that was going to turn the west against his home country. He did indeed succeed in inciting a war, but not the one he expected.”

Turki claims that in the months before 9/11, his intelligence agency knew that something troubling was being planned. “In the summer of 2001, I took one of the warnings about something spectacular about to happen to the Americans, British, French and Arabs. We didn’t know where, but we knew that something was being brewed.”

Bin Laden remains a popular figure in some parts of the country, lauded by those who believe he did God’s work. The depth of support, however, is difficult to gauge. What remains of his immediate family, meanwhile, has been allowed back into the kingdom: at least two of Osama’s wives (one of whom was with him in Abbottabad when he was killed by US special forces) and their children now live in Jeddah.

“We had a very good relationship with Mohammed bin Nayef, the former crown prince,” Osama’s half-brother Ahmad tells me as a maid sets the nearby dinner table. “He let the wives and children return.” But while they have freedom of movement inside the city, they cannot leave the kingdom.

Osama’s mother rejoins the conversation. “I speak to his harem most weeks,” she says. “They live nearby.”

Osama’s half-sister, and the two men’s sister, Fatima al-Attas, was not at our meeting. From her home in Paris, she later emailed to say she strongly objected to her mother being interviewed, asking that it be rearranged through her. Despite the blessing of her brothers and stepfather, she felt her mother had been pressured into talking. Ghanem, however, insisted she was happy to talk and could have talked longer. It is, perhaps, a sign of the extended family’s complicated status in the kingdom that such tensions exist.

I ask the family about Bin Laden’s youngest son, 29-year-old Hamza, who is thought to be in Afghanistan. Last year, he was officially designated a “global terrorist” by the US and appears to have taken up the mantle of his father, under the auspices of al-Qaida’s new leader, and Osama’s former deputy, Ayman al-Zawahiri.

His uncles shake their heads. “We thought everyone was over this,” Hassan says. “Then the next thing I knew, Hamza was saying, ‘I am going to avenge my father.’ I don’t want to go through that again. If Hamza was in front of me now, I would tell him, ‘God guide you. Think twice about what you are doing. Don’t retake the steps of your father. You are entering horrible parts of your soul.’”

Hamza bin Laden’s continued rise may well cloud the family’s attempts to shake off their past. It may also hinder the crown prince’s efforts to shape a new era in which Bin Laden is cast as a generational aberration, and in which the hardline doctrines once sanctioned by the kingdom no longer offer legitimacy to extremism. While change has been attempted in Saudi Arabia before, it has been nowhere near as extensive as the current reforms. How hard Mohammed bin Salman can push against a society indoctrinated in such an uncompromising worldview remains an open question.

Saudi Arabia’s allies are optimistic, but offer a note of caution. The British intelligence officer I spoke to told me, “If Salman doesn’t break through, there will be many more Osamas. And I’m not sure they’ll be able to shake the curse.”

Give People Money. How a Universal Basic Income would end poverty, revolutionise work, and remake the world – Annie Lowrey.

A UBI is an ethos as much as it is a technocratic policy proposal. It contains within it the principles of universality, unconditionality, inclusion, and simplicity, and it insists that every person is deserving of participation in the economy, freedom of choice, and a life without deprivation. Our governments can and should choose to provide those things.

Surely just giving people money couldn’t work. Or could it?

Imagine if every month the government deposited £1,000 in your bank account, with no strings attached and nothing expected in return. It sounds crazy, but Universal Basic Income (UBI) has become one of the most influential policy ideas of our time, backed by thinkers on both the left and the right. The founder of Facebook, Obama’s chief economist, and governments from Canada to Finland are all seriously debating some form of UBI.

In this sparkling and provocative book, economics writer Annie Lowrey looks at the global UBI movement. She travels to Kenya to see how UBI is lifting the poorest people on earth out of destitution, India to see how inefficient government programs are failing the poor, South Korea to interrogate UBI’s intellectual pedigree, and Silicon Valley to meet the tech titans financing UBI pilots in expectation of a world with advanced artificial intelligence and little need for human labour. She also examines the challenges the movement faces: contradictory aims, uncomfortable costs, and most powerfully, the entrenched belief that no one should get something for nothing.

The UBI movement is not just an economic policy; it also calls into question our deepest intuitions about what we owe each other and what activities we should reward and value as a society.

Annie Lowrey is a contributing editor for The Atlantic, where she covers economic policy. She is a frequent guest on CNN, MSNBC, and NPR. She is a former writer for the New York Times, the New York Times Magazine, and Slate, among other publications.

Wages for Breathing

One oppressively hot and muggy day in July, I stood at a military installation at the top of a mountain called Dorasan, overlooking the demilitarized zone between South Korea and North Korea. The central building was painted in camouflage and emblazoned with the hopeful phrase "End of Separation, Beginning of Unification." On one side was a large, open observation deck with a number of telescopes aimed toward the Kaesong industrial area, a special pocket between the two countries where, up until recently, communist workers from the North would come and toil for capitalist companies from the South, earning $90 million in wages a year. A small gift shop sold soju liquor made by Northern workers and chocolate-covered soybeans grown in the demilitarized zone itself. (Don't like them? Mail them back for a refund, the package said.)

On the other side was a theater whose seats faced not a movie screen but windows looking out toward North Korea. In front, there was a labeled diorama. Here is a flag. Here is a factory. Here is a juche-inspiring statue of Kim Il Sung. See it there? Can you make out his face, his hands? Chinese tourists pointed between the diorama and the landscape, viewed through the summer haze.

Across the four-kilometer-wide demilitarized zone, the North Koreans were blasting propaganda music so loudly that I could hear not just the tunes but the words. I asked my tour guide, Soo-jin, what the song said. “The usual,” she responded. “Stuff about how South Koreans are the tools of the Americans and the North Koreans will come to liberate us from our capitalist slavery.” Looking at the denuded landscape before us, this bit of pomposity seemed impossibly sad, as did the incomplete tunnel from North to South scratched out beneath us, as did the little Potemkin village the North Koreans had set up in sight of the observation deck. It was supposedly home to two hundred families, who Pyongyang insisted were working a collective farm, using a child care center, schools, a hospital. Yet Seoul had determined that nobody had ever lived there, and the buildings were empty shells. Comrades would come turn the lights on and off to give the impression of activity. The North Koreans called it “peace village”; Soo-jin called it “propaganda village.”

A few members of the group I was traveling with, including myself, teared up at the stark difference between what was in front of us and what was behind. There is perhaps no place on earth that better represents the profound life-and-death power of our choices when it comes to government policy. Less than a lifetime ago, the two countries were one, their people a polity, their economies a single fabric. But the Cold War’s ideological and political rivalry between capitalism and communism had ripped them apart, dividing families and scarring both nations. Soo-jin talked openly about the separation of North Korea from the South as “our national tragedy.”

The Republic of Korea, the South, rocketed from third-world to first-world status, becoming one of only a handful of countries to do so in the postwar era. In 1960, about fifteen years after the division of the peninsula, its people were about as wealthy as those in the Ivory Coast and Sierra Leone. In 2016, they were closer income-wise to those in Japan, its former colonial occupier, and a brutal one. Citigroup now expects South Korea to be among the most prosperous countries on earth by 2040, richer even than the United States by some measures.

Yet the Democratic People's Republic of Korea, the North, has faltered and failed, particularly since the 1990s. It is a famine-scarred pariah state dominated by governmental graft and military buildup. Rare is it for a country to suffer such a miserable growth pattern without also suffering from the curse of natural disasters or the horrors of war. As of a few years ago, an estimated 40 percent of the population was living in extreme poverty, more than double the share of people in Sudan. Were war to befall the country, that proportion would inevitably rise.

Even from the remove of the observation deck, enveloped in steam, hemmed in by barbed wire, patrolled by passive young men with assault rifles, the difference was obvious. You could see it. I could see it. The South Korean side of the border was lush with forest and riven with well-built highways. Everywhere, there were power lines, trains, docks, high-rise buildings. An hour south sat Seoul, as cosmopolitan and culturally rich a city as Paris, with far better infrastructure than New York or Los Angeles. But the North Korean side of the border was stripped of trees. People had perhaps cut them down for firewood and basic building supplies, Soo-jin told me. The roads were empty and plain, the buildings low and small. So were the people: North Koreans are now measurably shorter than their South Korean relatives, in part due to the stunting effects of malnutrition.

South Korea and North Korea demonstrate, so powerfully, that what we often think of as economic circumstance is largely a product of policy. The way things are is really the way we choose for them to be. There is always a counterfactual. Perhaps that counterfactual is not as stark as it is at the demilitarized zone. But it is always there.

Imagine that a check showed up in your mailbox or your bank account every month.

The money would be enough to live on, but just barely. It might cover a room in a shared apartment, food, and bus fare. It would save you from destitution if you had just gotten out of prison, needed to leave an abusive partner, or could not find work. But it would not be enough to live particularly well on. Let’s say that you could do anything you wanted with the money. It would come with no strings attached. You could use it to pay your bills. You could use it to go to college, or save it up for a down payment on a house. You could spend it on cigarettes and booze, or finance a life spent playing Candy Crush in your mom’s basement and noodling around on the Internet. Or you could use it to quit your job and make art, devote yourself to charitable works, or care for a sick child. Let’s also say that you did not have to do anything to get the money. It would just show up every month, month after month, for as long as you lived. You would not have to be a specific age, have a child, own a home, or maintain a clean criminal record to get it. You just would, as would every other person in your community.

This simple, radical, and elegant proposal is called a universal basic income, or UBI. It is universal, in the sense that every resident of a given community or country receives it. It is basic, in that it is just enough to live on and not more. And it is income.

The idea is a very old one, with its roots in Tudor England and the writings of Thomas Paine, a curious piece of intellectual flotsam that has washed ashore again and again over the last half millennium, often coming in with the tides of economic revolution. In the past few years, with the middle class being squeezed, trust in government eroding, technological change hastening, the economy getting Uberized, and a growing body of research on the power of cash as an antipoverty measure being produced, it has vaulted to a surprising prominence, even pitching from airy hypothetical to near-reality in some places. Mark Zuckerberg, Hillary Clinton, the Black Lives Matter movement, Bill Gates, Elon Musk: these are just a few of the policy proposal's flirts, converts, and supporters. UBI pilots are starting or ongoing in Germany, the Netherlands, Finland, Canada, and Kenya, with India contemplating one as well. Some politicians are trying to get it adopted in California, and it has already been the subject of a Swiss referendum, where its reception exceeded activists' expectations despite its defeat.

Why undertake such a drastic policy change, one that would fundamentally alter the social contract, the safety net, and the nature of work? UBI’s strange bedfellows put forward a dizzying kaleidoscope of arguments, drawing on everything from feminist theory to environmental policy to political philosophy to studies of work incentives to sociological work on racism.

Perhaps the most prominent argument for a UBI has to do with technological unemployment, the prospect that robots will soon take all of our jobs. Economists at Oxford University estimate that about half of American jobs, including millions and millions of white-collar ones, are susceptible to imminent elimination due to technological advances. Analysts are warning that Armageddon is coming for truck drivers, warehouse box packers, pharmacists, accountants, legal assistants, cashiers, translators, medical diagnosticians, stockbrokers, home appraisers; I could go on.

In a world with far less demand for human work, a UBI would be necessary to keep the masses afloat, the argument goes. “I’m not saying I know the future, and that this is exactly what’s going to happen,” Andy Stern, the former president of the two-million member Service Employees International Union and a UBI booster, told me. But if “a tsunami is coming, maybe someone should figure out if we have some storm shutters around.”

A second common line of reasoning is less speculative, more rooted in the problems of the present rather than the problems of tomorrow. It emphasizes UBI's promise of ameliorating the yawning inequality and grating wage stagnation that the United States and other high-income countries are already facing. The middle class is shrinking. Economic growth is aiding the brokerage accounts of the rich but not the wallets of the working classes. A UBI would act as a straightforward income support for families outside of the top 20 percent, its proponents argue. It would also radically improve the bargaining power of workers, forcing employers to increase wages, add benefits, and improve conditions to retain their talent. Why take a crummy job for $7.25 an hour when you have a guaranteed $1,000 a month to fall back on? "In a time of immense wealth, no one should live in poverty, nor should the middle class be consigned to a future of permanent stagnation or anxiety," argues the Economic Security Project, a new UBI think tank and advocacy group.

In addition, a UBI could be a powerful tool to eliminate deprivation, both around the world and in the United States. About 41 million Americans were living below the poverty line as of 2016. A $1,000-a-month grant would push many of them above it, and would ensure that no abusive partner, bout of sickness, natural disaster, or sudden job loss means destitution in the richest civilization that the planet has ever known. This case is yet stronger in lower-income countries. Numerous governments have started providing cash transfers, if not universal and unconditional ones, to reduce their poverty rates, and some policymakers and political parties, pleased with the results, are toying with providing a true UBI. In Kenya, a U.S.-based charity called GiveDirectly is sending thousands of adults about $20 a month for more than a decade to demonstrate how a UBI could end deprivation, cheaply and at scale. "We could end extreme poverty right now, if we wanted to," Michael Faye, GiveDirectly's cofounder, told me.

A UBI would end poverty not just effectively, but also efficiently, some of its libertarian-leaning boosters argue. Replacing the current American welfare state with a UBI would eliminate huge swaths of the government's bureaucracy and reduce state interference in its citizens' lives: Hello UBI, good-bye to the Departments of Health and Human Services and Housing and Urban Development, the Social Security Administration, a whole lot of state and local offices, and much of the Department of Agriculture. "Just giving people money is a very natural solution," says Charles Murray of the American Enterprise Institute, a right-of-center think tank. "It's a way of cutting the Gordian knot. You don't need to be drafting ever more sophisticated solutions to our problems."

Protecting against a robot apocalypse, providing workers with bargaining power, jump-starting the middle class, ending poverty, and reducing the complexity of government: It sounds pretty good, right? But a UBI means that the government would send every citizen a check every month, eternally and regardless of circumstance. That inevitably raises any number of questions about fairness, government spending, and the nature of work.

When I first heard the idea, I worried about UBI's impact on jobs. A $1,000 check arriving every month might spur millions of workers to drop out of the labor force, leaving the United States relying on a smaller and smaller pool of workers for taxable income to be distributed to a bigger and bigger pool of people not participating in paid labor. This seems a particularly prevalent concern given how many men have dropped out of the labor force of late, pushed by stagnant wages and pulled, perhaps, by the low-cost marvels of gaming and streaming. With a UBI, the country would lose the ingenuity and productivity of a large share of its greatest asset: its people. More than that, a UBI implemented to fight technological unemployment might mean giving up on American workers, paying them off rather than figuring out how to integrate them into a vibrant, tech-fueled economy. Economists of all political persuasions have voiced similar concerns.

And a UBI would do all of this at extraordinary expense. Let’s say that we wanted to give every American $1,000 a month in cash. Back-of-the-envelope math suggests that this policy would cost roughly $3.9 trillion a year. Adding that kind of spending on top of everything else the government already funds would mean that total federal outlays would more than double, arguably requiring taxes to double as well. That might slow the economy down, and cause rich families and big corporations to flee offshore. Even if the government replaced Social Security and many of its other antipoverty programs with a UBI, its spending would still have to increase by a number in the hundreds of billions, each and every year.
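Lowrey's back-of-the-envelope figure is easy to reproduce. Here is a minimal sketch in Python, assuming a US population of roughly 325 million (my illustrative round number, not one taken from the book):

population = 325_000_000      # assumed US population, an illustrative round number
monthly_grant = 1_000         # dollars per person per month, as in the text
annual_cost = population * monthly_grant * 12
print(f"${annual_cost / 1e12:.1f} trillion per year")   # prints: $3.9 trillion per year

Any population figure in that range lands close to the $3.9 trillion cited above, which is why the text notes that total federal outlays would more than double.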

Stepping back even further: Is a UBI really the best use of scarce resources? Does it make any sense to bump up taxes in order to give people like Mark Zuckerberg and Bill Gates $1,000 a month, along with all those working-class families, retirees, children, unemployed individuals, and so on? Would it not be more efficient to tax rich people and direct money to poor people through means-testing, as programs like Medicaid and the Supplemental Nutrition Assistance Program, better known as SNAP or food stamps, already do? Even in the socialist Nordic countries, state support is generally contingent on circumstance. Plus, many lower-income and middle-income families already receive far more than $1,000 a month per person from the government, in the United States and in other countries. If a UBI wiped out programs like food stamps and housing vouchers, is there any guarantee that a basic income would be more fair and effective than the current system?

There are more philosophical objections to a UBI too. In no country or community on earth do individuals automatically get a pension as a birthright, with the exception of some princes, princesses, and residents of petrostates like Alaska. Why should we give people money with no strings attached? Why not ask for community service in return, or require that people at least try to work? Isn’t America predicated on the idea of pulling yourself up by your bootstraps, not on coasting by on a handout?

As a reporter covering the economy and economic policy in Washington, I heard all of these arguments for and objections against, watching as an obscure, never before tried idea became a global phenomenon. Not once in my career had I seen a bit of social-policy arcana go viral. Search interest in UBI more than doubled between 2011 and 2016, according to Google data. UBI barely got any mention in news stories as of the mid-2000s, but since then the growth has been exponential. It came up in books, at conferences, in meetings with politicians, in discussions with progressives and libertarians, around the dinner table.

I covered it as it happened. I wrote about that failed Swiss referendum, and about a Canadian basic income experiment that has provided evidence for the contemporary debate. I talked with Silicon Valley investors terrified by the prospect of a jobless future and rode in a driverless car, wondering how long it would be before artificial intelligence started to threaten my job. I chatted with members of Congress on both sides of the aisle about the failing middle class and whether the country needed a new, big redistributive policy to strengthen it. I had beers with European intellectuals enthralled with the idea. I talked with Hill aides convinced that a UBI would be a part of a 2020 presidential platform. I spoke with advocates certain that in a decade, millions of people around the world would have a monthly check to fall back on, or else would make up a miserable new precariat. I heard from philosophers convinced that our understanding of work, our social contract, and the underpinnings of our economy were about to undergo an epochal transformation.

The more I learned about UBI, the more obsessed I became with it, because it raised such interesting questions about our economy and our politics. Could libertarians in the United States really want the same thing as Indian economists, as the Black Lives Matter protesters, as Silicon Valley tech pooh-bahs? Could one policy be right for both Kenyan villagers living on 60 cents a day and the citizens of Switzerland's richest canton? Was UBI a magic bullet, or a policy hammer in search of a nail? My questions were also philosophical. Should we compensate uncompensated care workers? Why do we tolerate child poverty, given how rich the United States is? Is our safety net racist? What would a robot jobs apocalypse actually look like?

I set out to write this book less to describe a burgeoning international policy movement or to advocate for an idea than to answer those questions for myself. The research for it brought me to villages in remote Kenya, to a wedding held amid monsoon rains in one of the poorest states in India, to homeless shelters, to senators' offices. I interviewed economists, politicians, subsistence farmers, and philosophers. I traveled to a UBI conference in Korea to meet many of the idea's leading proponents and deepest thinkers, and stood with them at the DMZ contemplating the terrifying, heartening, and profound effects of our policy choices.

What I came to believe is this:

A UBI is an ethos as much as it is a technocratic policy proposal. It contains within it the principles of universality, unconditionality, inclusion, and simplicity, and it insists that every person is deserving of participation in the economy, freedom of choice, and a life without deprivation. Our governments can and should choose to provide those things, whether through a $1,000-a-month stipend or not.

This book has three parts. First, we’ll look at the issues surrounding UBI and work, then UBI and poverty, and finally UBI and social inclusion. At the end, we’ll explore the promise, potential, and design of universal cash programs. I hope that you will come to see, as I have, that there is much to be gained from contemplating this complicated, transformative, and mind-bending policy.

Chapter One

The Ghost Trucks

The North American International Auto Show is a gleaming, roaring affair. Once a year, in bleakest January, carmakers head to the Motor City to show off their newest models, technologies, and concept vehicles to industry figures, the press, and the public. Each automaker takes its corner of the dark, carpeted cavern of the Cobo Center and turns it into something resembling a game-show set: spotlights, catwalks, light displays, scantily clad women, and vehicle after vehicle, many rotating on giant lazy Susans. I spent hours at a recent show, ducking in and out of new models and talking with auto executives and sales representatives. I sat in an SUV as sleek as a shark, the buttons and gears and dials on its dashboard replaced with a virtual cockpit straight out of science fiction. A race car so aerodynamic and low that I had to crouch to get in it. And driverless car after driverless car after driverless car.

The displays ranged in degrees of technological spectacle from the cool to the oh-my-word. One massive Ford truck, for instance, offered a souped-up cruise control that would brake for pedestrians and take over stop-and-go driving in heavy traffic. "No need to keep ramming the pedals yourself," a representative said as I gripped the oversize steering wheel.

Across the floor sat a Volkswagen concept car that looked like a hippie caravan for aliens. The minibus had no door latches, just sensors. There was a plug instead of a gas tank. On fully autonomous driving mode, the dash swallowed the steering wheel. A variety of lasers, sensors, radar, and cameras would then pilot the vehicle, and the driver and front-seat passenger could swing their seats around to the back, turning the bus into a snug, space-age living room. “The car of the future!” proclaimed Klaus Bischoff, the company’s head of design.

It was a phrase that I heard again and again in Detroit. We are developing the cars of the future. The cars of the future are coming. The cars of the future are here. The auto market, I came to understand, is rapidly moving from automated to autonomous to driverless. Many cars already offer numerous features to assist with driving, including fancy cruise controls, backup warnings, lane-keeping technology, emergency braking, automatic parking, and so on. Add in enough of those options, along with some advanced sensors and thousands of lines of code, and you end up with an autonomous car that can pilot itself from origin to destination. Soon enough, cars, trucks, and taxis might be able to do so without a driver in the vehicle at all.

This technology has gone from zero to sixty, forgive me, in only a decade and a half. Back in 2002, the Defense Advanced Research Projects Agency, part of the Department of Defense and better known as DARPA, announced a “grand challenge,” an invitation for teams to build autonomous vehicles and race one another on a 142-mile desert course from Barstow, California, to Primm, Nevada. The winner would take home a cool million. At the marquee event, none of the competitors made it through the course, or anywhere close. But the promise of prize money and the publicity around the event spurred a wave of investment and innovation. “That first competition created a community of innovators, engineers, students, programmers, offroad racers, backyard mechanics, inventors, and dreamers who came together to make history by trying to solve a tough technical problem,” said Lt. Col. Scott Wadle of DARPA. “The fresh thinking they brought was the spark that has triggered major advances in the development of autonomous robotic ground vehicle technology in the years since.”

As these systems become more reliable, safer, and cheaper, and as government regulations and the insurance markets come to accommodate them, mere mortals will get to experience them. At the auto show, I watched John Krafcik, the chief executive of Waymo, Google's self-driving spin-off, show off a fully autonomous Chrysler Pacifica minivan. "Our latest innovations have brought us closer to scaling our technology to potentially millions of people every day," he said, describing how the cost of the three-dimensional light-detection radar that helps guide the car has fallen 90 percent from its original $75,000 price tag in just a few years. BMW and Ford, among others, have announced that their autonomous offerings will go to market soon. "The amount of technology in cars has been growing exponentially," said Sandy Lobenstein, a Toyota executive, speaking in Detroit. "The vehicle as we know it is transforming into a means of getting around that futurists have dreamed about for a long time." Taxis without a taxi driver, trucks without a truck driver, cars you can tell where to go and then take a nap in: they are coming to our roads, and threatening millions and millions of jobs as they do.

*

from

Give People Money. How a Universal Basic Income would end poverty, revolutionise work, and remake the world

by Annie Lowrey

get it at Amazon.com

Modern Monetary Theory. The government is not a household and imports are still a benefit – Bill Mitchell.

'A government is not a household' is a core Modern Monetary Theory (MMT) proposition because it separates the currency issuer from the currency user and allows us to appreciate the constraints that each has on its spending capacity.

Another core MMT proposition is that imports are a benefit and exports are a cost.

In the case of a household, there are both real and financial resource constraints which limit its spending and necessitate strategies being put in place to facilitate that spending (getting income, running down savings, borrowing, selling assets).

In the case of a currency-issuing government, the only spending constraints beyond the political are the real resources available for sale in that currency.

The government sector thus assumes broad responsibilities as the currency issuer which are not necessarily borne by individual consumers. Its objectives are different. Which brings trade into the picture.

Another core MMT proposition is that imports are a benefit and exports are a cost.

So why would I support Jeremy Corbyn's Build it in Britain policy, which is really an import-competing strategy? Simple: the government is not a household.

Household consumers are users of the currency and aim to use their disposable incomes to create well-being, primarily for themselves and their families.

We exhibit generosity by extending our spending capacities to others when we give gifts.

But our aims are to get the best deal we can in our transactions. That means we like goods and services that satisfy our quality standards at the best price possible.

Which means that we will be somewhat indifferent to geography. If local suppliers are expensive and imported goods and services are cheaper, then as long as quality considerations are broadly met, we will purchase the imported commodity and be better off in a material sense.

If other nations are willing to send more goods and services to us than they get back in return then the real terms of trade are in our favour.

Exports require we give up our use of those real resources while imports mean we deprive other nations of the use of their resources.

There are nuances obviously.

A nation with lots of minerals (Australia) may not feel it is too much of a ‘cost’ to send boatloads of primary commodities to Japan or China.

We might also individually subscribe to broader goals in our purchasing decisions, although the evidence for this is weak.

For example, some of us believe that imports are only a benefit if they come from nations that treat their workers reasonably (no sweatshops, no killing of trade unionists, and so on) and do not ravage the natural environment in the process of producing the goods.

I would guess those concerns do not dominate our decision making generally because if they did China would not have huge export surpluses.

But there are nuances.

However, a government is not a household. It has a wider remit (objectives) than a household and must consider a broad range of concerns when it uses its currency issuing capacity to shift real resources (as goods and services) from the non-government sector to the government sector to fulfill its elected mandate.

In that sense, imports remain a benefit but the broader concerns make the net decision more complex than it is for the nongovernment sector.

The government must consider regional disparities. When a household is making a decision to purchase a good or service, what is happening elsewhere in the nation might not rank very high in the decision.

The government must consider how best to maintain full employment. A household is really only concerned with its own employment, although that doesn't preclude us from being generally concerned about high unemployment rates.

But ’buy local’ campaigns typically do not work when they try to steer household consumption expenditure.

The government can always maintain full employment through its fiscal spending decisions. We know that because it can always purchase the services of all idle labour that wants to work and receive payment in the currency of issue.

So from that starting point, there is no question that mass unemployment is a policy choice, not some uncontrollable outcome of a ’market‘.

In that context, the challenge for government is to work out how to frame the spending capacity to get the best employment outcomes:

* Direct public employment, that is obvious.

* Subsidies for local non-government firms, that is, lowering unit costs so that firms are profitable when they otherwise would not be.

* 'Build it in Britain', that is, using procurement policies to sustain sales for local firms rather than subsidising their costs.

None of these full employment strategies negate the insight that imports are a benefit to a nation.

But the government has to consider broader concerns than just getting a good or service at the cheapest ’market’ price.

There are more considerations, but that is how we can understand this issue.

The End of Alchemy: Money, Banking and the Future of the Global Economy – Mervyn King.

If the economy had grown after the global financial crisis at the same rate as the number of books written about it, then we would have been back at full employment some while ago.

Modern economics has encouraged ways of thinking that make crises more probable. Economists have brought the problem upon themselves by pretending that they can forecast. No one can easily predict an unknowable future, and economists are no exception.

The fragility of our financial system stems directly from the fact that banks are the main source of money creation. Banks are man-made institutions, important sources of innovation, prosperity and material progress, but also of greed, corruption and crises. For better or worse, they materially affect human welfare.

Unless we go back to the underlying causes we will never understand what happened and will be unable to prevent a repetition and help our economies truly recover.

The financial crisis of 2007-9 was merely the latest manifestation of our collective failure to manage the relationship between finance, the structure of money and banking, and a capitalist system.

The former governor of the Bank of England on reforming global finance.

Mervyn King was governor of the Bank of England in 2003-13. In “The End of Alchemy” there is no gossip and few revelations. Instead Lord King uses his experience of the crisis as a platform from which to present economic ideas to non-specialists.

He does a good job of putting complex concepts into plain English. The discussion of the evolution of money, from Roman times to 19th-century America to today, is a useful introduction for those not quite sure what currency really is.

He explains why economies need central banks: at best, they are independent managers of the money supply and rein in the banking system. Central bankers like giving the impression that they have played such roles since time immemorial, but as Lord King points out, the reality is otherwise. The Fed was created only in 1913; believe it or not, until 1994 it would not reveal to the public its interest rate decisions until weeks after the event. Even the Bank of England, founded in 1694, got the exclusive right to print banknotes, in England and Wales, only in 1844.

At times, Lord King can be refreshingly frank. He is no fan of austerity policies, saying that they have imposed “enormous costs on citizens throughout Europe”. He also reserves plenty of criticism for the economics profession. Since forecasting is so hit and miss, he thinks, the practice of giving prizes to the best forecasters “makes as much sense as it would to award the Fields Medal in mathematics to the winner of the National Lottery”.

The problem leading up to the global financial crisis, as Lord King sees it, is that commercial banks had little incentive to hold large quantities of safe, liquid assets. They knew that in a panic, the central bank would provide liquidity, no matter the quality of their balance sheets; in response they loaded up on risky investments.

The Economist

‘It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity …’ Charles Dickens, A Tale of Two Cities

The End of Alchemy, Mervyn King

The past twenty years in the modern world were indeed the best of times and the worst of times. It was a tale of two epochs: in the first, growth and stability; in the second, the worst banking crisis the industrialised world has ever witnessed. Within the space of little more than a year, between August 2007 and October 2008, what had been viewed as the age of wisdom was now seen as the age of foolishness, and belief turned into incredulity. The largest banks in the biggest financial centres in the advanced world failed, triggering a worldwide collapse of confidence and bringing about the deepest recession since the 1930s.

How did this happen? Was it a failure of individuals, institutions or ideas? The events of 2007-8 have spawned an outpouring of articles and books, as well as plays and films, about the crisis. If the economy had grown after the crisis at the same rate as the number of books written about it, then we would have been back at full employment some while ago.

Most such accounts, like the media coverage and the public debate at the time, focus on the symptoms and not the underlying causes. After all, those events, vivid though they remain in the memories of both participants and spectators, comprised only the latest in a long series of financial crises since our present system of money and banking became the cornerstone of modern capitalism after the Industrial Revolution in the eighteenth century. The growth of indebtedness, the failure of banks, the recession that followed, were all signs of much deeper problems in our financial and economic system.

Unless we go back to the underlying causes we will never understand what happened and will be unable to prevent a repetition and help our economies truly recover. This book looks at the big questions raised by the depressing regularity of crises in our system of money and banking. Why do they occur? Why are they so costly in terms of lost jobs and production? And what can we do to prevent them? It also examines new ideas that suggest answers.

In the spring of 2011, I was in Beijing to meet a senior Chinese central banker. Over dinner in the Diaoyutai State Guesthouse, where we had earlier played tennis, we talked about the lessons from history for the challenges we faced, the most important of which was how to resuscitate the world economy after the collapse of the western banking system in 2008. Bearing in mind the apocryphal answer of Premier Chou Enlai to the question of what significance one should attach to the French Revolution (it was ‘too soon to tell’), I asked my Chinese colleague what importance he now attached to the Industrial Revolution in Britain in the second half of the eighteenth century.

He thought hard. Then he replied: ‘We in China have learned a great deal from the West about how competition and a market economy support industrialisation and create higher living standards. We want to emulate that.’ Then came the sting in the tail, as he continued: ‘But I don’t think you’ve quite got the hang of money and banking yet.’ His remark was the inspiration for this book.

Since the crisis, many have been tempted to play the game of deciding who was to blame for such a disastrous outcome. But blaming individuals is counterproductive: it leads you to think that if just a few, or indeed many, of those people were punished then we would never experience a crisis again. If only it were that simple. A generation of the brightest and best were lured into banking, and especially into trading, by the promise of immense financial rewards and by the intellectual challenge of the work that created such rich returns. They were badly misled. The crisis was a failure of a system, and the ideas that underpinned it, not of individual policy makers or bankers, incompetent and greedy though some of them undoubtedly were. There was a general misunderstanding of how the world economy worked. Given the size and political influence of the banking sector, is it too late to put the genie back in the bottle? No, it is never too late to ask the right questions, and in this book I try to do so.

If we don’t blame the actors, then why not the playwright? Economists have been cast by many as the villain. An abstract and increasingly mathematical discipline, economics is seen as having failed to predict the crisis. This is rather like blaming science for the occasional occurrence of a natural disaster. Yet we would blame scientists if incorrect theories made disasters more likely or created a perception that they could never occur, and one of the arguments of this book is that economics has encouraged ways of thinking that made crises more probable. Economists have brought the problem upon themselves by pretending that they can forecast. No one can easily predict an unknowable future, and economists are no exception.

Despite the criticism, modern economics provides a distinctive and useful way of thinking about the world. But no subject can stand still, and economics must change, perhaps quite radically, as a result of the searing experience of the crisis. A theory adequate for today requires us to think for ourselves, standing on the shoulders of giants of the past, not kneeling in front of them.

Economies that are capable of sending men to the moon and producing goods and services of extraordinary complexity and innovation seem to struggle with the more mundane challenge of handling money and banking. The frequency, and certainly severity, of crises has, if anything, increased rather than decreased over time.

In the heat of the crisis in October 2008, nation states took over responsibility for all the obligations and debts of the global banking system. In terms of its balance sheet, the banking system had been virtually nationalised but without collective control over its operations. That government rescue cannot conveniently be forgotten. When push came to shove, the very sector that had espoused the merits of market discipline was allowed to carry on only by dint of taxpayer support. The creditworthiness of the state was put on the line, and in some cases, such as Iceland and Ireland, lost. God may have created the universe, but we mortals created paper money and risky banks. They are man-made institutions, important sources of innovation, prosperity and material progress, but also of greed, corruption and crises. For better or worse, they materially affect human welfare.

For much of modern history, and for good reason, money and banking have been seen as the magical elements that liberated us from a stagnant feudal system and permitted the emergence of dynamic markets capable of making the long-term investments necessary to support a growing economy. The idea that paper money could replace intrinsically valuable gold and precious metals, and that banks could take secure short-term deposits and transform them into long-term risky investments, came into its own with the Industrial Revolution in the eighteenth century. It was both revolutionary and immensely seductive. It was in fact financial alchemy, the creation of extraordinary financial powers that defy reality and common sense. Pursuit of this monetary elixir has brought a series of economic disasters from hyperinflations to banking collapses.

Why have money and banking, the alchemists of a market economy, turned into its Achilles heel?

The purpose of this book is to answer that question. It sets out to explain why the economic failures of a modern capitalist economy stem from our system of money and banking, the consequences for the economy as a whole, and how we can end the alchemy. Our ideas about money and banking are just as much a product of our age as the way we conduct our politics and imagine our past.

The twentieth century experience of depression, hyperinflation and war changed both the world and the way economists thought about it. Before the Great Depression of the early 1930s, central banks and governments saw their role as stabilising the financial system and balancing the budget. After the Great Depression, attention turned to policies aimed at maintaining full employment. But post-war confidence that Keynesian ideas, the use of public spending to expand total demand in the economy, would prevent us from repeating the errors of the past was to prove touchingly naive. The use of expansionary policies during the 1960s, exacerbated by the Vietnam War, led to the Great Inflation of the 1970s, accompanied by slow growth and rising unemployment, the combination known as ‘stagflation’.

The direct consequence was that central banks were reborn as independent institutions committed to price stability. So successful was this that in the 1990s not only did inflation fall to levels unseen for a generation, but central banks and their governors were hailed for inaugurating an era of economic growth with low inflation, the Great Stability or Great Moderation. Politicians worshipped at the altar of finance, bringing gifts in the form of lax regulation and receiving support, and sometimes campaign contributions, in return. Then came the fall: the initial signs that some banks were losing access to markets for short-term borrowing in 2007, the collapse of the industrialised world’s banking system in 2008, the Great Recession that followed, and increasingly desperate attempts by policy-makers to engineer a recovery. Today the world economy remains in a depressed state. Enthusiasm for policy stimulus is back in fashion, and the wheel has turned full circle.

The recession is hurting people who were not responsible for our present predicament, and they are, naturally, angry. There is a need to channel that anger into a careful analysis of what went wrong and a determination to put things right. The economy is behaving in ways that we did not expect, and new ideas will be needed if we are to prevent a repetition of the Great Recession and restore prosperity.

Many accounts and memoirs of the crisis have already been published. Their titles are numerous, but they share the same invisible subtitle: 'how I saved the world'. So although in the interests of transparency I should make clear that I was an actor in the drama, Governor of the Bank of England for ten years between 2003 and 2013, during the Great Stability, the banking crisis itself, the Great Recession that followed, and the start of the recovery, this is not a memoir of the crisis with revelations about private conversations and behind-the-scenes clashes. Of course, those happened as in any walk of life. But who said what to whom and when can safely, and properly, be left to dispassionate and disinterested historians who can sift and weigh the evidence available to them after sufficient time has elapsed and all the relevant official and unofficial papers have been made available.

Instant memoirs, whether of politicians or officials, are usually partial and self-serving. I see little purpose in trying to set the record straight when any account that I gave would naturally also seem self-serving. My own record of events and the accompanying Bank papers will be made available to historians when the twenty-year rule permits their release.

This book is about economic ideas. My time at the Bank of England showed that ideas, for good or ill, do influence governments and their policies. The adoption of inflation targeting in the early 1990s and the granting of independence to the Bank of England in 1997 are prime examples. Economists brought intellectual rigour to economic policy and especially to central banking. But my experience at the Bank also revealed the inadequacies of the ‘models’, whether verbal descriptions or mathematical equations, used by economists to explain swings in total spending and production. In particular, such models say nothing about the importance of money and banks and the panoply of financial markets that feature prominently in newspapers and on our television screens.

Is there a fundamental weakness in the intellectual economic framework underpinning contemporary thinking?

An exploration of some of these basic issues does not require a technical exposition, and I have stayed away from one. Of course, economists use mathematical and statistical methods to understand a complex world; they would be remiss if they did not. Economics is an intellectual discipline that requires propositions to be not merely plausible but subject to the rigour of a logical proof. And yet there is no mathematics in this book. It is written in (I hope) plain English and draws on examples from real life. Although I would like my fellow economists to read the book in the hope that they will take forward some of the ideas presented here, it is aimed at the reader with no formal training in economics but an interest in the issues.

In the course of this book, I will explain the fundamental causes of the crisis and how the world economy lost its balance; how money emerged in earlier societies and the role it plays today; why the fragility of our financial system stems directly from the fact that banks are the main source of money creation; why central banks need to change the way they respond to crises; why politics and money go hand in hand; why the world will probably face another crisis unless nations pursue different policies; and, most important of all, how we can end the alchemy of our present system of money and banking.

By alchemy I mean the belief that all paper money can be turned into an intrinsically valuable commodity, such as gold, on demand and that money kept in banks can be taken out whenever depositors ask for it. The truth is that money, in all forms, depends on trust in its issuer. Confidence in paper money rests on the ability and willingness of governments not to abuse their power to print money. Bank deposits are backed by long-term risky loans that cannot quickly be converted into money. For centuries, alchemy has been the basis of our system of money and banking. As this book shows, we can end the alchemy without losing the enormous benefits that money and banking contribute to a capitalist economy.

Four concepts are used extensively in the book: disequilibrium, radical uncertainty, the prisoner’s dilemma and trust. These concepts will be familiar to many, although the context in which I use them may not. Their significance will become clear as the argument unfolds, but a brief definition and explanation may be helpful at the outset.

Disequilibrium is the absence of a state of balance between the forces acting on a system. As applied to economics, disequilibrium is a position that is unsustainable, meaning that at some point a large change in the pattern of spending and production will take place as the economy moves to a new equilibrium. The word accurately describes the evolution of the world economy since the fall of the Berlin Wall, which I discuss in Chapter 1.

Radical uncertainty refers to uncertainty so profound that it is impossible to represent the future in terms of a knowable and exhaustive list of outcomes to which we can attach probabilities. Economists conventionally assume that ‘rational’ people can construct such probabilities. But when businesses invest, they are not rolling dice with known and finite outcomes on the faces; rather they face a future in which the possibilities are both limitless and impossible to imagine. Almost all the things that define modern life, and which we now take for granted, such as cars, aeroplanes, computers and antibiotics, were once unimaginable. The essential challenge facing everyone living in a capitalist economy is the inability to conceive of what the future may hold. The failure to incorporate radical uncertainty into economic theories was one of the factors responsible for the misjudgements that led to the crisis.

The prisoner’s dilemma may be defined as the difficulty of achieving the best outcome when there are obstacles to cooperation. Imagine two prisoners who have been arrested and kept apart from each other. Both are offered the same deal: if they agree to incriminate the other they will receive a light sentence, but if they refuse to do so they will receive a severe sentence if the other incriminates them. If neither incriminates the other, then both are acquitted. Clearly, the best outcome is for both to remain silent. But if they cannot cooperate the choice is more difficult. The only way to guarantee the avoidance of a severe sentence is to incriminate the other. And if both do so, the outcome is that both receive a light sentence. But this non-cooperative outcome is inferior to the cooperative outcome. The difficulty of cooperating with each other creates a prisoner’s dilemma. Such problems are central to understanding how the economy behaves as a whole (the field known as macroeconomics) and to thinking through both how we got into the crisis and how we can now move towards a sustainable recovery. Many examples will appear in the following pages. Finding a resolution to the prisoner’s dilemma problem in a capitalist economy is central to understanding and improving our fortunes.
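To make the structure concrete, here is a minimal sketch in Python of the payoff pattern described in the paragraph above (the numerical sentence rankings are my own illustrative labels, not figures from the book). It shows that a prisoner who cannot rely on the other's cooperation, and so guards against the worst case, will incriminate, even though mutual silence is better for both:

# Rank sentences so that a lower number is better: 0 = acquitted, 1 = light, 2 = severe.
ACQUITTED, LIGHT, SEVERE = 0, 1, 2

# payoffs[(my_choice, other_choice)] = the sentence I receive
payoffs = {
    ("silent", "silent"): ACQUITTED,        # neither incriminates: both acquitted
    ("silent", "incriminate"): SEVERE,      # I stay silent, the other talks
    ("incriminate", "silent"): LIGHT,       # I talk, the other stays silent
    ("incriminate", "incriminate"): LIGHT,  # both talk: both get light sentences
}

def worst_case(my_choice):
    # The worst sentence I can receive from this choice, whatever the other prisoner does.
    return max(payoffs[(my_choice, other)] for other in ("silent", "incriminate"))

# Kept apart and unable to cooperate, each prisoner protects against the worst case.
safe_choice = min(("silent", "incriminate"), key=worst_case)
print(safe_choice)                           # -> incriminate
print(payoffs[(safe_choice, safe_choice)])   # -> 1 (light), inferior to 0 (acquitted)

The non-cooperative outcome, both incriminating and both receiving light sentences, is exactly the inferior result the passage describes; cooperation, or the trust discussed next, is what rescues the better outcome.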

Trust is the ingredient that makes a market economy work. How could we drive, eat, or even buy and sell, unless we trusted other people? Everyday life would be impossible without trust: we give our credit card details to strangers and eat in restaurants that we have never visited before. Of course, trust is supplemented with regulation: fraud is a crime and there are controls on the conditions in restaurant kitchens; but an economy works more efficiently with trust than without. Trust is part of the answer to the prisoner's dilemma. It is central to the role of money and banks, and to the institutions that manage our economy. Long ago, Confucius emphasised the crucial role of trust in the authorities: 'Three things are necessary for government: weapons, food and trust. If a ruler cannot hold on to all three, he should give up weapons first and food next. Trust should be guarded to the end: without trust we cannot stand.'

Those four ideas run through the book and help us to understand the origin of the alchemy of money and banking and how we can reduce or even eliminate that alchemy.

When I left the Bank of England in 2013, I decided to explore the flaws in both the theory and practice of money and banking, and how they relate to the economy as a whole. I was led deeper and deeper into basic questions about economics. I came to believe that fundamental changes are needed in the way we think about macroeconomics, as well as in the way central banks manage their economies.

A key role of a market economy is to link the present and the future, and to coordinate decisions about spending and production not only today but tomorrow and in the years thereafter. Families will save if the interest rate is high enough to overcome their natural impatience to spend today rather than tomorrow. Companies will invest in productive capital if the prospective rate of return exceeds the cost of attracting finance. And economic growth requires saving and investment to add to the stock of productive capital and so increase the potential output of the economy in the future. In a healthy growing economy all three rates, the interest rate on saving, the rate of return on investment, and the rate of growth are well above zero. Today, however, we are stuck with extraordinarily low interest rates, which discourage saving, the source of future demand and, if maintained indefinitely, will pull down rates of return on investment, diverting resources into unprofitable projects. Both effects will drag down future growth rates. We are already some way down that road. It seems that our market economy today is not providing an effective link between the present and the future.

I believe there are two reasons for this failure. First, there is an inherent problem in linking a known present with an unknowable future. Radical uncertainty presents a market economy with an impossible challenge: how are we to create markets in goods and services that we cannot at present imagine? Money and banking are part of the response of a market economy to that challenge. Second, the conventional wisdom of economists about how governments and central banks should stabilise the economy gives insufficient weight to the importance of radical uncertainty in generating an occasional large disequilibrium. Crises do not come out of thin air but are the result of the unavoidable mistakes made by people struggling to cope with an unknowable future. Both issues have profound implications and will be explored at greater length in subsequent chapters.

Inevitably, my views reflect the two halves of my career. The first was as an academic, a student in Cambridge, England, and a Kennedy scholar at Harvard in the other Cambridge, followed by teaching positions on both sides of the Atlantic. I experienced at first hand the evolution of macroeconomics from literary exposition where propositions seemed plausible but never completely convincing, into a mathematical discipline where propositions were logically convincing but never completely plausible. Only during the crisis of 2007-9 did I look back and understand the nature of the tensions between the surviving disciples of John Maynard Keynes who taught me in the 1960s, primarily Richard Kahn and Joan Robinson, and the influx of mathematicians and scientists into the subject that fuelled the rapid expansion of university economics departments in the same period. The old school ‘Keynesians’ were mistaken in their view that all wisdom was to be found in the work of one great man, and as a result their influence waned. The new arrivals brought mathematical discipline to a subject that prided itself on its rigour. But the informal analysis of disequilibrium of economies, radical uncertainty, and trust as a solution to the prisoner’s dilemma was lost in the enthusiasm for the idea that rational individuals would lead the economy to an efficient equilibrium. It is time to take those concepts more seriously.

The second half of my career comprised twenty-two years at the Bank of England, the oldest continuously functioning central bank in the world, from 1991 to 2013, as Chief Economist, Deputy Governor and then Governor. That certainly gave me a chance to see how money could be managed. I learned, and argued publicly, that this is done best not by relying on gifted individuals to weave their magic, but by designing and building institutions that can be run by people who are merely professionally competent. Of course individuals matter and can make a difference, especially in a crisis. But the power of markets, the expression of hundreds of thousands of investors around the world, is a match for any individual, central banker or politician, who fancies his ability to resist economic arithmetic. As one of President Clinton's advisers remarked, 'I used to think if there was reincarnation, I wanted to come back as the president or the Pope or a .400 baseball hitter. But now I want to come back as the bond market. You can intimidate everybody.' Nothing has diminished the force of that remark since it was made over twenty years ago.

In 2012, I gave the first radio broadcast in peacetime by a Governor of the Bank of England since Montagu Norman delivered a talk on the BBC in March 1939, only months before the outbreak of the Second World War. As Norman left Broadcasting House, he was mobbed by British Social Credits Party demonstrators carrying flags and slogan-boards bearing the words: CONSCRIPT THE BANKERS FIRST! Feelings also ran high in 2012. The consequences of the events of 2007-9 are still unfolding, and anger about their effects on ordinary citizens is not diminishing. That disaster was a long time in the making, and will be just as long in the resolving.

But the cost of lost output and employment from our continuing failure to manage money and banking and prevent crises is too high for us to wait for another crisis to occur before we act to protect future generations.

Charles Dickens’ novel A Tale of Two Cities has not only a very famous opening sentence but an equally famous closing sentence. As Sydney Carton sacrifices himself to the guillotine in the place of another, he reflects: ‘It is a far, far better thing that I do, than I have ever done …’ If we can find a way to end the alchemy of the system of money and banking we have inherited then, at least in the sphere of economics, it will indeed be a far, far better thing than we have ever done.

One

THE GOOD, THE BAD AND THE UGLY

‘I think that Capitalism, wisely managed, can probably be made more efficient for attaining economic ends than any alternative system yet in sight.’ John Maynard Keynes, The End of Laissez-faire (1926)

‘The experience of being disastrously wrong is salutary; no economist should be spared it, and few are.’ John Kenneth Galbraith, A Life in Our Times (1982)

History is what happened before you were born. That is why it is so hard to learn lessons from history: the mistakes were made by the previous generation. As a student in the 1960s, I knew why the 1930s were such a bad time. Outdated economic ideas guided the decisions of governments and central banks, while the key individuals were revealed in contemporary photographs as fuddy-duddies who wore whiskers and hats and were ignorant of modern economics. A younger generation, in academia and government, trained in modern economics, would ensure that the Great Depression of the 1930s would never be repeated.

In the 1960s, everything seemed possible. Old ideas and conventions were jettisoned, and a new world beckoned. In economics, an influx of mathematicians, engineers and physicists brought a new scientific approach to what the nineteenth-century philosopher and writer Thomas Carlyle christened the ‘dismal science’. It promised not just a better understanding of our economy, but an improved economic performance.

The subsequent fifty years were a mixed experience. Over that period, national income in the advanced world more than doubled, and in the so-called developing world hundreds of millions of people were lifted out of extreme poverty. And yet runaway inflation in the 1970s was followed in 2007-9 by the biggest financial crisis the world has ever seen. How do we make sense of it all? Was the post-war period a success or a failure?

The origins of economic growth

The history of capitalism is one of growth and rising living standards interrupted by financial crises, most of which have emanated from our mismanagement of money and banking. A Chinese colleague, remarking that the West had not yet got the hang of money and banking, spoke an important, indeed profound, truth.

The financial crisis of 2007-9 (hereafter ‘the crisis’) was not the fault of particular individuals or economic policies. Rather, it was merely the latest manifestation of our collective failure to manage the relationship between finance, the structure of money and banking, and a capitalist system.

Failure to appreciate this explains why most accounts of the crisis focus on the symptoms and not the underlying causes of what went wrong. The fact that we have not yet got the hang of it does not mean that a capitalist economy is doomed to instability and failure. It means that we need to think harder about how to make it work.

Over many years, a capitalist economy has proved the most successful route to escape poverty and achieve prosperity.

Capitalism, as I use the term here, is an economic system in which private owners of capital hire wage earners to work in their businesses and pay for investment by raising finance from banks and financial markets.

The West has built the institutions to support a capitalist system: the rule of law to enforce private contracts and protect property rights, intellectual freedom to innovate and publish new ideas, anti-trust regulation to promote competition and break up monopolies, and collectively financed services and networks, such as education, water, electricity and telecommunications, which provide the infrastructure to support a thriving market economy. Those institutions create a balance between freedom and restraint, and between unfettered competition and regulation. It is a subtle balance that has emerged and evolved over time. And it has transformed our standard of living. Growth at a rate of 2.5 per cent a year, close to the average experienced in North America and Europe since the Second World War, raises real total national income twelvefold over one century, a truly revolutionary outcome.
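
As a quick check on the arithmetic in that passage, here is a minimal Python sketch of compound growth; the 2.5 per cent rate and the one-century horizon are the figures quoted above, everything else is purely illustrative:

# Compound growth: real national income after t years, relative to today,
# growing at a constant annual rate r.
rate = 0.025        # 2.5 per cent a year, the figure quoted above
years = 100         # one century
multiple = (1 + rate) ** years
print(f"Income multiple after {years} years: {multiple:.1f}x")
# Prints about 11.8x, i.e. roughly the twelvefold increase cited in the text.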

Over the past two centuries, we have come to take economic growth for granted. Writing in the middle of that extraordinary period of economic change in the mid-eighteenth century, the Scottish philosopher and political economist Adam Smith identified the source of the breakout from relative economic stagnation (an era during which productivity, output per head, was broadly constant and any increase resulted from discoveries of new land or other natural resources) to a prolonged period of continuous growth of productivity: specialisation. It was possible for individuals to specialise in particular tasks, the division of labour, and by working with capital equipment to raise their productivity by many times the level achieved by a jack-of-all-trades. To illustrate his argument, Smith employed his now famous example of a pin factory:

A workman could scarce, perhaps, with his utmost industry, make one pin in a day, and certainly could not make twenty. But in the way in which this business is now carried on, not only the whole work is a peculiar trade, but it is divided into a number of branches. One man draws out the wire, another straights it, a third cuts it, a fourth points it, a fifth grinds it at the top for receiving the head … The important business of making a pin is, in this manner, divided into about eighteen distinct operations, which, in some manufactories, are all performed by distinct hands.

The factory Smith was describing employed ten men and made over 48,000 pins in a day.
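
A back-of-the-envelope check of Smith’s figures, as a minimal Python sketch (the pin counts and the ten workmen are taken from the passage; the comparison itself is only illustrative):

# Smith's pin factory: output per worker with and without the division of labour.
pins_per_day_factory = 48_000   # ten men working together, as quoted above
workers = 10
pins_per_day_alone = 20         # Smith's upper bound for a lone workman

per_worker = pins_per_day_factory / workers
print(f"Pins per worker per day in the factory: {per_worker:,.0f}")
print(f"Productivity multiple versus working alone: {per_worker / pins_per_day_alone:.0f}x")
# About 4,800 pins per worker, a multiple of roughly 240.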

The application of technical knowhow to more and more tasks increased specialisation and raised productivity. Specialisation went hand in hand with an even greater need for both a means to exchange the fruits of one’s labour for an ever wider variety of goods produced by other specialists (money), and a way to finance the purchase of the capital equipment that made specialisation possible (banks).

As each person in the workforce became more specialised, more machinery and capital investment was required to support them, and the role of money and banks increased. After a millennium of roughly constant output per person, from the middle of the eighteenth century productivity started, slowly but surely, to rise. Capitalism was, quite literally, producing the goods. Historians will continue to debate why the Industrial Revolution occurred in Britain: population growth, plentiful supplies of coal and iron, supportive institutions, religious beliefs and other factors all feature in recent accounts.

But the evolution of money and banking was a necessary condition for the Revolution to take off.

Almost a century later, with the experience of industrialisation and a massive shift of labour from the land to urban factories, socialist writers saw things differently. For Karl Marx and Friedrich Engels the future was clear. Capitalism was a temporary staging post along the journey from feudalism to socialism. In their Communist Manifesto of 1848, they put forward their idea of ‘scientific socialism’ with its deterministic view that capitalism would ultimately collapse and be replaced by socialism or communism. Later, in the first volume of Das Kapital (1867), Marx elaborated (at great length) on this thesis and predicted that the owners of capital would become ever richer while excessive capital accumulation would lead to a falling rate of profit, reducing the incentive to invest and leaving the working class immersed in misery. The British industrial working class in the nineteenth century did indeed suffer miserable working conditions, as graphically described by Charles Dickens in his novels. But no sooner had the ink dried on Marx’s famous work than the British economy entered a long period of rising real wages (money wages adjusted for the cost of living). Even the two world wars and the intervening Great Depression in the 1930s could not halt rising productivity and real wages, and broadly stable rates of profit. Economic growth and improving living standards became the norm.

But if capitalism did not collapse under the weight of its own internal contradictions, neither did it provide economic security. During the twentieth century, the extremes of hyperinflations and depressions eroded both living standards and the accumulated wealth of citizens in many capitalist economies, especially during the Great Depression in the 1930s, when mass unemployment sparked renewed interest in the possibilities of communism and central planning, especially in Europe. The British economist John Maynard Keynes promoted the idea that government intervention to bolster total spending in the economy could restore full employment, without the need to resort to fully fledged socialism.

After the Second World War, there was a widespread belief that government planning had won the war and could be the means to win the peace. In Britain, as late as 1964 the newly elected Labour government announced a ‘National Plan’. Inspired by a rather naive version of Keynesian ideas, it focused on policies to boost the demand for goods and services rather than the ability of the economy to produce them. As the former outstripped the latter, the result was inflation. On the other side of the Atlantic, the growing cost of the Vietnam War in the late 1960s also led to higher inflation.

Rising inflation put pressure on the internationally agreed framework within which countries had traded with each other since the Bretton Woods Agreement of 1944, named after the conference held in the New Hampshire town in July of that year. Designed to allow a war-damaged Europe slowly to rebuild its economy and reintegrate into the world trading system, the agreement created an international monetary system under which countries set their own interest rates but fixed their exchange rates among themselves. For this to be possible, movements of capital between countries had to be severely restricted; otherwise capital would move to where interest rates were highest, making it impossible to maintain either differences in those rates or fixed exchange rates. Exchange controls were ubiquitous, and countries imposed limits on investments in foreign currency. As a student, I remember that no British traveller in the 1960s could take abroad with them more than £50 a year to spend.

The new international institutions, the International Monetary Fund (IMF) and the World Bank, would use funds provided by their members to finance temporary shortages of foreign currency and the investment needed to replace the factories and infrastructure destroyed during the Second World War. Implicit in this framework was the belief that countries would have similar and low rates of inflation. Any loss of competitiveness in one country, as a result of higher inflation than in its trading partners, was assumed to be temporary and would be met by a deflationary policy to restore competitiveness while borrowing from the IMF to finance a short-term trade deficit. But in the late 1960s differences in inflation across countries, especially between the United States and Germany, appeared to be more than temporary, and led to the breakdown of the Bretton Woods system in 1970-1. By the early 1970s, the major economies had moved to a system of ‘floating’ exchange rates, in which currency values are determined by private sector supply and demand in the markets for foreign exchange.

Inevitably, the early days of floating exchange rates reduced the discipline on countries to pursue low inflation. When the two oil shocks of the 1970s (in 1973, when an embargo by Arab countries led to a quadrupling of prices, and in 1979, when prices doubled after disruption to supply following the Iranian Revolution) hit the western world, the result was the Great Inflation, with annual inflation reaching 13 per cent in the United States and 27 per cent in the United Kingdom.

Economic experiments

From the late 1970s onwards, the western world then embarked on what we can now see were three bold experiments to manage money, exchange rates and the banking system better. The first was to give central banks much greater independence in order to bring down and stabilise inflation, subsequently enshrined in the policy of inflation targeting: the goal of national price stability. The second was to allow capital to move freely between countries and encourage a shift to fixed exchange rates both within Europe, culminating in the creation of a monetary union, and in a substantial proportion of the most rapidly growing part of the world economy, particularly China, which fixed its exchange rates against the US dollar: the goal of exchange rate stability. And the third experiment was to remove regulations limiting the activities of the banking and financial system to promote competition and allow banks both to diversify into new products and regions and to expand in size, with the aim of bringing stability to a banking system often threatened in the past by risks that were concentrated either geographically or by line of business: the goal of financial stability.

These three simultaneous experiments might now be best described as having three consequences: the Good, the Bad and the Ugly. The Good was a period between about 1990 and 2007 of unprecedented stability of both output and inflation: the Great Stability. Monetary policy around the world changed radically. Inflation targeting and central bank independence spread to more than thirty countries. And there were significant changes in the dynamics of inflation, which on average became markedly lower, less variable and less persistent.

The Bad was the rise in debt levels. Eliminating exchange rate flexibility in Europe and the emerging markets led to growing trade surpluses and deficits. Some countries saved a great deal while others had to borrow to finance their external deficit. The willingness of the former to save outweighed the willingness of the latter to spend, and so long-term interest rates in the integrated world capital market began to fall. The price of an asset, whether a house, shares in a company or any other claim on the future, is the value today of future expected returns (rents, the value of housing services from living in your own home, or dividends). To calculate that price one must convert future into current values by discounting them at an interest rate. The immediate effect of a fall in interest rates is to raise the prices of assets across the board. So as long-term interest rates in the world fell, the value of assets, especially of houses, rose. And as the values of assets increased, so did the amounts that had to be borrowed to enable people to buy them. Between 1986 and 2006, household debt rose from just under 70 per cent of total household income to almost 120 per cent in the United States and from 90 per cent to around 140 per cent in the United Kingdom.
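
The discounting mechanism described above can be made concrete with a small sketch; the numbers here (a fixed annual rent and two hypothetical long-term interest rates) are purely illustrative and not from the text, but the direction of the effect is the one the passage describes:

# Present value of an asset paying a constant annual return forever (a perpetuity):
# price = annual_return / discount_rate. This is the simplest case of discounting
# future returns at an interest rate to obtain today's asset price.
annual_rent = 10_000                     # illustrative annual return

for discount_rate in (0.05, 0.02):       # hypothetical long-term interest rates
    price = annual_rent / discount_rate
    print(f"At a rate of {discount_rate:.0%} the asset is worth {price:,.0f}")
# Cutting the rate from 5% to 2% multiplies the price by 2.5: falling world
# interest rates raise house and share prices, and with them the debt taken
# on to buy those assets.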

The Ugly was the development of an extremely fragile banking system. In the USA, Federal banking regulators’ increasingly lax interpretation of the provisions to separate commercial and investment banking introduced in the 1933 Banking Act (often known as Glass-Steagall, after the senator and representative respectively who led the passage of the legislation) reached its inevitable conclusion with the Gramm-Leach-Bliley Act of 1999, which swept away any remaining restrictions on the activities of banks. In the UK, the so-called Big Bang of 1986, which started as a measure to introduce competition into the Stock Exchange, led to takeovers of small stockbroking firms and mergers between commercial banks and securities houses. Banks diversified and expanded rapidly after deregulation. In continental Europe so-called universal banks had long been the norm. The assets of large international banks doubled in the five years before 2008. Trading of new and highly complex financial products among banks meant that they became so closely interconnected that a problem in one would spread rapidly to others, magnifying rather than spreading risk.

Banks relied less and less on their own resources to finance lending and became more and more dependent on borrowing. The equity capital of banks, the funds provided by the shareholders of the bank, accounted for a declining proportion of overall funding. Leverage, the ratio of total assets (or liabilities) to the equity capital of a bank, rose to extraordinary levels. On the eve of the crisis, the leverage ratio for many banks was 30 or more, and for some investment banks it was between 40 and 50. A few banks had ratios even higher than that. With a leverage ratio of even 25 it would take a fall of only 4 per cent in the average value of a bank’s assets to wipe out the whole of the shareholders’ equity and leave it unable to service its debts.
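
The leverage arithmetic in that paragraph is easy to verify; a minimal sketch (only the leverage ratios come from the text, the function itself is just for illustration):

# Leverage = total assets / equity, so equity is (1 / leverage) of assets and a
# fall in asset values of that same fraction wipes out the shareholders' equity.
def loss_that_wipes_out_equity(leverage: float) -> float:
    """Percentage fall in asset values that exhausts a bank's equity."""
    return 100.0 / leverage

for leverage in (25, 30, 40, 50):        # ratios cited above for banks before 2008
    loss = loss_that_wipes_out_equity(leverage)
    print(f"Leverage {leverage}: a {loss:.1f}% fall in asset values wipes out the equity")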

By 2008, the Ugly led the Bad to overwhelm the Good. The crisis, one might say catastrophe, that began to unfold under the gaze of a disbelieving world in 2007 was the failure of all three experiments. Greater stability of output and inflation, although desirable in itself, concealed the build-up of a major disequilibrium in the composition of spending. Some countries were saving too little and borrowing too much to be able to sustain their path of spending in the future, while others saved and lent so much that their consumption was pushed below a sustainable path. Total saving in the world was so high that interest rates, after allowing for inflation, fell to levels incompatible in the long run with a profitable growing market economy. Falling interest rates led to rising asset values and increases in the debt taken out against those more valuable assets. Fixed exchange rates exacerbated the burden of the debts, and in Europe the creation of monetary union in 1999 sapped the strength of many of its economies, as they became increasingly uncompetitive. Large, highly leveraged banks proved unstable and were vulnerable to even a modest loss of confidence, resulting in contagion to other banks and the collapse of the system in 2008.

At their outset the ill-fated nature of the three experiments was not yet visible. On the contrary, during the 1990s the elimination of high and variable inflation, which had undermined market economies in the 1970s, led to a welcome period of macroeconomic stability. The Great Stability, or the Great Moderation as it was dubbed in the United States, was seen, as in many ways it was, as a success for monetary policy. But it was unsustainable. Policy-makers were conscious of problems inherent in the first two experiments, but seemed powerless to do anything about them. At international gatherings, such as those of the IMF, policy-makers would wring their hands about the ‘global imbalances’ but no one country had any incentive to do anything about it. If a country had, on its own, tried to swim against the tide of falling interest rates, it would have experienced an economic slowdown and rising unemployment without any material impact on either the global economy or the banking system. Even then the prisoner’s dilemma was beginning to rear its ugly head.

Nor was it obvious how the unsustainable position of the world economy would come to an end. I remember attending a seminar of economists and policy-makers at the IMF as early as 2002 where the consensus was that there would eventually be a sharp fall in the value of the US dollar, which would produce a change in spending patterns. But long before that could happen, the third experiment ended with the banking crisis of September and October 2008. The shock that some of the biggest and most successful commercial banks in North America and Europe either failed, or were seriously crippled, led to a collapse of confidence which produced the largest fall in world trade since the 1930s. Something had gone seriously wrong.

Opinions differ as to the cause of the crisis. Some see it as a financial panic in which fundamentally sound financial institutions were left short of cash as confidence in the credit-worthiness of banks suddenly changed and professional investors stopped lending to them, a liquidity crisis. Others see it as the inevitable outcome of bad lending decisions by banks, a solvency crisis, in which the true value of banks’ assets had fallen by enough to wipe out most of their equity capital, meaning that they might be unable to repay their debts. But almost all accounts of the recent crisis are about the symptoms, the rise and fall of housing markets, the explosion of debt and the excesses of the banking system rather than the underlying causes of the events that overwhelmed the economies of the industrialised world in 2008. Some even imagine that the crisis was solely an affair of the US financial sector. But unless the events of 2008 are seen in their global economic context, it is hard to make sense of what happened and of the deeper malaise in the world economy.

The story of what happened can be explained in little more than a few pages: everything you need to know but were afraid to ask about the causes of the recent crisis. So here goes.

The story of the crisis

By the start of the twenty-first century it seemed that economic prosperity and democracy went hand in hand. Modern capitalism spawned growing prosperity based on growing trade, free markets and competition, and global banks. In 2008 the system collapsed. To understand why the crisis was so big, and came as such a surprise, we should start at the key turning point, the fall of the Berlin Wall in 1989. At the time it was thought to represent the end of communism, indeed the end of the appeal of socialism and central planning.

For some it was the end of history. For most, it represented a victory for free market economics. Contrary to the prediction of Marx, capitalism had displaced communism. Yet who would have believed that the fall of the Wall was not just the end of communism but the beginning of the biggest crisis in capitalism since the Great Depression?

What has happened over the past quarter of a century to bring about this remarkable change of fortune in the position of capitalist economies?

After the demise of the socialist model of a planned economy, China, countries of the former Soviet Union and India embraced the international trading system, adding millions of workers each year to the pool of labour around the world producing tradeable, especially manufactured, goods. In China alone, over 70 million manufacturing jobs were created during the twenty-first century, far exceeding the 42 million working in manufacturing in 2012 in the United States and Europe combined. The pool of labour supplying the world trading system more than trebled in size. Advanced economies benefited from an influx of cheap consumer goods at the expense of employment in the manufacturing sector.

The aim of the emerging economies was to follow Japan and Korea in pursuing an export-led growth strategy. To stimulate exports, their exchange rates were held down by fixing them at a low level against the US dollar. The strategy worked, especially in the case of China. Its share in world exports rose from 2 per cent to 12 per cent between 1990 and 2013. China and other Asian economies ran large trade surpluses. In other words, they were producing more than they were spending and saving more than they were investing at home. The desire to save was very strong. In the absence of a social safety net, households in China chose to save large proportions of their income to provide self-insurance in the event of unemployment or ill-health, and to finance retirement consumption. Such a high level of saving was exacerbated by the policy from 1980 of limiting most families to one child, making it difficult for parents to rely on their children to provide for them in retirement.

Asian economies in general also saved more in order to accumulate large holdings of dollars as insurance in case their banking system ran short of foreign currency, as happened to Korea and other countries in the Asian financial crisis of the 1990s.

*

from

The End of Alchemy: Money, Banking and the Future of the Global Economy

by Mervyn King

get it at Amazon.com

Depressive Realism. Interdisciplinary perspectives – Colin Feltham.

Depressive Realism argues that people with mild-to-moderate depression have a more accurate perception of reality than nondepressives.

This book challenges the tacit hegemony of contemporary positive thinking, as well as the standard assumption in cognitive behavioural therapy that depressed individuals must have cognitive distortions.

The kind of world we live in, and that we are, cyclically determines how we feel and think. Some of us perceive and construe the world in dismal terms and believe our construal to be truer than competing accounts. Depending on what the glass is half-full of, the Depressive Realist may regard it as worthless, tasteless, poisonous or ultimately futile to drink.

I do not mean to say that people who experience clinical depression should not have therapy if they wish to, nor even that it does not sometimes help. Rather, I believe the assumption should not be made that depressive or negative views about life and experience necessarily correlate with psychological illness.

Depressive Realism seriously questions the standard assumption in cognitive behaviour therapy that depressed individuals must have cognitive distortions, and indeed reverses this to ask whether DRs might have a more objective grasp of reality than others, and a stubborn refusal to embrace illusion.

I argue that human life contains many glaringly tragic and depressing components and the denial or minimisation of these adds yet another level of depression.

Depressive realism is a worldview of human existence that is essentially negative, and which challenges assumptions about the value of life and the institutions claiming to answer life’s problems. Drawing from central observations from various disciplines, this book argues that a radical honesty about human suffering might initiate wholly new ways of thinking, in everyday life and in clinical practice for mental health, as well as in academia.

Divided into sections that reflect depressive realism as a worldview spanning all academic disciplines, chapters provide examples from psychology, psychotherapy, philosophy and more to suggest ways in which depressive realism can critique each discipline and academia overall. This book challenges the tacit hegemony of contemporary positive thinking, as well as the standard assumption in cognitive behavioural therapy that depressed individuals must have cognitive distortions. It also appeals to the utility of depressive realism for its insights, its pursuit of truth, as well as its emphasis on the importance of learning from negativity and failure. Arguments against depressive realism are also explored.

This book makes an important contribution to our understanding of depressive realism within an interdisciplinary context. It will be of key interest to academics, researchers and postgraduates in the fields of psychology, mental health, psychotherapy, history and philosophy. It will also be of great interest to psychologists, psychotherapists and counsellors.

Colin Feltham is Emeritus Professor of Critical Counselling Studies at Sheffield Hallam University. He is also External Associate Professor of Humanistic Psychology at the University of Southern Denmark.

Introduction

One could declare this to be simply a book about pessimism but that term would be inaccurate and insufficient. A non-verbal shortcut into the subject could be had by listening to Tears for Fears’ Mad World or Dinah Washington’s This Bitter Earth, or perhaps just by reading today’s newspaper. Depressive realism is the term used throughout this book but it will often be abbreviated to DR for ease of reading, referring to the negative worldview and also to anyone subscribing to this worldview (a DR, or DRs). DRs themselves may regard the ‘depressive’ part of the label as gratuitous, thinking their worldview to be simply realism just as Buddhism holds dukkha to be a fact of life.

Initially, it may seem that this book has a traditional mental health or psychological focus, but it draws from a range of interdisciplinary sources, is pertinent to diverse contexts and hopefully of interest to readers in the fields of philosophical anthropology, philosophy of mental health and existentialism and psychotherapy. I imagine it may be of negative, argumentative interest to some theologians, anthropologists, psychologists, social scientists and related lay readers.

Although more implicitly than explicitly, the message running throughout the book is that the kind of world we live in, and that we are, cyclically determines how we feel and think. We will disagree about what kind of world it is, but I hope we might agree that the totality of our history and surroundings has much more impact on us than simply what goes round in our heads.

Depressive realism can be defined, described and contextualised in several ways. Its first use appears to have been by Alloy and Abramson (1979) in a paper describing a psychology experiment comparing the judgements of mildly depressed and non-depressed people. It is necessary to make some clarification at the outset about ‘clinical depression’. I do not believe that depression is a desirable state, or that those who are severely depressed are more accurate in their evaluations of life than others (Carson et al., 2010). This is not a book advocating suicide as a solution to life’s difficulties, nor am I advocating voluntary human extinction, nor is the text intended to promote hatred of humanity. The DR discussed here should not be mistaken for a consensual, life-hating suicide cult even if it includes respect for the challenging views of Benatar (2006) and Perry (2014). Nor can one assume that all ‘depressives’ necessarily have permanently and identically pessimistic worldviews, nor indeed that the lines drawn by the psychological professionals between all such mood states are accurate. But one can ask that the majority worldview that ‘life is alright’ be set against the DR view that life contains arrestingly negative features (Ligotti, 2010).

The strictly psychological use of DR has now expanded into the world of literary criticism, for example, in Jeffery’s (2011) text on Michel Houellebecq. It is this second, less technical sense of DR on which I focus mainly in this book, that is, on the way in which some of us perceive and construe the world in dismal terms and believe our construal to be truer than competing accounts. Inevitably, within this topic we find ourselves involved in rather tedious realism wars or epistemological battles between yea-sayers, nay-sayers and those who fantasise that objective evidence exists that can end the wars.

Insofar as any term includes ‘realism’, we can say it has a philosophical identity. In the case of DR, the philosophical pessimism most closely associated with Schopenhauer may be its natural home. Existentialism is often considered a negative philosophy, and sometimes wholly nihilistic, but in fact it includes or allows for several varieties of worldview. DR receives the same kind of criticism as existentialism often has, which is that it is less an explicit philosophy than a mood, or a rather vague expression of the personalities, projections and opinions of certain writers or artists.

Depressive realism as it is translated from psychology to philosophy can be said to refer to the belief that phenomena are accurately perceived as having negative weighting. Put differently, we can say that ‘the truth about life’ always turns out to be more negative than positive, and hence any sustained truth-seeking must eventually find itself mired in unpleasant discoveries.

We then come to synonyms or closely related terms and ideas. These include, in alphabetical order, absurdism, anthropathology, antihumanism, cynicism, depressionism, disenchantment, emptiness, existential anxiety and depression, futilitarianism, meaninglessness, melancholia, misanthropy, miserabilism, nihilism, pessimism, radical scepticism, rejectionism, tedium vitae, tragedy, tragicomedy or Weltschmerz. We could add saturninity, melancology and other terms if we wanted to risk babellian excess, or flag up James Joyce’s ‘unhappitants of the earth’ as a suitable descriptor for DRs. We could stray into Buddhist territory and call up the concepts of samsara and dukkha. I do not claim that such terms are synonymous or that those who would sign up to DR espouse them all but they are closely associated, unless you are a semantically obsessive philosopher.

Dienstag (2006) denies any necessary commonality between different intellectual expressions of pessimism, and Weller (2011) demonstrates a connoisseur-ship of nuances of nihilism. Kushlev et al. (2015) point out that sadness and unhappiness are not identical. But Daniel (2013) stresses the assemblage of melancholy, and Bowring (2008) provides a very useful concise history, geography and semantics of melancholy.

Here is one simple illustration of how the shades of DR blend into one, not in any linear progression but pseudo-cyclically. The DR often experiences the weariness of one who has seen it all before, is bored and has had enough; the melancholy of the one who feels acutely the elusiveness and illusion of happiness, the impermanence of life and always smells death in the air; the pessimism of one whose prophetic intuition knows that all proposed quasi-novel solutions must eventually fade to zero; the nihilism of one whose folly-spotting and illusion-sensing radar never rests; the depression of one whose black dog was always there, returns from time to time and may grow a little blacker in old age; the sorrowful incredulity at the gullible credulity of hope-addicts and faith-dealers; the deep sadness of one who travels extensively and meets many people whose national and personal suffering is written all over their faces; and the bleakly aloof fundamentalism of one who believes his epistemology to be superior to other, always shallower accounts. In some cases an extreme form of DR may tip into contemptuous or active nihilism, for example, DeCasseres’s (2013) ‘baleful vision’.

But DR need not be, seldom is, a state of maximum or unchanging bleakness or sheer unhappiness, and many DRs like Cioran, Beckett and Zapffe could be very funny, as is Woody Allen. But grey-skies thinking is the DR’s natural default position and ambivalence his highest potential.

A broad, working definition of depressive realism runs as follows: depressive realism is a worldview of human existence as essentially negative. To qualify this, we have to say that some DRs regard the ‘world’ (everything from the cosmos to everyday living) as wholly negative, as a burdensome absurdity, while some limit its negativity to human experience, or to certain aspects or eras of humanity or to sensate life. ‘Existence is no good at all’ probably covers the first outlook (see Ligotti, 2010), and ‘existence contains much more bad than good’ the second (Benatar, 2006). We might also speak of dogmatic DR and a looser, attitudinal DR that seeks dialogue.

Critics of DR, of whom there are many as we shall see, often joke lamely about the perceived glass half empty mentality underlying this view, and tirelessly point out the cliché that a glass half empty is half-full. DR may not deny that life includes or seems to include some positive values, sometimes, but it is founded on the belief, the assertion, that it is overall more negative than positive. And, depending on what the glass is half-full of, the DR may regard it as worthless, tasteless, poisonous or ultimately futile to drink.

The succinct ingredients of DR are perhaps as follows. The human species is overdeveloped into two strands, the clever and inventive, and the destructive and distressing, all stemming from evolutionarily accidental surplus consciousness. We have developed to the point of outgrowing the once necessary God myth, confronting the accidental origins of everything and realising that our individual lives end completely at death. We have to live and grow old with these sad and stubborn facts. We must sometimes look at the vast night sky and see our diminutive place reflected in it, and we realise that our species’ existence itself is freakishly limited and all our earthly purposes are ultimately for nought.

We can never organise optimal living conditions for ourselves, and we realise that our complex societies contain abundant absurdities. World population increases, information overload increases and new burdens outweigh any benefits of material progress however clever and inventive we are. We claim to value truth but banish these facts from our consciousness by all manner of mendacious, tortuous mental and behavioural devices. The majority somehow either denies all of the above or manages not to think about it. But it unconsciously nags at even the most religious and optimistic, and the compulsion to deny it drives fundamentalist religious revival, capitalist growth, war and mental illness.

Depressive realism may generate a range of attitudes from decisive suicidality or leaden apathy through to cheerful cynicism, eloquent disenchantment and compassionate or violent nihilism. We can argue that everyone has a worldview whether implicit or explicit, unconscious or conscious, inarticulate or eloquent. Wilhelm von Humboldt is credited with the origins of the concept, using the term Weltansicht (world meaning), with Weltanschauung arriving a little later with Kant and Hegel.

DR may contain idiosyncratic affects, perceptions and an overall worldview, the scale of negativity of which fluctuates. It may be embodied at an early age or emerge later with ageing and upon reflection, or after suffering a so-called ‘nadir experience’, and may even be overturned, although this event is probably rare. Often, we cannot help but see the world in the way we happen to see it, whether pessimistically or optimistically, even if our moods sometimes fluctuate upwards or downwards. Typically, no matter how broadminded or open to argument we consider ourselves to be, we all feel that we are right. The DR certainly fits this position, often regarding himself as a relentlessly sceptical truth-seeker where others buy into complacent thought and standard social illusions.

The person who has no particular take on existence, who genuinely takes each day or moment as it comes, is arguably rare.

We should ask what it is that is depressed in DR and what it is to which the realism points. Melancholy was once the more common term, depression simply meaning something being pushed downwards, as in dejected spirits. This downwardness places depression in line etymologically with the downwardness of pessimism, not to mention countless metaphors such as Bunyan’s Slough of Despond.

From the 17th century depression gained its clinical identity but the roots lie in much earlier humoral theory. Whichever metaphor is employed, however, we might ask why ‘upwards’ is implied to be the norm, and in what sense ‘downwards’ should be applied to ‘unhappy consciousness’. Heaven has always been located upwards and hell downwards. More accurate metaphors for depression might involve inward or horizontal states. But this would still leave the question of why outwardness and verticality should be regarded as more normal, or the view of the depressed, melancholic, downward, inward or horizontal human being as less acceptable or normal than its opposite, unless on purely statistical grounds.

I think it is fair and proper to make my own position as transparently clear as possible. In spite of critiques of writing from ‘the view from nowhere’, most academic writing persists in a quasi-objective style resting on the suspiciously erased person of the author. Like most DRs, my personality and outlook have always included a significantly depressive or negative component. I was once diagnosed in my early 30s in a private psychotherapy clinic as having chronic mild depression. I have often been the butt of teasing and called an Eeyore or cynic. I am an atheist.

I have had a fair amount of therapy during my life but in looking back I have to say that:

a. none of that therapy has fundamentally changed the way I experience life, and

b. my mature belief is that I was always this way, that is, someone with a ‘depressive outlook’.

Only quite recently have I come to regard this as similar to the claim made by most gay people that they were born gay, or have been gay for as long as they remember, that they do not think of themselves in pathological terms and they do not believe homosexuality to be a legitimate object for therapeutic change.

I do not mean to say that people who experience clinical depression should not have therapy if they wish to, nor even that it does not sometimes help. Rather, I believe the assumption should not be made that depressive or negative views about life and experience necessarily correlate with psychological illness.

Since I have worked in the counselling and psychotherapy field for about 35 years, I have some explaining to do, which appears mainly in Chapter 6.

Appearing in the series Explorations in Mental Health as this book does, I should like to give a brief sense of location here. In truth this is an interdisciplinary subject that by its nature has no exclusive home. On the other hand, given my academic background, there are some clear links with psychology, psychotherapy and counselling. On the question of mental health, the contribution of DR is to re-examine assumptions about ‘good’ mental health and in particular to challenge the standard pathological view of depression as sick, and the assumption that therapists have a clinical mandate to pronounce on everything with depressive or gloomy connotations.

The line between so-called existential anxiety and so-called death and health anxiety can be a fine one, and we should question the agonised revisions and diagnostic hyperinflation by the contributors to the DSM over such matters (APA, 2013; Frances, 2014).

DR seriously questions the standard assumption in cognitive behaviour therapy that depressed individuals must have cognitive distortions, and indeed reverses this to ask whether DRs might have a more objective grasp of reality than others, and a stubborn refusal to embrace illusion.

In conducting this challenge we are taken well beyond psychology into ontology, history, the philosophy of mental health and other disciplines. The mission of this book is hardly to revolutionise the field of mental health, but it is in part to reassess the link between perceived depression, pessimism and negative worldviews.

But a book of this kind emerges not only from a personal position and beliefs. I may experience my share of low mood, insomnia, conflict and death anxiety, but my views are also informed abundantly by wide reading, observations of everyday life and friends. Mirroring the ‘blind, pitiless indifference and cruelty of nature’ (Dawkins, 2001), I see around me a man in his 80s passing his days in the fog of Alzheimer’s, another in his 70s with Parkinson’s disease, a woman suffering from many sad medical after-effects of leg amputation, another woman suffering from menopausal mood swings, couples revealing the cracks in their allegedly smooth relationships, several young men struggling gloomily to find any fit between their personalities and the workplace, colleagues putting a brave face on amid insane institutional pressures and the list of merely first world suffering could go on and on.

The sources of this common brutalism are biological and social. The examples of suffering easily outnumber any clear examples of the standard optimistic depiction of happy humans, yet this latter narrative continues to assert itself, backed up by cheerful statistics and miserabilism-countering examples.

I argue that human life contains many glaringly tragic and depressing components and the denial or minimisation of these adds yet another level of depression.

The lead characters in DR will emerge during the book. It may be useful here, however, to mention those who feature prominently in the DR gallery. These include Gautama Buddha, Arthur Schopenhauer, Giacomo Leopardi, Philipp Mainländer, Thomas Hardy, Edgar Saltus, Sigmund Freud, Samuel Beckett, E.M. Cioran, Peter Wessel Zapffe, Thomas Ligotti, John Gray, David Benatar and Michel Houellebecq.

One of the admitted difficulties in such billing is that those still alive might well disown membership of this or any group. Another problem is who can really be excluded: for example, why not include Kierkegaard, Nietzsche, Dostoevsky, Kafka, Camus? As well as the so-called greats, we should pause to remember more minor writers, for example, the Scottish poet James Thomson (1834-82) whose The City of Dreadful Night captures perfectly many DR themes (see Chapter 4). Sloterdijk (1987) included in his similar ‘cabinet of cynics’ an idiosyncratic trawl from Diogenes to Heidegger; Feld’s (2011) ‘children of Saturn’ features Dante and Ficino.

In truth DRs may be scattered both interdisciplinarily and transhistorically (Breeze, 2014). To some extent questions of DR membership are addressed in the text, but it is true to say such discriminations are not my main focus.

This book is structured loosely by disciplines in order to demonstrate the many sources and themes involved. My treatment of these disciplines will not satisfy experts in those disciplines and must appear at times naïve, imprecise or inaccurate, but these fields impinge on us, claim to define how we live and suffer and what remedies might exist. In another kind of civilisation we might have no such epistemological divisions. I look at how these disciplines inform DR but also use DR as critical leverage to examine their shortcomings.

Hence, Chapter 1 excavates some of the relevant evolutionary and common historical themes.

Chapter 2 looks at some religious themes and the theologies explicating these, as well as the contemporary fascination with spirituality and its downsides.

In Chapter 3 I examine a number of philosophical themes connecting with DR.

Some examples in literature and film are analysed in Chapter 4.

Psychology comes into focus in Chapter 5, to be complemented and contrasted with psychotherapy and the psychological therapies in Chapter 6.

In Chapter 7 socio-political themes are scrutinised insofar as they illustrate DR.

I then move on to science, technology and the future in Chapter 8, again in order to depict the dialectic between these and DR.

The ‘lifespan and everyday life’ is the focus of Chapter 9, which takes a partial turn away from academic disciplines to the more experiential.

Arguments against DR, as comprehensive as I can make them in a concise form, comprise Chapter 10, while the final chapter envisages the possible utility of DR.

One of the many things DRs find depressing about the societies we live in is that those of us shaped ironically by twisted educational systems to think and write about such matters, and lucky to find a half-accommodating employment niche, are likely to be in or associated with academia. This institution has survived for many centuries and in spite of its elitist niche remains somewhat influential, although far less influential than its personnel imagine.

In its current form it is being ravaged by the so-called new public management but at the same time in its social science, arts and humanities departments is defiantly dominated by left-wing academics whose writing style is often highly symbolic, obfuscatory, arguably often meaningless (Sokal, 2009) and designed for coded communication with a tiny minority of the general population, that is, academic peers.

On the other hand, academia can also suffer from a kind of censorship-by-demand-for-evidence, meaning that common observation, subjectivity and anecdote are erased or downgraded and a statistics-inebriated tyranny reigns supreme. Once when presenting some of the themes in this book to an academic ‘research group’, I was told I had cherry-picked too many bad examples, as if my colleagues were all paragons of balanced argument and nothing short of watertight pseudo-objectivity could be tolerated: in my view this itself is an example of silencing the DR nihilism that threatens an uncritically ‘life is good’ assumption.

A dilemma facing anyone who hopes to capture the essence of depressive realism and the parrhesia within it concerns the style in which to write and the assumptions and allusions to make. Universities seem barely fit for purpose any longer, or their purpose is unclear and some have predicted their demise (Readings, 1997; Evans, 2005). This should not surprise us; on the contrary, we should learn to expect such decline as an inescapable part of the entropy of human institutions, but it is a current aspect of our depressing social landscape.

I have only partly followed the academic convention of obsessively citing evidence and precise sources of evidence. In some cases, where no references are given, my figures and examples derive from unattributed multiple internet sources; I do not necessarily make any claims to authority or accuracy, and the reader should check on sources if he or she has such a need. In many instances I use terms such as ‘many people believe’, which might irritate conventional social scientists. I also use anecdote, opinion and naturalistic observations fairly freely. Academic discourse is, I think, very similar to the ‘rhetoric’ exposed by Michelstaedter (2004), in contrast with the persuasion of personally earned insights and authentic observation, as Kierkegaard too would have recommended.

A confession. What appears above is what is expected of a writer, a logical outline, a promise of reading pleasures to come and of finding and offering meaning even in the teeth of meaninglessness (a trick accomplished by the sophistry of Critchley [1997], among other academic prestidigitators). As I moved from the publisher’s acceptance of my proposal to the task of actual composition I began to wonder if I could in fact do it. ‘Let’s do this thing’ is a common American expression of committed and energetic project initiation. As befits a text on depressive realism, the author is bedevilled by doubt: more of a Beckettian ‘is this thing even worth beginning?’ The topic is so massive that one is suffocated on all sides by the weight of precedents and related information, the beckoning nuances, the normative opposition to it and the hubris of attempting it. I anguished over the possibility of a subtitle, something like ‘perspectives on pointlessness’, that might convey a mixture of nihilism and humour. Such are our needs for and struggles with sublimation, and our neophilia, that it is tempting not to bother. However, here it is.

Chapter 1

Big history, anthropathology and depressive realism

Can we say there is something intrinsically fantastic (unlikely), admirable (beautifully complex) and simultaneously tragic (entropically doomed from the outset) about the universe? And about ourselves, the only self-conscious part of the universe as far as we know, struggling to make sense of our own existence, busily constructing and hoping for explanations even as we sail individually and collectively into oblivion? Was the being or something that came out of nothing ever a good thing (a random assertion of will in Schopenhauerian terms), a good thing for a while that then deteriorated, a good thing that has its ups and downs but will endure or a good thing that must sooner or later end? Or perhaps neither good nor bad?

Depressive realism looks not only to the distant future but into the deepest past, interpreting it as ultimately negatively toned.

It is quite possible and indeed common practice for depressive realists and others to explicate their accounts without recourse to history. It appears that much contemporary academic discourse, certainly in the social sciences, is tacitly structured abiologically and ahistorically, as if in spite of scientific accounts we have not yet accepted any more than creationists that we are blindly evolved and evolving beings. In other words, in spite of much hand-wringing, many maintain a resignedly agnogenic position as regards the origins of the human malaise: we do not and may never know the causes.

But we have not appeared from nowhere, we are not self-creating or God-created, we were not born as a species a few hundred or a few thousand years ago, we are not in any deep sense merely Plato’s heirs. Neither Marxist dialectical materialism nor Engels’ dialectics of nature capture the sheer temporal depth of evolution and its ultimate cosmogony (Shubin, 2014). Existence, beyond the animal drive to survive, is ateleological and unpromising. Religious and romantic teleologies largely avoid examination of our material roots and probable limits. From a certain DR perspective it is not only the future that has a dismal hue; an analysis of the deep past also yields much sorry material.

My preference is to begin with certain historical and materialist questions. The reasoning behind this is that (a) we have accounts of and claims to explain the existence of life as once benign but having become at some stage corrupt; (b) we might find new, compelling explanations for the negative pathways taken by humanity; (c) recorded observations of human tragedy that can be loosely called depressive realism are found in some of the earliest literature; (d) this procedure helps us to compare large scale and long-term DR propositions with relevant microphenomena and transient patterns. This anchorage in deep history does not necessarily imply a materialist reductionism to follow but it tends, I believe, to show a ceaselessly adaptive, evolutionarily iterative process and entropic trajectory via complexity.

The emerging disciplines of deep and big history challenge the arbitrary starting points, divisions and events of traditional history by going back to the earliest known cosmic and non-human events, charting any discernible patterns and drawing tentative conclusions. Spier (2011) offers an excellent condensed account of this kind, but we probably need to add as a reinforcer the argument from Krauss (2012) that something from nothing is not only possible but inevitable and explicable by scientific laws. Indeed, it is necessary to begin here as a way of further eroding theistic claims that want to start with God and thereby insist on God’s (illusory) continuing sustenance and guiding purpose.

It is not the creation ex nihilo of the mythological, pre-scientific God, the omnipotent being who brought forth the universe from chaos that any longer helps us to understand our world, but modern science.

We do not know definitively how we evolved, but we have convincing enough causal threads at our disposal. Here I intend to sift through those of most interest in exploring the question of why our world has become such a depressing place.

We are animals but apparently higher animals, so far evolved beyond even our nearest relatives that some regard human beings as of another order of nature altogether. Given the millennia of religious belief that shaped our picture of ourselves, the Darwinian revolution even today is not accepted by all. Even some scientists who purport to accept the standard evolutionary account do not seem to accept our residual animal nature emotionally (Tallis, 2011).

But it is important to begin by asking about the life of wild animals. They must defend themselves against predators by hiding or fighting, and they must eat by grazing, scavenging or predation; they must reproduce and where necessary protect their young. Many animals spend a great deal of their time asleep, and some play. Social animals cultivate their groups by hunting together, communicating or grooming. Some animals protect their territory, build nests or rudimentary homes and a few make primitive tools; some migrate, and some maintain hierarchical structures. Most animals live relatively short lives, live with constant risk and are vigilant.

However it happened, human beings differ from animals in having developed a consciousness linked with tool-making, language and massive, highly structured societies that have taken us within millennia into today’s complex, earth-spanning and nature-dominating civilisation. Wild animals certainly suffer, contrary to idyllic fantasies of a harmonious nature, but their suffering is mostly acute, resulting from injury, hunger and predation, and their lives are not extended beyond their natural ability to survive.

Our ingenuity and suffering are two sides of the same coin.

Weapon-making and cooperation allowed us to rise above constant vulnerability to predators, but our lives are now often too safe, bland and boring, since we have forfeited the purpose of day-to-day survival. We have also benefited from becoming cleverer, at the cost of a loss of sensory acuity. Accordingly, and with painful paradox, we are driven to seek ‘meaning’ and we are gratuitously violent (Glover, 2001; White, 2012). Animals have no such problems.

*

from

Depressive Realism. Interdisciplinary perspectives

by Colin Feltham

get it at Amazon.com

Straight Talk on Trade. Ideas for a Sane World Economy – Dani Rodrik.

Are economists responsible for Donald Trump’s shocking victory in the US presidential election?

Adam Smith and David Ricardo would turn over in their graves if they read the details of, say, the Trans-Pacific Partnership on intellectual property rules or investment regulations.

Economists’ failure to provide the full picture on trade, with all the necessary distinctions and caveats, has made it easier to tar trade, often wrongly, with all sorts of ill effects.

It is impossible to have hyperglobalization, democracy, and national sovereignty all at once; we can have at most two out of three.

We need to place the requirements of liberal democracy ahead of those of international trade and investment.

Globalization’s ills derive from the imbalance between the global nature of markets and the domestic nature of the rules that govern them.

Who needs the nation-state? We all do.

Nearly two decades ago, as my book Has Globalization Gone Too Far? went to press, I approached a well known economist to ask him if he would provide an endorsement for the back cover. I claimed in the book that, in the absence of a more concerted government response, too much globalization would deepen societal divisions, exacerbate distributional problems, and undermine domestic social bargains, arguments that have become conventional wisdom since.

The economist demurred. He didn’t really disagree with any of the analysis but worried that my book would provide “ammunition for the barbarians.” Protectionists would latch on to the book’s arguments about the downsides of globalization to provide cover for their narrow, selfish agenda.

It’s a reaction I still get from my fellow economists. One of them will hesitantly raise his hand following a talk and ask: Don’t you worry that your arguments will be abused and serve the demagogues and populists you are decrying?

There is always a risk that our arguments will be hijacked in the public debate by those with whom we disagree. But I have never understood why many economists believe this implies we should skew our argument about trade in one particular direction. The implicit premise seems to be that there are barbarians on only one side of the trade debate. Apparently, those who complain about World Trade Organization rules or trade agreements are dreadful protectionists, while those who support them are always on the side of the angels.

In truth, many trade enthusiasts are no less motivated by their own narrow, selfish agendas. Pharmaceutical firms pursuing tougher patent rules, banks pushing for unfettered access to foreign markets, or multinationals seeking special arbitration tribunals have no greater regard for the public interest than protectionists do. So when economists shade their arguments, they effectively favor one set of self-interested parties, “barbarians,” over another.

It has long been an unspoken rule of public engagement for economists that they should champion trade and not dwell too much on the fine print. This has produced a curious situation. The standard models of trade with which economists work typically yield sharp distributional effects: income losses by certain groups of producers or workers are the flip side of the “gains from trade.” And economists have long known that market failures, including poorly functioning labor markets, credit market imperfections, knowledge or environmental externalities, and monopolies, can interfere with reaping those gains.

They have also known that the economic benefits of trade agreements that reach beyond borders to shape domestic regulations, as with the tightening of patent rules or the harmonization of health and safety requirements, are fundamentally ambiguous.

Nonetheless, economists can be counted on to parrot the wonders of comparative advantage and free trade whenever trade agreements come up. They have consistently minimized distributional concerns, even though it is now clear that the distributional impact of, say, the North American Free Trade Agreement or China’s entry into the World Trade Organization was significant for the most directly affected communities in the United States. They have overstated the magnitude of aggregate gains from trade deals, though such gains have been relatively small since at least the 1990s. They have endorsed the propaganda portraying today’s trade deals as “free trade agreements,” even though Adam Smith and David Ricardo would turn over in their graves if they read the details of, say, the Trans-Pacific Partnership on intellectual property rules or investment regulations.

This reluctance to be honest about trade has cost economists their credibility with the public. Worse still, it has fed their opponents’ narrative. Economists’ failure to provide the full picture on trade, with all the necessary distinctions and caveats, has made it easier to tar trade, often wrongly, with all sorts of ill effects.

For example, as much as trade may have contributed to rising inequality, it is only one factor contributing to that broad trend, and in all likelihood a relatively minor one, compared to technology. Had economists been more upfront about the downside of trade, they might have had greater credibility as honest brokers in this debate.

Similarly, we might have had a more informed public discussion about social dumping if economists had been willing to recognize that imports from countries where labor rights are not protected raise serious questions about distributive justice. It might then have been possible to distinguish cases where low wages in poor countries reflect low productivity from cases of genuine rights violations. And the bulk of trade that does not raise such concerns might have been better insulated from charges of “unfair trade.”

Likewise, if economists had listened to their critics who warned about currency manipulation, trade imbalances, and job losses, instead of sticking to models that assumed away unemployment and other macroeconomic problems, they might have been in a better position to counter excessive claims about the adverse impact of trade deals on employment.

In short, had economists gone public with the caveats, uncertainties, and skepticism of the seminar room, they might have become better defenders of the world economy. Unfortunately, their zeal to defend trade from its enemies has backfired. If the demagogues making nonsensical claims about trade are now getting a hearing, and actually winning power, it is trade’s academic boosters who deserve at least part of the blame.

This book is an attempt to set the record straight, and not just about trade, as the title suggests, but about several areas in which economists could have offered a more balanced, principled discussion. Though trade is a central aspect of those areas, and in large part emblematic of what’s happened in all of them, the same failures can be observed in policy discussions about financial globalization, the euro zone, or economic development strategies.

The book brings together much of my recent popular and nontechnical work on globalization, growth, democracy, politics, and the discipline of economics itself. The material that follows has been drawn from a variety of sources, my monthly syndicated columns for Project Syndicate as well as a few other short and lengthier pieces. In most cases, I have done only a light edit of the original text, updating it, providing connections with other parts of the book, and adding some references and supporting material. In places, I have rearranged the material from the original sources to provide a more seamless narrative. The full set of sources is listed at the back of the book.

The book shows how we could have constructed a more honest narrative on the world economy, one that would have prepared us for the eventual backlash and, perhaps, even rendered it less likely. It also suggests ideas for moving forward, to create better functioning national economies as well as a healthier globalization.

Chapter One

A Better Balance

The global trade regime has never been very popular in the United States. Neither the World Trade Organization (WTO) nor the multitudes of regional trade deals such as the North American Free Trade Agreement (NAFTA) and the Trans-Pacific Partnership (TPP) have had strong support among the general public. But opposition, while broad, tended to be diffuse.

This has enabled policy makers to conclude a succession of trade agreements since the end of World War II. The world’s major economies were in a perpetual state of trade negotiations, signing two major global multilateral deals: the General Agreement on Tariffs and Trade (GATT) and the treaty establishing the World Trade Organization. In addition, more than five hundred bilateral and regional trade agreements were signed, the vast majority of them since the WTO replaced the GATT in 1995.

The difference today is that international trade has moved to the center of the political debate. During the most recent US election, presidential candidates Bernie Sanders and Donald Trump both made opposition to trade agreements a key plank of their campaigns. And, judging from the tone of the other candidates, standing up for globalization amounted to electoral suicide in the political climate of the time. Trump’s eventual win can be chalked up at least in part to his hard line on trade and his promise to renegotiate deals that he argued had benefited other nations at the expense of the United States.

Trump’s and other populists’ rhetoric on trade may be excessive, but few deny any longer that the underlying grievances are real. Globalization has not lifted all boats. Many working families have been devastated by the impact of low-cost imports from China, Mexico, and elsewhere. And the big winners have been the financiers and skilled professionals who can take advantage of expanded markets. Although globalization has not been the sole, or even the most important, force driving inequality in the advanced economies, it has been a key contributor. Meanwhile, economists have struggled to find large gains from recent trade agreements for the economy as a whole.

What gives trade particular political salience is that it often raises fairness concerns in ways that the other major contributor to inequality, technology, does not. When I lose my job because my competitor innovates and introduces a better product, I have little cause to complain. When he outcompetes me by outsourcing to firms abroad that do things that would be illegal here, for example, prevent their workers from organizing and bargaining collectively, I may have a legitimate gripe. It is not inequality per se that people tend to mind. What’s problematic is unfair inequality, when we are forced to compete under different ground rules.

During the 2016 US presidential campaign, Bernie Sanders forcefully advocated the renegotiation of trade agreements to reflect better the interests of working people. But such arguments immediately run up against the objection that any standstill or reversal on trade agreements would harm the world’s poorest, by diminishing their prospect of escaping poverty through export-led growth. “If you’re poor in another country, this is the scariest thing Bernie Sanders has said,” ran a headline in the popular and normally sober Vox.com news site.

But trade rules that are more sensitive to social and equity concerns in the advanced countries are not inherently in conflict with economic growth in poor countries. Globalization’s cheerleaders do considerable damage to their cause by framing the issue as a stark choice between existing trade arrangements and the persistence of global poverty. And progressives needlessly force themselves into an undesirable trade-off.

The standard narrative about how trade has benefited developing economies omits a crucial feature of their experience. Countries that managed to leverage globalization, such as China and Vietnam, employed a mixed strategy of export promotion and a variety of policies that violate current trade rules. Subsidies, domestic-content requirements, investment regulations, and, yes, often import barriers were critical to the creation of new, higher-value industries. Countries that rely on free trade alone (Mexico comes immediately to mind) have languished.

That is why trade agreements that tighten the rules, as the TPP would have done, are in fact mixed blessings for developing countries. China would not have been able to pursue its phenomenally successful industrialization strategy if the country had been constrained by WTO-type rules during the 1980s and 1990s. With the TPP, Vietnam would have had some assurance of continued access to the US market (existing barriers on the US side are already quite low), but in return would have had to submit to restrictions on subsidies, patent rules, and investment regulations.

And there is nothing in the historical record to suggest that poor countries require very low or zero barriers in the advanced economies in order to benefit greatly from globalization. In fact, the most phenomenal export-oriented growth experiences to date, Japan, South Korea, Taiwan, and China, all occurred when import tariffs in the United States and Europe were at moderate levels, and higher than where they are today.

So, for progressives who worry both about inequality in the rich countries and poverty in the rest of the world, the good news is that it is indeed possible to advance on both fronts. But to do so, we must transform our approach to trade deals in some drastic ways.

The stakes are extremely high. Poorly managed globalization is having profound effects not only in the United States but also in the rest of the developed world, especially Europe, and the low-income and middle-income countries in which a majority of the world’s workers live. Getting the balance right between economic openness and policy space management is of huge importance.

Europe on the Brink

The difficulties that deep economic integration raises for governance and democracy are nowhere in clearer sight than in Europe. Europe’s single market and single currency represent a unique experiment in what I have called in my previous work “hyperglobalization.” This experiment has opened a chasm between extensive economic integration and limited political integration that is historically unparalleled for democracies.

Once the financial crisis struck and the fragility of the European experiment came into full view, the weaker economies with large external imbalances needed a quick way out. European institutions and the International Monetary Fund (IMF) had an answer: structural reform. Sure, austerity would hurt. But a hefty dose of structural reform, liberalization of labor, product, and service markets, would make the pain bearable and help get the patient back on his feet. As I explain later in the book, this was a false hope from the very beginning.

It is undeniable that the euro crisis has done much damage to Europe’s political democracies. Confidence in the European project has eroded, centrist political parties have weakened, and extremist parties, particularly of the far right, are the primary beneficiaries. Less appreciated, but at least as important, is the damage that the crisis has done to democracy’s prospects outside the narrow circle of eurozone countries.

The sad fact is that Europe is no longer the shining beacon of democracy it was for other countries.

A community of nations that is unable to stop the unmistakable authoritarian slide in one of its members, Hungary, can hardly be expected to foster and cement democracy in countries on its periphery. We can readily see the consequences in a country like Turkey, where the loss of the “European anchor” has played a facilitating role in enabling Erdogan’s repeated power plays, and less directly in the faltering of the Arab Spring.

The costs of misguided economic policies have been the most severe for Greece. Politics in Greece has exhibited all the symptoms of a country being strangled by the trilemma of deep integration. It is impossible to have hyperglobalization, democracy, and national sovereignty all at once; we can have at most two out of three. Because Greece, along with others in the euro, did not want to give up any of these, it ended up enjoying the benefits of none. The country has bought time with a succession of new programs, but has yet to emerge out of the woods. It remains to be seen whether austerity and structural reforms will eventually return the country to economic health.

History suggests some grounds for skepticism. In a democracy, when the demands of financial markets and foreign creditors clash with those of domestic workers, pensioners, and the middle class, it is usually the locals who have the last say.

As if the economic ramifications of a full-blown eventual Greek default were not terrifying enough, the political consequences could be far worse. A chaotic eurozone breakup would cause irreparable damage to the European integration project, the central pillar of Europe’s political stability since World War II. It would destabilize not only the highly indebted European periphery but also core countries like France and Germany, which have been the architects of that project.

The nightmare scenario would be a 1930s-style victory for political extremism. Fascism, Nazism, and communism were children of a backlash against globalization that had been building since the end of the nineteenth century, feeding on the anxieties of groups that felt disenfranchised and threatened by expanding market forces and cosmopolitan elites.

Free trade and the gold standard had required downplaying domestic priorities such as social reform, nation-building, and cultural reassertion. Economic crisis and the failure of international cooperation undermined not only globalization but also the elites that upheld the existing order. As my Harvard colleague Jeff Frieden has written, this paved the path for two distinct forms of extremism.

Faced with the choice between equity and economic integration, communists chose radical social reform and economic self-sufficiency. Faced with the choice between national assertion and globalism, fascists, Nazis, and nationalists chose nation-building.

Fortunately, fascism, communism, and other forms of dictatorship are passé today. But similar tensions between economic integration and local politics have long been simmering. Europe’s single market has taken shape much faster than Europe’s political community has; economic integration has leaped ahead of political integration.

The result is that mounting concerns about the erosion of economic security, social stability, and cultural identity could not be handled through mainstream political channels. National political structures became too constrained to offer effective remedies, while European institutions still remain too weak to command allegiance.

It is the extreme right that has benefited most from the centrists’ failure. In France, the National Front has been revitalized under Marine Le Pen and has turned into a major political force mounting a serious challenge for the presidency in 2017. In Germany, Denmark, Austria, Italy, Finland, and the Netherlands, right-wing populist parties have capitalized on the resentment around the euro to increase their vote shares and in some cases play kingmaker in their national political systems.

The backlash is not confined to eurozone members. In Scandinavia, the Sweden Democrats, a party with neo-Nazi roots, were running ahead of the Social Democrats and had risen to the top of national polls in early 2017. And in Britain, of course, the antipathy toward Brussels and the yearning for national autonomy has resulted in Brexit, despite warnings of dire consequences from economists.

Political movements of the extreme right have traditionally fed on anti-immigration sentiment. But the Greek, Irish, Portuguese, and other bailouts, together with the euro’s troubles, have given them fresh ammunition. Their euro skepticism certainly appears to be vindicated by events. When Marine Le Pen was asked if she would unilaterally withdraw from the euro, she replied confidently, “When I am president, in a few months’ time, the eurozone probably won’t exist.”

As in the 1930s, the failure of international cooperation has compounded centrist politicians’ inability to respond adequately to their domestic constituents’ economic, social, and cultural demands. The European project and the eurozone have set the terms of debate to such an extent that, with the eurozone in tatters, these elites’ legitimacy has received an even more serious blow.

Europe’s centrist politicians have committed themselves to a strategy of “more Europe” that is too rapid to ease local anxieties, yet not rapid enough to create a real Europe-wide political community. They have stuck for far too long to an intermediate path that is unstable and beset by tensions.

By holding on to a vision of Europe that has proven unviable, Europe’s centrist elites have endangered the idea of a unified Europe itself.

The short-run and long-run remedies for the European crisis are not hard to discern in their broad outlines, and they are discussed below. Ultimately, Europe faces the same choice it always faced: it will either embark on political union or loosen the economic union. But the mismanagement of the crisis has made it very difficult to see how this eventual outcome can be produced amicably and with minimal economic and political damage to member countries.

Fads and Fashions in the Developing World

The last two decades have been good to developing countries. As the United States and Europe were reeling under financial crisis, austerity, and the populist backlash, developing economies led by China and India engineered historically unprecedented rates of economic growth and poverty alleviation. And for once, Latin America, Sub-Saharan Africa, and South Asia could join the party alongside East Asia. But even at the height of the emerging markets hype, one could discern two dark clouds.

First, would today’s crop of low income economies be able to replicate the industrialization path that delivered rapid economic progress in Europe, America, and East Asia? And second, would they be able to develop the modern, liberal-democratic institutions that today’s advanced economies acquired in the previous century? I suggest that the answers to both of these questions may be negative.

On the political side, the concern is that building and sustaining liberal democratic regimes has very special prerequisites. The crux of the difficulty is that the beneficiaries of liberal democracy, unlike in the case of electoral democracies or dictatorships, typically have neither numbers nor resources on their side. Perhaps we should not be surprised that even advanced countries are having difficulty these days living up to liberal democratic norms. The natural tendency for countries without long and deep liberal traditions is to slide into authoritarianism. This has negative consequences not just for political development but for economic development as well.

The growth challenge compounds the democracy challenge. One of the most important economic phenomena of our time is a process I have called “premature deindustrialization.” Partly because of automation in manufacturing and partly because of globalization, low-income countries are running out of industrialization opportunities much sooner than their earlier counterparts in East Asia did. This would not be a tragedy if manufacturing were not traditionally a powerful growth engine, for reasons I discuss below.

With hindsight, it has become clear that there was in fact no coherent growth story for most emerging markets. Unlike China, Vietnam, South Korea, Taiwan, and a few other manufacturing miracles, the recent crop of growth champions did not build many modern, export-oriented industries. Scratch the surface, and you find high growth rates driven not by productive transformation but by domestic demand, in turn fueled by temporary commodity booms and unsustainable levels of public or, more often, private borrowing. Yes, there are plenty of world-class firms in emerging markets, and the expansion of the middle class is unmistakable. But only a tiny share of these economies’ labor is employed in productive enterprises, while informal, unproductive firms absorb the rest.

Is liberal democracy doomed in developing economies, or might it be saved by giving it different forms than it took in today’s advanced economies? What kind of growth models are available to developing countries if industrialization has run out of steam? What are the implications of premature deindustrialization for labor markets and social inclusion? To overcome these novel future challenges, developing countries will need fresh, creative strategies that deploy the combined energies of both the private and public sectors.

No Time for Trade Fundamentalism

“One of the crucial challenges” of our era “is to maintain an open and expanding international trade system.” Unfortunately, “the liberal principles” of the world trade system “are under increasing attack.” “Protectionism has become increasingly prevalent.” “There is great danger that the system will break down or that it will collapse in a grim replay of the 1930s.”

You would be excused for thinking that these lines are culled from one of the recent outpourings of concern in the business and financial media about the current backlash against globalization. In fact, they were written thirty-six years ago, in 1981.

The problem then was stagflation in the advanced countries. And it was Japan, rather than China, that was the trade bogeyman, stalking, and taking over, global markets. The United States and Europe had responded by erecting trade barriers and imposing “voluntary export restrictions” on Japanese cars and steel. Talk about the creeping “new protectionism” was rife.

What took place subsequently would belie such pessimism about the trade regime. Instead of heading south, global trade exploded in the 1990s and 2000s, driven by the creation of the World Trade Organization, the proliferation of bilateral and regional trade and investment agreements, and the rise of China. A new age of globalization, in fact something more like hyperglobalization, was launched.

In hindsight, the “new protectionism” of the 1980s was not a radical break with the past. It was more a case of regime maintenance than regime disruption, as the political scientist John Ruggie has written. The import “safeguards” and “voluntary” export restrictions (VERs) of the time were ad hoc, but they were necessary responses to the distributional and adjustment challenges posed by the emergence of new trade relationships.

The economists and trade specialists who cried wolf at the time were wrong. Had governments listened to their advice and not responded to their constituents, they would have possibly made things worse. What looked to contemporaries like damaging protectionism was in fact a way of letting off steam to prevent an excessive buildup of political pressure.

Are observers being similarly alarmist about today’s globalization backlash? The International Monetary Fund, among others, has recently warned that slow growth and populism might lead to an outbreak of protectionism. “It is vitally important to defend the prospects for increasing trade integration,” according to the IMF’s chief economist, Maurice Obstfeld.

So far, however, there are few signs that governments are moving decidedly away from an open economy. President Trump may yet cause trade havoc, but his bark has proved worse than his bite. The website globaltradealert.org maintains a database of protectionist measures and is a frequent source for claims of creeping protectionism. Click on its interactive map of protectionist measures, and you will see an explosion of fireworks, red circles all over the globe. It looks alarming until you click on liberalizing measures and discover a comparable number of green circles.

The difference this time is that populist political forces seem much more powerful and closer to winning elections, partly a response to the advanced stage of globalization achieved since the 1980s. Not so long ago, it would have been unimaginable to contemplate a British exit from the European Union, or a Republican president in the United States promising to renege on trade agreements, build a wall against Mexican immigrants, and punish companies that move offshore. The nation-state seems intent on reasserting itself.

But the lesson from the 1980s is that some reversal from hyperglobalization need not be a bad thing, as long as it serves to maintain a reasonably open world economy. In particular, we need to place the requirements of liberal democracy ahead of those of international trade and investment. Such a rebalancing would leave plenty of room for an open global economy; in fact, it would enable and sustain it.

What makes a populist like Donald Trump dangerous is not his specific proposals on trade. It is the nativist, illiberal platform on which he seems intent on governing. It is also the reality that his economic policies don’t add up to a coherent vision of how the United States and an open world economy can prosper side by side.

The critical challenge facing mainstream political parties in the advanced economies today is to devise such a vision, along with a narrative that steals the populists’ thunder. These center-right and center-left parties should not be asked to save hyperglobalization at all costs. Trade advocates should be understanding if they adopt unorthodox policies to buy political support.

We should look instead at whether their policies are driven by a desire for equity and social inclusion or by nativist and racist impulses, whether they want to enhance or weaken the rule of law and democratic deliberation, and whether they are trying to save the open world economy, albeit with different ground rules, rather than undermine it.

The populist revolts of 2016 will almost certainly put an end to the last few decades’ hectic deal making in trade. Though developing countries may pursue smaller trade agreements, the two major regional deals on the table, the Trans-Pacific Partnership and the Transatlantic Trade and Investment Partnership, were as good as dead immediately after the election of Donald Trump as US president.

We should not mourn their passing. We should instead have an honest, principled discussion on putting globalization and development on a new footing, cognizant of our new political and technological realities and placing the requirements of liberal democracy front and center.

Getting the Balance Right

The problem with hyperglobalization is not just that it is an unachievable pipe dream susceptible to backlash; after all, the nation-state remains the only game in town when it comes to providing the regulatory and legitimizing arrangements on which markets rely. The deeper objection is that our elites’ and technocrats’ obsession with hyperglobalization makes it more difficult to achieve legitimate economic and social objectives at home: economic prosperity, financial stability, and social inclusion.

The questions of our day are: How much globalization should we seek in trade and finance? Is there still a case for nation-states in an age where the transportation and communications revolutions have apparently spelled the death of geographic distance? How much sovereignty do states need to cede to international institutions? What do trade agreements really do, and how can we improve them? When does globalization undermine democracy? What do we owe, as citizens and states, to others across the border? How do we best carry out those responsibilities?

All of these questions require that we restore a sane, sensible balance between national and global governance. We need a pluralist world economy where nation-states retain sufficient autonomy to fashion their own social contracts and develop their own economic strategies. I will argue that the conventional picture of the world economy as a “global commons”, one in which we would be driven to economic ruin unless we all cooperate, is highly misleading. If our economic policies fail, they do so largely for domestic rather than international reasons. The best way in which nations can serve the global good in the economic sphere is by putting their own economic houses in order.

Global governance does remain crucial in those areas such as climate change where the provision of global public goods is essential. And global rules sometimes can help improve domestic economic policy, by enhancing democratic deliberation and decision-making. But, I will argue, democracy-enhancing global agreements would look very different than the globalization-enhancing deals that have marked our age.

We begin with an entity at the very core of our political and economic existence, but which has for decades been under attack: the nation-state.

Chapter Two

How Nations Work

In October 2016, British Prime Minister Theresa May shocked many when she disparaged the idea of global citizenship. “If you believe you’re a citizen of the world,” she said, “you’re a citizen of nowhere.” Her statement was met with derision and alarm in the financial media and among liberal commentators. “The most useful form of citizenship these days,” one analyst lectured her, “is one dedicated not only to the wellbeing of a Berkshire parish, say, but to the planet.” The Economist called it an “illiberal” turn. A scholar accused her of repudiating Enlightenment values and warned of “echoes of 1933” in her speech.

I know what a “global citizen” looks like: I make a perfect specimen myself. I grew up in one country, live in another, and carry the passports of both. I write on global economics, and my work takes me to far-flung places. I spend more time traveling in other countries than I do within either country that claims me as a citizen. Most of my close colleagues at work are similarly foreign-born. I devour international news, while my local paper remains unopened most weeks. In sports, I have no clue how my home teams are doing, but I am a devoted fan of a football team on the other side of the Atlantic.

And yet May’s statement strikes a chord. It contains an essential truth, the disregard of which says much about how we, the world’s financial, political, and technocratic elite, distanced ourselves from our compatriots and lost their trust.

Economists and mainstream politicians tend to view the backlash as a regrettable setback, fueled by populist and nativist politicians who managed to capitalize on the grievances of those who feel they have been left behind and deserted by the globalist elites. Nevertheless, today globalism is in retreat and the nation-state has shown that it is very much alive.

For years, an intellectual consensus on the declining relevance of the nation-state reigned supreme. All the craze was about global governance, the international rules and institutions needed to underpin the apparently irreversible tide of economic globalization and the rise of cosmopolitan sensibilities.

Global governance became the mantra of our era’s elite. The surge in cross-border flows of goods, services, capital, and information produced by technological innovation and market liberalization has made the world’s countries too interconnected, their argument went, for any country to be able to solve its economic problems on its own. We need global rules, global agreements, and global institutions. This claim is still so widely accepted today that challenging it may seem like arguing that the sun revolves around Earth.

To understand how we got to this point, let’s take a close look at the intellectual case against the nation-state and the arguments in favor of globalism in governance.

The Nation-State Under Fire

The nation-state is roundly viewed as an archaic construct that is at odds with twenty-first-century realities. The assault on the nation-state transcends traditional political divisions and is one of the few things that unite economic liberals and socialists. “How may the economic unity of Europe be guaranteed, while preserving complete freedom of cultural development to the peoples living there?” asked Leon Trotsky back in 1934. The answer was to get rid of the nation-state: “The solution to this question can be reached by completely liberating productive forces from the fetters imposed upon them by the national state.”

Trotsky’s answer sounds surprisingly modern in light of the eurozone’s current travails; it is one to which most neoclassical economists would subscribe. Many moral philosophers today join liberal economists in treating national borders as irrelevant, if not descriptively then certainly prescriptively. Here is Peter Singer:

If the group to which we must justify ourselves is the tribe, or the nation, then our morality is likely to be tribal, or nationalistic. If, however, the revolution in communications has created a global audience, then we might need to justify our behavior to the whole world. This change creates the material basis for a new ethic that will serve the interests of all those who live on this planet in a way that, despite much rhetoric, no previous ethic has done.

And Amartya Sen:

There is something of a tyranny of ideas in seeing the political divisions of states (primarily, national states) as being, in some way, fundamental, and in seeing them not only as practical constraints to be addressed, but as divisions of basic significance in ethics and political philosophy.

Sen and Singer think of national borders as a hindrance, a practical obstacle that can and should be overcome as the world becomes more interconnected through commerce and advances in communications. Meanwhile, economists deride the nation-state because it is the source of the transaction costs that block fuller global economic integration. This is so not just because governments impose import tariffs, capital controls, visas, and other restrictions at their borders, impeding the global circulation of goods, money, and people. More fundamentally, it is because the multiplicity of sovereigns creates jurisdictional discontinuities and associated transaction costs. Differences in currencies, legal regimes, and regulatory practices are today the chief obstacles to a unified global economy. As overt trade barriers have come down, the relative importance of such transaction costs has grown. Import tariffs now constitute a tiny fraction of total trade costs. James Anderson and Eric van Wincoop estimated these costs to be a whopping 170 percent (in ad valorem terms) for advanced countries, an order of magnitude higher than import tariffs themselves.

To an economist, this amount is equivalent to leaving $100 bills on the sidewalk. Remove the jurisdictional discontinuities, the argument goes, and the world economy would reap large gains from trade, similar to the multilateral tariff liberalization experienced over the postwar period. So, the global trade agenda has increasingly focused on efforts to harmonize regulatory regimes, everything from sanitary and phytosanitary standards to financial regulations. That is also why European nations felt it was important to move to a single currency to make their dream of a common market a reality. Economic integration requires repressing nation-states’ ability to issue their own money, set different regulations, and impose different legal standards.

The Continued Vitality of the Nation-State

The death of the nation-state has long been predicted. “The critical issue for every student of world order is the fate of the nation-state,” wrote political scientist Stanley Hoffman in 1966. Sovereignty at Bay was the title of Raymond Vernon’s 1971 classic. Both scholars would ultimately pour cold water on the passing of the nation-state, but their tone reflects a strong current of prevailing opinion. Whether it was the European Union (on which Hoffman focused) or the multinational enterprise (Vernon’s topic), the nation-state has been widely perceived as being overwhelmed by developments larger than it.

Yet the nation-state refuses to wither away. It has proved remarkably resilient and remains the main determinant of the global distribution of income, the primary locus of market-supporting institutions, and the chief repository of personal attachments and affiliations. Consider a few facts.

To test my students’ intuition about the determinants of global inequality, I ask them on the first day of class whether they would rather be rich in a poor country or poor in a rich country. I tell them to consider only their own consumption level and to think of rich and poor as referring to the top and bottom 5 percent of a country’s income distribution. A rich country, in turn, is one in the top 5 percent of the intercountry distribution of per capita incomes, while a poor country is one in the bottom. Armed with this background, typically a majority of the students respond that they would rather be rich in a poor country.

They are in fact massively wrong. Defined the way I just did, the poor in a rich country are almost five times richer than the rich in a poor country. The optical illusion that leads the students astray is that the superrich with the BMWs and gated mansions they have seen in poor countries are a minuscule proportion of the population, significantly fewer than the top 5 percent on which I asked them to focus. By the time we consider the average of the top ventile as a whole, we have taken a huge leap down the income scale.
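To make the arithmetic of that comparison concrete, here is a minimal Python sketch of how one might compute it from raw income data: take the bottom ventile of the rich country and the top ventile of the poor country, and compare their mean incomes. The function and the toy income lists are hypothetical illustrations only; they are not Rodrik’s data, and the numbers they print say nothing about the real-world ratio he reports.

```python
# Mechanics of the classroom comparison: mean income of the top or bottom
# ventile (5 percent) of a country's income distribution.
# The income lists below are hypothetical placeholders, not real data; the
# output illustrates only the computation, not the empirical result in the text.
import statistics

def ventile_mean(incomes, top=True):
    """Mean income of the top (or bottom) 5 percent of a sample."""
    ordered = sorted(incomes)
    k = max(1, len(ordered) // 20)              # one ventile = 1/20 of the sample
    chunk = ordered[-k:] if top else ordered[:k]
    return statistics.mean(chunk)

# Hypothetical per-capita incomes for two stylized countries (20 people each).
rich_country = [9_000, 11_000, 14_000, 17_000, 22_000, 26_000, 35_000, 40_000,
                48_000, 55_000, 70_000, 78_000, 95_000, 105_000, 130_000,
                150_000, 180_000, 210_000, 260_000, 320_000]
poor_country = [300, 350, 450, 500, 600, 700, 800, 900, 1_000, 1_100,
                1_300, 1_500, 1_700, 1_900, 2_200, 2_500, 2_900, 3_200,
                3_800, 5_000]

poor_in_rich = ventile_mean(rich_country, top=False)   # "poor in a rich country"
rich_in_poor = ventile_mean(poor_country, top=True)    # "rich in a poor country"
print(poor_in_rich, rich_in_poor)
```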

The students have just discovered a telling feature of the world economy: our economic fortunes are determined primarily by where (which country) we are born and only secondarily by our location on the income distribution scale. Or to put it in more technical but also more accurate terms, most global inequality is accounted for by inequality across rather than within nations. So much for globalization having revoked the relevance of national borders.

Second, consider the role of national identity. One may imagine that attachments to the nation-state have worn thin between the push of transnational affinities, on the one hand, and the pull of local connections, on the other hand. But this does not seem to be the case. National identity remains alive and well, even in some surprising corners of the world. And this was true even before the global financial crisis and the populist backlash that has unfolded since.

To observe the continued vitality of national identification, let us turn to the World Values Survey, which covers more than eighty thousand individuals in fifty-seven countries (http://www.worldvaluessurvey.org/). The respondents to the survey were asked a range of questions about the strength of their local, national, and global attachments. I measured the strength of national attachments by computing the percentages of respondents who “agreed” or “strongly agreed” with the statement “I see myself as a citizen of [country, nation].” I measured the strength of global attachments, in turn, by the percentages of respondents who “agreed” or “strongly agreed” with the statement “I see myself as a world citizen.” In each case, I subtracted from these percentages the analogous percentages for “I see myself as a member of my local community” to provide for some kind of normalization. In other words, I measured national and global attachments relative to local attachments. I rely on the 2004-2008 round of the survey since it was carried out before the financial crises in Europe and the United States, which isolates the results from the confounding effects of the economic downturn.
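As a rough sketch of the normalization just described, assuming simple Likert-style answer strings rather than the survey’s actual coding, the snippet below computes the share of respondents who agree or strongly agree with each identity statement and then subtracts the local-community share. The sample responses are hypothetical placeholders, not World Values Survey data.

```python
# Sketch of the normalization described above: agreement share for a national or
# global identity statement, minus the agreement share for attachment to the
# local community. Responses below are hypothetical, not World Values Survey data.

AGREE = {"agree", "strongly agree"}

def agreement_share(responses):
    """Percentage of respondents answering 'agree' or 'strongly agree'."""
    return 100.0 * sum(r in AGREE for r in responses) / len(responses)

def normalized_attachment(target_responses, local_responses):
    """Attachment to the target identity measured relative to the local community."""
    return agreement_share(target_responses) - agreement_share(local_responses)

# Hypothetical answers from five respondents to the three statements.
local    = ["strongly agree", "agree", "agree", "disagree", "agree"]
national = ["strongly agree", "strongly agree", "agree", "agree", "agree"]
world    = ["agree", "disagree", "disagree", "neither agree nor disagree", "agree"]

print(normalized_attachment(national, local))   # positive: national pull exceeds local
print(normalized_attachment(world, local))      # negative: weaker than local attachment
```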

Figure 2.1 National, global, and EU citizenship (relative to attachment to local community). Percentages of respondents who “agree” or “strongly agree” with the statements “I see myself as a citizen of [country, nation]” and “I see myself as a world citizen,” minus the analogous percentages for “I see myself as a member of my local community.” Source: D. Rodrik, “Who Needs the Nation State?” Economic Geography, 89(1), January 2013: 1-19.

Figure 2.1 shows the results for the entire global sample, as well as for the United States, the European Union, China, and India individually. What stands out is not so much that national identity is vastly stronger than identity as a “global citizen”; that much was predictable. The surprising finding is that it apparently exerts a stronger pull than membership in the local community, as can be observed in the positive percentages for normalized national identity. This tendency holds across the board and is strongest in the United States and India, two vast countries where we might have expected local attachments to be, if anything, stronger than attachment to the nation-state.

I find it also striking that European citizens feel so little attachment to the European Union. In fact, as Figure 2.1 shows, the idea of citizenship in the European Union seems as remote to Europeans as that of global citizenship, despite long decades of European integration and institution building.

It is not a surprise to find that global attachments have worn even thinner since 2008. Measures of world citizenship have gone down significantly, especially in some of the European countries: from -18 percent to -29 percent in Germany and from -12 percent to -22 percent in Spain. (These are comparisons between the 2010-2014 and 2004-2008 waves.)

One may object that such surveys obfuscate differences among subgroups within the general population.

We would expect mainly the young, the skilled, and the well educated to have been unhinged from their national mooring and to have become global in their outlook and attachments. As Figure 2.2 indicates, there are indeed differences among these groups that go in the predicted direction. But they are not as large as one might have thought, and they do not change the overall picture. Even among the young (less than twenty-five years old), those with a university education, and professionals, national identity trumps local attachments and, by an even larger margin, global attachments.

Finally, any remaining doubts about the continued relevance of the nation-state must have been dispelled by the experience in the aftermath of the global financial crisis of 2008. It was domestic policy makers who had to step in to prevent an economic meltdown: it was national governments that bailed out banks, pumped in liquidity, provided a fiscal stimulus, and wrote unemployment checks. As Bank of England governor Mervyn King once memorably put it, banks are global in life and national in death.

Figure 2.2 Effect of socio-demographics. Percentages of respondents who “agree” or “strongly agree” with the statements “I see myself as a citizen of [country, nation]” and “I see myself as a world citizen,” minus the analogous percentages for “I see myself as a member of my local community.” Source: D. Rodrik, “Who Needs the Nation State?” Economic Geography, 89(1), January 2013: 1-19.

The International Monetary Fund and the newly upgraded Group of 20 were merely talking shops. In the eurozone, it was decisions taken in national capitals from Berlin to Athens that determined how the crisis would play out, not actions in Brussels (or Strasbourg). And it was national governments that ultimately took the blame for everything that went wrong, or the credit for the little that went right.

A Normative Case for the Nation-State

Historically, the nation-state has been closely associated with economic, social, and political progress. It curbed internecine violence, expanded networks of solidarity beyond local communities, spurred mass markets and industrialization, enabled the mobilization of human and financial resources, and fostered the spread of representative political institutions.

Civil wars and economic decline are the usual fate of today’s “failed states.” For residents of stable and prosperous countries, it is easy to overlook the role that the construction of the nation-state played in overcoming such challenges. The nation-state’s fall from intellectual grace is in part a consequence of its achievements.

But has the nation-state, as a territorially confined political entity, truly become a hindrance to the achievement of desirable economic and social outcomes in view of the globalization revolution? Or does the nation-state remain indispensable to the achievement of those goals? In other words, is it possible to construct a more principled defense of the nation-state, one that goes beyond stating that it exists and that it has not withered away?

Let me begin by clarifying my terminology. The nation-state evokes connotations of nationalism. The emphasis in my discussion will be not on the “nation” or “nationalism” part but on the “state” part. In particular, I am interested in the state as a spatially demarcated jurisdictional entity. From this perspective, I view the nation as a consequence of a state, rather than the other way around. As Abbé Sieyès, one of the theorists of the French Revolution, put it: “What is a nation? A body of associates living under one common law and represented by the same legislature.” I am not concerned with debates over what a nation is, whether each nation should have its own state, or how many states there ought to be.

Instead, I want to develop a substantive argument for why robust nation-states are actually beneficial, especially to the world economy. I want to show that the multiplicity of nation-states adds rather than subtracts value. My starting point is that markets require rules and that global markets would require global rules. A truly borderless global economy, one in which economic activity is fully unmoored from its national base, would necessitate transnational rule-making institutions that match the global scale and scope of markets. But this would not be desirable, even if it were feasible. Market supporting rules are nonunique. Experimentation and competition among diverse institutional arrangements therefore remain desirable. Moreover, communities differ in their needs and preferences regarding institutional forms. And history and geography continue to limit the convergence in these needs and preferences.

So, I accept that nation-states are a source of disintegration for the global economy. My claim is that an attempt to transcend them would be counterproductive. It would get us neither a healthier world economy nor better rules.

My argument can be presented as a counterpoint to the typical globalist narrative, depicted graphically in the top half of Figure 2.3. In this narrative, economic globalization, spurred by the revolutions in transportation and communication technologies, breaks down the social and cultural barriers among people in different parts of the world and fosters a global community. It, in turn, enables the construction of a global political community, global governance, that underpins and further reinforces economic integration.

Figure 2.3 Alternative reinforcing dynamics. Source: D. Rodrik, “Who Needs the Nation State?” Economic Geography, 89(1), January 2013: 1-19.

My alternative narrative (shown at the bottom of Figure 2.3) emphasizes a different dynamic, one that sustains a world that is politically divided and economically less than fully globalized. In this dynamic, preference heterogeneity and institutional nonuniqueness, along with geography, create a need for institutional diversity. Institutional diversity blocks full economic globalization. Incomplete economic integration, in turn, reinforces heterogeneity and the role of distance. When the forces of this second dynamic are sufficiently strong, as I will argue they are, operating by the rules of the first can get us only into trouble.

The Futile Pursuit of Hyperglobalization

Markets depend on nonmarket institutions because they are not self-creating, self-regulating, self-stabilizing, or self-legitimating. Anything that goes beyond a simple exchange among neighbors requires investments in transportation, communications, and logistics; enforcement of contracts, provision of information, and prevention of cheating; a stable and reliable medium of exchange; arrangements to bring distributional outcomes into conformity with social norms; and so on. Well-functioning, sustainable markets are backed by a wide range of institutions that provide the critical functions of regulation, redistribution, monetary and fiscal stability, and conflict management.

These institutional functions have so far been provided largely by the nation-state. Throughout the postwar period, this not only did not impede the development of global markets but facilitated it in many ways. The guiding philosophy behind the Bretton Woods regime, which governed the world economy until the 1970s, was that nations, not only the advanced nations but also the newly independent ones, needed the policy space within which they could manage their economies and protect their social contracts.

Capital controls, restricting the free flow of finance between countries, were viewed as an inherent element of the global financial system. Trade liberalization remained limited to manufactured goods and to industrialized nations; when imports of textiles and clothing from low-cost countries threatened domestic social bargains by causing job losses in affected industries and regions, these, too, were carved out as special regimes.

Yet trade and investment flows grew by leaps and bounds, in no small part because the Bretton Woods recipe made for healthy domestic policy environments. In fact, economic globalization relied critically on the rules maintained by the major trading and financial centers. As John Agnew has emphasized, national monetary systems, central banks, and financial regulatory practices were the cornerstones of financial globalization. In trade, it was more the domestic political bargains than GATT rules that sustained the openness that came to prevail.

The nation-state was the enabler of globalization, but also the ultimate obstacle to its deepening. Combining globalization with healthy domestic polities relied on managing this tension well. Veer too much in the direction of globalization, as in the 1920s, and we would erode the institutions underpinning markets. Veer too much in the direction of the state, as in the 1930s, and we would forfeit the benefits of international commerce.

From the 1980s on, the ideological balance took a decisive shift in favor of markets and against governments. The result internationally was an all-out push for what I have called “hyperglobalization”, the attempt to eliminate all transaction costs that hinder trade and capital flows. The World Trade Organization was the crowning achievement of this effort in the trade arena. Trade rules were now extended to services, agriculture, subsidies, intellectual property rights, sanitary and phytosanitary standards, and other types of what were previously considered to be domestic policies. In finance, freedom of capital mobility became the norm, rather than the exception, with regulators focusing on the global harmonization of financial regulations and standards. A majority of European Union members went the furthest by first reducing exchange-rate movements among themselves and ultimately adopting a single currency.

The upshot was that domestic governance mechanisms were weakened while their global counterparts remained incomplete. The flaws of the new approach became evident soon enough. One type of failure arose from pushing rule making onto supranational domains too far beyond the reach of political debate and control. This failure was exhibited in persistent complaints about the democratic deficit, lack of legitimacy, and loss of voice and accountability. These complaints became permanent fixtures attached to the World Trade Organization and Brussels institutions.

Where rule making remained domestic, another type of failure arose. Growing volumes of trade with countries at different levels of development and with highly dissimilar institutional arrangements exacerbated inequality and economic insecurity at home. Even more destructive, the absence at the global level of the institutions that have tamed domestic finance (a lender of last resort, deposit insurance, bankruptcy laws, and fiscal stabilizers) rendered global finance a source of instability and periodic crises of massive proportions. Domestic policies alone were inadequate to address the problems that extreme economic and financial openness created. Fittingly, the countries that did the best in the new regime were those that did not let their enthusiasm for free trade and free flows of capital get the better of them.

China, which engineered history’s most impressive poverty reduction and growth outcomes, was, of course, a major beneficiary of others’ economic openness. But for its part, it followed a highly cautious strategy that combined extensive industrial policies with selective, delayed import liberalization and capital controls. Effectively, China played the globalization game by Bretton Woods rules rather than by hyperglobalization rules.

Is Global Governance Feasible or Desirable?

By now it is widely understood that globalization’s ills derive from the imbalance between the global nature of markets and the domestic nature of the rules that govern them. As a matter of logic, the imbalance can be corrected in only one of two ways: expand governance beyond the nation-state or restrict the reach of markets. In polite company, only the first option receives much attention.

Global governance means different things to different people. For policy officialdom, it refers to new intergovernmental forums, such as the Group of 20 and the Financial Stability Forum. For some analysts, it means the emergence of transnational networks of regulators setting common rules, from sanitary to capital-adequacy standards. For other analysts, it is “private governance” regimes, such as fair trade and corporate social responsibility. Yet others imagine the development of an accountable global administrative process that depends “on local debate, is informed by global comparisons, and works in a space of public reasons.” For many activists, it signifies greater power for international nongovernmental organizations.

It goes without saying that such emergent forms of global governance remain weak. But the real question is whether they can develop and become strong enough to sustain hyperglobalization and spur the emergence of truly global identities. I do not believe they can.

I develop my argument in four steps: (1) market-supporting institutions are not unique, (2) communities differ in their needs and preferences regarding institutional forms, (3) geographic distance limits the convergence in those needs and preferences, and (4) experimentation and competition among diverse institutional forms are desirable.

Market-supporting Institutions Are Not Unique

It is relatively straightforward to specify the functions that market-supporting institutions serve, as I did previously. They create, regulate, stabilize, and legitimate markets. But specifying the form that institutions should take is another matter altogether. There is no reason to believe that these functions can be provided only in specific ways or to think that there is only a limited range of plausible variation. In other words, institutional function does not map uniquely into form.

All advanced societies are some variant of a market economy with dominantly private ownership. But the United States, Japan, and the European nations have evolved historically under institutional setups that differ significantly. These differences are revealed in divergent practices in labor markets, corporate governance, social welfare systems, and approaches to regulation. That these nations have managed to generate comparable amounts of wealth under different rules is an important reminder that there is not a single blueprint for economic success. Yes, markets, incentives, property rights, stability, and predictability are important. But they do not require cookie-cutter solutions.

Economic performance fluctuates, even among advanced countries, so institutional fads are common. In recent decades, European social democracy, Japanese-style industrial policy, the US model of corporate governance and finance, and Chinese state capitalism have periodically come into fashion, only to recede from attention once their stars faded. Despite efforts by international organizations, such as the World Bank and the Organisation for Economic Co-operation and Development (OECD), to develop “best practices,” institutional emulation rarely succeeds.

One reason is that elements of the institutional landscape tend to have a complementary relationship to each other, dooming partial reform to failure. For example, in the absence of labor market training programs and adequate safety nets, deregulating labor markets by making it easier for firms to fire their workers can easily backfire. Without a tradition of strong stakeholders that restrain risk-taking, allowing financial firms to self-regulate can be a disaster. In their well-known book Varieties of Capitalism, Peter Hall and David Soskice identified two distinct institutional clusters among advanced industrial economies, which they called “liberal market economies” and “coordinated market economies.” We can certainly identify additional models as well if we turn to Asia.

The more fundamental point has to do with the inherent malleability of institutional designs. As Roberto Unger has emphasized, there is no reason to think that the range of institutional divergence we observe in the world today exhausts all feasible variation. Desired institutional functions (aligning private incentives with social optimality, establishing macroeconomic stability, achieving social justice) can be generated in innumerable ways, limited only by our imagination.

The idea that there is a best-practice set of institutions is an illusion.

This is not to say that differences in institutional arrangements do not have real consequences. Institutional malleability does not mean that institutions always perform adequately: there are plenty of societies whose institutions patently fail to provide for adequate incentives for production, investment, and innovation, not to mention social justice. But even among relatively successful societies, different institutional configurations often have varying implications for distinct groups. Compared to coordinated market economies, liberal market economies, for example, present better opportunities for the most creative and successful members of society, but also tend to produce greater inequality and economic insecurity for their working classes. Richard Freeman has shown that more highly regulated labor market environments produce less dispersion in earnings but not necessarily higher rates of unemployment.

There is an interesting analogy here to the second fundamental theorem of welfare economics. The theorem states that any Pareto-efficient equilibrium can be obtained as the outcome of a competitive equilibrium with an appropriate distribution of endowments. Institutional arrangements are, in effect, the rules that determine the allocation of rights to a society’s resources; they shape the distribution of endowments in the broadest sense. Each Pareto-efficient outcome can be sustained by a different set of rules. And conversely, each set of rules has the potential to generate a different Pareto-efficient outcome. (I say potential because “bad” rules will clearly result in Pareto-inferior outcomes.)
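
For readers who want the formal version of the analogy, here is a compact statement of the theorem. The notation is mine, not the author’s, and the standard convexity, continuity, and non-satiation assumptions are left implicit.

    % Second fundamental theorem of welfare economics (compact sketch; requires amsmath).
    % Standard assumptions on preferences (convexity, continuity, local non-satiation) implicit.
    % For any Pareto-efficient allocation x* of an exchange economy with endowments w_i,
    % there exist prices p and lump-sum transfers T_i summing to zero that
    % decentralize x* as a competitive equilibrium:
    \[
      \exists\, p \neq 0,\ (T_1,\dots,T_n)\ \text{with}\ \textstyle\sum_i T_i = 0
      \quad \text{such that } x^{*} \text{ is a Walrasian equilibrium under budgets }
      p \cdot x_i \le p \cdot \omega_i + T_i .
    \]

Read institutional rules as the assignment of endowments and transfers: each admissible set of rules picks out one of the many Pareto-efficient outcomes, which is the indeterminacy the next paragraph turns to.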

It is not clear how we can choose ex ante among Pareto-efficient equilibria. It is precisely this indeterminacy that makes the choice among alternative institutions a difficult one, best left to political communities themselves.

Heterogeneity and Diversity

Immanuel Kant wrote that religion and language divide people and prevent a universal monarchy. But there are many other things that divide us. As I discussed in the previous section, institutional arrangements have distinct implications for the distribution of well-being and many other features of economic, social, and political life.

We do not agree on how to trade equality against opportunity, economic security against innovation, stability against dynamism, economic outcomes against social and cultural values, and many other consequences of institutional choice. Differences in preferences are ultimately the chief argument against institutional harmonization globally.

Consider how financial markets should be regulated. There are many choices to be made. Should commercial banking be separated from investment banking? Should there be a limit on the size of banks? Should there be deposit insurance, and, if so, what should it cover? Should banks be allowed to trade on their own account? How much information should they reveal about their trades? Should executives’ compensation be set by directors, with no regulatory controls? What should the capital and liquidity requirements be? Should all derivative contracts be traded on exchanges? What should be the role of credit-rating agencies? And so on.

A central trade-off here is between financial innovation and financial stability. A light approach to regulation will maximize the scope for financial innovation (the development of new financial products), but at the cost of increasing the likelihood of financial crises and crashes. Strong regulation will reduce the incidence and costs of crises, but possibly at the price of making finance more expensive and excluding many from its benefits. There is no single ideal point along this trade-off. Requiring communities with different preferences along the innovation-stability continuum to settle on the same solution may have the virtue of reducing transaction costs in finance. But it would come at the cost of imposing arrangements that are out of sync with local preferences. This is the conundrum that financial regulation faces at the moment, with banks pushing for common global rules and domestic legislatures and policy makers resisting.

Here is another example from food regulation. In a controversial 1998 case, the World Trade Organization sided with the United States in ruling that the European Union’s ban on beef reared on certain growth hormones violated the Agreement on Sanitary and Phytosanitary Standards (SPS). It is interesting that the ban did not discriminate against imports and applied to imported and domestic beef alike. There did not seem to be a protectionist motive behind the ban, which had been pushed by consumer lobbies in Europe that were alarmed by the potential health threats. Nonetheless, the World Trade Organization judged that the ban violated the requirement in the SPS agreement that policies be based on “scientific evidence.” (In a similar case in 2006, the World Trade Organization also ruled against the European Union’s restrictions on genetically modified food and seeds [GMOs], finding fault once again with the adequacy of the European Union’s scientific risk assessment.)

There is indeed scant evidence to date that growth hormones pose any health threats. The European Union argued that it had applied a broader principle not explicitly covered by the World Trade Organization, the “precautionary principle,” which permits greater caution in the presence of scientific uncertainty. The precautionary principle reverses the burden of proof. Instead of asking, “Is there reasonable evidence that growth hormones, or GMOs, have adverse effects?” it requires policy makers to ask, “Are we reasonably sure that they do not?” In many unsettled areas of scientific knowledge, the answer to both questions can be no. Whether the precautionary principle makes sense depends both on the degree of risk aversion and on the extent to which potential adverse effects are large and irreversible.

As the European Commission argued (unsuccessfully), regulatory decisions here cannot be made purely on the basis of science. Politics, which aggregates a society’s risk preferences, must play the determining role. It is reasonable to expect that the outcome will vary across societies. Some (like the United States) may go for low prices; others (like the European Union) will go for greater safety.

The suitability of institutional arrangements also depends on levels of development and historical trajectory.

Alexander Gerschenkron famously argued that lagging countries would need institutions, such as large banks and state-directed investments, that differed from those present in the original industrializers. To a large extent, his arguments have been validated. But even among rapidly growing developing nations, there is considerable institutional variation. What works in one place rarely does in another.

Consider how some of the most successful developing nations joined the world economy. South Korea and Taiwan relied heavily on export subsidies to push their firms outward during the 1960s and 1970s and liberalized their import regime only gradually. China established special economic zones in which export-oriented firms were allowed to operate under different rules than those applied to state enterprises and to others focused on the internal market. Chile, by contrast, followed the textbook model and sharply reduced import barriers to force domestic firms to compete with foreign firms directly in the home market. The Chilean strategy would have been a disaster if applied in China, because it would have led to millions of job losses in state enterprises and incalculable social consequences. And the Chinese model would not have worked as well in Chile, a small nation that is not an obvious destination for multinational enterprises.

Alberto Alesina and Enrico Spolaore have explored how heterogeneity in preferences interacts with the benefits of scale to determine endogenously the number and size of nations. In their basic model, individuals differ in their preferences over the type of public goods, or, in my terms, the specific institutional arrangements provided by the state. The larger the population over which the public good is provided, the lower the unit cost of provision. On the other hand, the larger the population, the greater the number of people who find their preferences ill served by the specific public good that is provided. Smaller countries are better able to respond to their citizens’ needs. The optimum number of jurisdictions, or nation-states, trades off the scale benefits of size against the heterogeneity costs of public-good provision.

The important analytical insight of the Alesina-Spolaore model is that it makes little sense to optimize along the market-size dimension (and eliminate jurisdictional discontinuities) when there is heterogeneity in preferences along the institutional dimension. The framework does not tell us whether we have too many nations at present or too few. But it does suggest that a divided world polity is the price we pay for institutional arrangements that are, in principle at least, better tailored to local preferences and needs.
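
A stylized numerical sketch may help make the trade-off concrete. It is my own simplification, not the authors’ full model: a fixed cost k of providing institutions is shared by everyone in a jurisdiction of size s, while the average mismatch between the institutions provided and residents’ preferences grows in proportion to s.

    # Stylized Alesina-Spolaore-type trade-off (a simplification, not their full model).
    # k: fixed cost of providing a set of institutions, shared by a jurisdiction of size s
    # a: heterogeneity cost, growing with jurisdiction size as preferences become more diverse
    import math

    def per_capita_cost(s, k=100.0, a=1.0):
        """Per-capita cost of governance in a jurisdiction of size s."""
        return k / s + a * s          # scale benefit vs. heterogeneity cost

    def optimal_size(k=100.0, a=1.0):
        """Cost-minimizing jurisdiction size: s* = sqrt(k / a)."""
        return math.sqrt(k / a)

    k, a, world_population = 100.0, 1.0, 1000.0
    s_star = optimal_size(k, a)
    print(f"optimal jurisdiction size: {s_star:.1f}")
    print(f"implied number of jurisdictions: {world_population / s_star:.0f}")
    # More heterogeneous preferences (larger a) shrink s*: more, smaller polities.
    # Bigger scale benefits (larger k) enlarge s*: fewer, larger polities.

The point of the sketch is only that neither corner, a single world-sized jurisdiction or a scatter of micro-polities, minimizes the per-capita cost; some intermediate division does.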

Distance Lives: The Limits to Convergence

We need to consider an important caveat to the discussion on heterogeneity, namely, the endogenous nature of many of the differences that set communities apart. That culture, religion, and language are in part a side product of nation-states is an old theme that runs through the long trail of the literature on nationalism. From Ernest Renan down, theorists of nationalism have stressed that cultural differences are not innate and can be shaped by state policies. Education, in particular, is a chief vehicle through which national identity is molded. Ethnicity has a certain degree of exogeneity, but its salience in defining identity is also a function of the strength of the nation-state. A resident of Turkey who defines himself as Muslim is potentially a member of a global community, whereas a “Turk” owes primary loyalty to the Turkish state.

Much the same can be said about other characteristics along which communities differ. If poor countries have distinctive institutional needs arising from their low levels of income, we may perhaps expect these distinctions to disappear as income levels converge. If societies have different preferences over risk, stability, equity, and so on, we may similarly expect these differences to narrow as a result of greater communication and economic exchange across jurisdictional boundaries. Today’s differences may exaggerate tomorrow’s differences. In a world where people are freed from their local moorings, they are also freed from their local idiosyncrasies and biases. Individual heterogeneity may continue to exist, but it need not be correlated across geographic space.

There is some truth to these arguments, but they are also counterbalanced by a considerable body of evidence that suggests that geographic distance continues to produce significant localization effects despite the evident decline in transportation and communication costs and other man-made barriers. One of the most striking studies in this vein was by Anne-Celia Disdier and Keith Head, which looked at the effect of distance on international trade over the span of history. It is a stylized fact of the empirical trade literature that the volume of bilateral trade declines with the geographic distance between trade partners. The typical distance elasticity is around 1.0, meaning that trade falls by 10 percent for every 10 percent increase in distance. This is a fairly large effect. Presumably, what lies behind it is not just transportation and communication costs but the lack of familiarity and cultural differences. (Linguistic differences are often controlled for separately.)
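
To see what an elasticity of about 1.0 implies, here is a back-of-the-envelope version of the standard gravity equation used in this literature. The functional form is the textbook one; the numbers are purely illustrative.

    # Textbook gravity equation: trade_ij = G * Y_i * Y_j / distance_ij ** theta
    # theta is the distance elasticity discussed in the text; all numbers are illustrative.

    def gravity_trade(y_i, y_j, distance, theta=1.0, g=1.0):
        """Predicted bilateral trade under a simple gravity model."""
        return g * y_i * y_j / distance ** theta

    base = gravity_trade(1.0, 1.0, distance=1000, theta=1.0)
    farther = gravity_trade(1.0, 1.0, distance=1100, theta=1.0)   # 10% farther apart
    print(f"trade falls by {100 * (1 - farther / base):.1f}%")    # ~9.1%, i.e. roughly 10%

    # A larger elasticity sharpens the penalty: at theta = 2 the same 10% increase
    # in distance cuts predicted trade by about 17%.
    base2 = gravity_trade(1.0, 1.0, distance=1000, theta=2.0)
    farther2 = gravity_trade(1.0, 1.0, distance=1100, theta=2.0)
    print(f"with theta = 2: trade falls by {100 * (1 - farther2 / base2):.1f}%")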

Disdier and Head undertook a meta-analysis, collecting 1,467 distance effects from 103 papers covering trade flows at different points in time, and stumbled on a surprising result: distance matters more now than it did in the late nineteenth century. The distance effect seems to have increased from the 1960s, remaining persistently high since then (see Figure 2.4). If anything, globalization seems to have raised the penalty that geographic distance imposes on economic exchange. This apparent paradox was also confirmed by Matias Berthelon and Caroline Freund, who, using a consistent trade data set, found an increase in the absolute value of the distance elasticity from 1.7 to 1.9 between 1985-1989 and 2001-2005. Berthelon and Freund showed that the result was not due to a compositional switch from low- to high-elasticity goods but to “a significant and increasing impact of distance on trade in almost 40 percent of industries.”

Figure 2.4 Estimated distance effect over time. Source: Disdier, A.-C., and Head, K. 2008. “The Puzzling Persistence of the Distance Effect on Bilateral Trade,” The Review of Economics and Statistics 90(1): 37-48. With permission from MIT Press Journals.

Leaving this puzzle aside for the moment, let us turn to an altogether different type of evidence. In the mid-1990s a new housing development in one of the suburbs of Toronto engaged in an interesting experiment. The houses were built from the ground up with the latest broadband telecommunications infrastructure and came with a host of new Internet technologies. Residents of Netville (a pseudonym) had access to high-speed Internet, a videophone, an online jukebox, online health services, discussion forums, and a suite of entertainment and educational applications. These new technologies made the town an ideal setting for nurturing global citizens. The people of Netville were freed from the tyranny of distance. They could communicate with anyone in the world as easily as they could with a neighbor, forge their own global links, and join virtual communities in cyberspace. One might expect they would begin to define their identities and interests increasingly in global, rather than in local, terms.

What actually transpired was quite different. Glitches experienced by the telecom provider left some homes without a link to the broadband network. This situation allowed researchers to compare wired and nonwired households and reach some conclusions about the consequences of being wired. Far from letting local links erode, wired people actually strengthened their existing local social ties. Compared to nonwired residents, they recognized more of their neighbors, talked to them more often, visited them more frequently, and made many more local phone calls. They were more likely to organize local events and mobilize the community around common problems. They used their computer network to facilitate a range of social activities, from organizing barbecues to helping local children with their homework.

Netville exhibited, as one resident put it, “a closeness that you don’t see in many communities.” What was supposed to have unleashed global engagement and networks had instead strengthened local social ties.

There are plenty of other examples that belie the death of distance. One study identified strong “gravity” effects on the Internet: “Americans are more likely to visit websites from nearby countries, even controlling for language, income, immigrant stock, etc.” For digital products related to music, games, and pornography, a 10 percent increase in physical distance reduces the probability that an American will visit the website by 33 percent, a distance elasticity even higher (in absolute value) than for trade in goods.

Despite the evident reduction in transportation and communication costs, the production location of globally traded products is often determined by regional agglomeration effects. When the New York Times recently examined why Apple’s iPhone is manufactured in China, rather than in the United States, the answer turned out to have little to do with comparative advantage. China had already developed a massive network of suppliers, engineers, and dedicated workers in a complex known informally as Foxconn City that provided Apple with benefits that the United States could not match.

More broadly, incomes and productivity do not always exhibit a tendency to converge as markets for goods, capital, and technology become more integrated. The world economy’s first era of globalization produced a large divergence in incomes between the industrializing countries at the center and lagging regions in the periphery that specialized in primary commodities. Similarly, economic convergence has been the exception rather than the rule in the postwar period.

Economic development depends perhaps more than ever on what happens at home. If the world economy exerts a homogenizing influence, it is at best a partial one, competing with many other influences that go the other way.

Relationships based on proximity are one such offsetting influence. Many, if not most, exchanges are based on relationships, rather than textbook-style anonymous markets. Geographic distance protects relationships. As Ed Leamer put it, “geography, whether physical or cultural or informational, limits competition since it creates cost-advantaged relationships between sellers and buyers who are located ‘close’ to one another.” But relationships also create a role for geography. Once relationship-specific investments are made, geography becomes more important. The iPhone could have been produced anywhere, but once relationships with local suppliers were established, lock-in effects made it difficult for Apple to move anywhere else.

Technological progress has an ambiguous effect on the importance of relationships. On the one hand, the decline in transportation and communication costs reduces the protective effect of distance in market relationships. It may facilitate the creation of long-distance relationships that cross national boundaries. On the other hand, the increase in complexity and product differentiation, along with the shift from Fordist mass production to new, distributed modes of learning, increases the relative importance of spatially circumscribed relationships. The new economy runs on tacit knowledge, trust, and cooperation, which still depend on personal contact. As Kevin Morgan put it, spatial reach does not equal “social depth.”

Hence, market segmentation is a natural feature of economic life, even in the absence of jurisdictional discontinuities. Neither economic convergence nor preference homogenization is the inevitable consequence of globalization.

Experimentation and Competition

Finally, since there is no fixed, ideal shape for institutions and diversity is the rule rather than exception, a divided global polity presents an additional advantage. It enables experimentation, competition among institutional forms, and learning from others. To be sure, trial and error can be costly when it comes to society’s rules. Still, institutional diversity among nations is as close as we can expect to a laboratory in real life. Josiah Ober has discussed how competition among Greek city-states during 800-300 BCE fostered institutional innovation in areas of citizenship, law, and democracy, sustaining the relative prosperity of ancient Greece.

There can be nasty sides to institutional competition. One of them is the nineteenth-century idea of a Darwinian competition among states, whereby war is the struggle through which humanity achieves progress and self-realization. The equally silly, if less bloody, modern counterpart of this idea is the notion of economic competition among nations, whereby global commerce is seen as a zero-sum game.

Both ideas are based on the belief that the point of competition is to lead us to the one perfect model. But competition works in diverse ways. In economic models of “monopolistic competition,” producers compete not just on price but on variety, by differentiating their products from others’.

Similarly, national jurisdictions can compete by offering institutional “services” that are differentiated along the dimensions I discussed earlier.

One persistent worry is that institutional competition sets off a race to the bottom. To attract mobile resources, capital, multinational enterprises, and skilled professionals, jurisdictions may lower their standards and relax their regulations in a futile dynamic to outdo other jurisdictions. Once again, this argument overlooks the multidimensional nature of institutional arrangements. Tougher regulations or standards are presumably put in place to achieve certain objectives: they offer compensating benefits elsewhere. We may all wish to be free to drive at any speed we want, but few of us would move to a country with no speed limit at all where, as a result, deadly traffic accidents would be much more common. Similarly, higher labor standards may lead to happier and more productive workers; tougher financial regulation to greater financial stability; and higher taxes to better public services, such as schools, infrastructure, parks, and other amenities. Institutional competition can foster a race to the top.

The only area in which some kind of race to the bottom has been documented is corporate taxation. Tax competition has played an important role in the remarkable reduction in corporate taxes around the world since the early 1980s. In a study on OECD countries, researchers found that when other countries reduce their average statutory corporate tax rate by 1 percentage point, the home country follows by reducing its tax rate by 0.7 percentage points. The study indicated that international tax competition takes place only among countries that have removed their capital controls. When such controls are in place, capital and profits cannot move as easily across national borders and there is no downward pressure on capital taxes. So, the removal of capital controls appears to be a factor in driving the reduction in corporate tax rates.
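
A crude way to read that 0.7 reaction coefficient, purely my own illustration rather than anything in the study: if each side keeps matching 70 percent of the other’s latest cut, the rounds of response compound into a decline roughly twice the size of the first move.

    # Stylized tax-competition dynamics. The 0.7 reaction coefficient is the estimate
    # cited in the text; the round-by-round feedback loop is my own simplification.

    def cumulative_home_cut(foreign_initial_cut=1.0, reaction=0.7, rounds=50):
        """Cumulative cut in the home statutory rate when both countries keep responding."""
        home_total = 0.0
        last_foreign_cut = foreign_initial_cut
        for _ in range(rounds):
            home_cut = reaction * last_foreign_cut    # home matches 70% of foreign's move
            home_total += home_cut
            last_foreign_cut = reaction * home_cut    # foreign then matches 70% of home's
        return home_total

    print(f"{cumulative_home_cut():.2f} percentage points")   # about 1.37
    # Geometric series: 0.7 * (1 + 0.49 + 0.49**2 + ...) = 0.7 / (1 - 0.49) ~= 1.37,
    # so the feedback roughly doubles the initial 0.7-point response.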

On the other hand, there is scant evidence of similar races to the bottom in labor and environmental standards or in financial regulation. The geographically confined nature of the services (or public goods) offered by national jurisdictions often presents a natural restraint on the drive toward the bottom. If you want to partake of those services, you need to be in that jurisdiction. But corporate tax competition is also a reminder that the costs and benefits need not always neatly cancel each other. Although it is not a perfect substitute for local sourcing, international trade does allow a company to serve a high-tax market from a low-tax jurisdiction. The problem becomes particularly acute when the arrangement in question has a “solidarity” motive and is explicitly redistributive (as in many tax examples). In such cases, it becomes desirable to prevent “regulatory arbitrage” even if it means tightening controls at the border.

What Do Global Citizens Do?

Let’s circle back to Theresa May’s comments at the beginning of this chapter. What does it even mean to be a “global citizen”? The Oxford English Dictionary defines “citizen” as “a legally recognized subject or national of a state or commonwealth.” Hence, citizenship presumes an established polity, “a state or commonwealth”, of which one is a member. Countries have such polities; the world does not.

Proponents of global citizenship quickly concede that they do not have a literal meaning in mind. They are thinking figuratively. Technological revolutions in communications and economic globalization have brought citizens of different countries together, they argue. The world has shrunk, and we must act bearing the global implications in mind. And besides, we all carry multiple, overlapping identities. Global citizenship does not, and need not, crowd out parochial or national responsibilities.

All well and good. But what do global citizens really do?

Real citizenship entails interacting and deliberating with other citizens in a shared political community. It means holding decision makers to account and participating in politics to shape the policy outcomes. In the process, my ideas about desirable ends and means are confronted with and tested against those of my fellow citizens.

Global citizens do not have similar rights or responsibilities. No one is accountable to them, and there is no one to whom they must justify themselves. At best, they form communities with like-minded individuals from other countries. Their counterparts are not citizens everywhere but self-designated “global citizens” in other countries.

Of course, global citizens have access to their domestic political systems to push their ideas through. But political representatives are elected to advance the interests of the people who put them in office. National governments are meant to look out for national interests, and rightly so. This does not exclude the possibility that constituents might act with enlightened self-interest, by taking into account the consequences of domestic action for others.

But what happens when the welfare of local residents comes into conflict with the well-being of foreigners, as it often does? Isn’t disregard of their compatriots in such situations precisely what gives so-called cosmopolitan elites their bad name?

Global citizens worry that the interests of the global commons may be harmed when each government pursues its own narrow interest. This is certainly a worry for issues that truly concern the global commons, such as climate change or pandemics. But in most economic areas (taxes, trade policy, financial stability, fiscal and monetary management), what makes sense from a global perspective also makes sense from a domestic perspective. Economics teaches that countries should maintain open economic borders, sound prudential regulation, and full-employment policies, not because these are good for other countries but because they serve to enlarge the domestic economic pie.

Of course, policy failures (protectionism, for example) do occur in all of these areas. But these reflect poor domestic governance, not a lack of cosmopolitanism. They result either from policy elites’ inability to convince domestic constituencies of the benefits of the alternative, or from their unwillingness to make adjustments to ensure that everyone does indeed benefit.

Hiding behind cosmopolitanism in such instances (when pushing for trade agreements, for example) is a poor substitute for winning policy battles on their merits. And it devalues the currency of cosmopolitanism when we truly need it, as we do in the fight against global warming.

Few have expounded on the tension between our various identities (local, national, global) as insightfully as the philosopher Kwame Anthony Appiah. In this age of “planetary challenges and interconnection between countries,” he wrote in response to May’s statement, “the need has never been greater for a sense of a shared human fate.” It is hard to disagree.

Yet cosmopolitans often come across like the character from Fyodor Dostoyevsky’s The Brothers Karamazov who discovers that the more he loves humanity in general, the less he loves people in particular. Global citizens should be wary that their lofty goals do not turn into an excuse for shirking their duties toward their compatriots.

We have to live in the world we have, with all its political divisions, and not the world we wish we had. The best way to serve global interests is to live up to our responsibilities within the political institutions that matter: those that exist, within national borders.

Who Needs the Nation-State?

The design of institutions is shaped by a fundamental trade-off. On the one hand, relationships and preference heterogeneity push governance down. On the other hand, the scale and scope of the benefits of market integration push governance up. A corner solution is rarely optimal. An intermediate outcome, a world divided into diverse polities, is the best that we can do.

Our failure to internalize the lessons of this simple point leads us to pursue dead ends. We push markets beyond what their governance can support. We set global rules that defy the underlying diversity in needs and preferences. We downgrade the nation-state without compensating improvements in governance elsewhere. The failure lies at the heart of globalization’s unaddressed ills as well as the decline in our democracies’ health.

Who needs the nation-state? We all do.

Chapter 3

Europe’s Struggles

The eurozone was an unprecedented experiment. Its members tried to construct a single, unified market, in goods, services, and money, while political authority remained vested in the constituting national units. There would be one market, but many polities.

The closest historical parallel was that of the Gold Standard. Under the Gold Standard, countries effectively subordinated their economic policies to a fixed parity against gold and the requirements of free capital mobility. Monetary policy consisted of ensuring the parity was not endangered. Since there was no conception of countercyclical fiscal policy or the welfare state, the loss of policy autonomy that these arrangements entailed had little political cost. Or so it seemed at the time. Starting with Britain in 1931, the Gold Standard would eventually unravel precisely because the high interest rates required to maintain the gold parity became politically unsustainable in view of domestic unemployment.

The postwar arrangements that were erected on the ashes of the gold standard were consciously designed to facilitate economic management by national political authorities. John Maynard Keynes’s signal contribution to saving capitalism was recognizing that it required national economic management. Capitalism worked only . . .

*

from

Straight Talk on Trade. Ideas for a Sane World Economy

by Dani Rodrik

get it at Amazon.com

Trump’s phony, blowhard trade war just got real, the Economic Consequences – Barry Eichengreen.

For those who observe that the economic and financial fallout from US President Donald Trump’s trade war has been surprisingly small, the best response is that a lagged effect is exactly what we should expect: just wait.

US President Donald Trump’s phony, blowhard trade war just got real.

The steel and aluminum tariffs that the Trump administration imposed at the beginning of June were important mainly for their symbolic value, not for their real economic impact. While the tariffs signified that the United States was no longer playing by the rules of the world trading system, they targeted just $45 billion of imports, less than 0.25% of GDP in an $18.5 trillion US economy.

On July 6, however, an additional 25% tariff on $34 billion of Chinese exports went into effect, and China retaliated against an equivalent volume of US exports. An angry Trump has ordered the US trade representative to draw up a list of additional Chinese goods, worth more than $400 billion, that could be taxed, and China again vowed to retaliate. Trump has also threatened to impose tariffs on $350 billion worth of imported motor vehicles and parts. If he does, the European Union and others could retaliate against an equal amount of US exports.

We are now talking about real money: nearly $1 trillion of US imports and an equivalent amount of US export sales and foreign investments.

The mystery is why the economic and financial fallout from this escalation has been so limited. The US economy is humming along. The Purchasing Managers’ Index was up again in June. Wall Street has wobbled, but there has been nothing resembling its sharp negative reaction to the Smoot-Hawley Tariff of 1930. Emerging markets have suffered capital outflows and currency weakness, but this is more a consequence of Federal Reserve interest-rate hikes than of any announcements emanating from the White House.

There are three possible explanations. First, purchasing managers and stock market investors may be betting that sanity will yet prevail. They may be hoping that Trump’s threats are just bluster, or that the objections of the US Chamber of Commerce and other business groups will ultimately register.

But this ignores the fact that Trump’s tariff talk is wildly popular with his base. One recent poll found that 66% of Republican voters backed Trump’s threatened tariffs against China. Trump ran in 2016 on a protectionist vow that he would no longer allow other countries to “take advantage” of the US. His voters expect him to deliver on that promise, and he knows it.

Second, the markets may be betting that Trump is right when he says that trade wars are easy to win. Other countries that depend on exports to the US may conclude that it is in their interest to back down. In early July, the European Commission was reportedly contemplating a tariff-cutting deal to address Trump’s complaint that the EU taxes American cars at four times the rate the US taxes European sedans.

But China shows no willingness to buckle under US pressure. Canada, that politest of countries, is similarly unwilling to be bullied; it has retaliated with 25% tariffs on $12 billion of US goods. And the EU would contemplate concessions only if the US offers some in return, such as eliminating its prohibitive tariffs on imported light pickup trucks and vans, and only if other exporters like Japan and South Korea go along.

Third, it could be that the macroeconomic effects of even the full panoply of US tariffs, together with foreign retaliation, are relatively small. Leading models of the US economy, in particular, imply that a 10% increase in the cost of imported goods will lead to a one-time increase in inflation of at most 0.7%.

This is simply the law of iterated fractions at work. Imports are 15% of US GDP. Multiply 0.15 by 0.10 (the hypothesized tariff rate), and you get 1.5%. Allow for some substitution away from more expensive imported goods, and the number drops below 1%. And if growth slows because of the higher cost of imported intermediate inputs, the Fed can offset this by raising interest rates more slowly. Foreign central banks can do likewise.
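
Written out, the arithmetic of the two preceding paragraphs looks like this. The GDP, import-share, and tariff figures are the ones quoted above; the substitution factor is an assumed illustrative number, not anything from the article.

    # Back-of-the-envelope tariff arithmetic using the figures quoted in the text.

    gdp = 18.5e12                    # US GDP, dollars
    steel_alu_imports = 45e9         # imports covered by the June steel/aluminum tariffs
    print(f"share of GDP: {steel_alu_imports / gdp:.2%}")       # ~0.24%, i.e. under 0.25%

    import_share = 0.15              # imports as a share of US GDP
    tariff_rate = 0.10               # hypothesized increase in the cost of imported goods
    first_pass = import_share * tariff_rate
    print(f"first-pass price-level effect: {first_pass:.1%}")   # 1.5%

    substitution_factor = 0.6        # assumed: buyers shift toward domestic substitutes
    print(f"after substitution: {first_pass * substitution_factor:.1%}")   # below 1%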

Still, one worries, because the standard economic models are notoriously bad at capturing the macroeconomic effects of uncertainty, which trade wars create with a vengeance. Investment plans are made in advance, so it may take, say, a year for the impact of that uncertainty to materialize, as was the case in the United Kingdom following the 2016 Brexit referendum. Taxing intermediate inputs will hurt efficiency, while shifting resources away from dynamic high-tech sectors in favor of old-line manufacturing will depress productivity growth, with further negative implications for investment. And these are outcomes that the Fed cannot easily offset.

So, for those who observe that the economic and financial fallout from Trump’s trade war has been surprisingly small, the best response is: just wait.

*

Barry Eichengreen is Professor of Economics at the University of California, Berkeley, and a former senior policy adviser at the International Monetary Fund. His latest book is The Populist Temptation: Economic Grievance and Political Reaction in the Modern Era.

Breaking Down Is Waking Up. Can psychological suffering be a spiritual gateway? – Dr Russell Razzaque.

There are as many types of mental illness as there are people who suffer them.

The World Health Organization estimates that approximately 450 million people worldwide have a mental health problem.

None of us is immune from the existential worry that nags away in the back of our mind. We are all vulnerable to emotional and psychological turmoil in our lives and there is something fundamental about the human condition that makes it so.

There is something at the core of the experience of mental illness that draws sufferers towards the spiritual. Their suffering is an echo of the suffering we all contain within us.

EVERYONE NEEDS A BANISTER: a fixed point of reference from which we understand and engage with life. We need something to hold on to, so that when we’re hit by life’s inevitable disappointments, pain or traumas, we won’t fall too far into confusion, despair or hopelessness. With a weak banister we risk getting knocked off course, losing our bearings and falling prey to stress, psychological turmoil and mental illness. A strong banister will stand the test of time in an ever-changing world, giving us more confidence to face the knocks and hardships of life more readily.

Understanding who we are and how we fit into the world is a quest we start at birth and continue through the whole of our lives. Sometimes these questions come to the fore, but usually they bubble away somewhere beneath the surface: ‘Who am I?’ ‘Am I normal?’ ‘Why am I here?’ ‘Is there any real point to life?’ Deep down inside we know that nothing lasts, the trees, landscapes and life around us will all one day perish, just as surely as we ourselves will, and everyone we know too. But we have evolved ways to hold this reality and the questions it hurls up at bay.

We construct banisters to help us navigate our way round this maze of pain and insecurity: a set of beliefs and lifestyles that help us form a concrete context to make sense of things and, as the saying goes, ‘keep calm and carry on’. But, for most of us, the core beliefs and lifestyles that hold us together still leave us vulnerable to instability. The sense of identity we evolve is so precarious that we’re often buffeted by life onto shaky ground. And, as a consequence, we become prone to various forms of psychological distress; indeed, for vast swathes of society this proceeds all the way to mental illness, whether that be labelled as anxiety, depression, bipolar disorder or the most severe form of mental illness, psychosis.

There are as many types of mental illness as there are people who suffer them. One of the reasons I decided to specialize in psychiatry, shortly after qualifying from medical school, was that, unlike any other branch of medicine, no two people I saw ever came to me with the same issues. Although different presentations might loosely fit into different categories, there appeared to me to be as many ways of becoming mentally unwell as there were ways of being human. I have since specialized in the more severe and acute end of psychiatry. I currently work in a secure, intensive care facility but to this day, in 16 years of practice, I have never seen two cases that were exactly the same.

And the numbers just seem to be going up. In the UK today, one in four adults experiences at least one diagnosable mental health problem in any one year. In the USA, the figure is the same and this equates to just over 20 million people experiencing depression and 2.4 million diagnosed with schizophrenia, a severe form of mental illness where the individual experiences major disturbances in thoughts and perceptions. The World Health Organization estimates that approximately 450 million people worldwide have a mental health problem.

Beyond these figures, however, are all the people who struggle with various levels of stress throughout life and, all the while, carry a fear at the back of their minds, that they too may one day slide into mental illness. In my experience, this is a fear that pervades virtually every stratum of society. Rarely am I introduced as a psychiatrist to new people in a social gathering without at least some of them quietly feeling, or even explicitly reporting, that they worry that one day they are going to need my help. Such comments are often made in jest, but the genuine anxiety that underlies them is rarely far beneath the surface. There is a niggling worry at the back of many people’s minds that something might be wrong with them; that something isn’t quite right. What they don’t realize, however, in their own private suffering, is just how much company they have in this fear. Indeed, I include myself and my colleagues among them, too. None of us is immune from the existential worry that nags away in the back of our mind.

But, if we look closely, there is also another process that can be discerned underneath all of this. Deep down inside every bubbling cauldron of insecurity, we can also find the seeds of a kind of liberation. Something is just waiting to burst forth. This something is hard to define or describe in language, but it is often in our darkest hours that we can feel it the most. And the further we fall the closer to it we get. This is why, I believe, mental illness can be so powerful, not just because of the deep distress that it contains, but also because of the authentic potential that it represents.

Mental illness, however, is just one aspect of a continuum we are all on. All of us have different ways of reacting emotionally to the experiences we encounter in life and the ones that involve a high level of distress either for oneself or for others are the ones we choose to label as mental illness. And it is this end of the spectrum that I will focus on most in this book, as it is these most stark forms of distress that present us with the greatest opportunity to observe the seeds within, and thus, ultimately, learn what is in all of us too.

There may be a variety of factors that contribute to the various forms of mental illness, of course, from childhood traumas to one’s genetic make-up, but, as the cut-off point always centres around distress, which is grounded in subjective experience, the definition itself will always remain somewhat arbitrary. That’s not to say that such definitions have no utility. By helping us communicate with each other about these complex shapes of suffering, they will also help us communicate our ideas with one another about how to help reduce the suffering encountered.

That is why I use these terms in this book, but it should be noted that I attach this large caveat from the outset. Ultimately, the only person who can really describe a person’s suffering is the sufferer himself; outside that individual, the rest of us are always necessarily off the mark. What must invariably be remembered, however, is that there is no ‘them’ and ‘us’. We are all vulnerable to emotional and psychological turmoil in our lives and there is something fundamental about the human condition that makes it so.

That is why I believe, as a psychiatrist, that the best research I ever engage in is when I explore my own vulnerabilities. That is when I start to connect with threads of the suffering that my patients are undergoing too. And what I find particularly fascinating about this process is that the deeper I descend into my own world of emotional insecurity, the more I grow to appreciate an indescribable dimension to reality that so many of my patients talk about in spiritual terms, engage with, and indeed rely upon so much of the time.

In a survey of just under 7,500 people, published in early 2013, researchers from University College London found a strong correlation between people suffering mental illness and those with a spiritual perspective on life. Though the results confused many, to me they made perfect sense.

There is something at the core of the experience of mental illness that draws sufferers towards the spiritual. Their suffering is an echo of the suffering we all contain within us.

That is why I can say from the outset, and without reticence, that my insights are based largely on a subjective pathway to our shared inner world. And it is through this perspective that I have evolved what I believe is a new banister: a new way of seeing the world and being within it. It is, however, not just that my introspection has taught me about my patients, but that my patients have also taught me about myself. Indeed I can safely say that I have gleaned just as much from the individuals I have cared for as I have from the professionals and teachers I have learnt from.

I consider myself hugely lucky to work in a profession in which looking into myself and learning about my own inner world has been, and continues to be, a vital requirement of my work (though, it has to be said that, sadly, many within my profession do not recognize this). It has propelled me into a journey of limitless exploration of both myself and the people I care for and this has led me to ever deeper understandings of the nature of mental illness, the mind and reality itself. I have drawn upon a diverse array of wisdom along the way, and my journey has ultimately led me to construct a synthesis of modern psychiatry and ancient philosophy; of new scientific findings and old spiritual practices.

But this banister comes with a health warning, as indeed all should. Just as a set of perspectives and insights can be a useful support in times of instability, so too can overreliance on them become counterproductive. That is why a banister needs to be held lightly. Gripping too tightly to anything in life is a recipe for exhaustion and, consequently, even greater instability.

What we need is a banister that, when held lightly, can allow us to move forward, rather than hold us back. I believe that such an understanding of reality and our place within it actually exists; it is also imperative to our survival as a species. I believe that life’s potential is far greater than most of us are ever aware of, and that our limitations are a lot more illusory than we know. In a sense I feel we are all suffering from a form of mental illness: a resistance to the realization of our true nature. To that end I humbly offer this book as a guiding rail out of the turmoil.

My Journey. An Exploration of Inner and Outer Worlds

Chapter 1

Wisdom in Bedlam

‘One must still have chaos in oneself to be able to give birth to a dancing star.’ Friedrich Nietzsche

MENTAL ILLNESS IS SOMETHING that most of us shy away from. Someone who exhibits behaviour or feelings that are considered out of the ordinary will, sooner or later, experience a fairly broad radius of avoidance around them. Even in psychiatric hospitals this is evident, where the less ill patients will veer away from those who are more unwell. The staff themselves are often prone to such avoidance, too. But contrary to this natural reflex that exists within all of us, moving closer to, and spending time with, someone suffering mental illness can often be quite an enlightening experience. It took me many years to realize this myself, but through the cloud of symptoms, a fascinating display of insight and depth can often be found in even the most acutely unwell. And this turned out to be true whatever the type of mental illness. The problem might be mood-related (depression or bipolar disorder, for example), or what we term neurotic (anxiety, panic or post-traumatic stress disorder), or go all the way up to the paranoia or hearing of voices that we see at the most severe stage of mental illness, termed psychosis. Indeed, the more severe the symptoms, the deeper the wisdom that appeared to be contained (though often hidden) within it.

A frequent observation of mine, for example, is just how perceptive the people I treat can be, regardless of the very evident turbulence that is going on inside. It is not uncommon for those who are newly admitted to share with me their impressions of the nursing and other staff on the ward with an uncanny degree of accuracy within only a few days of arrival. They’ll sometimes rapidly intuit the diverse array of temperaments, perspectives and personality traits among staff members and so have a feel for who is best to approach, avoid, or even wind up, depending on their mental state and needs at the time. It is likely that this acute sensitivity is one of the initial causes of their mental illness in the first place, but the flip side is that they have also managed to glean a lot about life from their experiences to date. This wisdom is often hidden by the symptoms of their illness, but it lurks there under the surface, often ready to flow out after a little gentle probing. I am frequently struck by the profundity of what I hear from my patients during our sessions and I often find myself feeding this same wisdom back to them even when, at the same time, they are undoubtedly experiencing and manifesting a degree of almost indescribable psychological pain.

Most of us spend our lives going to work, earning a salary, feeding our families and perhaps indulging in sport or entertainment at the weekends. Rarely are we able to step back from it all and wonder what the purpose of all this is, or whether or not we have our perspectives right. During the football World Cup one year, a patient told me that he felt such events served a deeper purpose for society: ‘It stops us thinking about the plight of the poor around the world.’ Events such as this kept us anaesthetized, he believed, so we could avoid confronting the depths of inequality and injustice around the globe, and that would ultimately enable the system that propped up the very corporations who were sponsoring these events to keep going. I had to admit that I had never thought of it that way before.

Compassion is a frequent theme I observe in those suffering mental illness, even though they are usually receiving treatment in a hospital setting because, on some level, they are failing to demonstrate compassion towards either themselves or others. I have often been moved by hearing of an older patient with a more chronic history of mental ill-health, perhaps due to repeated long-term drug use, or failure to engage with therapy, taking the time to approach a younger man, maybe admitted to hospital for the first time, and in effect tell him, ‘Don’t do what I did, son. Please learn from my mistakes.’ There are few moments, I believe, that are more powerfully therapeutic than that.

It is only in the last few years that we have discovered, after trialling a variety of treatments, that one of the most powerful interventions for what are known as the ‘negative symptoms’ of schizophrenia is exercise. These negative features relate to a lack of energy, drive, motivation and, often, basic functional activity. Whatever the diagnostic label you choose to put on it, this can often be the most disabling part of such illnesses, and there are hardly any known treatments for it, although an evidence base has recently evolved around the practice of regular exercise. I never quite understood why this could be until a patient one day put forward a hypothesis to me. It takes you out of your mind, he explained. ‘You see doc, you can’t really describe a press-up. You just do it.’ The whirlwinds within could be overcome for a few moments at least, while attention is paid, instead, to the body. Suddenly I realized why going to the gym was the highlight of his week.

A rarely described but key feature of mental illness, therefore, is just how paradoxical it can be, with the same person who is plagued by negative, obsessional or irrational thoughts also able to demonstrate an acute and perceptive understanding of the people and world around him. It is as if one mental faculty deteriorates, only for another one to branch out somewhere else; or rather, consciousness constricts in one area only to expand in another. There is actually some quite startling experimental evidence to back this up. An interesting study was conducted by neuroscientists at Hannover Medical School in Germany and the Institute of Cognitive Neuroscience at University College London. It involved a hollow-mask experiment. Essentially, when we are shown a two-dimensional photograph of a white face mask, it will look exactly the same whether it is pointing outwards with the convex face towards the camera or inwards with the concave inside of the face towards the camera. This is known as the hollow-mask illusion.

Such photographs were shown to a sample of control volunteers.

Sometimes the face pointed outwards, and sometimes inwards. Almost every time the hollow, inward-pointing concave face was shown to them, they misinterpreted it and reported that they were seeing the outward-pointing face of the mask instead. This miscategorization of the illusion occurred 99% of the time. The same experiment was then performed on a sample of individuals with a diagnosis of schizophrenia. They did not fall for the illusion: 93% of the time, this group correctly identified that the photo placed before them was, in fact, an inward-pointing concave mask.

Clearly what we see here is an expansion in perceptual ability compared to normal controls. Data like this has begun to pierce the notion that mental illness is purely a negative or pathological experience. In fact, in this study, it was the normal controls who were less in touch with reality than those with a psychotic illness!

The most interesting aspect of this is that, whether they be suffering neurosis, depression, bipolar or even psychotic disorders, many people actually have some awareness of the fact that they are also somehow connecting, through this process, to a more profound reality that they were, like the rest of us, hitherto ignorant of. The experience might be disconcerting, even acutely frightening, but there is a sense that there is also something restorative about it too; they are rediscovering some roots they, perhaps along with the rest of us, had long forgotten about. One patient put it to me this way: ‘I feel like I am waking up. But it’s very scary because I feel like I have been regressing at the same time. It’s almost as if I needed to go through this in order to wake up.’

This sense of a wider meaning and purpose behind a breakdown is not an uncommon theme among the people I see but it is, nevertheless, so counterintuitive that it continues to halt me in my tracks whenever I encounter it. In psychiatry, for genuinely caring reasons, we are striving to reduce the distress that the people we see are experiencing. That, after all, is the reason we became health-care professionals in the first place: to heal the sick. So our reflex, whenever we see people in any kind of pain, is to remove it. But when one senses that the sufferer himself/herself sees value in the experience then we need to stop and think. So long as they are not a risk to themselves or others, perhaps our usual reflex to extinguish such an experience might lead to the suppression of something that could otherwise have been valuable or even potentially transformative.

I have had many experiences of treating people who, even after a terrible episode of psychotic breakdown, came out the other end saying that this was good for them and that the experience, despite being horrendous, was something they needed to go through. This has sometimes been attributed to an expansion of awareness that they felt they needed, and that they believed the illness brought to them. A patient once talked with me about a profound, almost overwhelming, sense of gentleness and warmth he felt when listening to music one evening, just hours before his relapse into psychosis, and as we were talking in the session, he suddenly looked up at me and said, with a mixture of awe and joy on his face, and tears in his eyes, ‘Sometimes I feel that there is something out there so beautiful and so much bigger than me, but I just can’t handle it.’

Though we will be exploring the whole gamut of psychological distress and mental illness in this book, it is the psychotic experience that usually invokes the greatest stereotype and stigma, and so merits extra attention in this opening chapter. Psychosis is when someone is said to have lost touch with reality, and this may involve hearing voices, seeing things or holding some delusional ideas. The idea that someone suffering psychosis can also be the conduit of genuinely deep wisdom and insight, therefore, surprises most people, even mental-health professionals who might not be familiar with this client group. First-person accounts of this are not easy to find in the academic literature, but one particularly good case study was published by David Lukoff in the Journal of Transpersonal Psychology. He wrote it in conjunction with a gentleman who had himself suffered a psychotic breakdown and went by the pseudonym of Howard Everest. Howard was able, in a very articulate way, to describe his own breakdown, which he referred to as a form of personal odyssey, both during and after it happened.

. . .

*

Dr Russell Razzaque is a London-based psychiatrist with sixteen years’ experience in adult mental health. He has worked for a number of national and international organizations during his career, including the University of Cambridge, the UK Home Office and the Ministry of Justice, and he currently works in acute mental health services in the NHS in east London. He is also a published author in human psychology with several books on the subject, and he writes columns for a number of publications including Psychology Today, The Independent, The Guardian and USA Today.

*

from

Breaking Down Is Waking Up. Can psychological suffering be a spiritual gateway?

by Dr Russell Razzaque

get it at Amazon.com

Meditation increases prosociality? Meditation under the microscope – Ute Kreplin.

It’s hailed as the panacea for everything from cancer to war.

Inflated study results for the power of meditation fuel magical beliefs about its benefits. Mindfulness websites market it as a ‘happy pill, with no side effects’; it is said it can bring world peace in a generation, if only children would breathe deep and live in the moment.

But what if meditation doesn’t work for you? Or worse, what if it makes you feel depressed, anxious or psychotic?

Does research into its efficacy meet scientific standards? Can we be sure that there are no unexpected outcomes that neither benefit the individual nor society? Is it possible that meditation can fuel dysfunctional environments and indeed itself create a path to mental illness?

One day there will be a more complete picture of this potent and poorly understood practice. For now, our understanding is mostly warped.

Among the promised psychological and physical benefits of meditation are the elimination or reduction of stress, anxiety and depression, as well as bipolar disorder, eating disorders, diabetes, substance abuse, chronic pain, blood pressure, cancer, autism and schizophrenia. It is a panacea for the individual.

There are also apparent interpersonal and collective effects. Mindfulness and other Buddhist-derived meditation techniques, such as compassion and loving-kindness meditation, can perhaps increase prosocial emotions and behaviours, yielding greater social connection and altruism, and tempering aggression and prejudice.

‘If every eight year old in the world is taught meditation,’ the Dalai Lama purportedly said, ‘the world will be without violence within one generation.’ The quote is widely shared online.

Such a useful activity naturally finds a variety of applications. Meditation techniques have been deployed in the military with the aim of increasing the wellbeing and work effectiveness of soldiers. Snipers are known to meditate in order to disengage emotionally from the act of killing, to steady the hand that takes a life (the element of peacefulness associated with meditation having been rather set aside). Corporations counteract stress and burnout with meditation which, on the surface, is an amiable aim, but it can also help create compliant workers. And in schools, meditation interventions aim to calm children’s minds, offering students the ability to better deal with the pressure of attaining high grades. Here, too, the goal is to reduce misbehaviour and aggression in a bid to increase prosociality and compliance.

Psychological research often upholds this optimism about the efficacy of meditation. Indeed, studies on the prosocial effects of meditation almost always support the power of meditation, the power not only of transforming the individual but of changing society. So it appears well grounded that meditation might improve socially advantageous behaviour. This brings with it the prospect of applications in a variety of contexts, where it might find its use in social conflicts, such as mitigation of war and terrorism. The problem, however, is with the research that bolsters such claims.

Last year, the experimental psychologists Miguel Farias, Inti A Brazil and I conducted a systematic review and meta-analysis that examined the scientific literature behind the claim that meditation increases prosociality. We looked at randomised controlled studies, where meditators were compared with non-meditating individuals, and reviewed more than 20 studies that evaluated the effect of various types of meditation on prosocial feelings and behaviours, such as how compassionate, empathetic or connected individuals felt.

The studies we reviewed used a variety of methodologies and interventions. For example, one used an eight-week meditation intervention called ‘mindfulness-based stress-reduction’. Individuals learned how to conduct mindful breathing and to practise ‘being in the moment’, letting go of their thoughts and feelings. Meanwhile the control group, with which the meditators were compared, engaged in a weekly group discussion about the benefits of compassion. Another study compared guided relaxation (participants listening to an audio recording about deep breathing and unwinding) with a control group that simply did nothing in a waiting room. Most studies required participants to fill in questionnaires about their experience of the meditation intervention, and their levels of compassion towards themselves and others. Some studies also included behavioural measures of compassion, in one case assessed by how willing a person was to give up a chair in a (staged) full waiting room.

Initially, the results were promising. Our meta-analysis indicated that meditation did indeed have a positive, though moderate, impact on prosociality. But digging deeper, the picture became more complicated. While meditation made people feel somewhat more compassionate or empathetic, it did not reduce aggression or prejudice, nor did it improve how socially connected one felt. So the prosocial benefits are not straightforward, but they are apparently measurable. The issue is the way in which those benefits were measured.
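For readers curious about the mechanics, the pooling step behind a result like this can be sketched in a few lines of Python. The effect sizes and variances below are invented for illustration only; this is not the review’s data or its analysis code, just a minimal picture of how a random-effects (DerSimonian-Laird) model turns individual studies into one ‘moderate’ pooled effect.

import numpy as np

# Hypothetical standardized mean differences (Hedges' g) and their variances
# for five made-up meditation-vs-control comparisons (illustrative only).
g = np.array([0.45, 0.10, 0.62, 0.05, 0.30])
v = np.array([0.04, 0.05, 0.06, 0.03, 0.05])

# Fixed-effect weights, pooled estimate and Q statistic (heterogeneity).
w = 1.0 / v
g_fixed = np.sum(w * g) / np.sum(w)
q = np.sum(w * (g - g_fixed) ** 2)
df = len(g) - 1

# DerSimonian-Laird estimate of the between-study variance tau^2.
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled effect and a 95% confidence interval.
w_re = 1.0 / (v + tau2)
pooled = np.sum(w_re * g) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled g = {pooled:.2f}, 95% CI [{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")

The subgroup comparisons described next simply repeat this same pooling within subsets of studies, for example those with passive versus active control groups.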

To fully dissect the studies, we conducted a secondary comparison to see how methodological considerations would change our initial findings. This analysis looked at the use of control groups and whether the teacher of the intervention was also an author of the study, which might be an indication of bias. The results were astounding.

Let’s start with the control groups. The purpose of the control group is to isolate the effects of the intervention (in our case, meditation) and to eliminate unintentional bias. The importance of adequate control conditions was first brought to light by the discovery of the placebo effect in drug trials, which is when a treatment is effective even though no active agent (or drug) is used. To avoid this effect, each group in a drug trial receives identical treatments, except one group receives a placebo (or sugar pill) and the other gets the real drug. Neither the experimenter nor the participants know who is in which trial (this is called a double-blind design), which helps to eliminate unintentional bias. This way they can tell if it’s the active agent that is effective and not something else.

But the use of adequate controls is tricky in studies that look at behavioural change, because it is harder to create a control group (or placebo) when the treatment is not just a pill but an action. The control has to be similar to the intervention but lack some important components that differentiate it from the experimental counterpart. This is known as an active control. A passive control group simply does nothing, compared with the group that has the intervention.

Meditation did indeed improve compassion when the intervention was compared with a passive control group, that is, a group that completed only the questionnaires and surveys but did not engage in any real activity. So participants who undertook eight weeks of loving-kindness meditation were found to have improved compassion following the intervention compared with a passive waiting-room control group.

Our analysis suggests that meditation per se does not, alas, make the world a more compassionate place.

But have we isolated the effects of meditation or are we simply demonstrating that doing something is better than doing nothing? It might be that compassion improved simply because individuals spent eight weeks thinking about being more compassionate, and felt good about having engaged in a new activity. An active control group (eg, participants taking part in a discussion about compassion) is a more effective tool to isolate the effects of the meditation intervention because both groups have now engaged in a new activity that involves cultivating compassion. And here the results of our analysis suggest that meditation per se does not, alas, make the world a more compassionate place.
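The ‘doing something beats doing nothing’ problem can be made concrete with a toy simulation. In the sketch below (my own illustration, not data from any study), meditation is given no specific effect at all, only the generic boost assumed to come from taking up any new activity; the group size and effect size are arbitrary assumptions chosen purely for demonstration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200                     # assumed participants per arm
engagement_boost = 0.5      # assumed benefit of taking up *any* new activity
meditation_specific = 0.0   # meditation itself is given no specific effect here

# Simulated change in a compassion score for each arm.
meditation = engagement_boost + meditation_specific + rng.normal(size=n)
active_control = engagement_boost + rng.normal(size=n)   # eg, a discussion group
passive_control = rng.normal(size=n)                     # waiting list, does nothing

print("meditation vs passive control, p =", stats.ttest_ind(meditation, passive_control).pvalue)
print("meditation vs active control,  p =", stats.ttest_ind(meditation, active_control).pvalue)

In a setup like this, the comparison against the passive waiting-list group will usually come out ‘significant’ even though meditation contributes nothing of its own, while the comparison against the active group will not, which mirrors the pattern described above.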

A well designed control condition allows studies with a double-blind design. Developing an effective placebo for a meditation intervention is often said to be impossible, but it has in fact been done, and with considerable success. In the heyday of transcendental meditation research in the 1970s, Jonathan C Smith developed a 71-page manual describing the rationale and benefits of a meditation technique. He gave the manual to a research assistant, who was unaware that the technique was completely made up (and therefore a placebo), and who then proceeded to give a lecture to participants in the control group about the merits of the technique. (When it came to the actual placebo technique, participants were instructed to sit quietly for 20 minutes twice per day in a dark room, and to think of anything they wanted.) The point is, the placebo can work in studying meditation; it’s just not often used.

Double-blind designs can help to eliminate bias passed unintentionally from the researcher to the participants. These biases have a longstanding history in psychology, and are called experimenter biases (when the experimenter inadvertently influences the participant’s behaviour) and demand characteristics (when participants behave in a way that they think will please the experimenter). The importance of avoiding experimenter bias and demand characteristics was discussed as early as the 1960s. Recent work indicates that experimenter biases remain, particularly in the study of meditation.

In light of the discussion around experimenter bias and demand characteristics, it is surprising to find that, in 48 per cent of the studies we looked at, the meditation intervention was taught by one of the study’s authors, often its lead author.

More importantly, little attempt was made to control for any potential bias that an enthusiastic teacher and researcher might have had on the participants. Such a bias is often not intentional but stems from subconsciously giving preferential treatment to, or being particularly enthusiastic towards, participants in the experimental group. The prevalence of authors as teachers was so great that we decided to look at it statistically in our meta-analysis. We compared studies that had used an author with studies that had used an external teacher or other form of instruction (eg, an audio recording).

We found that compassion increased only in those studies where the author was also the teacher of the intervention.

Experimenter bias often goes hand-in-hand with demand characteristics, where participants behave or respond in a way that they think is in line with the expectations of the researcher. For example, participants might respond more enthusiastically on a questionnaire about compassion, regardless of their true feelings, because the researcher herself was enthusiastic about compassion. The media buzz around meditation, which portrays it as a cure for a range of mental-health problems, the key to improved wellbeing and to changing one’s brain for the better, is also very likely to feed back to participants, who will expect to see benefits from a meditation intervention.

Yet, almost none of the studies we examined controlled for expectation effects, and this methodological concern is generally absent in the meditation literature.

The prevalence of experimenter bias is only one side of the coin. Another troubling but rarely discussed bias concerns data-analysis and reporting. Interpreting statistical results and choosing what to highlight is challenging. Data do not speak for themselves: they are interpreted by academics whose minds are not blank slates. Academics often tread a thin line between the duty of impartial data-analysis and their own beliefs, desires and expectations. In 2003, Ted Kaptchuk of Harvard Medical School summarised a number of interpretative biases that have become widespread in science reporting: confirmation bias, rescue bias (finding selective fault with an experiment to justify an expectation), and ‘time will tell’ bias (holding on to an expectation discounted by data because additional data might in fact support it), among others. All were overwhelmingly present in the meditation literature we reviewed.

The most common bias we encountered was confirmation bias, in which evidence that supports one’s preconceptions is favoured over evidence that challenges those convictions. Confirmation bias was particularly prevalent in the form of an overreporting of marginally significant results. In psychological research, a p-value of 0.05 or below typically indicates that a result is statistically significant. But it has become common practice to report results as ‘trends’ or as ‘marginally significant’ if they are close to, but don’t quite reach, the desired 0.05 cut-off. The problem is that there is little consensus in psychology as to what might constitute ‘marginal significance’, which in our review ranged from p-values of 0.06 to 0.14, hardly even marginal. (It is debatable whether p-values are the most accurate way to conduct science anyway, but if we are using this type of testing we should stick to its rules.)

The positive view of meditation and the fight to protect its reputation make it harder to publish negative results.

Being liberal with statistical methods that were designed to have clear cut-offs increases the chance of finding an effect when there is none. A further problem is that ‘marginal significance’ is rarely reported free from bias. For instance, in one study the authors reported a marginally significant difference (p = 0.069) in favour of the meditation intervention relative to the control group. However, on the following page, when the authors reported a different set of results that did not favour the meditation group, they described the exact same p-level as non-significant. When the results confirmed their hypothesis, it was ‘significant’, but only in that case.

In fact, the majority of studies in our review treated marginally significant results as equivalent to statistically significant ones.
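To see why stretching the cut-off matters, consider a short simulation (again my own illustration, not part of the review). When two groups are drawn from the same distribution, so that there is no true effect at all, the proportion of ‘findings’ rises roughly in line with whatever threshold is treated as meaningful.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group = 10_000, 30
hits_005 = hits_014 = 0

for _ in range(n_sims):
    # Both groups are drawn from the same distribution: there is no true effect.
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    p = stats.ttest_ind(a, b).pvalue
    hits_005 += p < 0.05
    hits_014 += p < 0.14

print("declared significant at p < 0.05:        ", hits_005 / n_sims)
print("declared at least 'marginal' at p < 0.14:", hits_014 / n_sims)

Treating anything under 0.14 as at least ‘marginal’ nearly triples the rate at which pure noise gets reported as evidence.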

Confirmation bias is difficult to overcome. Journals rely on reviewers to spot such biases, but because some of them have become standard practice (through the reporting of marginally significant effects, say) they often slip through. Reviewers and authors also face academic pressures that make these biases more likely, since journals favour the reporting of positive results.

But in the study of meditation there is another complication: many of the researchers, and therefore the reviewers of journal articles, are personally invested in meditation, not only as practitioners and enthusiasts but also as providers of meditation programmes from which they or their institutions profit financially. The overly positive view of meditation and the fierce fight to protect its untarnished reputation make it harder to publish negative results.

My aim is not to discredit science, but scientists do have a duty to produce an evidence base that aims to be bias-free and aware of its limitations. This is important because the inflated results for the power of meditation fuel magical beliefs about its benefits. Mindfulness websites market it as a ‘happy pill, with no side effects’; it is said it can bring world peace in a generation, if only children would breathe deep and live in the moment. But can we be sure that there are no unexpected outcomes that neither benefit the individual nor society? Is it possible that meditation can fuel dysfunctional environments and indeed itself create a path to mental illness?

The utilisation of meditation techniques by large corporations such as Google or Nike has created growing tensions within the wider community of individuals who practise and endorse its benefits. Those of a more traditional bent argue that meditation without the ethical teachings can lead to the wrong kind of meditation (such as the sniper who steadies the killing shot, or the compliant worker who submits to an unhealthy work environment). But what if meditation doesn’t work for you? Or worse, what if it makes you feel depressed, anxious or psychotic? The evidence for such symptoms is predictably scarce in recent literature, but reports from the 1960s and ’70s warn of the dark side of transcendental meditation. There is a danger that those few cases that receive psychiatric attention are discounted by psychologists as having had a predisposition to mental illness.

In The Buddha Pill (2015), Miguel Farias and Catherine Wikholm take a critical look at the symptoms of depression, anxiety, restlessness, mania and psychosis that are triggered directly by meditation. They argue that the prevalence of adverse effects has not been assessed by the scientific community, and it is easy to think that the few anecdotal cases that might surface are due to an individual’s predisposition to mental-health problems. But a simple search on Google shows that reports of depression, anxiety and mania are not uncommon in meditation forums and blogs. For example, one Buddhist blog features a number of reports on adverse mental-health effects that are framed as ‘dark nights’. One blogger writes:

I’ve had one pretty intense dark night, it lasted for nine months, included misery, despair, panic attacks, inability to concentrate (to the point that it was difficult to do simple tasks), inability to socialise (because of bad feelings, but also because I had a hard time following and understanding what others were saying, due to lack of concentration), loneliness, auditory hallucinations, mild paranoia, treating my friends and family badly, long episodes of nostalgia and regret, obsessive thoughts (usually about death), etc, etc, etc.

In Buddhist circles, these so-called ‘dark nights’ are part of meditation. In an ideal situation, ‘dark nights’ are worked through with an experienced teacher under the framework of Buddhist teachings, but what about those who don’t have such a teacher or who meditate in a secular context?

Those who meditate alone can be left isolated in the claws of mental ill-health.

The absence of reported adverse effects in the current literature might be accidental, but it is more likely that those suffering from them believe that such effects are a part of meditation, or they don’t connect them to the practice in the first place. Considering its positive image and the absence of negative reports on meditation, it is easy to think that the problem lies within. In the best-case scenario, one might simply stop meditating, but many webpages and articles often frame these negative or ambivalent feelings as a part of meditation that will go away with practice. Yet continuing to practise can result in a full-blown psychotic episode (at worst), or have more subtle adverse effects. For example, in 1976 the clinical psychologist Arnold A Lazarus reported that a ‘young man found that the benefits he had been promised from transcendental meditation simply did not emerge, and instead of questioning the veracity of the exaggerated claims, he developed a strong sense of failure, futility, and ineptitude’.

In a best-case scenario, individuals will have a psychiatrist or experienced meditation teacher to guide them, but those who practise alone can be left isolated in the claws of mental ill-health. Lazarus warned that meditation is not for everyone, and we need to consider individual differences and be aware of adverse effects in its application in a secular context. ‘One man’s meat is another man’s poison,’ he once said about transcendental meditation. Researchers and therapists need to know both the benefits and the risks of meditation for different kinds of people; it is not unvarnished good news.

In The Buddha Pill, Farias and Wikholm write:

We haven’t stopped believing in meditation’s ability to fuel change, but we are concerned that the science of meditation is promoting a skewed view: meditation wasn’t developed so we could lead less stressful lives or improve our wellbeing. Its primary purpose was more radical: to rupture your idea of who you are; to shake to the core your sense of self so that you realise there is ‘nothing there’. But that’s not how we see meditation courses promoted in the West. Here, meditation has been revamped as a natural pill that will quieten your mind and make you happier.

There must be a more balanced view of meditation, one that understands the limitations of meditation and its adverse effects. One day there will be a more complete picture of this potent and poorly understood practice. For now, our understanding is mostly warped.

Ute Kreplin is lecturer in psychology at Massey University in New Zealand. Her research has been published in Nature and Neuropsychologia, among others.

ANATOMY OF TERROR. From the Death of bin Laden to the Rise of the Islamic State – Ali Soufan.

We can hope that the Islamist movement ignited by Osama bin Laden, fanned into an inferno by Abu Musab al-Zarqawi, and now fueled, like a vision of hell, by thousands of corpses, will not endure quite as long as the death cult inaugurated by bin Laden’s medieval doppelgänger, Hassan-i Sabbah. But at the same time, let us also recognize that al-Qaeda’s story is far from over.

We have killed the messenger. But the message lives.

FRIENDS AND ENEMIES

On a crisp morning in December of 2001, I picked up a pockmarked clay brick, one of thousands like it littering the site of what only weeks before had been a hideout for the most wanted man on earth. Perhaps, I thought, this very brick had formed part of the wall of Osama bin Laden’s sleeping quarters, or the floor where he habitually sat to receive visitors. As I felt the heft and contour of that brick in my hands, I contemplated the unlikely sequence of events, some in my lifetime, others over long centuries, that had brought me to that extraordinary time and place.

I was born in Lebanon, emigrated to America, and went to college and then grad school in Pennsylvania. I took a double major in political science and international relations, with a minor in cultural anthropology, and followed that up with a master’s in foreign relations. With the Cold War freshly over and America’s position as the world’s only superpower seemingly secure, it was tempting to conceive of the world as a complex but orderly machine, in which nation-states would set rational policies and those rational policies would dictate logical strategies.

Yet there was something fundamentally unsatisfying about this clockwork view of the world. From my graduate studies, one prominent counterexample stuck in my mind, one from 2,500 years ago. The Peloponnesian War pitted Athens’s Delian League against a coalition of states led by Sparta and eventually aided by the mighty Persian Empire. After a quarter century of alarms and reversals, Athens finally surrendered. By paving the way for Alexander’s unification of Greece and his subsequent conquests, the war changed the course of European and world history. But the outcome was by no means foreordained.

I came to see that all the key decisions were based neither on policy nor on strategy but on personalities.

Speeches and emotional appeals consistently carried the day. Half a millennium later, Cato the Younger would mark this same phenomenon in Rome’s rocky transformation from republic to empire. “When Cicero spoke,” he said, “people marveled. When Caesar spoke, people marched.”

Theories are great tools to think with. They open your mind, broaden your perspective. But it is people who make the world go round. Individual human beings, with all their idiosyncrasies and contradictions and baggage, with their ideas sculpted by culture and belief and education and economics and family, are the agents of every grand historical force that future generations will see smoldering in the tangled wreckage of the past.

While I was still a student, I began following through the Arabic press the exploits of a dissident Saudi millionaire named Osama bin Laden and his nascent extremist organization, al-Qaeda, the Base. I marveled at this man’s audacity in declaring war on America, and his charismatic ability to attract followers to his side. But my own calling could not have been more different. Fresh out of grad school, I joined the Federal Bureau of Investigation, where one of my first assignments was to write a paper on this man bin Laden and his group. My report came to the attention of John O’Neill, the legendary head of the bureau’s counterterrorism section, based in Manhattan. In time, John became my mentor and a close friend. When suicide bombers murdered seventeen American sailors aboard the USS Cole in October 2000, John assigned me to lead the investigation. I traveled to Sanaa, Yemen’s ancient capital, and began running down leads and interrogating suspects.

John O’Neill retired from the bureau in the summer of 2001. I took him out to lunch to celebrate, and told him I was getting married. He gave me his blessing. But this would prove to be our last meeting. On August 23, John became security director for the World Trade Center. Two weeks later, he died rushing back into the south tower, courageous to the very end, determined to do what he had been doing his whole career: save lives.

Three months later, standing with my colleagues in the remains of bin Laden’s bombed-out Kabul compound, I felt myself overcome by a strong sense of revenge, for my country, for the thousands murdered, and especially for John. Ever since the attacks, the al-Qaeda leader had been confidently predicting America’s imminent downfall. Now, bin Laden and his extremist cohorts were learning that the United States and its broad coalition of allies would not give in to terrorism so easily. For now, the sheikh still evaded capture, but the tide had turned. The piles of rubble, the lone wall that remained of a sizable residence, the twisted metal of what had once been a staircase, the smattering of air-dropped leaflets offering twenty-five million dollars for information leading to bin Laden’s capture, all bore witness to the turn of fortune’s wheel. Back home in the United States, some political leaders were already talking about Afghanistan as a future democratic beacon for the region.

In the decade that followed, my life changed utterly. I spent another four years with the FBI, investigating the 9/11 attacks and other terrorist crimes. I got married, left the bureau, and eventually became the father of three very energetic boys. And so it was that, on a Sunday evening in the spring of 2011, I found myself at home, assembling a pair of swing seats for our newborn twins as the television chattered away in the background. At around 9:45 pm, a special announcement broke through the babble: the president would shortly be addressing the nation. Clearly, something big had happened.

It was 11:35 pm by the time President Obama approached a podium in the East Room of the White House and confirmed to the world that U.S. Navy SEALs had killed Osama bin Laden. As the president spoke of the people bin Laden had murdered, of the families bereaved, of the children left fatherless, my thoughts turned again to John O’Neill and the other friends I had lost along the way. Near the end of his remarks, Obama said, “Justice has been done.” That was certainly true, but the ramifications of bin Laden’s demise had yet to play out.

Would the jihadist edifice simply crumble without its keystone? Or would bin Laden prove more powerful as a martyr than he ever had been as a living leader?

No doubt these questions were on the president’s mind, too. ABC News’s Martha Raddatz had reported “absolute jubilation throughout government.” For my part, I could not help but feel more troubled than jubilant.

Emails began flooding my inbox, from friends and colleagues congratulating me, and from reporters seeking my take on events. An editor from the New York Times asked if I would put my views in an op-ed for the paper. I sat down to analyze the situation. I thought of all the dozens of al-Qaeda acolytes I had interrogated over the years, playing high-stakes games of mental chess with extremists and murderers for the sake of extracting priceless evidence. They had pledged bayat to bin Laden, swearing allegiance neither to the office nor the organization but to the man himself. To whom would zealots such as these now declare fealty?

Osama bin Laden had been uniquely well equipped to lead the network he founded. He had walked away from the wealth and luxury of the Saudi upper crust in order to devote himself to jihad, against the Soviets and then against America. This personal history helped him in two ways. First, his freely chosen asceticism helped inspire fanatical devotion among his followers. Second, his privileged background endowed him with contacts among wealthy elites willing to bankroll terrorism. Bin Laden’s death would therefore leave a gaping hole in al-Qaeda’s recruitment and fund-raising efforts.

It seemed likely that bin Laden’s longtime deputy, Ayman al-Zawahiri, would be named the new emir. If so, I knew that he would struggle. To be sure, Zawahiri is clever and strategic. He is, after all, a fully trained surgeon who honed his militant skills battling the Sadat and Mubarak regimes in his native Egypt. He is also a zealot of uncompromising brutality, responsible more than anyone for justifying the tactic of suicide bombing and, by extension, for the tragic toll it has taken on innocent Muslims. But for all his intelligence, his cunning, and his zeal, Zawahiri possesses none of the charisma bin Laden had. Indeed, his personality has alienated many people over the years. More importantly still, Zawahiri is an Egyptian. Within al-Qaeda, his appointment would inflame the already tense internecine rivalry between his countrymen and the Gulf Arabs who make up the jihadi rank and file.

As an organization, then, al-Qaeda was in deep trouble. But what of bin Ladenism as an idea? That, I felt, was a different story. I feared that some of the regional groups that bin Laden had worked so hard to keep in line, like al-Qaeda in the Arabian Peninsula (AQAP), al-Qaeda in Iraq (AQI), al-Qaeda in the Islamic Maghreb (AQIM), and al-Shabaab in the Horn of Africa, would split off. They might even intensify their ideology. No doubt they would see the nascent Arab Spring as an opportunity to impose their ideas on their fellow Muslims. In the pages of the New York Times I wrote:

We cannot rest on our laurels. Most of Al Qaeda’s leadership council members are still at large, and they command their own followers. They will try to carry out operations to prove Al Qaeda’s continuing relevance. And with Al Qaeda on the decline, regional groups that had aligned themselves with the network may return to operating independently, making them harder to monitor and hence deadlier.

It brings me no pleasure to see those premonitions borne out. Al-Qaeda has indeed fractured into regional units. Zawahiri, the cold bureaucrat, has struggled to maintain control. Meanwhile, the cancer of bin Ladenism has metastasized across the Middle East and North Africa and beyond, carried by even more virulent vectors. Whereas on 9/11 al-Qaeda had around 400 members, today it has thousands upon thousands, in franchises and affiliates spread from the shores of the Pacific to Africa’s Atlantic seaboard, and that is without even counting the breakaway armed group that calls itself the Islamic State. Al-Qaeda’s Syrian branch alone has more members than bin Laden ever imagined for his entire network. It is striking to note that, in October of 2015, more than fourteen years after the 9/11 attacks, U.S. forces disrupted what is believed to be the largest al-Qaeda training camp ever, all thirty square miles of it, right in the organization’s historic heartland of Afghanistan.

In the Middle East, the Islamic State, al-Qaeda’s most vicious offshoot to date, employs methods so savage that even hardened terrorists publicly denounce their brutality. Where bin Laden encouraged militants in his network to focus on attacking the West directly rather than hitting regimes in the Muslim world, the Islamic State has successfully done both. It has brought mass murder to the streets of Paris, airports in Brussels and Istanbul, a Russian airliner in the skies over Sinai, and a Christmas market in Berlin. It has killed worshipers at mosques in Yemen and Kuwait, attacked police, soldiers, and border guards in Egypt and Saudi Arabia, and bombed political rallies in Turkey. At the same time, it has conquered millions of acres across Iraq and Syria, aided by tens of thousands of foreign recruits. The organization’s formal break with al-Qaeda in 2014 has not stopped the Islamic State from expanding to other troubled regions of the world, most notably Libya. The group has even established a beachhead in remote regions of Afghanistan, where it vies violently for control with al-Qaeda’s longstanding allies, the Taliban, who governed Afghanistan until the United States removed them from power in 2001.

A video popular among Muslims living in the projects of East London, Birmingham, and elsewhere in England shows a man squatting in a Syrian field, his features covered with a ski mask, his rifle at the ready. Fighting in the Levant is “not as easy as pulling out your nine-millimeter on a back road of the streets of London and blasting a guy,” he says in a forthright East London accent. “It’s not as easy as putting up your feet on the couch after a hard day’s work on the corner.”

Inspired by such bin Ladenist propaganda, as many as 38,000 foreigners had joined the fighting in Syria by the end of 2015. Compare that to the Afghan jihad against the Soviets, which attracted “only” 8,000 foreign nationals. And whereas those who made the journey to that conflict came overwhelmingly from Muslim-majority countries, the war in Syria has attracted over 5,000 foreign fighters from the United States and the European Union, as well as many hundreds from Russia. Around 20 to 30 percent of these fighters have already returned home.

Not all of them are plotting violence, by any means; but the numbers are so great that even if only a small proportion of these fighters emerge from the conflict as hardened terrorists, it could spell big trouble for the West. How big? Think of it this way: the Islamic State’s attacks on Paris in November of 2015, in which 130 innocent people died, were perpetrated by just 9 men.

My first book, The Black Banners, told the tale of al-Qaeda up to the death of its founder. In this book, I aim to take the story further. True to my conviction that personalities matter, I will focus my story through the eyes of several key individuals, notably bin Laden himself; Saif al-Adel, his wily security chief; Ayman al-Zawahiri, his deputy and successor; Abu Musab al-Zarqawi, the Jordanian militant who founded the organization that would become the Islamic State; Abu Bakr al-Baghdadi, the group’s current “caliph”; and the men (and in bin Laden’s case, the women) of their inner circles. Through these characters, we will trace the transformation of al-Qaeda as an organization, the simultaneous development of bin Ladenism into a far more potent and lethal force, the rise and decline of the Islamic State, and the impending resurgence of al-Qaeda.

In its landmark final report, the 9/11 Commission concluded that the tragic attacks of September 2001 were allowed to proceed in part because of a catastrophic “failure of imagination” on the part of U.S. intelligence. Analysts commonly asserted that they simply couldn’t imagine someone flying a plane into a building. In a similar vein, a month before the U.S. invasion of Iraq in 2003, Deputy Secretary of Defense Paul Wolfowitz told a Senate panel, “It’s hard to conceive that it would take more forces to provide stability in post-Saddam Iraq than it would take to conduct the war itself and to secure the surrender of Saddam’s security forces and his army.” It took less than two months, and minimal U.S. casualties, to conquer the country; yet eight years, five thousand coalition deaths, and $1.7 trillion were nowhere near enough to “provide stability in post-Saddam Iraq.”

Know your enemy, Sun Tzu admonishes us across the millennia. And yet, time and again, when inquiries are held and hard questions asked, the response amounts to, “We couldn’t conceive, we couldn’t imagine, we couldn’t wrap our heads around the possibility that something like this could happen.” Or, just as bad, we did imagine some worst-case scenario and then treated it as certain to happen, as in the so-called One Percent Doctrine espoused by Vice President Dick Cheney, who told Americans, “If there is a one percent chance that Pakistani scientists are helping al-Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response. It’s not about our analysis. It’s about our response.” That is the high road to an absurd and ruinous waste of finite intelligence, military, diplomatic, and law-enforcement resources.

The key to a more constructive use of our imaginations is empathy, not in the colloquial sense of sharing another person’s perspective, but in the clinical sense of being able to see the world through another person’s eyes. Sadly, after fifteen years of the war on terrorism, we still do not really know our enemy in this deeper sense.

In this book, by delving into the personalities of men who mean us harm, I aim not to create sympathy for them, far from it, but to help point the way to a deeper understanding of their worldview, their motivations, and how best to combat the destructive ideology they represent.

I still have that battered clay brick I picked up in bin Laden’s shattered hideout. A decade and a half later, it sits on a shelf in my office in Midtown Manhattan. Looking at it while I work reminds me of the progress we have made against terrorism since I first picked it up on that winter morning, but also of the missteps we have made along the way, and above all of how far we have still to go.

We have killed the messenger. But the message lives.

PROLOGUE

THE OLD MAN OF THE MOUNTAIN

Once upon a time, there was a terrorist who dwelled in the mountains. Throughout the Muslim world and beyond, his name became a byword for brutality. Tribal chieftains, great religious leaders, even sovereign rulers would take extraordinary pains to protect themselves against the terrorist and the cadre of killers he commanded. So loyal were his acolytes to their sheikh, so certain of the Paradise he promised, that they were prepared to die horribly, on his command. His followers claimed to be the most faithful among the faithful. Their aim was twofold: to shield from its perceived enemies the religious sect to which they belonged, and to eliminate from this imperfect world the corrupting influence of apostasy and religious impurity. Their modus operandi was public murder: every death a spectacle, every spectacle a political message.

Niceties such as guilt or innocence did not trouble the terrorist or his men; they operated under a fatwa, an infallible religious ruling, commanding the murder of “infidels” (non-Muslims) and “apostates” (Muslims who failed to live up to the terrorist’s own austere interpretation of Islam). And, of course, the terrorist and his men arrogated to themselves the right to distinguish between faithful and faithless. It was no surprise, therefore, that the vast majority of the terrorist’s victims were not Christians, Jews, or Zoroastrians but fellow Muslims.

Today, this terrorist is dead, long dead. His name was Hassan-i Sabbah. He was born sometime in the mid-eleventh century and died in 1124. The death cult he founded has long since faded away, but not before outliving its creator by more than a hundred years. Its name has passed into legend around the world: the Assassins. For Hassan-i Sabbah, the most prominent apostates were the Seljuk, the Turkish dynasty that ruled over much of the medieval Islamic world. The principal infidels were the Crusaders, who periodically rode in from western Europe to impose their disfigured version of Christian morality on the Holy Land.

Today’s terrorists see the world in similar terms. Their apostates are the modern-day rulers of the Islamic world, be they secular, like Egypt’s military strongmen, or allied to the West, like the House of Saud. Their infidels are the Christians, the Jews, the Americans, the West in general. They imagine themselves beset by contemporary Crusades, both literal and figurative. Some, like Boko Haram in Nigeria and the Taliban in Afghanistan, see modern, Western-style education as a conspiracy against Islam. Today’s fanatic killers may use suicide bombs instead of poison-tipped daggers, but they deploy eerily similar fatwas to justify their indiscriminate murder of innocent people at the World Trade Center in New York, in neighborhoods of Beirut, on trains in London and Madrid, on a residential street in Baghdad, at a Bastille Day celebration in Nice, in a nightclub in Istanbul, and on and on.

In Hassan-i Sabbah’s day, he and his followers were dismissed as wild outliers, able to execute their murderous missions only because they were stoked on drugs. The very word “Assassin” was said to derive from the Arabic hashishin, meaning “marijuana users.” In the popular imagination, today’s suicide bombers are seen as similarly brainwashed or brain-dead. In reality, many are troubled young people who discern little meaning in their own lives and view their acts as an ultimate expression of faith. Similarly, modern scholarship teaches that the word “Assassin” more likely derives not from any pharmacological association but from the Arabic asas (foundation of the faith). The Assassins were seen as returning to the basic principles of their religion, in other words, as fundamentalists. That is a vital difference, and one with enormous contemporary resonance. Not for nothing is the most notorious modern terrorist group known as al-Qaeda, The Base, or, in an alternate rendering, The Foundation.

It was not always thus. In fact, Islam began as a liberalizing force. It introduced racial and social equality to an Arab tribal society that had previously enjoyed neither. Islam was supposed to enlighten Arabia and deliver it from the Jahiliyyah, the Days of Ignorance. Through the new faith, women gained the right to inherit property and divorce their husbands 1,300 years before many of their Western sisters would win similar privileges. Ijtihad, independent thinking, was actively encouraged, one large reason why philosophy, literature, and the sciences all flourished throughout the first few hundred years of the faith.

Then, around the tenth century, the political and religious establishments determined that critical thinking posed a direct challenge to their authority, which rested on dogma and ritual. The “Gate of Ijtihad” was closed. There was, these rulers said, nothing more to be learned. It was the end of history. It became impossible even to discuss whether the hijab, the head and neck scarf worn by some observant Muslim women, was ordained by law or custom, because that question and thousands of others were supposedly settled for all time centuries ago, and the state would silence anyone who dared say otherwise. In such an environment, there is little scope for constructive progress on the difficult questions of politics and society.

In 1989, the year of revolution against Soviet despotism, the National Interest magazine published an essay by Francis Fukuyama entitled “The End of History?” It captured the spirit of the age. “What we may be witnessing,” Fukuyama wrote, “is not just the end of the Cold War, or the passing of a particular period of postwar history, but the end of history as such: that is, the end point of mankind’s ideological evolution and the universalization of Western liberal democracy as the final form of human government.” In terms of governance, this was akin to saying that there was fundamentally nothing more to be learned. Western, free-market liberalism had triumphed; all that was left was for the rest of the world to catch up.

The reality was exactly the reverse. The Cold War, with its four-decade thermonuclear stalemate, did not initiate history’s thrilling denouement; in fact, it functioned more like an intermission. With the fall of the Berlin Wall, the movie could begin again. Great screenwriters tell us that, stripped down to essentials, there are only so many basic plots to choose from. Real life is like that, too. Scenarios repeat; roles recur; different actors don the costumes.

A Saudi millionaire dresses like an eleventh-century rebel, takes up arms, and encourages his followers to ascribe divine powers to him. In response to his atrocities, the West becomes mired in Afghanistan, a country whose highways are lined with the carcasses of Soviet tanks, and later in Iraq, a land created arbitrarily one hundred years ago by colonial fiat. After a decade of violence in that country, a shy bookworm from the sticks proclaims himself caliph of the Muslim world, puts on a black turban in imitation of the Prophet Muhammad, and demands the allegiance of all Muslims on pain of death.

This false caliph’s murderous movement draws sustenance from a war in neighboring Syria that bears more than passing similarities to eighteenth-century conflicts between Persian shahs, Russian tsars, and Turkish sultans.

We can hope that the Islamist movement ignited by Osama bin Laden, fanned into an inferno by Abu Musab al-Zarqawi, and now fueled, like a vision of hell, by thousands of corpses, will not endure quite as long as the death cult inaugurated by bin Laden’s medieval doppelgänger, Hassan-i Sabbah. But at the same time, let us also recognize that al-Qaeda’s story is far from over.

Chapter 1

THE SNAKE WITH BROKEN TEETH

Helicopter hovering above Abbottabad at 1AM (is a rare event).

-TWEET BY SOHAIB ATHAR, @REALLYVIRTUAL, 12.58 AM PKT, MAY 2, 2011

Go away helicopter before I take out my giant SWATTER :-/

-@REALLYVIRTUAL, 1.05 AM PKT

A huge window shaking bang here in Abbottabad Cantt. I hope its not the start of something NASTY :-$

-@REALLYVIRTUAL, 1.09 AM PKT

Sohaib Athar just wanted to get away from it all. His life in the Pakistani megacity of Lahore had been a dizzying burlesque of stifling heat, filthy air, unreliable power, and the ever-present danger of terrorist attack. After a while, it had all become too much for the young software developer. So he had packed up his laptops and fled for the relative tranquillity of the mountains north of Islamabad. Abbottabad must have seemed a promising place for a new start. The city lies cupped in a high-walled valley in the foothills of what becomes, much farther to the north and east, the outer reaches of the Himalayas. At an elevation of four thousand feet, roughly comparable to that of Salt Lake City, Utah, Abbottabad is known throughout the region for its agreeable hill-station climate. The town’s founder and namesake, the British Army Major James Abbott, waxed poetic about its “sweet air” and twittering birds. Its Anglican church, St. Luke’s, also established by the British, and built in a style that would have been familiar to soldiers homesick for the English countryside, still ministers to parishioners on Jinnah Road in the heart of the old town. Abbottabad was founded as a garrison city, and it remains so today; since Pakistan’s independence, it has been home to the prestigious Kakul Military Academy, the country’s answer to West Point. The academy has trained much of the country’s military leadership, including its former president, Pervez Musharraf. It is also a frequent port of call for top military brass from Pakistan’s allies; General David Petraeus visited in February 2010 while serving as overall commander of U.S. forces in the Middle East, Pakistan, and Afghanistan. Abbottabad’s relative isolation and strong military presence conspire to create a sense of security that is sorely lacking in so many of Pakistan’s other major cities.

Unsurprisingly, therefore, Sohaib Athar was not alone in seeing Abbottabad as a place of refuge. Throughout the first decade of the twenty-first century, people had moved there from elsewhere in the country, fleeing earthquakes, flooding, and the violent war against Islamic extremists ongoing in places like Waziristan, a notoriously lawless region in the Federally Administered Tribal Areas, or FATA, two hundred miles to the southwest, along Pakistan’s frontier with Afghanistan. Abbottabad had also sheltered its fair share of less welcome transplants. Umar Patek, a key conspirator in the Bali nightclub bombing that killed more than 200 people in 2002, was arrested in Abbottabad in January 2011, together with Mohammed Tahir Shahzad, an al-Qaeda fixer who had arranged for Patek to travel to Waziristan alongside two French jihadists. It was not inconceivable, therefore, that other al-Qaeda operatives, perhaps even senior figures, could still be lying low somewhere in Abbottabad.

About a mile and a half across town from where Sohaib Athar plied his screens and keyboards, in a relatively wealthy neighborhood where a few large houses rose over gardens in which residents grew food, there stood a spacious compound of the type known locally as a “mansion.” It consisted of a three-story main house, a guesthouse, and a number of outbuildings, all surrounded by uneven high walls, in places rising to twelve or eighteen feet, and crowned with a two-foot tangle of barbed wire.

The compound had no cable or telephone connections, although it did have a satellite dish. It lacked regular trash pickup; evidently its inhabitants preferred to burn their refuse on site. The balcony on the third floor of the big house, added following an earthquake that occurred in October 2005, was surrounded by an unbroken seven-foot screen wall. The plans for this edifice listed the property’s owner as Mohammed Arshad Naqab Khan. Khan was seldom seen, but when he did appear, he told neighbors that he was a wealthy money changer or gold merchant from the tribal regions, and that he needed high security to protect himself and his family from “enemies” he had made in that business. This seemed plausible enough. Besides, it was not uncommon for pious Pashtun families from the tribal lands to live in large, high-walled properties, to sequester their women and children indoors, and generally to keep to themselves.

But Arshad Khan and his backstory were a fiction, an alias concocted to hide the true identity of the compound’s owner.

Ibrahim Saeed Ahmed aka al-Kuwaiti

Ibrahim Saeed Ahmed was an ethnic Pakistani Pashtun whose family hailed from Shangla, a rugged, sparsely populated district in the mountains northwest of Abbottabad. Ahmed, however, was born and raised in Kuwait, and like many jihadis went by his nisbah, or toponym, al-Kuwaiti. Growing up in the tiny desert emirate, al-Kuwaiti had become the boyhood boon companion of a fellow Pakistani, an ethnic Baluch named Khalid Sheikh Mohammed.

Khalid Sheikh Mohammed

KSM, as he later became known to investigators, had been a jihadi since he was sixteen years old. Having fought the Soviets in the 1980s, he would go on to mastermind the 9/11 attacks in 2001 and carry out the beheading of the Wall Street Journal reporter Daniel Pearl the following year.

Khalid Sheikh Mohammed also served as al-Kuwaiti’s mentor in jihad. He got his friend a position as emir of an al-Qaeda guesthouse in the city of Karachi, in Pakistan’s deep south, and introduced him to his sheikh, a Saudi militant chieftain named Osama bin Laden. Not long after this fateful meeting, al-Kuwaiti would begin a long service to bin Laden and his family as courier, domestic servant, and bodyguard. He kept this work, along with his other jihadi duties, a grave secret, even from those closest to him. In 2001, when he was around thirty-five years old, he married a fourteen-year-old girl from his home district and brought her to live with him in Karachi. He explained his frequent absences from the marital home by saying that he often traveled back to the Gulf on business. Throughout this time, al-Kuwaiti remained close to his old friend Khalid Sheikh Mohammed; KSM’s wife hosted a wedding feast for the new couple at her house. But it would be years before al-Kuwaiti would tell his bride who this mysterious friend was or admit that he, like KSM, was in reality a mujahid of al-Qaeda. By then, there would be no going back.

Following bin Laden’s defeat at the cave complex of Tora Bora in late 2001, the al-Qaeda leader fled over the mountains into hiding in Pakistan, shaving his long beard to evade recognition. Al-Kuwaiti was once again called upon to assist the sheikh in his time of need. In the summer of 2002, he set up a house for bin Laden in Swat, not far from his ancestral homeland in the north of Pakistan. Al-Kuwaiti moved his wife and children there, too, and they were soon joined by his brother, whose name was Abrar, and Abrar’s own growing family. The brothers, both olive-skinned and beardless, but with close-cropped mustaches in the traditional Pakistani style, did not look out of place in their country of origin. In exchange for their hospitality and protection, bin Laden paid the Kuwaiti brothers a salary of 9,000 rupees per month, around $100, which he supplemented from time to time with gifts and zakat (charity).

The Swat house nestled in a pretty stretch of countryside by the banks of a river. To Osama bin Laden, this bucolic setting may have seemed a welcome respite from the relentless pace of frontline jihad. But any feeling of serenity would prove to be short-lived. In early 2003, al-Kuwaiti’s old friend Khalid Sheikh Mohammed brought his family to stay at the Swat house for two weeks. Just a month after he left, al-Kuwaiti was watching the news with his wife when KSM’s face unexpectedly flashed onto the screen. The 9/11 planner had been arrested in Rawalpindi, the twin city of the Pakistani capital, Islamabad. Al-Kuwaiti flew into a panic; KSM was a tough personality and an experienced operative, but there was no telling what secrets he might divulge, knowingly or otherwise, under interrogation. Within a week, al-Kuwaiti, bin Laden, and the other residents of the Swat house had fled. Quickly, the brothers moved them to Haripur, a city to the east surrounded by squalid camps sheltering some of the millions of refugees displaced by a quarter century of conflict in neighboring Afghanistan. Bin Laden’s house in the suburbs, by contrast, was pretty and spacious, with three bedrooms, a lawn, and a roof terrace. But nobody ever visited him there. One neighbor noted that the brothers kept their gates shut, which was unusual for the area. When they needed to make phone calls, they would travel up to ninety miles away to use public call boxes.

By late 2004, al-Kuwaiti, operating under his assumed identity of Arshad Khan, had begun buying up tracts of land in Abbottabad Cantonment for what would become bin Laden’s mansion. In August of 2005, with construction on the main building complete, bin Laden moved in, together with two of his wives, his son Khalid, and a number of his daughters and grandchildren. Al-Kuwaiti lived with his wife and children in the guesthouse on site, while Abrar and his family occupied the ground floor of the main house. Eventually, the screened-off third floor built after the October earthquake became bin Laden’s living quarters.

Bin Laden always claimed to live in accordance with the ways of the Prophet, and few parallels between their two lives would have escaped him. So it is quite possible that he would have compared his flight from Afghanistan to Pakistan with Muhammad’s Hijra, or migration, from Mecca to Yathrib, the desert settlement that would eventually become Medina. In fact, he often called on his followers to make their own hijra to Afghanistan. Since his arrival in Pakistan, bin Laden’s movements, from Swat to Haripur to Abbottabad, had traced a path roughly due east, deeper and deeper into the country. Four years after 9/11, he had made it roughly two hundred miles from Tora Bora, about the same distance as the Prophet traveled from Mecca to Yathrib. Perhaps this was an auspicious sign.

Everything about the Abbottabad mansion was geared toward privacy and self-sufficiency. The brothers hired a local farmer, a man called Shamraiz, to plow an adjacent field for growing vegetables. There were animals at the site, too, including chickens and a cow. Whatever food and provisions could not be grown, raised, or made on the premises, al-Kuwaiti and Abrar would buy at the bazaar in town. Bin Laden was no stranger to spartan living conditions. Indeed, for decades, he had deliberately sought out a life of privation. Like charismatic leaders before him, including the Assassin leader Hassan-i Sabbah, he cultivated this ascetic image as an important part of his appeal. Frugality came naturally to him; indeed, it seemed to exhilarate him. When he returned to Afghanistan in 1996, he chose a grim, unkempt hideout in the mountains in preference to several much cushier residences, including a former royal palace. Later, in the compound at Kandahar, his house was among the simplest on the base, with not even a carpet on the floor. In 2005, upon his arrival in Abbottabad, bin Laden’s wardrobe consisted of no more than a black jacket, a couple of sweaters, and six shalwar kameez, the traditional Pashtun dress of baggy pants and a long shirt.

In accordance with his fundamentalist reading of Islam, he had always kept the women of his household in strict purdah, separation from men outside their immediate family. In Abbottabad that prohibition became a matter of security as well as religious obligation. Indeed, his rules were so absolute that, from the age of three, the bin Laden women were banned from watching television, so that they would never see an unfamiliar male face. His children and grandchildren were sequestered inside the house almost twenty-four hours a day. The sheikh personally home-schooled them in the bin Laden brand of extreme religion and forbade them from playing with the children of al-Kuwaiti and Abrar, who lived just feet away within the same compound. Such was their isolation that the sheikh did not even allow them to be vaccinated for polio along with the other children. The nearest the bin Laden children came to fun was their occasional competitions to see which of them could grow the biggest vegetables in the garden.

Despite his well-known penchant for sports, hiking, and horseback riding, the sheikh’s own health had taken a downturn in early adulthood from which he had never fully recovered. Forty-eight when he began living in Abbottabad, he was practically blind in one eye, the result of a childhood injury he successfully concealed from the public for many years. In his twenties and thirties, during the jihad against Afghanistan’s Soviet occupiers in the 1980s, he had suffered crippling bouts of pain and paralysis, which the former surgeon Ayman al-Zawahiri had treated with a glucose drip. Having inhaled Russian napalm in Afghanistan, he frequently had trouble with his larynx. In Abbottabad he complained of pain in his heart and kidneys, but there was no question of visiting a doctor. Instead, when bin Laden felt ill, he would treat himself with al-tibb al-nabawi, traditional medicine based on the hadith, sayings ascribed to the Prophet. Some believe, for example, that Muhammad recommended barley broth and honey to treat an upset stomach, senna for constipation, truffle water for eye ailments, and henna for aches and wounds. “God has not made a disease without appointing a remedy for it,” says one well-known hadith, “with the exception of one disease, namely old age.” By his early fifties, Osama bin Laden had become, prematurely, an old man. In videos made inside the compound, he appears hunched and frail, his face lined, his eyes tired. His beard, salt-and-pepper at the time of the 9/11 attacks, was rapidly turning white, although he was not above dyeing it jet black in video messages meant for public consumption.

In his three-decade career of murder and mayhem, Osama bin Laden had gone by many names. His followers called him Azmaray, the sheikh, the emir, the director, Abu Abdullah. His code name at the U.S. Joint Special Operations Command was Crankshaft, reflecting his vital importance in driving the engine of al-Qaeda. But one final nickname captured the diminished circumstances of his existence in Abbottabad. In the months leading up to bin Laden’s death, observing his daily walks within the bounds of a compound he never seemed to leave, analysts with the Central Intelligence Agency had taken to calling him The Pacer. But Osama bin Laden was no ordinary shut-in, and he was by no means cut off from the world. Far from it: until the day he died, the sheikh remained in active control of the deadliest terror network in history.

Communication with the outside was difficult, to be sure. Ever since the arrest of Khalid Sheikh Mohammed so soon after his visit to the house in Swat, bin Laden had cut off face-to-face contact with other senior jihadis, or, indeed, any al-Qaeda members other than his immediate protectors. No doubt this was a wise precaution for a man with a twenty-five-million-dollar U.S. bounty on his head. Besides, house calls would be an impractical way of governing a network that bestrode much of the Islamic world. But remote means of communication were scarcely any more secure. Email was not to be trusted; bin Laden knew from past experience that the Americans were capable of intercepting such messages, even with encryption. As he himself wrote in August of 2010, “Computer science is not our science and we are not the ones who invented it. . . . Encryption systems work with ordinary people, but not against those who created email and the Internet.” Cellular communication, too, was risky, because it could give away a person’s location and perhaps even call forth one of the hated unmanned “spy planes” that patrolled the skies over northern Pakistan. By this time, al-Kuwaiti had evidently acquired a cellphone; but whenever he needed to place a call, he would drive out from Abbottabad for ninety minutes or more before even placing the battery in his device.

*

from

ANATOMY OF TERROR. From the Death of bin Laden to the Rise of the Islamic State

by Ali Soufan

get it at Amazon.com

‘Puer Aeternus’, Failure to Launch, The Millennial dilemma – Gillian McCann, and Gitte U Bechsgaard * Millennials who leave home before moving back in are causing havoc for their families – Emilia Mazza * Millennials May Never Be Able To Move Out Of Their Parents’ Homes.

From Italy to Britain to Canada, more and more millennials are failing to launch and remain at home well into their thirties.

If the child cannot move into adulthood their parents also cannot move onto the next stage of their lives.

No one is saying we need to return to early marriages but clearly our rites of passage have not kept up with the times.

When an adult child moves back home after they’ve left, parents can start to feel resentful, especially if their child is acting the same way they did before they left home.

“Puer Aeternus: Someone who remains too long in adolescent psychology.” Marie-Louise Von Franz

It is disturbing to think we have come to this but without an alternative it is likely we will see more court cases where parents take extreme measures in order to launch their adult children.

Recently the eyes of the world were riveted on a court case in Upstate New York. At the centre of the media storm was a couple, pictured sitting stoically in a courtroom, who were using the legal system to remove their 30-year-old son from the family home. How could it have come to this? Journalists, news anchors, and radio disc jockeys rushed in to try to make sense of this story, which seemed to resonate around the world.

There was good reason for British journalists to show up on this family’s lawn: this is not just an American problem. From Italy to Britain to Canada, more and more millennials are failing to launch and remain at home well into their thirties. The 2016 Canadian census showed a record-breaking 34.7% of young adults remained in the family home.

While economics, longer education times and helicopter parenting clearly have something to do with this situation, we will leave those aspects to others to examine. We want to look at the psychology that is contributing to the increasingly common phenomenon of children who are seemingly unable to move into adulthood. A number of changes within our societies over the last 40 years have contributed to this seemingly baffling situation.

Beginning in the 1960s Jungian analyst Marie-Louise Von Franz gave a series of lectures on a complex that she referred to as the puer aeternus. Von Franz described this syndrome as someone who “remains too long in adolescent psychology.” At the time that she was giving these lectures this was a very rare psychological problem, but societal changes have resulted in it becoming increasingly common. Across the western world sociological surveys are registering a sea change in how people move, or don’t, into adulthood.

More and more people seem to be getting caught in the phase of adolescence in both their attitudes and lifestyles, unable to move into full adulthood. This inability has implications both for the psychological health of the individual and the well-being of their families.

If the child cannot move into adulthood their parents also cannot move onto the next stage of their lives.

What few have seemed to note amid all the public discussion is that adulthood is not a given but is defined by family, culture and society. We are not born knowing what an adult is or how one is supposed to act. However, many millennials are left without clear definitions of what a mature person would look or act like. Along with many progressive changes, one negative legacy of the 1960s has been an obsession with youth and a suspicion of adulthood, a suspicion that continues to linger long after the hippie generation crossed the 30-year mark and thus became unable to trust itself.

Contributing to this problem is the fact that many in our society have discarded the rituals that used to usher us through the different phases of life. Without these rites of passage and clearly marked changes in status it is very easy to become caught in what the anthropologist van Gennep referred to as a liminal state, betwixt and between. With the decline of religious practice and community life, fewer people now have access to the rites of passage that structure human and community life. As van Gennep writes, these rituals “enable the individual to pass from one defined position to another which is equally well defined.”

Around the world there are a wide variety of usually religiously based rituals that signal to the individual, and to their community, that they are moving into adulthood. These range from the confirmation ceremonies of Christianity to the bar and bat mitzvahs of Judaism and the Tirundukuli of Hinduism, among many others. These ceremonies, witnessed by family and community and marked by formal clothes and a party, were all a clear indication that the person’s status was changing. They were meant to signal the individual’s new maturity to the community and also to reinforce it psychologically as the person took on more outer signs of independence, such as a job and learning how to handle money.

Another feature of the failure to launch is that fewer and fewer people are getting married, or are getting married later. For our parents’ generation the transition to adulthood happened in one fell swoop: you got married and moved out of the house, often starting your own family shortly thereafter.

Michael Rotondo’s parents sued him to get him out of their house.

No one is saying we need to return to early marriages but clearly our rites of passage have not kept up with the times.

It is clear that we as a society need to determine what we mean by adulthood and then help the younger generation to make these transitions. This requires a clear sense of what being an adult entails: for example, the ability to think beyond one’s narrow self-interest, emotional maturity, financial independence, and participation in community. If we ourselves don’t know, it is impossible to expect the younger generation to embody these characteristics, and they are left flailing. Life can become like a vast ocean without any markers to indicate where we are in the journey.

Lacking the ability to enforce these passages in the traditional manner, the Rotondo family was forced to take it all to the next level and use the courts in order to enforce independence on their son. This may seem absurd but is perhaps not really surprising. For a period of time the Italian government was considering legislation to move the country’s legion of ‘mammoni’ out of the house. In Italy, 66% of 18-34-year-olds currently live at home.

It is disturbing to think we have come to this but without an alternative it is likely we will see more cases where parents take extreme measures in order to launch their adult children.

The boomerang kids who are ruining their parents’ lives: Generation of millennials who leave home before moving back in are causing havoc for their families

Emilia Mazza

Adult children who move out of home and then move back, or those who simply refuse to leave the comforts of family life, are ruining parents’ lives.

Adult children who fly the coop and return home if their situation doesn’t work out have been dubbed the ‘Boomerang Generation’, while those who don’t want to move out because they are at university longer or struggling with the cost of living have earned themselves the title of ‘adult-escents’: fully grown children who still live at home and act like teenagers.

Dr Justin Coulson says that although a move home by an adult child may be justified, it can have an effect on the well-being of parents. He explained how research by the London School of Economics found that adult children who return to the family home after leaving can cause a significant decline in their parents’ quality of life.

“Parents experience the same frustrations as they did when their kids lived at home but these seem to be multiplied because they have had a reprieve. They can start to feel as if their parenting duties have to start all over again.”

The author of 10 Things Every Parent Needs to Know said that when children leave home, parents enter a new phase of life, one that’s far less burdened with the responsibility of bringing up kids.

“You start to do things your way, you do things that are convenient for you when they are convenient. And you don’t have to put yourself out for anyone else anymore. When an adult child moves back home after they’ve left, parents can start to feel resentful, especially if their child is acting the same way they did before they left home. They may start to worry about who left the garbage in the bin, or who left socks under the dining table or forgot to lock the house.”

Then there is the question of who is going to contribute and how: whether or not they are going to pay rent and, if they are, whether they will need to be chased for it.

“The accumulation of these smaller problems can be a real source of tension for parents who may have been thinking they no longer needed to worry about these things. Once a child has moved out, they are considered an adult so if parents have to pick up after them again then this can be a source of frustration and difficulty.”

Dr Coulson also explained there are adult children who simply refuse to take any responsibility for their lives, despite the fact they are of an age where they could. As well as a rise in millennials moving back home, adult children were also staying at home longer because the transition to adulthood was taking longer.

“Not only are we seeing more move back in, we’re seeing fewer kids moving out in the first place… We call it ‘adult-essence’ instead of adolescence.”

Grown children who haven’t moved out might become too cosy at home; they might fail to pull their weight around the house, or not pay their way.

“They’re sloppy, they don’t clean up the dishes or they won’t clean their room. We feel like they’re at uni or at work but we’re still waking them up, and they’re grown-ups.”

Dr Coulson said although parents could face certain challenges when children do return home, there were times when offering a child a safe place was important.

“If parents can be responsive to the reasons that have led them to moving back home then they are less likely to experience the decline in satisfaction.”

Dr Coulson’s advice on how to deal with kids who do move back:

* Parents shouldn’t be afraid to ask their adult children for rent

* Establish guidelines from the outset and expect your child to adhere to these

* Allocate responsibility; this can be a weekly chore such as taking out the rubbish, mowing the lawns or helping to care for younger siblings

* If you feel you are being taken advantage of, it is okay to ask your adult children to leave

“Just because the research says you will be unhappy doesn’t mean we should say no to our kids if they have struck a difficult situation. We need to remember to be compassionate and offer to help.”

One important thing parents need to watch out for is a child who is trying to take advantage of the situation. Some kids are just looking for a free ride and that’s when the resentment and negative feelings can come up even more. If we can establish effective guidelines, living with adult children can be fantastic, they can contribute financially, do certain chores or babysit younger kids.

“It really doesn’t have to be bad but it comes down to having conversations from the outset, and being clear that if they don’t live up to these expectations it’s okay to ask them to leave.”

Millennials May Never Be Able To Move Out Of Their Parents’ Homes – Narcity

Studies show that millennials are, well, screwed.

‘Generation Screwed’ is the latest epithet assigned to millennials by boomers, and while it may be a rather harsh characterization, it does bear some truth. While it’s common for young adults to move back home with their parents after university, many of them are staying there for longer than expected, and sometimes it’s for reasons that are beyond their control.

Oftentimes the current circumstances just don’t work in their favour. While the economy is somewhat looking up, graduates today are still faced with an unwelcoming job market and a real estate situation that is more volatile than ever. The combination of these two factors makes it difficult for millennials to establish the stable footing they require to leave the nest.

Most Canadian millennials have difficulty finding a job, with the unemployment rate for 15 to 24 year olds at a concerning 13.2%. Those that do manage to find work (that is, 48% of young Canadian adults) often land part-time or precarious jobs that end up being nothing more than temporary gigs. And those who can’t land a job at all resort to unpaid positions, with as many as 300,000 willing interns across the country.

Without stable work, other life milestones like getting married or owning a house become fleeting fantasies rather than achievable ideals. It doesn’t help that the real estate market in Canada is out of control. According to the Canadian Real Estate Association (CREA), national sales are expected to drop by 3.3% this year, with the average price of a home in Canada now more than $500,000. The millennials that do move out resort to renting, but even that presents a financial burden, with rent increases doubling in some areas.

All of this is to say that those who stay at home with their parents shouldn’t automatically be misjudged as lazy and entitled. Because the reality is that, for many people, staying home isn’t a choice; it’s a necessity.

ORIGINS OF HATE. Bring the War Home. The White Power Movement and Paramilitary America – Kathleen Belew.

“WE NEED EVERY ONE OF YOU,” proclaimed an anonymous 1985 article in a major white power newspaper. “We need every branch of fighting, militant whites. We are too few right now to excommunicate each other…. Whatever will save our race is what we will do!”

White power activists increasingly saw the state as their enemy. Many pursued the idea of an all-white, racial nation. The militant rallying cry “white power,” which echoed in all corners of the movement, was its most accurate self-descriptor.

Movement leader Louis Beam urged activists to continue fighting the Vietnam War on American soil. He referred to two wars: the one he had fought in Vietnam and the white revolution he hoped to wage in the United States.

In the wake of military failure in Southeast Asia, masculinity provided an ideological frame for the New Right, challenged antiwar sentiment, and idealized bygone and invented familial and gender orders throughout American society. The white power movement capitalized on this wave of broader cultural paramilitarism for its own, violent ends.

Conventional politics was unsalvageable and signaled a state of emergency that could not be resolved through political action alone. Their paramilitary infrastructure stood ready; the war could not wait.

The white power movement sought revolution and separation, the founding of a racial utopian nation.

A large contingent of white power activists in the post-Vietnam moment believed in white supremacy as a component of religious faith. Christian Identity congregations heard their pastors explain that whites were the true lost tribe of Israel and that nonwhites and Jews were descended from Satan or from animals.

White power violence reached a climax in the 1995 bombing of the Alfred P. Murrah Federal Building in Oklahoma City.

War is not neatly contained in the space and time legitimated by the state. It reverberates in other terrains and lasts long past armistice. It comes home in ways bloody and unexpected.

The article spoke of emergency and government treachery. It foretold imminent apocalyptic race war. It called to believers in white supremacist congregations, to Klansmen and southern separatists, and to neo-Nazis. The white power movement united a wide array of groups and activists previously at odds, thrown together by tectonic shifts in the cultural and political landscape. Narratives of betrayal and crisis cemented their alliances.

Though often described by others as “white nationalist” and by its members as patriotic, this movement did not seek to defend the American nation, even when it celebrated some elements of U.S. history and identity. Instead white power activists increasingly saw the state as their enemy.

Many pursued the idea of an all-white, racial nation, one that transcended national borders to unite white people from the United States, Canada, Europe, Australia, South Africa, and beyond. The militant rallying cry “white power,” which echoed in all corners of the movement, was its most accurate self-descriptor.

At the end of the tumultuous 1970s, in the wake of the Vietnam War and in the midst of economic turmoil and widespread distrust of public institutions, the white power movement consolidated and expanded. In these turbulent years, many Americans lost faith in the state that they had trusted to take care of them. Loss in Vietnam and the Watergate scandal undermined their confidence in elected officials and besmirched the presidency itself. As legislation dramatically increased immigration, many worried that the arrival of immigrants would change the very meaning of American identity. They saw the rights movements of the 1960s redefine race and gender relations at home and at work. They noted with alarm the government’s failure to help those who lost their farms to the banks or their factories to faraway places. As the mainstream right and left took up these concerns in a variety of ways, so did this troubled social and political context incubate white power activism.

People from all regions of the country answered the white power movement’s call to action, bridging the divide between rural and urban. They were men, women, and children. They were high school dropouts and holders of advanced degrees; rich and poor; farmers and industrial workers. They were felons and religious leaders. They were civilians, veterans, and active-duty military personnel.

From its formal unification in 1979 through its 1983 turn to revolutionary war on the government and its militia phase in the early 1990s, the white power movement mobilized adherents using a cohesive social network based on commonly held beliefs. These activists operated with discipline and clarity, training in paramilitary camps and undertaking assassinations, mercenary soldiering, armed robbery, counterfeiting, and weapons trafficking.

White power violence reached a climax in the 1995 bombing of the Alfred P. Murrah Federal Building in Oklahoma City.

A holistic study of the white power movement reveals a startling and unexpected origin: the aftermath of the Vietnam War.

The story activists told about Vietnam and the response to the war on the right were major forces in uniting disparate strands of American white supremacism and in sustaining that unity. As narrated by white power proponents, the Vietnam War was a story of constant danger, gore, and horror. It was also a story of soldiers’ betrayal by military and political leaders and of the trivialization of their sacrifice. This narrative facilitated intergroup alliances and increased paramilitarism within the movement, escalating violence.

In his speeches, newsletters, and influential 1983 collection Essays of a Klansman, movement leader Louis Beam urged activists to continue fighting the Vietnam War on American soil. When he exhorted readers to “bring it on home,” he meant a literal extension of military style combat into civilian space. He referred to two wars: the one he had fought in Vietnam and the white revolution he hoped to wage in the United States.

White power activists would also engage in other wars. Some would become mercenaries in military interventions ranging from Latin America to southern Africa. Others would fight in the Gulf War. Although they comprised only a small number of the combatants in these conflicts, their mercenary and active-duty soldiering assimilated them into the broader militarization and paramilitary culture that was growing more prominent in American society. Their ventures set the stage for later encounters, such as the sieges of separatist compounds at Ruby Ridge and Waco by militarized police forces, which would, in turn, spur the movement to its largest mass-casualty attack.

The white power movement that emerged from the Vietnam era shared some common attributes with earlier racist movements in the United States, but it was no mere echo. Unlike previous iterations of the Ku Klux Klan and white supremacist vigilantism, the white power movement did not claim to serve the state. Instead, white power made the state its target, declaring war against the federal government in 1983. This call for revolution arrived during Ronald Reagan’s presidency, which many historians have considered the triumph of the mainstream New Right.

Antistatism in general, and hostility toward the federal government in particular, had motivated and shaped earlier conservative and reactionary mobilizations as well as the New Right itself, but white power capitalized on a larger current of discontent among conservatives.

By 1984, Time magazine had noticed a “thunder on the right”: a growing dissatisfaction, especially among evangelicals, with the distance between Reagan’s campaign promises and his policies, particularly concerning social issues that galvanized voters, such as abortion.

White power activists responded to Reagan’s first term with calls for a more extreme course of action. Reagan’s moderation, as activists saw it, revealed conventional politics as unsalvageable and signaled a state of emergency that could not be resolved through political action alone. Their paramilitary infrastructure stood ready; the war could not wait.

After declaring war, activists plotted to overthrow the government through attacks on infrastructure, assassinations, and counterfeiting to undermine public confidence in currency. They armed themselves with weapons and matériel stolen from military installations. They matched this revolutionary work with the publication and circulation of printed material, recruitment drives aimed at mainstream conservatives, political campaigns, talk show appearances, and radio programs.

These activities both disseminated a common set of beliefs, goals, and messages to the movement faithful and worked to recruit new members. In the late 1980s, many activists reorganized into militias. Although some militias disclaimed white supremacy in public, many shared funds, weapons, and personnel with white power organizations.

While white power was certainly a fringe movement, it surpassed earlier mobilizations such as the anticommunist John Birch Society. Membership alone is a poor measure of white power activity, with records often hidden, distorted, or destroyed, but it nevertheless illuminates the movement’s relative size. Scholars and watchdog groups who have attempted to calculate the numbers of people in the movement’s varied branches, including, for instance, Klansmen and neo-Nazis, who are often counted separately, estimate that there were about 25,000 “hard-core members” in the 1980s. An additional 150,000-175,000 people bought white power literature, sent contributions to groups, or attended rallies or other events, signifying a larger, although less formal, level of membership. Another 450,000 did not themselves participate or purchase materials but read the literature. The John Birch Society, in contrast, reached only 100,000 members at its 1965 peak.

With the 1983 turn to revolution, the movement adopted a new strategy, “leaderless resistance.” Following this strategy, independent cells and activists would act without direct contact with movement leadership. The aim was to prevent the infiltration of groups, and the prosecution of organizations and individuals, by formally dissociating activists from each other and by eliminating official orders. Popularized throughout the underground, leaderless resistance changed recruitment goals, emphasizing the importance of enlisting a small number of fully committed activists rather than large groups of the less committed. This is another reason membership counts alone could not accurately convey the movement’s impact, activity, or capacity for violence.

Yet to the degree that there is power in numbers, the movement reached a new peak during its militia phase. At the height of its mainstream appeal in the mid-1990s, the militia movement counted some five million members and sympathizers, according to one watchdog analyst. That number certainly represents the upper bound of possibility, and it is likely that the white-power-identified cohort of militia members and sympathizers was significantly smaller. However, five million places the militia movement in line with the largest surge of the Ku Klux Klan, whose membership peaked in 1924 at four million.

While white power activists held worldviews that aligned or overlapped with those of mainstream conservatism, including opposition to immigration, welfare, abortion, feminism, and gay and lesbian rights, the movement was not dedicated to political conservatism aimed at preserving an existing way of life, or even to the reestablishment of bygone racial or gender hierarchies. Instead, it emphasized a radical future that could be achieved only through revolution. While some white power activists might have longed for the reinstatement of Jim Crow laws, white-minority rule as in Rhodesia and South Africa, or slavery, most agreed that such systems could not be resurrected through electoral politics alone but would have to be achieved by more drastic measures.

This abandonment of the political process reflects a profound shift in the American electorate wrought by the Voting Rights Act of 1965, which barred disenfranchisement on the basis of race. Reactionary politics, conservatism, and American nationalism had characterized the Klan in the early part of the twentieth century. The white power movement sought revolution and separation, the founding of a racial utopian nation.

Many activists connected ideas of a radical political future with belief in imminent apocalypse. The theologies espoused by white power activists in this period differed significantly from the Protestantism of the reactionary second-era Klan that peaked in the 1920s. White power religious radicalism emerged in part from Cold War understandings of communism as a threat to Christianity. At the same time, a large contingent of white power activists in the post-Vietnam moment believed in white supremacy as a component of religious faith. Christian Identity congregations heard their pastors explain that whites were the true lost tribe of Israel and that nonwhites and Jews were descended from Satan or from animals. Other racist churches adopted similar theologies that lauded whiteness as holy and sought to preserve the white race. Activists also adopted Odinism and other forms of neo-Pagan white supremacy that posited a shared, pan-European white cultural heritage.

The movement’s religious extremism was integral to its broader revolutionary character. While increasingly politicized evangelical congregations espoused belief in the rapture, a foretold moment when the faithful would be peacefully transported from the world as the apocalyptic end times began, Christian Identity and other white theologies offered believers no such guarantees of safety. Instead, they held that the faithful would be tasked with ridding the world of the unfaithful, the world’s nonwhite and Jewish population, before the return of Christ. At the very least, the faithful would have to outlast the great tribulation, a period of bloodshed and strife.

Many movement followers prepared by becoming survivalists: stocking food and learning to administer medical care. Other proponents of white cosmologies saw it as their personal responsibility to amass arms and train themselves to take part in a coming end-times battle that would take the shape of race war.

A war of this scale and urgency demanded that partisans set aside their differences. The movement therefore was flexible in its adoption of racist symbols and beliefs. A Klansman in the South might participate in burning crosses, wear the white robe and hood, and embrace the Confederate battle flag alongside a Lost Cause narrative of the Civil War. A neo-Nazi in the North might march under the banner of the swastika and don an SS uniform. But the once disparate approaches to white supremacy represented by these symbols and ideas were drawn together in the white power movement. A suburban California skinhead might bear Klan tattoos, read Nazi tracts, and attend meetings of a local Klan chapter, a National Socialist political party, the militant White Aryan Resistance, or all three. At the Aryan Nations compound in northern Idaho, Klansmen and neo-Nazis ignited both crosses and swastikas as they heard Christian Identity sermons and speakers from an array of white power groups. Activists circulated among groups and belief systems, each of which might include theological, political, and pseudoscientific varieties of racism, antisemitism, and antifeminism.

Amid this multiplicity of symbolic presentations and beliefs, most white power activists found common ground. They believed in white supremacy and the need for a white homeland.

They feared that the government would eradicate the white population through interference with the birth of white children, through interracial marriage, rape, birth control, abortion, and immigration. The antisemitism long espoused by the Klan was reinforced by neo-Nazis. And the movement adopted a strict set of gender and familial roles, particularly regarding the sexual and supportive behavior of white women and their protection by white men.

Another unifying feature of the movement was its strident anticommunism, which at first aligned with mainstream Cold War conservatism and then transformed into an apocalyptic, anti-internationalist, antisemitic set of beliefs and conspiracy theories about what activists called the Zionist Occupational Government (ZOG) and, later, the New World Order.

Increasingly, white power activists believed that the Jewish-led ZOG controlled the United Nations, the U.S. federal government, and the banks, and that ZOG used people of color, communists, liberals, journalists, academics, and other enemies of the movement as puppets in a conspiracy to eradicate the white race and its economic, social, and cultural accomplishments.

To confront this grave threat, activists organized as a paramilitary army and adopted masculine cultural forms. The article that levied the plea “We Need Every One of You” was titled “White Soldier Boy” for a reason. It targeted young white men, not women, for recruitment into the presumptively male world of camouflage fatigues, military-style camps and drills, and military-grade weapons. It also spoke directly to combat veterans and active-duty military personnel.

In this respect, white power can be understood as an especially extreme and violent manifestation of larger social forces that wed masculinity with militancy, in the form of paintball, war movies, gun shows, and magazines such as Soldier of Fortune that were aimed at armchair and weekend warriors. This is not to suggest that such cultural forms were coequal with white power, or with conservatism more broadly. But it is not by coincidence that white power gathered steam amid the wider post-Vietnam “remasculinization of America.” In the wake of military failure in Southeast Asia, masculinity provided an ideological frame for the New Right, challenged antiwar sentiment, and idealized bygone and invented familial and gender orders throughout American society. The white power movement capitalized on this wave of broader cultural paramilitarism for its own, violent ends.

However, the white power movement departed from mainstream paramilitary culture in carving out an important place for women, relied on as symbols of the cause and as activists in their own right. As bearers of white children, women were essential to the realization of white power’s mission: to save the race from annihilation. More concretely, their supporting roles, auxiliary organizations, and recruiting skills sustained white power as a social movement. They brokered social relationships that cemented intergroup alliances and shaped the movement from within.

In all these ways, its unity, revolutionary commitments, organizing strategy, anticommunist focus, and Vietnam War inheritance, white power was something new. Yet it has often been misunderstood as a simple resurgence of earlier Klan activity. Historians divide the Klan into “eras,” with the first following the Civil War, the second in the 1920s, and the third dedicated to opposing the civil rights movement. To understand white power as a Klan resurgence rests upon an artificial distinction between nonviolent and violent activism, in which the so-called fourth era refers to nonviolent, public-sphere activities, such as rallies and political campaigns, and the fifth era to the criminal activity of a secret, violent underground. This terminology arose from the white power movement itself and evokes previous surges in Klan membership that occurred one after another with lulls between. But the supposed fourth and fifth eras occurred simultaneously. This terminology therefore hinders an understanding of the activism it attempts to describe.

White power should be recognized as something broader than the Klan, encompassing a wider range of ideologies and operating simultaneously in public and underground. Such an understanding is vital lest we erroneously equate white power with covert violence and thereby ignore its significant inroads into mainstream society, which hardly came under cover of night. Activists such as David Duke mounted political campaigns that influenced local and national elections. They produced a vibrant print culture with crossover appeal that reached more mainstream readers. They traveled from church to church, linking religious belief with white power ideology. They created a series of computer message boards to further their cause. They pursued social ties between groups, cementing their political affinities with one another through marriages and other intimate bonds.

These political activists were often the same people who trained in paramilitary camps, plotted race war, and carried out criminal and terrorist acts. The death toll included journalists, state and federal employees, political opponents, and white power activists themselves. The Oklahoma City bombing, undertaken by movement activists, killed 168 people, making it the largest deliberate mass casualty on American soil between the bombing of Pearl Harbor and the terrorist attacks of September 11, 2001.

But the body count alone cannot fully account for the effects of white power violence. That number ignores the lives disrupted by the movement’s rage. The dead left behind grieving, struggling families. And while many were physically attacked, many others were threatened. It would be impossible to tally those who were harassed and wounded emotionally, left too afraid to speak or work. But these wounds, too, bear out the long and broad ramifications of the movement’s violence.

Although the movement’s militancy, and therefore its violence, owes much to the right-wing framing of the Vietnam War, other elements of the 1970s also infused the movement. White power also responded to the changing meaning of the state, sovereignty, and liberal institutions in and after that decade. The dramatic, hard-won gains of feminism, civil rights, secularism, and gay liberation left the 1970s ripe for conservative backlash.

Another factor was emerging economic threat. The post-World War II welfare state had promised jobs, education, and health, but, beginning in 1973, a series of economic shocks displaced the expectation of continued growth and prosperity. An oil crisis brought about the realization that natural resources would not always be cheap and plentiful. Wealth inequality grew and unemployment rose. For the first time since the late 1940s, the promise of prosperity stalled.

Dwindling economic prospects became bound up with cultural backlash. Volition and need alike drove more women into the workforce, threatening both men’s exclusive access to certain jobs and the Cold War-era vision of the suburban, white nuclear family with a wife who stayed at home. The successful civil rights mobilizations of the 1960s gave way to white resistance as news coverage turned to black radicalism, urban riots, and integration. Forced busing of children to integrated public schools became a heated issue, and whites fought back both through school privatization and in heated public protest.

In this context, defense of the family intertwined with defense of free-market ideology. As the stark limitations of New Deal liberalism became clearer, and as civil rights laws made it more difficult to deny opportunities and benefits to nonwhites just as an economic downturn set in, the state could be recast as a menace to morality and prosperity. For many Americans, the state became the enemy. White power activists, driven by their narrative of the Vietnam War, took this sentiment to the extreme in calling for revolution.

Some have argued that white power did not properly constitute a social movement. This claim typically turns on a supposed disconnect between white power and the militia wave, or on a narrow definition of social movements that rests on centralized leadership and harmony among members. But social movement theorists attuned to the grassroots mobilizations of the mid to late twentieth century make the case for a more encompassing definition.

While white power featured a diversity of views and an array of competing leaders, all corners of the movement were inspired by feelings of defeat, emasculation, and betrayal after the Vietnam War and by social and economic changes that seemed to threaten and victimize white men.

White power also qualifies as a social movement through its central features: the contiguous activity of an inner circle of key figures over two decades, frequent public displays, and the development of a wide-reaching social network. White power activists used a shared repertoire of actions to assert collectivity. They rallied openly, formed associations and coalitions, and gave statements to the press. Public displays of uniformed activists chanting slogans and marching in formation aimed to demonstrate worthiness, unity, numbers, and commitment to both members and observers.

Activists encouraged dress codes and rules about comportment and featured the presence of mothers with children, Vietnam veterans, and active-duty military personnel. Members showed unity by donning uniforms and by marching and chanting in formation. They made claims about their numbers. They underscored their commitment by pledging to die rather than abandon the fight, by preparing to risk their lives for white power, and by undertaking acts that put them at legal and physical risk. A regular circulation of people, weapons, funds, images, and rhetoric, as well as intermarriages and other social relationships, bound activists together. These actions produced common “ideas and culture,” what social movement theorists have called “frames,” that served to “legitimate and motivate collective action.”

The primacy of the Vietnam War among these frames is clear in the cultural artifacts that inspired and coordinated the movement. These included uniforms, language, strategies, and matériel derived from the war itself. Activists adopted terminology, such as “gooks,” associated with U.S. soldiers in Vietnam; camouflage fatigues; civilian versions of the era’s military weapons, as well as the genuine articles, sometimes illegally obtained; and training and combat methods modeled on soldiers’ experience and U.S. Army manuals.

Also essential in binding the movement together was the 1974 white utopian novel The Turner Diaries, which channeled and responded to the nascent white power narrative of the Vietnam War. The novel provided a blueprint for action, tracing the structure of leaderless resistance and modeling, in fiction, the guerrilla tactics of assassination and bombing that activists would embrace for the next two decades. Activists distributed and quoted from the book frequently. It was more than a guide, though. The popularity of The Turner Diaries made it a touchstone, a point of connection among movement members and sympathizers that brought them together in common cause.

Writing the history of a subversive movement presents archival challenges. White power activists routinely attempted to hide their activity, even when it was legal. Documentary resources are scattered and fragmentary. This is especially true of the period after 1983, when white power activists worked particularly hard to avoid being depicted as a coherent movement. They used old Klan strategies such as maintaining secret membership rolls, as well as new ideas such as cell-style organizing. Such strategies foiled government informants and forestalled public awareness of violence, obscuring the scale and intentions of the movement and limiting opposition. Activists understated or denied their involvement to protect themselves and their allies. But when they felt it useful, they also overstated their influence and membership in order to boost their apparent strength.

This deliberate obfuscation has clouded many journalistic and scholarly accounts. Press coverage too often portrayed organized white power violence as the work of lone gunmen driven by grievance and mental illness. Sensational true-crime and undercover reporting in pulp magazines and one-source interviews in small-town newspapers kept activists safely ensconced within their cells and depicted every case of violence as uniquely senseless. Thus groups went undetected, and the motivations underlying violence were rarely taken seriously. Accounts after the Oklahoma City bombing concluded that if white power had ever constituted a social movement, it had become so riddled by inter- and intragroup conflicts and personal vendettas that it no longer deserved the designation. Yet infighting had been a constant feature of white power formation and activity. White power organizing did change in the late 1990s, but this resulted from large-scale historical shifts such as increased pressure and expanding online activity, not internecine feuds.

Not all journalistic accounts of white power were so flawed. Veteran reporters from the Christian Science Monitor, the Oregonian, and the Houston Chronicle, among others, spent years covering white power on their beats and began to connect local episodes to activity elsewhere. And even the one-off accounts can be useful to the historian because white power activists sometimes spoke to undercover reporters directly and contemporaneously about their motivations.

The Federal Bureau of Investigation (FBI), the Bureau of Alcohol, Tobacco, and Firearms (ATF), the U.S. Marshals Service, and the Department of Justice monitored the white power movement during this period, generating another source of archival materials. Authors of these records range from undercover agents who had deep familiarity with white power groups to clerical staff at the tail end of a long game of telephone, who sometimes misunderstood crucial details. The motivations of federal agents (some prevented crimes and mounted major prosecutions; others declined to report, prevent, or prosecute such groups; yet others unleashed their own violence upon separatist compounds) shaped these records as well, affecting their reliability. Government documents also vary widely in their level of redaction. Many such sources are accessible only through Freedom of Information Act requests, which means that not everything the government collected is available to researchers. Even full access would provide but a partial glimpse of white power activity, filtered through state interests and the perspectives of individual state actors.

When it comes to the flourishing of white power activism in prisons, sources are especially limited. Groups such as the prison gang the Aryan Brotherhood are largely absent from the archive.

We can detect some effects of their mobilizations, such as monetary contributions sent beyond prison walls. Members who joined the movement while incarcerated and continued their activism after release also have greater presence in available sources. But much less is known about white power mobilizations within prison walls.

Legal documents, too, provide less information than we might hope, particularly because the white power movement flourished between the end of excellent paper record keeping and the beginning of effective digitization of documents. While several acts of white power violence and harassment have resulted in civil and criminal prosecutions, many resources from those trials have been lost or destroyed, in whole or in part. Some of what remains can be obtained only at prohibitive expense. And what is available comes with the same complications as any trial record. Some people who testified about their roles in the movement, especially women, may have done so under the threat of separation from their families. Several activists made plea deals in return for testifying against the movement. Legal documents, especially testimonies, must be read with such motivations in mind.

An important source of information about the movement is the opposition. Watchdog groups such as the Southern Poverty Law Center, the Anti-Defamation League, and the Center for Democratic Renewal collected material on white power activists as part of their mission to combat intolerance. Some compiled extensive databases including biographical information, photographs, news clippings, and legal records. They also obtained photographs, transcriptions of conversations from undercover informants, journalists’ notes, and other items outside the published record. Although these files are rich with information, they, too, must be treated cautiously. Watchdog groups can have motives that reach beyond simple documentation: they exist through fundraising, and donations may increase when there is a sense of urgency. Watchdog groups may have sometimes overestimated the movement’s influence and level of organization.

A final, essential resource is the archive created by the white power movement itself. This includes correspondence, ephemera, illustrations, autobiographies, books, printed periodicals, and “zines.” Some printed material circulated widely and had a transnational readership. Activists self-published their writings on presses, mimeograph and Xerox machines, and the Internet. Large collections of these published materials are housed at three university libraries in the United States. Although these collections were assembled in fundamentally different ways (one by a journalist writing on an episode of movement violence, one by an archivist who asked political extremists from across the spectrum for contributions, and one by collectors who obtained literature at meetings of extremist groups), the materials in the three archives are remarkably similar. They offer, therefore, a fairly complete picture of the movement’s printed output.

At the same time, one must be mindful of what an archival study of white power cannot reveal. Military service records, for instance, are not publicly available, nor are the membership rolls of each white power group. In their absence, one cannot make a quantitative study of the levels of veteran and active-duty military participation in the movement. The archive offers very little information on the childhood and early life of most activists. Information on marriages and divorces, particularly involving those who, as part of their antistatist activism, refused to register unions, cannot always be corroborated by official documents. Nor can an archival study stray from the stated beliefs and concrete actions of white power actors to attempt a psychological assessment. In most cases, the historian has neither the training nor the access to enter this discussion. One can, however, grapple with the record of speech and action to offer an approximation of a historical actor’s motives and actions.

Given these limitations, I have assumed that each document might reflect a particular agenda and have taken certain precautions as a result. When possible, I use multiple sources to corroborate information. If, say, a fact appears in a redacted FBI file, an undercover reporter’s interview with a white power activist, and a mainstream press report, it probably can be relied upon. I present unverifiable statements as such and identify those that are demonstrably false.

When relevant, I include information about sources, their biases, and possible alternative interpretations of the material in question.

That the archive is imperfect should disturb neither historians nor readers. Indeed, it is precisely the work of the historian to assemble an account based upon the information available, even if it is scattered, incomplete, and sometimes contradictory. In many ways, this approach enables a better understanding of how historical actors experienced their own moment, without the veneer of hindsight that clouds other kinds of accounts, such as interviews and memoirs produced years after the fact.

A sizable literature, both academic and journalistic, has engaged with portions of the white power archive, but this book is the first work to attempt a comprehensive approach. Unlike studies focused on one segment of white power (particular activists, events, locations, symbols, ideological discourses, or disputes), this one captures the entire movement as it formed and changed over time.

I find in the archival sources the story of the emergence, rise, and fall of a unique, cohesive effort to build a new nation on the ashes of a state accused of having abandoned its own. To understand the impact of this effort on American society, politics, and culture, and to take stock of its relationship with mainstream conservatism, requires engaging it synthetically, not piece by piece.

Bring the War Home follows the formation of the white power movement, its war on the state, and its apocalyptic confrontation with militarized state power. Part I documents the role of violence in motivating and constituting the movement. Chapter 1 traces the creation of a Vietnam War narrative that united the movement and inspired its paramilitary culture and infrastructure. Chapter 2 shows how paramilitary training camps worked to form white power groups and augmented their capacity for violence. In Chapter 3, I discuss the formal unification of the movement through a common experience of violence: the 1979 mass shooting of communist protestors in Greensboro, North Carolina. Chapter 4 documents the intersections between white power and other forms of paramilitarism by focusing on transnational antidemocratic paramilitary combat by mercenary soldiers, some with movement ties.

Part II turns to the white power revolution declared in 1983. At this point, the movement definitively distinguished itself from previous vigilante mobilizations, such as the earlier Ku Klux Klan, whose perpetrators claimed to act for the good of the state or to uphold its laws. In Chapters 5 through 7, I examine the movement’s declaration of war, use of early computer networks, and deployment of cell-style organizing. Critical to these efforts were attempts, some successful, to obtain stolen military-grade weapons and materiel from the state. I also recount the acquittal of thirteen movement activists on federal charges including seditious conspiracy. Their defense, based on a purported need to protect white women, demonstrates that even though white power broke away from earlier white supremacist movements, it maintained a degree of ideological and rhetorical continuity with them, even as it turned to newly violent antistatism in its revolutionary actions.

Part III describes the crescendo and climax of white power revolution in which groups both confronted and participated in events characterized by apocalyptic, world-destroying violence. Although many were killed and others were harmed, the effort never achieved the biblical scale activists had anticipated. The movement was inflamed by encounters with state power, such as the standoff between federal agents and a white separatist family at Ruby Ridge, Idaho, and the siege of the Branch Davidians in Waco, Texas.

Cataclysmic, militarized state violence helped to inspire the growth of militias, leading to the Oklahoma City bombing. That act stands as the culmination of two decades of white power organizing and is the most significant single event in the movement’s history.

The bombing destroyed an edifice, lives, and families, but not only those. It also shattered meaning, wiping out a public understanding of the white power movement by cementing its violence, in public memory, as the act of a few men. Despite its many attempts to disappear, and despite its obscurity even at the height of its strength during the militia phase, the movement left lasting marks on mainstream American politics and popular culture. It has continued to instigate and shape violence years after the Oklahoma City bombing.

The story of white power as a social movement exposes something broader about the enduring impact of state violence in America. It reveals one catastrophic ricochet of the Vietnam War, in the form of its paramilitary aftermath. It also reveals something important about war itself.

War is not neatly contained in the space and time legitimated by the state. It reverberates in other terrains and lasts long past armistice. It comes home in ways bloody and unexpected.

1 The Vietnam War

Forever trapped in the rice paddies of Vietnam. -Louis Beam, 1989

LOUIS BEAM SPENT eighteen months in Vietnam. He served an extended tour as a gunner on a UH-1 Huey helicopter in the U.S. Army’s 25th Aviation Battalion. He logged more than a thousand hours shooting at the enemy and transporting his fellow soldiers, including the injured and fallen, to and from the front. By his own account, he killed between twelve and fifty-one “communists” before returning home to Texas, decorated, in 1968. But he never stopped fighting. Beam would use his Vietnam War story to militarize a resurgent Ku Klux Klan and to wage a white power revolution.

He brought many things home with him: his uniforms, virulent anticommunism, and hatred of the Viet Cong. He brought home the memory of death and mutilation sealed in heavy-duty body bags. He brought home racism, military training, weapons proficiency, and a readiness to continue fighting. His was a story about government betrayal, soldiers left behind, and a nation that spat upon his service and would never appreciate his sacrifice. Indeed, he brought home the war as he fought it, and dedicated his life to urging others to “bring it on home.”

On both the right and left of the political spectrum, the war worked to radicalize and arm paramilitary groups in the post-Vietnam War period. On the left, veterans played instrumental roles in groups organized around politics and labor, and in militant groups that fought racial inequality, such as the Black Panther Party. Occasionally these left- and right-wing mobilizations would overlap and feed off one another, with white power activists robbing the same Brinks armored car company hit by the left-wing Weather Underground a few years earlier, and with the paramilitary Latino Brown Berets and the Klan Border Watch focused on the same stretch of terrain in South Texas.

Throughout the twentieth century, many veterans of color understood their postwar activism as an extension of their wartime combat. Veterans played key roles in fostering the civil rights and armed self-defense movements. The influence of key veterans upon the white power movement, therefore, is part of a longer story about veterans’ claims on society, and about the expansive aftermath of modern war.

Just as some veterans fought for racial equality, others fought to oppose it. Indeed, Ku Klux Klan membership surges have aligned more neatly with the aftermath of war than with poverty, anti-immigration sentiment, or populism, to name a few common explanations.

After the Civil War, the Confederate veterans who formed the first Klan terrorized both black communities and the Reconstruction-era state. World War I veterans led second-era Klan efforts to violently ensure “all American” racial, religious, and nationalist power. Third-era Klansmen who had served in World War II and Korea played key roles in the violent opposition to civil rights, including providing explosives expertise and other skills they had learned in the military.

After each war, veterans not only joined the Klan but also played instrumental roles in leadership, providing military training to other Klansmen and carrying out acts of violence. The effect of war was not simply about the number or percentage of veterans involved, but about the particular expertise, training, and culture they brought to paramilitary groups. Significantly, in each surge of activity, veterans worked hand in hand with Klan members who had not served. Without the participation of civilians, these aftershocks of war would not have found purchase at home. The overspills of state violence from wars, therefore, spread through the whole of American society; they did not affect veterans alone.

So, too, did the Vietnam War broadly affect American culture and politics. Narratives of the war as a government betrayal and as a source of grievance laid the groundwork for white power activism. Once again, the war story drew in both veterans and civilians. But the Vietnam War was also historically distinct; it represented loss, frustration, and doubt.

By intervening to support South Vietnam, the United States sought to halt the spread of communism, and to stop the Soviet Union, which supported North Vietnam and revolutionaries in the South, from amassing global power in the midst of the Cold War. In practice, the United States found itself intervening in a local, civil conflict, one shaped by the legacy of French colonial rule. American soldiers entered a morally ambiguous proxy war and faced an enemy comprising highly motivated guerrillas, partisan soldiers, and supportive or ambivalent civilians. This, together with enormous differences in culture and climate, created high levels of despair among the troops.

Combat in Vietnam often took a form unfamiliar to a generation of soldiers raised on World War II films that depicted war as righteous and tempered depictions of its violence. In Vietnam, American soldiers waged prolonged, bloody fights for terrain that was soon abandoned. They often described enemies and allies as indistinguishable. Infantry patrols embarked on long, aimless marches in the hope of drawing fire from hidden guerrillas. “Free-fire zones” and “strategic hamlets”, designations that labeled as enemies anyone who did not evacuate from certain areas, placed civilians in the path of war.

And because success was often measured in the number of people killed, rather than in terrain held, a mix of circumstances in Vietnam created a situation in which violence against civilians, mutilation of bodies, souvenir collecting, sexual violence, and other war crimes were not just isolated incidents but ubiquitous features of war that permeated the chain of command.

The United States and its people had understood the wars of the first half of the century as shared civil projects, but the Vietnam War undermined this notion. When the commitment of soldiers, bombs, and money failed to produce decisive victories in Southeast Asia, civilians at home grew increasingly disenchanted with the war, helping to foster the narrative of abandonment that white power activists such as Beam would later exploit.

Mobilizations of protest in the United States, particularly the mass antiwar movement, openly questioned the war’s morality by critiquing American involvement as an imperialist exercise. Television broadcasts of wartime violence created what the writer Susan Sontag called a “new tele-intimacy with death and destruction.” Many returning veterans denounced the quagmire of war both in the streets and in the halls of government, and journalists documented wartime atrocities. As the war dragged on, victory in the realm of public perception seemed less and less possible.

*

from

Bring the War Home. The White Power Movement and Paramilitary America

by Kathleen Belew

get it at Amazon.com

FINDING THE MONEY, Modern Monetary Theory – Bryan Gould.

Most of the money in our economy sits in bank accounts, and a large proportion of that money is created by the banks when they make loans, usually on mortgage.

Money, in a developed economy, is what the government says it is.

Governments all around the world have in recent years pursued policies of “quantitative easing” on a very large scale, and “quantitative easing” is just another way of describing the creation of new money.

So, the chickens are coming home to roost, and with a vengeance. The tragedy for the new government is that the chickens were bred and raised by the previous government, and are only now flying in, in large numbers and with hefty price tags.

We are now getting some idea of the price that has to be paid for those “business-friendly” policies that were celebrated for their success in producing a “surplus” (at least for the government).

That price includes large numbers of underpaid public servants (nurses, teachers, midwives, care workers, Inland Revenue workers) and underfunded public services (health care, schools, keeping our water and rivers clean, and bio-security at our borders). The bio-security failure alone will cost the current government around $900 million, the amount awarded by the courts for the previous government’s negligence in allowing Psa to decimate the kiwifruit industry (and that’s to say nothing of the cost of the Mycoplasma bovis outbreak).

Through no fault of its own, the new government is having to pay up for the mess made by its predecessor, and that costs money that cannot, it seems, be easily found. Every dollar paid to clean up the mess is said to be a dollar less for the government’s real aims: to improve our public services, to rescue our environment, to save families from poverty, and to provide decent housing for everyone.

But is that really the case? There may be other shortages (labour or land, skills or technology, or materials), but a shortage of money should not be one of them. How do we know that? Because, as an increasing number of experts recognise, and as our own experience teaches us, the government of a sovereign country need never be short of money.

This is because money, in a developed economy, is what the government says it is. Indeed, it is often called fiat money because it exists only by the say-so of the government and, as the economist Ann Pettifor says, that means that “we can afford what we can do.”

Most of the money in our economy sits in bank accounts, and a large proportion of that money is created by the banks when they make loans, usually on mortgage. The fact that the commercial banks create over 90% of the money in circulation out of nothing is still disputed by some (including by those who should know better) but is now attested to by the world’s central banks, by top monetary economists (such as Lord Adair Turner, former Chair of the UK’s Financial Services Authority and a leading advocate of “helicopter money”) and by leading economic journals such as the Financial Times and The Economist.
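To make the “loans create deposits” point concrete, here is a minimal toy sketch (my own illustration, not Gould’s model; the bank and the figures are invented): when a bank makes a loan, it records a loan asset and simultaneously credits the borrower’s account, so deposit money appears that did not exist before.

```python
# Toy illustration (not from the article; the bank and the figures are hypothetical):
# a commercial bank creating deposit money by making a loan.

class ToyBank:
    def __init__(self):
        self.loans = 0      # assets: the bank's claims on borrowers
        self.deposits = 0   # liabilities: money sitting in customers' accounts

    def make_loan(self, amount):
        # Double-entry bookkeeping: a new loan asset is matched by a new
        # deposit credited to the borrower. The deposit is newly created money.
        self.loans += amount
        self.deposits += amount

bank = ToyBank()
print("Deposits before the mortgage:", bank.deposits)   # 0
bank.make_loan(300_000)                                  # a hypothetical mortgage
print("Deposits after the mortgage: ", bank.deposits)   # 300000
```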

This raises the question: if the banks are allowed to create money out of nothing (and then to charge interest on it), why should governments be inhibited about doing so? And indeed, they are not so inhibited. Governments all around the world have in recent years pursued policies of “quantitative easing” on a very large scale, and “quantitative easing” is just another way of describing the creation of new money.

The money created in this way has been directed to building up the balance sheets of the banks in the wake of the Global Financial Crisis, but there is no reason why it should not be applied to other (and more productive) purposes, as it has been in many countries, including New Zealand, in the past. Japan, for example, both today and immediately after the Second World War, used this technique to get its economy moving and to build the strength of its manufacturing industry; in doing so, it followed the precepts of the great Japanese economist Osamu Shimomura, who is virtually unknown in the West.

The Chinese government today follows similar policies. President Roosevelt in the US did likewise, before the US entered the Second World War, so as to build the strength of American industry and military capability; and, in New Zealand, Michael Joseph Savage authorised the Reserve Bank to issue interest-free credit in the 1930s so as to take us out of recession and finance the building of thousands of state houses.

All that inhibits our current government from using this technique is the fear that some will disapprove and regard it as taking risks with inflation. But, as John Maynard Keynes observed, “there may be good reasons for a shortage of land but there are no good reasons for a shortage of capital.” He went on to say that, if an increase in the money supply is applied to productive purposes so that output is increased, it cannot be inflationary.

As the new Labour-led government faces financial constraints not of its own making, why not emulate Michael Joseph Savage and authorise the issuing of interest-free credit to be applied to investment in stimulating new production? The Provincial Growth Fund would seem to be an ideal vehicle; funding investment in new infrastructure in this way would free up financial resources that could then be applied to current expenditure, such as paying the nurses and teachers what they deserve.

Our natural world is disappearing before our eyes. We have to save it – George Monbiot.

The creatures we feared our grandchildren wouldn’t see have vanished: it’s happened faster than even pessimists predicted.

Our use of natural resources has tripled in 40 years. The great expansion of mining, logging, meat production and industrial fishing is cleansing the planet of its wild places and natural wonders.

It felt as disorienting as forgetting my pin number. I stared at the caterpillar, unable to attach a name to it. I don’t think my mental powers are fading: I still possess an eerie capacity to recall facts and figures and memorise long screeds of text. This is a specific loss. As a child and young adult, I delighted in being able to identify almost any wild plant or animal. And now it has gone. This ability has shrivelled from disuse: I can no longer identify them because I can no longer find them.

Perhaps this forgetfulness is protective. I have been averting my eyes. Because I cannot bear to see what we have done to nature, I no longer see nature itself; otherwise, the speed of loss would be unendurable. The collapse can be witnessed from one year to the next.

The swift decline of the swift (down 25% in five years) is marked by the loss of the wild screams that, until very recently, filled the skies above my house. My ambition to see the seabird colonies of Shetland and St Kilda has been replaced by the intention never to visit those islands during the breeding season: I could not bear to see the empty cliffs, where populations have crashed by some 90% in the past two decades.

I have lived long enough to witness the vanishing of wild mammals, butterflies, mayflies, songbirds and fish that I once feared my grandchildren would not experience: it has all happened faster than even the pessimists predicted. Walking in the countryside or snorkelling in the sea is now as painful to me as an art lover would find visits to a gallery, if on every occasion another old master had been cut from its frame.

The cause of this acceleration is no mystery. The United Nations reports that our use of natural resources has tripled in 40 years. The great expansion of mining, logging, meat production and industrial fishing is cleansing the planet of its wild places and natural wonders. What economists proclaim as progress, ecologists recognise as ruin.

This is what has driven the quadrupling of oceanic dead zones since 1950; the “biological annihilation” represented by the astonishing collapse of vertebrate populations; the rush to carve up the last intact forests; the vanishing of coral reefs, glaciers and sea ice; the shrinkage of lakes, the drainage of wetlands. The living world is dying of consumption.

We have a fatal weakness: failure to perceive incremental change. As natural systems shift from one state to another, we almost immediately forget what we have lost. I have to make a determined effort to remember what I saw in my youth. Could it really be true that every patch of nettles, at this time of year, was reamed with caterpillar holes? That flycatchers were so common I scarcely gave them a second glance? That the rivers, around the autumn equinox, were almost black with eels?

Others seem oblivious. When I have criticised current practice, farmers have sent me images of verdant monocultures of perennial ryegrass, with the message: “Look at this and try telling me we don’t look after nature.”

It’s green, but it’s about as ecologically rich as an airport runway.

One reader, Michael Groves, records the shift he has seen in the field beside his house, where the grass that used to be cut for hay is now cut for silage. Watching the cutters being driven at great speed across the field, he realised that any remaining wildlife would be shredded. Soon afterwards, he saw a roe deer standing in the mown grass. She stayed throughout the day and the following night. When he went to investigate, he found her fawn, its legs amputated. “I felt sickened, angry and powerless. How long had it taken to die?” That “grass-fed meat” the magazines and restaurants fetishise? This is the reality.

When our memories are wiped as clean as the land, we fail to demand its restoration. Our forgetting is a gift to industrial lobby groups and the governments that serve them. Over the past few months I have been told repeatedly that the environment secretary, Michael Gove, gets it. I have said so myself: he genuinely seems to understand what the problems are and what needs to be done. Unfortunately, he doesn’t do it.

Gove cannot be blamed for all of the fiascos to which he has put his name. The 25-year plan for nature was, it seems, gutted by the prime minister’s office. The environmental watchdog he proposed was de-fanged by the Treasury (it has subsequently been lent some dentures by parliament). Other failures are all his own work. In response to lobbying from sheep farmers, Gove has allowed ravens, a highly intelligent and long-lived species just beginning to recover from centuries of persecution, to be killed once more in order to protect lambs. There are 23 million sheep in this country and 7,400 pairs of ravens. Why must all other species give way to the white plague?

Responding to complaints that most of our national parks are wildlife deserts, Gove set up a commission to review them. But governments choose their conclusions in advance, through the appointments they make. A more dismal, backward-looking and uninspiring panel would be hard to find.

Not one of its members, as far as I can tell, has expressed a desire for significant change in our national parks, and most of them, if their past statements are anything to go by, are determined to keep them in their sheepwrecked and grouse-trashed state.

Now the lobbyists demand a New Zealand settlement for farming after Brexit: deregulated, upscaled, hostile to both wildlife and the human eye. If they get their way no landscape, however treasured, will be safe from broiler sheds and mega dairy units, no river protected from runoff and pollution, no songbird saved from local extinction.

The merger between Bayer and Monsanto brings together the manufacturer of the world’s most lethal pesticides with the manufacturer of the world’s most lethal herbicides. Already the concentrated power of these behemoths is a hazard to democracy; together they threaten both political and ecological disaster. Labour’s environment team has scarcely a word to say about any of it. Similarly, the big conservation groups have gone missing in inaction.

We forget even our own histories. We fail to recall, for example, that the 1945 Dower report envisaged wilder national parks than we now possess, and that the conservation white paper the government issued in 1947 called for the kind of large-scale protection that is considered edgy and innovative today. Remembering is a radical act.

That caterpillar, by the way, was a six-spot burnet: the larva of a stunning iridescent black and pink moth that once populated my neighbourhood.

I will not allow myself to forget again: I will work to recover the knowledge I have lost. For I now see that without the power of memory, we cannot hope to defend the world we love.

As Israelis, we call on the world to intervene on behalf of the Palestinians – Ilana Hammerman and David Harel * The Biggest Prison on Earth. The History of the Occupied Territories – Ilan Pappe.

We’re patriotic citizens but are horrified by the escalating tensions in our country: we fear for those who live here.

Israeli courts are in the process of legitimising the destruction of entire villages, and the Knesset is passing new laws that steadily decrease the ability of the courts to have a say at all.

The state of Israel is facing a catastrophic situation, which could, alarmingly soon, lead to extensive bloodshed. It is time for the international community to act decisively. Substantive external pressure (political, economic and cultural) offers the only chance of emerging from this impossible situation before it is too late. Not a sweeping BDS-style boycott of the country, but diverse, carefully crafted acts of pressure.

We represent a group of intellectuals and cultural figures central to Israeli society, several of whom are world renowned in their fields. We are patriotic Israeli citizens who love our country and who contribute tirelessly to Israeli science and culture, and to that of the world at large. We fully intend to stay here and continue to contribute, but we are horrified by the situation and fear deeply for our lives and those of our offspring, and for the lives of the 13 million Jews and Arabs who live here and who have no other homeland.

The decision to direct our plea to the outside world is not taken lightly, and we do so with a heavy heart. The pressure we believe is needed must come from governments and parliaments, of course, but also from civil society, individuals and establishments.

Ever since 1967, not a single Israeli government has put a stop to the expansion of settlements in the occupied West Bank. Moreover, in recent years, the official and openly stated ideological policy of the elected Israeli government has it that this land, from the Mediterranean to the Jordan river, belongs in its entirety to the Jewish people, wherever they may be.

In the spirit of this ideology, the processes involving oppression, expulsion and ethnic cleansing of the Palestinians living in the West Bank are broadening and deepening. This includes Jerusalem, too, which was annexed by Israel in 1967, and the borderlines of which extend almost from Bethlehem in the south to Ramallah in the north. Israeli courts are in the process of legitimising the destruction of entire villages, and the Knesset is passing new laws that steadily decrease the ability of the courts to have a say at all. Others legitimise the additional expropriation of private Palestinian land in favour of the settlements built on them. These acts of one-sided expropriation violate those parts of international law that protect civilians of occupied territories, and some are even in violation of Israeli law.

For years the international community has been talking about a solution based on separate Israeli and Palestinian states coexisting in peace and security. But current Israeli policy renders this impossible. During the 51 years of military rule on the West Bank, Israel has taken over large quantities of land, and has placed around 600,000 Israeli citizens there in hundreds of settlements. It supplies them with roads, water and electricity, has built and financed their health, education and cultural institutions, and has given them the same civil and political rights enjoyed by citizens residing within its sovereign territory.

In contrast, Israel is squeezing the living space of Palestinian residents, who enjoy no civil or political rights. With the aid of laws, special regulations and military orders it shuts them out of the areas it has allotted to its citizens and for its military training activities. It delineates and then expropriates their private and public land on the basis of rules it sets down for the sole benefit of its own citizens. It confines their villages by surrounding them with fences and barriers, destroys houses and refuses to allow them to expand; it imposes collective punishments, detains thousands of men, women and minors, tries them in a military court system and imprisons them in its sovereign territory.

Since all these actions are being carried out in violation of international law, the resulting situation is no longer just an internal Israeli issue. The institutions of the international community have taken many decisions intended to curb these actions, but none has ever been accompanied by enforcement mechanisms.

And so a destructive, violent and explosive reality is becoming the norm in these areas. We, who are located in the midst of this reality, believe the international community must help, since that community alone is responsible for enforcing compliance with its treaties and with the decisions of its institutions, and since in the current circumstances only it can do so.

Never have these issues been as clear cut and as urgent as they are today: if peace is not established in this part of the world very soon, an area that has become a timebomb of national and religious tensions, there will be no future and no life for us or the Palestinians.

Ilana Hammerman is an Israeli writer and translator.

David Harel is Vice-president of the Israel Academy of Sciences and Humanities

*

See also:

The Biggest Prison on Earth. The History of the Occupied Territories

by Ilan Pappe

The ‘Shacham Plan’, ‘The Organization of Military Rule in the Occupied Territories’.

“We will change the world, starting from the very beginning.” Building Babies’ Brains. Criança Feliz, Brazil’s audacious plan to fight poverty – Jenny Anderson * Advancing Early Childhood Development: from Science to Scale – The Lancet * A groundbreaking study offers undeniable proof that the fight against inequality starts with moms – Jenny Anderson.

“How can we most dramatically improve the quality of life for our citizens, their health, their education? The answer to that question lies in starting at the beginning, at pregnancy, and in the first few years of a child’s life.” Osmar Terra

Decades of groundbreaking research shows that the love and sense of safety experienced by a baby directly impacts how the child’s brain is wired. Adversity, especially persistent, stress-triggering adversity like neglect and abuse, hampers that development, and can result in poorer health, educational attainment, and early death.

“Children who experience profound neglect early in life, if you don’t reverse that by the age of two, the chance they will end up with poor development outcomes is high. The strongest buffer to protect against that? A parent, or caring adult.” Charles Nelson

The best investment a policymaker can make is in the earliest years of childhood, because that’s when intervention has the highest payoffs. Strong biological, psychosocial, and economic arguments exist for intervening as early as possible, starting from and even before conception, to promote, protect, and support children’s development.

Studies have found that children whose mothers received coaching made significant developmental gains, and not just in the short term. Twenty-two years later, the kids from one group who had received those home visits as young children not only had higher scores on tests of reading, math, and general knowledge, they had stayed in school longer. They were less likely to exhibit violent behavior, less likely to experience depression, and had better social skills. They also earned 25% more on average than a control group of kids whose mothers had not received the coaching.

Osmar Terra is a tall man with a deep voice and an easy laugh, one that disguises the scale of his ambition to transform Brazilian society. A federal representative for nearly two decades, he is the driving force behind the world’s biggest experiment to prove that teaching poor parents how to love and nurture their infants will dramatically influence what kind of adults they become, and give Brazil its best shot at changing its current trajectory of violence, inequality, and poverty.

Terra, aged 68, first became obsessed with the question of how humans develop nearly 30 years ago. As a cardiologist in the 1990s, he would read endless research papers about the neuroscience of early childhood. When he entered politics, becoming mayor of Santa Rosa in Rio Grande do Sul in 1992, he continued to grapple with the question, even studying for a master’s degree in neuroscience. The science, he believed, should lead to smart policy. As a doctor and a manager, a mayor and a state health secretary, he was always trying to figure out how to tackle poverty head-on. “In every single activity I always ask myself, ‘What is the public policy that can be more transformative?’” he says. “How can we most dramatically improve the quality of life for our citizens, their health, their education?”

The answer to that question, he came to realize, lay in starting at the beginning, at pregnancy, and in the first few years of a child’s life.

Decades of groundbreaking research shows that the love and sense of safety experienced by a baby directly impacts how the child’s brain is wired. Adversity, especially persistent, stress-triggering adversity like neglect and abuse, hampers that development, and can result in poorer health, educational attainment, and early death. While science underpins his mission, Terra’s palpable passion for the topic and his skill at politicking eventually led him to create Criança Feliz, a highly ambitious parent coaching program he helped launch in 2017 to try and reach four million pregnant women and children by 2020.

Under Criança Feliz, an army of trained social workers, a sort of national baby corps, are dispatched to the poorest corners of Brazil. Traveling by boat, sometimes battling crocodiles and floods, by foot, by car, by truck and by bus, these social workers go to people’s homes to show them how to play, sing, and show affection to their infants and young children. They explain to parents why this matters:

Emotional safety underpins cognitive growth. Intelligence is not fixed, but formed through experience.

Parent coaching, and specifically, home visiting, is not new. The most famous study, which took place in Jamaica in the 1970s, showed that well trained home visitors supporting poor mothers with weekly visits for two years led to big improvements in children’s cognition, behavior, and future earnings. One group of infants in that program who received coaching in their earliest years earned 25% more than a control group more than 20 years later.

But Brazil’s ambition is audacious. No city or country has ever attempted to reach so many people in such a short amount of time. (The largest program doing this now is probably in Peru, reaching about 100,000 families; Criança Feliz is already reaching 300,000.) “They are raising the bar for what is possible nationally,” says Jan Sanderson, the former deputy minister of children from Manitoba, Canada, who is an expert in home visiting and recently traveled to observe the program.

Just how Brazil, a massive country with endemic poverty and grating inequality, came to embrace parent coaching as the next frontier in combating poverty is a story of Terra’s political will, the strategic savvy of a few foundations, the pivotal role of a Harvard program, and the compassion of a growing group of unlikely allies, from communists to far-right politicians. Talking to lawmakers in Brazil can feel like wandering around a neuroscience convention: one senator from the south can’t stop talking about working memory, while a mayor from the northern town of Boa Vista in Roraima state is fixated on synapse connection.

At least 68 senators and congresspeople, judges, and mayors have converted to the cause, becoming evangelical in their focus on early childhood development.

“I believe that this is the solution, not only for Brazil, but for any country in the world in terms of security, public security, education, and health care,” says José Medeiros, a senator from the state of Mato Grosso who heads the parliamentary committee on early childhood development. “It’s a cheap solution.”

Terra’s claims are more dramatic. “We will change the world, starting from the very beginning.”

Those words are hardly surprising coming from the man whom Ely Harasawa, Criança Feliz’s director, calls the program’s “godfather.” But the devil, of course, is in the details, and in Terra and his allies’ ability to steer a course through some rather treacherous political terrain.

Criança Feliz in action

On a hot day in May, Adriana Miranda, a 22-year-old accounting student, visits Gabriela Carolina Herrera Campero, also 22, who is 36 weeks pregnant with her third child. Campero arrived in Brazil less than a year ago from Venezuela, fleeing with her husband and two children from that country’s financial collapse and ensuing chaos. She lives in Boa Vista, a city in the north of Brazil where 10% of the population are estimated to be refugees.

The two women greet each other warmly and start chatting, in spite of the fact that Miranda is speaking in Portuguese and Campero in Spanish. They sit together on plastic chairs on a concrete patio as Miranda goes through a checklist of questions about the pregnancy. Has Campero been to her prenatal visits? (Yes.) How is she feeling? (Hot.) Is she drinking enough water? (Yes.) And walking? (When it’s not too hot.) Is she depressed or anxious? (No, but worried, yes.) Does she feel supported by her husband? (Yes.) How is she sleeping and what kinds of foods is she eating? (She’s not sleeping well because she always has to pee, and she is eating a lot of fruit.)

Miranda moves on to talking with Campero about attachment, how to create a strong bond with a baby in utero, and also once the baby is born. Does she know that at five months, the baby can hear her and that her voice will provide comfort to the baby when it is born?

“It’s important the baby feel the love we are transmitting. When he is in distress, he will know your voice and it will calm him,” says Miranda.

It’s a topic they have discussed before. Campero is eager to show what she has learned about the baby. (A part of the program requires that visitors check for knowledge.) “It has five senses, and if I talk, he will know my voice,” she says. “The baby will develop more.” They discuss the importance of cuddling a baby and being patient.

Having a baby in the best of circumstances can be challenging. As an impoverished refugee, in a new country, it can be utterly overwhelming.

I ask Campero, in Spanish, whether the program has been helpful. After all, she already has two kids. Doesn’t she know what to expect? She starts to cry. “They have helped me emotionally,” she says. “She has taught me so many things I didn’t know.” For example, she didn’t know to read to a baby, or that her baby could hear her in utero. Her son used to hit her belly; he now sings songs to the baby because she explained to him what she learned from Miranda. “I feel supported,” she tells me.

Many people, rich and poor alike, have no idea what infants are capable of. Psychologists and neuroscientists believe they are creative geniuses, able to process information in far more sophisticated ways than we ever knew. But for that genius to show itself, the baby needs to feel safe and loved and to have attention.

Medeiros explains how he viewed parenting before he went to the Harvard program.

“I raised my kids as if I were taking care of a plant,” he recalls. “You give them food, you take care of them.” He says he did the best he could, but “I did not have all this information. If I had encouraged them, stimulated them more, I would have been able to contribute much more to their development.”

He is hardly the exception. A 2012 nationally representative survey in Brazil asked mothers, 5200 of whom were college educated, what things were most important for the development of their children up to three years of age. Only 19% mentioned playing and walking, 18% said receiving attention from adults, and 12% picked receiving affection. “So playing, talking to the child, attachment, it’s not important for more than 80% of the people who are interviewed,” says Harasawa, the director of Criança Feliz.

Criança Feliz is part of Brazil’s welfare program for its poorest citizens, called Bolsa Familia. Started 15 years ago, the welfare program is rooted in a cash transfer system that makes payments contingent on kids getting vaccines and staying in school, and pregnant mothers getting prenatal care. Vaccination rates in Brazil exceed 95% and primary school enrollment is near universal. Originally derided, and still criticized by some in Brazil as a handout program for the poor, Bolsa Familia is nevertheless being replicated worldwide.

But a powerful coterie of Brazil’s political leaders believe it’s not enough. Cash transfers alleviate the conditions of poverty, but do not change its trajectory.

That’s where Criança Feliz comes in. The program is adapted from UNICEF and the World Health Organization’s Care for Child Development parent coaching program. Trained social workers visit pregnant women every month and new parents once a week for the first three years of a child’s life. Sessions last about an hour. The goal is not to play with the baby or train the parent, but to help parents have a more loving relationship with their children. The program costs $20 per child per month. The ministry of social development allocated $100 million in 2017 and $200 million in 2018.
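As a rough back-of-envelope check on those figures (my own illustration, not an official costing; it ignores training, travel and administration and takes the reported $20 per child per month at face value), the 2018 allocation would cover roughly 830,000 children for a full year of visits:

```python
# Back-of-envelope illustration using the figures reported above
# (not an official costing; overheads are ignored).
cost_per_child_per_month = 20           # USD, as reported
budget_2018 = 200_000_000               # USD allocated in 2018

annual_cost_per_child = cost_per_child_per_month * 12            # $240 per child-year
children_covered = budget_2018 / annual_cost_per_child
print(f"Roughly {children_covered:,.0f} child-years of visits")  # ~833,333
```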

Cesar Victora, an epidemiology professor at the Federal University of Pelotas in Brazil, will conduct a three-year randomized controlled trial comparing kids in the program with kids who are not, on measures of cognition, attachment, and motor development. Caregivers will be evaluated to see what they have learned about stimulation and play.

Criança Feliz neither pities poverty nor romanticizes it. It recognizes that low income people often lack information about how to raise their children and offers that information up, allowing parents to do what they will with it. “It’s one thing to say ‘read to your baby twice a day,”’ says Sanderson. “It’s another thing to say, ‘when your baby hears your voice, there are little sparks firing in his brain that are helping him get ready to learn.’”

Of course, it’s a delicate balance between respecting the right of a family to raise their children the way they see fit and offering information and evidence that could help the child and the family. “You’re in their home, you can’t interfere,” says Teresa Surrita, mayor of Boa Vista. “But you are there to change their mindset.”

Liticia Lopes da Silva, 23, a home visitor from Arujá, outside São Paulo, says that the initial visits with families can be hard. “They don’t understand the importance of stimulation and they are resistant to the idea of playing with children,” she says. “They are raised a different way, their parents did not have this interaction with them.” The issue is not just that some mothers don’t play with their babies; some barely look at them. Others treat the visitors as nannies, leaving them to play with the child, thus thwarting the very purpose of the visit, the interaction between parent and child.

But after a few weeks of watching a social worker sit on the floor, playing with the child, and talking with her about the baby’s development, the mothers sometimes join in. “It’s amazing to see the families evolve,” says one home visitor in Arujá. “Three to four months after, you see the difference in how the mother plays with the child, in a different way. The whole family gets involved.” Fathers often get involved, and many families start to ask the visitors to come more often, although the visitors cannot oblige.

When a home visitor named Sissi Elisabeth Gimenes visits a family in Arujá, she brings a color wheel painted onto a piece of recycled cardboard, along with painted clothespins. She asks Agatha, age three, to put a brown clip on the brown color.

Agatha doesn’t know her colors and gets very shy. Sissi encourages Agatha while chatting with her mother, Alda Ferreira, about how play benefits brain development. She quietly models how to use encouragement and praise, praising Agatha for finding white, “the color of clouds,” as the girl slowly gets more confident and gets off her mother’s lap to play.

The activity is intentional. The clips hone Agatha’s fine motor skills as well as her cognitive ones; the interaction with her mother helps create the synaptic connections that allow her brain to grow and pave the way to more effective learning later on. Alda tells us her daughter knows many things that her older daughter did not at the same age.


The process changes the social workers as well. One social worker, who has a three-year-old herself, says that as parents we think we know everything. “But I knew nothing.” In Arujá, where the home visitors are all psychology students at the local university, working with the program as part-time interns, many admitted to being shocked at seeing the reality of what they’d been taught in the classroom. Poverty looks different off the page. “We are changing because we are out of the bubble,” said one. “Theory is very shallow.”

As we leave Campero’s house, I ask Miranda what she thought of the visit. She too starts to cry. “Gabriela recognizes the program is making a difference in her life,” she says, embarrassed and surprised at her own emotions. Campero had told Miranda a few weeks earlier that she was worried because the baby was not moving. Miranda suggested that Campero try singing to the child in her womb; the baby started to move.

The man who made it happen

In 2003, as secretary of health in Rio Grande do Sul, Terra created Programa Primeira Infância Melhor (the Better Early Childhood Development Program, or PIM), a home visiting program based on Educa a tu Hijo, a very successful case study from Cuba. Results have been mixed, but Terra saw the impact it had on families and communities. He set his sights on expanding the program nationally.

One of the most persuasive arguments for the program, he knew, was the science. But he had to build votes for that science. In 2011, he started lobbying everyone he could to try and get financial backing from congress to fund a week-long course that he helped create at Harvard University’s Center for the Developing Child. He thought if lawmakers, who would be attracted to the prestige of a course at Harvard, could learn from the neuroscientists and physicians there, they might also become advocates for the policy.

“Anybody in the corridor he sees, it’s a hug, it’s a tap on the chest, and then it’s early childhood development,” says Mary Young, director of the Center for Child Development at the China Development Research Foundation and an advisor to Criança Feliz. “He’s got the will and the skill.”

One convert, Michel Temer, who was vice president from 2011 and became president in 2016 when his boss was impeached, tapped Terra to be minister of social development. Soon after, Criança Feliz was born. But trying to get Terra to talk about legislation can be a challenge. What he wants to talk about are neurons, synapses, and working memory. Did I know that one million new neural connections are formed every second in the first few years of life?

And that those neural connections are key to forming memories?

“The number of connections depends on the stimuli of the environment,” he says. And the environment of poverty is relentlessly unkind to the stimuli available to children.

Attachment, he explains, is key, not just psychologically, but neurobiologically. “If a child feels emotionally safe and secure and attached they explore the world in a better way. The safer they feel, the safer their base, the faster they learn,” he says.

The first 1,000 days

Over the past 20 years, scientists have focused on the importance of the first 1,000 days of life. Brains build themselves, starting with basic connections and moving to more complex ones. Like a house, the better the foundation of basic connections, the more complex are the ones that can be built on top. In an infant’s earliest days, it’s not flashcards that create their brains, but relationships, via an interactive process that scientists call “serve and return.” When an infant or young child babbles, looks at an adult, or cries, and the adult responds with an affectionate gaze, words, or hugs, neural connections are created in the child’s brain that allow them to later develop critical tools like self-control and communication.

If kids do not experience stimulation and nurturing care, or if they face repeated neglect or abuse, the neural networks do not organize well. And that, says Charles Nelson, a pediatrics professor at Harvard Medical School, can affect the immune system, the cardiovascular system, the metabolic system, and even alter the physical structure of the brain. “Children who experience profound neglect early in life, if you don’t reverse that by the age of two, the chance they will end up with poor development outcomes is high,” he says.

The strongest buffer to protect against that? A parent, or caring adult.

The case for early childhood as policy was elevated by Nobel Prize winning economist James Heckman. As founder of the Center for the Economics of Human Development at the University of Chicago, he demonstrated the economic case for why the best investment a policymaker can make is in the earliest years of childhood, because that’s when intervention has the highest payoffs.

“The highest rate of return in early childhood development comes from investing as early as possible, from birth through age five, in disadvantaged families,” Heckman said in 2012. His work showed that every dollar invested in a child over those years delivers a 13% return on investment every year. “Starting at age three or four is too little too late, as it fails to recognize that skills beget skills in a complementary and dynamic way,” he said.
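To get a feel for what a 13% annual return means when compounded (my own illustrative arithmetic on the headline figure, not Heckman’s calculation), a dollar growing at that rate roughly doubles every five to six years:

```python
# Illustrative compounding of the reported 13% annual return
# (my own arithmetic on the headline figure, not Heckman's model).
rate = 0.13
for years in (5, 10, 18):
    value = (1 + rate) ** years
    print(f"$1 compounding at 13% is worth about ${value:.2f} after {years} years")
# roughly $1.84 after 5 years, $3.39 after 10, $9.02 after 18
```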

More than 506 Brazilian legislators, judges, mayors, state politicians, and prosecutors have attended the Harvard course that Terra helped set up. There, Jack Shonkoff, a pediatrician and professor, explains what infants need to thrive, what toxic stress does to a child and how to build resilience. The attendees are put in groups, maybe a state senator from one state with council members from municipalities in the same state, to spend the week on a project; in the next two-and-a-half months, they finish it with the help of a technical facilitator.

“It’s a little facilitation and a little manipulation,” says Eduardo Queiroz, outgoing head of the Fundação Maria Cecilia Souto Vidigal, a foundation which has played an integral role in supporting and shepherding Criança Feliz. “We create a community.”

It costs $8,800 to attend the program. Some pay their own way. Congress pays for lawmakers to go, and the Fundação Maria Cecilia Souto Vidigal funds between 10 and 12 scholarships a year. The fellowship does not require the participants to do anything with their knowledge. But many have. Surrita, who is in her fifth term as mayor of Boa Vista, focused her early governing efforts on working with teens, tackling drugs and gangs as a way to help them. After her week at Harvard, she changed her approach, deciding to make Boa Vista the “early childhood development capital of Brazil.” Investing in young children, she argues, will mean not so many problems with teens:

“After taking this course at Harvard on ECD, I realized how important it would be for us to work with kids from pregnancy up to six years old, to develop them mentally and cognitively. That way I realized it would be possible for us to improve the performance of teenagers’ lives by working on them when they’re kids.”

Obstacles and opportunities

Criança Feliz faces two significant threats: the prospect of being shut down, and the challenges created by its own ambition.

Although the Legal Framework for Early Childhood Development, passed in 2016, underpins Criança Feliz, the program currently exists as a decree of the president. Of the last three presidents, one is in jail, one was impeached and the current one, Temer, faces criminal charges. With approval ratings of around 3%, Temer has decided not to run again, and the program’s supporters are worried that whoever wins the election will dismantle what the previous government has done (a common practice in Brazil). “We are concerned every day because the program is ongoing and we don’t know if the [next] president will support it,” says Ilnara Trajano, the program’s state coordinator for Roraima.

Medeiros and Terra say the solution to avoiding political death is to create a law that will automatically fund Criança Feliz at the state level, rather than relying on presidential support. Terra, who exudes confidence and optimism, is sure such a law can be passed before the October date set for the presidential elections. Others, including Harasawa, are not so sanguine. “We are in a race against time,” she says. She is working around the clock to build support one municipality at a time. She worries that not everyone thinks the government should play a role in parenting. “We are not trying to replace the family,” she says. “We are trying to support it.”

Beyond its political future, the program itself faces a host of issues. In many places, there aren’t enough skilled workers to act as home visitors. There are also the fraught logistics of getting around. In Careiro da Várzea, in Amazonas state, home visitors often travel five hours, by foot, to reach pregnant women and young children; they are tired when they arrive. In Arujá, seven home visitors share one car to visit 200 families, or nearly 30 visits each, per week. Internet services can be terrible, and wild dogs often chase the social workers.

The visitors are trained in a curriculum that tells them which materials to use, what to teach and when, and the research that underpins the guidance they give to mothers. But they need more training, and the curriculum does not always prepare them for the poverty and distress they see. Some mothers want to give up their babies; they did not want them in the first place.

Many suffer from depression. The social workers are trained to support nurturing care, but they are not mental health experts. Inevitably, turnover is high.

The evidence for the value of home visiting at scale is at once highly compelling and frustratingly imprecise. Consider the case of Colombia: From 2009 to 2011, researchers there studied 1,419 children between 12 and 24 months old to see whether coaching their mothers on interactions with their babies could help the children’s development. After 18 months, the researchers found a host of benefits. The children whose mothers had received coaching got smarter. Their language skills improved, and their home environments were judged to be more stimulating. But when researchers went back two years later, they found the children, now about five years old, had not maintained those benefits. “Two years after the intervention ended, we found no effects on children’s cognition, language, school readiness, executive functioning, or behavioral development,” the study reported. (Criança Feliz runs for a longer period of time, however.)

Governments face notoriously hard choices about where to invest their money. “Early childhood development is a really valuable investment,” says Dave Evans, an economist at the World Bank. “But so is primary education and the quality of primary education, and if you spend a dollar in one place, it’s a dollar you aren’t spending in another place.”

Samuel, Keith, and Giliane

One of the virtues of a home visiting program, compared to, say, building childcare centers, is that social workers can see what is happening inside a home: signs of domestic violence, other children in need, a mother’s depression, a father’s unemployment. They can help with kids like Samuel, who was born with cerebral palsy.

At two-and-a-half years old, Samuel loves his ball, and shrieks with delight when he is presented with a truck. He can’t stop smiling at his mother, Giliane de Almeida Trindade Dorea. She and social worker Keith Mayara Ribeiro da Silva gather around him to talk and play.

“Where is the dog? Yes! That’s the dog. Very good Samuel!” says da Silva.

The two encourage Samuel to try and stand up. He struggles. “Get up, use your legs,” says Dorea. “You are lazy. Be strong!”

Samuel ignores the women’s requests. He wants to play. They shift gears. “Where is the ball?” da Silva asks. He grabs it and plays. “He’s very smart!” she says. She and Dorea are trying to get Samuel to use one hand, which cannot open, to play with the ball and then the truck. They work together for 15 minutes to find a way to get him to use his weak hand, but he just wants to play with his dominant hand.

Dorea adores her son and plays with him patiently. But it has been hard, she says. When da Silva started to visit, Samuel could not sit up, he was quite shy and often cried. Da Silva has helped the family access the services and care that Samuel needs: a physiotherapist, an occupational therapist, an acupuncturist, and a doctor to check his hearing. These are services the government will provide, but finding them and organizing the appointments is time consuming and can be overwhelming.

Dorea says Samuel has changed since Keith has been coming. “His interaction with people, he’s totally different. He was so shy.” In fact, she says the whole family has benefited. Her older daughter also knows how to play with Samuel and loves to help. She appreciates the support. Raising a child with a disability is hard work. “The visitor is like a friend who comes every week not just for fun but also to share my concerns,” she says. Her biggest complaint about the program? “It’s too short.”

Will it survive?

There is a maxim in investing that you have to survive short-run volatility to get to the long run: you can’t make money if you don’t have any. Criança Feliz faces the same problem. Child development takes time. It is not a jobs program or a construction project, whose results voters can see.

The benefits can take years to show up, and politicians have never been known for their long-term thinking.

Alberto Beltrame, the current minister of social development, is a believer. Start early and you shape character, transforming the child into a better young adult and, eventually, creating an improved workforce, he says. You reduce violence and crime. He agrees that Bolsa Familia alone is not enough. It does not promote autonomy, or break the cycle of poverty. What is needed is a two-pronged approach: In the short term, promote training, microcredit, and entrepreneurialism to create jobs. For the medium and long term, Criança Feliz.

“We have a huge array of benefits that we are going to gain with this one program, and the cost is very, very low compared to others,” he says.

In every home we visited, mothers said they loved the support, be it information, toys, or, more often, company to share their challenges and triumphs. Priscila Soares da Silva has three children, including six-month-old Allyce, and another on the way. With Allyce, she says, she has changed her approach to parenting, setting time aside to play every day now. “You raise children your way,” she explains, cooing over Allyce. “When you see there are other visions, you see the way you did it was not so right.” She is also refreshingly honest about something all parents know: We do it better when someone is watching. “There are things we know, but we are lazy. When she comes, we are better.”

When I quietly ask her teenage daughter, who is lingering in the corner, what she thinks of the visits, she answers immediately: “She’s so much more patient,” she says of her mother. Her own takeaway: Parenting is hard, and she does not want to do it anytime soon. Priscila smiles at this, agreeing she started too soon, and noting the benefits of the program have extended beyond Allyce and the baby she will soon have. “The program got the family closer.”

Evans, from the World Bank, is watching the program closely. “I see Criança Feliz as a big, bold, gamble about which I am optimistic,” he says. “But I think the measurement and the evaluation is crucial to see if it is a model that other countries want to echo.”

If it survives the near-term political turbulence, Beltrame says it can go way beyond the poor to benefit everyone. “We are trying to make the Brazilian people realize, independent from their level of income, that stimulating children from pregnancy through the first 1,000 days of life is important,” he says. Better young people equal healthier and better adults, who are more emotionally connected and can be better citizens.

With Criança Feliz, Beltrame says, we have the “possibility of having a new destiny and future for each one of these children.”

The Lancet

Advancing Early Childhood Development: from Science to Scale

An Executive Summary for The Lancet’s Series

Overview of the Series

The 2016 Lancet Early Childhood Development Series highlights early childhood development at a time when it has been universally endorsed in the 2030 Sustainable Development Goals. This Series considers new scientific evidence for interventions, building on the findings and recommendations of previous Lancet Series on child development (2007, 2011), and proposes pathways for implementation of early childhood development at scale.

The Series emphasises “nurturing care”, especially of children below three years of age, and multi-sectoral interventions starting with health, which can have wide reach to families and young children through health and nutrition.

Key messages from the Series

– The burden and cost of inaction is high.

A staggering 43 percent of children under five years of age in low- and middle-income countries, an estimated 250 million, are at risk of suboptimal development due to poverty and stunting. The burden is currently underestimated because risks to health and wellbeing go beyond these two factors. A poor start in life can lead to poor health, nutrition, and inadequate learning, resulting in low adult earnings as well as social tensions. Negative consequences impact not only present but also future generations. Because of this poor start, affected individuals are estimated to suffer a loss of about a quarter of average adult income per year, while countries may forfeit up to twice their current GDP expenditures on health and education.

– Young children need nurturing care from the start.

Development begins at conception. Scientific evidence indicates that early childhood is not only a period of special sensitivity to risk factors, but also a critical time when the benefits of early interventions are amplified and the negative effects of risk can be reduced. The most formative experiences of young children come from nurturing care received from parents, other family members, caregivers, and community-based services. Nurturing care is characterised by a stable environment that promotes children’s health and nutrition, protects children from threats, and gives them opportunities for early learning, through affectionate interactions and relationships. Benefits of such care are lifelong, and include improved health, wellbeing, and ability to learn and earn. Families need support to provide nurturing care for young children, including material and financial resources, national policies such as paid parental leave, and provision of population-based services in a range of sectors, including health, nutrition, education, and child and social protection.

– We must deliver multi-sectoral interventions, with health as a starting point for reaching the youngest children.

Interventions, including support for families to provide nurturing care and to solve difficulties when they occur, target multiple risks to development, and can be integrated into existing maternal and child health services. Services should be two-pronged, considering the needs of the child as well as the primary caregiver, and include both care for child development and maternal and family health and wellbeing. This affordable approach is an important entry point for multi-sectoral collaborations that support families and reach very young children. Essential among these are nutrition, to support growth and health; child protection, for violence prevention and family support; social protection, for family financial stability and capacity to access services; and education, for quality early learning opportunities.

– We must strengthen government leadership to scale up what works.

It is possible to scale up projects to nationwide programmes that are effective and sustainable, as indicated by four country case studies in diverse world regions. However, government leadership and political prioritisation are prerequisites. Governments may choose different pathways for achieving early childhood development goals and targets, from introducing transformative government-wide initiatives to progressively enhancing existing services. Services and interventions to support early childhood development are essential to ensuring that everyone reaches their potential over their life course and into the next generation, the vision that is core to the Sustainable Development Goals.

Risks to early childhood development remain high

Updated definitions of stunting and extreme poverty and improved source data were used to re-estimate the number of children under 5 years in low- and middle-income countries who are at risk of not reaching their developmental potential. Between 2004 and 2010, this number declined from 279 million (51 percent of children in 2004) to 249 million (43 percent of children in 2010), with the highest prevalence in sub-Saharan Africa (70 percent in 2004 and 66 percent in 2010). An illustrative analysis from 15 countries with available Multiple Indicator Cluster Surveys in 2010 or 2011 demonstrates the implications of additional risks to children’s development beyond poverty and stunting, including low maternal schooling (not having completed primary school) and child physical abuse by either parents or caregivers (severe punishment of children aged 2 to 5 years, such as hitting a child as hard as possible, or with a belt or stick). Estimates of children at risk increase dramatically when low maternal schooling and this kind of physical abuse are added, from 62.7 percent (exposed to risks of stunting or extreme poverty) to 75.4 percent, with large disparities among subnational social and economic groups.

Global commitments to early childhood development are growing

Since 2000, the rapid increase in publications on the topic of early childhood development surpassed the general trend for health sciences publications. However, only a few of the publications reported on interventions.

The number of countries with national multi-sectoral early childhood development policies increased from seven in 2000 to 68 in 2014, of which 45 percent were low- and middle-income countries. There has also been substantial investment in early childhood development during that time period. For example, since 2000 the Inter-American Development Bank has approved m