Category Archives: Science & Research

MOONSHOT FOR BIOLOGY. $5bn project to map DNA of every animal, plant and fungus – Hannah Devlin * The Earth BioGenome Project.

International sequencing drive will involve reading genomes of 1.5m species.
The total volume of biological data that will be gathered is expected to be on the “exascale”, more than that accumulated by Twitter, YouTube or the whole of astronomy.

An ambitious international project to sequence the DNA of every known animal, plant and fungus in the world over the next 10 years has been launched.

Described as “the next moonshot for biology”, the Earth BioGenome Project is expected to cost $4.7bn (£3.6bn) and involve reading the genomes of 1.5m species.

Prof Harris Lewin of the University of California, Davis, who chairs the project, said it could be as transformational for biology as the Human Genome Project, which decoded the human genome between 1990 and 2003.

. . . The Guardian

Powerful advances in genome sequencing technology, informatics, automation, and artificial intelligence have propelled humankind to the threshold of a new beginning in understanding, utilizing, and conserving biodiversity. For the first time in history, it is possible to efficiently sequence the genomes of all known species, and to use genomics to help discover the remaining 80 to 90 percent of species that are currently hidden from science.


The Earth BioGenome Project (EBP), a moonshot for biology, aims to sequence, catalog and characterize the genomes of all of Earth’s eukaryotic biodiversity over a period of ten years.


Create a new foundation for biology to drive solutions for preserving biodiversity and sustaining human societies.

. . . Earth BioGenome Project

The Man Who Drew Neurons – Loretta G. Breuning Ph.D. – Advice for a Young Investigator – Santiago Ramón y Cajal.

There is every reason to believe that when the human intellect ignores reality and concentrates within, it can no longer explain the simplest inner workings of life’s machinery or of the world around us.

By abandoning the ethereal realm of philosophical principles and abstract methods we can descend to the solid ground of experimental science, as well as to the sphere of ethical considerations involved in the process of inquiry. In taking this course, simple, genuinely useful advice for the novice can be found.

Cajal had discovered the synapse and with fundamental insight went on to describe the organization of all the major neural systems in terms of chains of independent neurons and the concept of functional polarity (unidirectional information flow in circuits).

The man who discovered the synapse, Santiago Ramón y Cajal, also wrote a book of career advice, “Advice for a Young Investigator.”

He made huge contributions to our knowledge of the brain. He discovered that neurons transmit electricity in only one direction. He wrote a textbook on the nervous system, based on his own extensive lab work and sketches, that was used to train doctors for generations.

I wanted career advice from this person.

So why does the same advice need to be repeated every generation?

Because it conflicts with our natural impulses.

Natural selection built a brain that makes careful decisions about where to invest its energy. It doesn’t like to waste effort on failed endeavors. A lion would starve to death if it kept running after gazelles that got away. This makes it hard to persist when we run into setbacks.

The brain we’ve inherited seeks safety in numbers. A gazelle that wandered off would soon be eaten alive. Thus our brain alarms us with a bad feeling when we’re isolated. This makes it hard to trust your judgment when the rest of the herd walks away.

Our brain is designed to weigh risk and reward, but it defines them with neural pathways built from the risks and rewards of our past. New information has trouble getting in unless we invest our full attention. This leaves us with less energy for other things, so we often just stick to our old risk/reward pathways.

Psychology Today

Santiago Ramón y Cajal (1852-1934) was a Spanish neuroscientist and pathologist, specializing in neuroanatomy, particularly the histology of the central nervous system. He and Camillo Golgi received the Nobel Prize in Physiology or Medicine in 1906, with Ramón y Cajal thereby becoming the first person of Spanish origin to win a scientific Nobel Prize.

His original investigations of the microscopic structure of the brain made him a pioneer of modern neuroscience. Hundreds of his drawings illustrating the delicate arborizations of brain cells are still in use for educational and training purposes.

Advice for a Young Investigator (1897)

Santiago Ramón y Cajal


Santiago Ramón y Cajal (1852-1934) is one of the more fascinating personalities in science. Above all he was the most important neuroanatomist since Andreas Vesalius, the Renaissance founder of modern biology. However, Cajal was also a thoughtful and inspired teacher; he made several lasting contributions to Spanish literature (his autobiography, a popular book of aphorisms, and reflections on old age); and he wrote one of the early books on the theory and practice of color photography. Furthermore, he was an exceptional artist, perhaps the best ever to draw the circuits of the brain, which he could never photograph to his satisfaction.

In his early thirties, Cajal wrote and illustrated the first original textbook of histology in Spain, which remained a standard throughout his lifetime.

The first draft of his unique book of practical, fatherly advice to young people in the early stages of their research careers was begun soon after moving to the chair of histology and pathological anatomy at the University of Madrid about a decade later, when he also wrote the first major review of his investigations with Camillo Golgi’s silver chromate method: New Ideas on the Structure of the Nervous System (1894).

This succinct book redefined how brain circuits had been described. In it, Cajal presented histological evidence that the central nervous system is not a syncytium or reticulum of cells as commonly believed at the time. Instead, it consists of individual neurons that usually conduct information in just one direction. The information output of the neuron is down a single axon and its branches to terminal boutons that end on or near the input side of another neuron (its cell body and dendrites). Cajal had discovered the synapse and with fundamental insight went on to describe the organization of all the major neural systems in terms of chains of independent neurons and the concept of functional polarity (unidirectional information flow in circuits). He was the first to explain in modern terms the organization of reflex and voluntary control pathways to the motor system, and this conceptual advance was the structural foundation of Sir Charles Sherrington’s modern physiological explanation of reflexes and their control.

By the time the Advice for a Young Investigator was finally published, he was beginning to synthesize the vast research that established his reputation in a three-volume masterpiece, the Histology of the Nervous System in Man and Vertebrates (1899-1904). So the Advice became a popular vehicle for Cajal to write down the thoughts and anecdotes he would give to students and colleagues about how to make important original contributions in any branch of science, and it was so successful that the third edition is still in print (in Spanish).

Part of the Advice is based on an analysis of his own success, while the rest comes from a judicious selection of wisdom from other places and other people’s lives. Nevertheless, it is obviously Cajal’s analysis of his own scientific career. As such, it is deeply embedded in contemporary Spanish culture and in the childhood of a country doctor’s son. Hard work, ambition, patience, humility, seriousness, and passion for work, family, and country were among the traits he considered essential. But above all, master technique and produce original data; all the rest will follow.

It is interesting to compare Cajal the writer and Cajal the scientist. As a distinguished author of advice, autobiography, and reflections on life, he displayed a complex mixture of the romantic, idealist, patriot, and realist. And a sense of humor is obvious in his delightful chapter here on diseases of the will, where stereotypes of eccentric scientists are diagnosed according to symptoms we have all seen, and their prognosis discussed. In stark contrast, his scientific publications are almost ruthlessly systematic, descriptive, and deductive. He once wrote that his account of nervous system structure was not based on the appearance of a nerve cell here and there, but on the analysis of millions of neurons.

Because Cajal revealed so much about his thoughts and feelings in the Advice and in his autobiography, Reflections on My Life, it is easy to see his genius as well as his flaws. He deals with many broad issues of morals, religion, and patriotism that are often avoided, invariably generate controversy, and go in and out of fashion. However, it is important to bear in mind that he was writing in the late nineteenth century to aspiring researchers in his native Spain, which at the time was not one of the scientifically and politically elite countries of Europe. Thus, some of his advice may now appear dated or irrelevant to young people in North America and Europe who enjoy relative peace, prosperity, and intellectual security. However, it may become relevant to them sometime in the future, and it still applies to many other cultures.

This translation is based on two sources, the fourth edition of Reglas y Consejos sobre Investigación Biológica (Los tónicos de la voluntad) (1916), and an English translation of the sixth edition by J.Ma. Sanchez-Perez, which was edited and annotated by C.B. Courville as Precepts and Counsels in Scientific Investigation: Stimulants of the Spirit (1951).

We had originally thought that it would be worthwhile simply to reprint the Sanchez-Perez and Courville work, but finally decided that the translation was too literal, and in some few cases inaccurate, for today’s students. Our goal has been to write a modern rather than literal translation, retaining as much flavor of the original as we could.

The fourth edition was published when Cajal was over sixty, and was never substantially revised again. The later Spanish editions have two chapters at the end that are concerned primarily with conditions in Spain at the time, and they have not been translated because of their limited relevance today. We thank Graciela Sanchez-Watts for help with translating certain difficult passages.

Larry W. Swanson Los Angeles, February 1, 1998

1 Introduction

Thoughts about general methods. Abstract rules are sterile. Need to enlighten the mind and strengthen resolve. Organization of the book.

I shall assume that the reader’s general education and background in philosophy are sufficient to understand that the major sources of knowledge include observation, experiment, and reasoning by induction and deduction.

Instead of elaborating on accepted principles, let us simply point out that for the last hundred years the natural sciences have abandoned completely the Aristotelian principles of intuition, inspiration, and dogmatism.

The unique method of reflection indulged in by the Pythagoreans and followers of Plato (and pursued in modern times by Descartes, Fichte, Krause, Hegel, and more recently at least partly by Bergson) involves exploring one’s own mind or soul to discover universal laws and solutions to the great secrets of life. Today this approach can only generate feelings of sorrow and compassion, the latter because of talent wasted in the pursuit of chimeras, and the former because of all the time and work so pitifully squandered.

The history of civilization proves beyond doubt just how sterile the repeated attempts of metaphysics to guess at nature’s laws have been. Instead, there is every reason to believe that when the human intellect ignores reality and concentrates within, it can no longer explain the simplest inner workings of life’s machinery or of the world around us.

The intellect is presented with phenomena marching in review before the sensory organs. It can be truly useful and productive only when limiting itself to the modest tasks of observation, description, and comparison, and of classification that is based on analogies and differences. A knowledge of underlying causes and empirical laws will then come slowly through the use of inductive methods.

Another commonplace worth repeating is that science cannot hope to solve Ultimate Causes. In other words, science can never understand the foundation hidden below the appearance of phenomena in the universe. As Claude Bernard has pointed out, researchers cannot transcend the determinism of phenomena; instead, their mission is limited to demonstrating the how, never the why, of observed changes.

This is a modest goal in the eyes of philosophy, yet an imposing challenge in actual practice. Knowing the conditions under which a phenomenon occurs allows us to reproduce or eliminate it at will, therefore allowing us to control and use it for the benefit of humanity. Foresight and action are the advantages we obtain from a deterministic view of phenomena.

The severe constraints imposed by determinism may appear to limit philosophy in a rather arbitrary way. However, there is no denying that in the natural sciences, and especially in biology, it is a very effective tool for avoiding the innate tendency to explain the universe as a whole in terms of general laws. They are like a germ with all the necessary parts, just as a seed contains all the potentialities of the future tree within it. Now and then philosophers invade the field of biological sciences with these beguiling generalizations, which tend to be unproductive, purely verbal solutions lacking in substance. At best, they may prove useful when viewed simply as working hypotheses.

Thus, we are forced to concede that the ”great enigmas” of the universe listed by Du Bois-Reymond are beyond our understanding at the present time. The great German physiologist pointed out that we must resign ourselves to the state of ignoramus, or even the inexorable ignorabimus.

There is no doubt that the human mind is fundamentally incapable of solving these formidable problems (the origin of life, nature of matter, origin of movement, and appearance of consciousness). Our brain is an organ of action that is directed toward practical tasks; it does not appear to have been built for discovering the ultimate causes of things, but rather for determining their immediate causes and invariant relationships. And whereas this may appear to be very little, it is in fact a great deal. Having been granted the immense advantage of participating in the unfolding of our world, and of modifying it to life’s advantage, we may proceed quite nicely without knowing the essence of things.

It would not be wise in discussing general principles of research to overlook those panaceas of scientific method so highly recommended by Claude Bernard, which are to be found in Bacon’s Novum Organum and Descartes’s Book of Methods. They are exceptionally good at stimulating thought, but are much less effective in teaching one how to discover. After confessing that reading them may suggest a fruitful idea or two, I must further confess an inclination to share De Maistre’s view of the Novum Organum: ”Those who have made the greatest discoveries in science never read it, and Bacon himself failed to make a single discovery based on his own rules.”

Liebig appears even more harsh in his celebrated Academic Discourse when he states that Bacon was a scientific dilettante whose writings contain nothing of the processes leading to discovery, regardless of inflated praise from jurists, historians, and others far removed from science.

No one fails to use instinctively the following general principles of Descartes when approaching any difficult problem: ”Do not acknowledge as true anything that is not obvious, divide a problem into as many parts as necessary to attack it in the best way, and start an analysis by examining the simplest and most easily understood parts before ascending gradually to an understanding of the most complex.” The merit of the French philosopher is not based on his application of these principles but rather on having formulated them clearly and rigorously after having profited by them unconsciously, like everyone else, in his thinking about philosophy and geometry.

I believe that the slight advantage gained from reading such work, and in general any work concerned with philosophical methods of investigation, is based on the vague, general nature of the rules they express. In other words, when they are not simply empty formulas they become formal expressions of the mechanism of understanding used during the process of research. This mechanism acts unconsciously in every well-organized and cultivated mind, and when the philosopher reflexly formulates psychological principles, neither the author nor the reader can improve their respective abilities for scientific investigation.

Those writing on logical methods impress me in the same way as would a speaker attempting to improve his eloquence by learning about brain speech centers, about voice mechanics, and about the distribution of nerves to the larynx, as if knowing these anatomical and physiological details would create organization where none exists, or refine what we already have.

It is important to note that the most brilliant discoveries have not relied on a formal knowledge of logic. Instead, their discoverers have had an acute inner logic that generates ideas with the same unstudied unconsciousness that allowed Jourdain to create prose. Reading the work of the great scientific pioneers such as Galileo, Kepler, Newton, Lavoisier, Geoffroy Saint-Hilaire, Faraday, Ampere, Bernard, Pasteur, Virchow, and Liebig is considerably more effective. However, it is important to realize that if we lack even a spark of the splendid light that shone in those minds, and at least a trace of the noble zeal that motivated such distinguished individuals, this exercise may if nothing else convert us to enthusiastic or insightful commentators on their work, perhaps even to good scientific writers, but it will not create the spirit of investigation within us.

A knowledge of principles governing the historical unfolding of science also provides no great advantage in understanding the process of research. Herbert Spencer proposed that intellectual progress emerges from that which is homogeneous to that which is heterogeneous, and that by virtue of the instability of the homogeneous, and of the principle that every cause produces more than one effect, each discovery immediately stimulates many other discoveries. However, even if this concept allows us to appreciate the historical march of science, it cannot provide us with the key to its revelations. The important thing is to discover how each investigator, in his own special domain, was able to segregate heterogeneous from homogeneous, and to learn why many of those who set out to accomplish a particular goal did not succeed.

Let me assert without further ado that there are no rules of logic for making discoveries, let alone for converting those lacking a natural talent for thinking logically into successful researchers. As for geniuses, it is well known that they have difficulty bowing to rules, they prefer to make them instead. Condorcet has noted that ”The mediocre can be educated; geniuses educate themselves.”

Must we therefore abandon any attempt to instruct and educate about the process of scientific research? Shall we leave the beginner to his own devices, confused and abandoned, struggling without guidance or advice along a path strewn with difficulties and dangers?

Definitely not. In fact, just the opposite, we believe that by abandoning the ethereal realm of philosophical principles and abstract methods we can descend to the solid ground of experimental science, as well as to the sphere of ethical considerations involved in the process of inquiry. In taking this course, simple, genuinely useful advice for the novice can be found.

In my view, some advice about what should be known, about what technical education should be acquired, about the intense motivation needed to succeed, and about the carelessness and inclination toward bias that must be avoided, is far more useful than all the rules and warnings of theoretical logic. This is the justification for the present work, which contains those encouraging words and paternal admonitions that the writer would have liked so much to receive at the beginning of his own modest scientific career.

My remarks will not be of much value to those having had the good fortune to receive an education in the laboratory of a distinguished scientist, under the beneficial influence of living rules embodied in a learned personality who is inspired by the noble vocation of science combined with teaching. They will also be of little use to those energetic individuals, those gifted souls mentioned above, who obviously need only the guidance provided by study and reflection to gain an understanding of the truth. Nevertheless, it is perhaps worth repeating that they may prove comforting and useful to the large number of modest individuals with a retiring nature who, despite yearning for reputation, have not yet reaped the desired harvest, due either to a certain lack of determination or to misdirected efforts.

This advice is aimed more at the spirit than the intellect because I am convinced, and Payot wisely agrees, that the former is as amenable to education as the latter. Furthermore, I believe that all outstanding work, in art as well as in science, results from immense zeal applied to a great idea.

The present work is divided into nine chapters. In the second I will try to show how the prejudices and lax judgment that weaken the novice can be avoided. These problems destroy the self-confidence needed for any investigation to reach a happy conclusion. In the third chapter I will consider the moral values that should be displayed, which are like stimulants of the will. In the fourth chapter I will suggest what needs to be known in preparing for a competent struggle with nature. In the fifth, I will point out certain impairments of the will and of judgment that must be avoided. In the sixth, I will discuss social conditions that favor scientific work, as well as influences of the family circle. In the seventh, I will outline how to plan and carry out the investigation itself (based on observation, explanation or hypothesis, and proof). In the eighth I will deal with how to write scientific papers; and finally, in the ninth chapter the investigator’s moral obligations as a teacher will be considered.


Advice for a Young Investigator

by Santiago Ramón y Cajal


Epigenetics: The Evolution Revolution – Israel Rosenfield and Edward Ziff * The Epigenetics Revolution – Nessa Carey.

So something that happened in one pregnant population affected their children’s children. This raised the really puzzling question of how these effects were passed on to subsequent generations.

These effects arise from a newly recognized genetic mechanism called epigenesis, which enables the environment to make long lasting changes in the way genes are expressed.

That’s what happens when cells read the genetic code that’s in DNA. The same script can result in different productions.

Why is it that humans contain trillions of cells in hundreds of complex organs, and microscopic worms contain about a thousand cells and only rudimentary organs, but we and the worm have the same number of genes?

We are finally starting to unravel the missing link between nature and nurture; how our environment talks to us and alters us, sometimes forever.

Israel Rosenfield and Edward Ziff

At the end of the eighteenth century, the French naturalist Jean-Baptiste Lamarck noted that life on earth had evolved over long periods of time into a striking variety of organisms. He sought to explain how they had become more and more complex. Living organisms not only evolved, Lamarck argued; they did so very slowly, “little by little and successively.” In Lamarckian theory, animals became more diverse as each creature strove toward its own “perfection,” hence the enormous variety of living things on earth. Man is the most complex life form, therefore the most perfect, and is even now evolving.

In Lamarck’s view, the evolution of life depends on variation and the accumulation of small, gradual changes. These are also at the center of Darwin’s theory of evolution, yet Darwin wrote that Lamarck’s ideas were “veritable rubbish.” Darwinian evolution is driven by genetic variation combined with natural selection, the process whereby some variations give their bearers better reproductive success in a given environment than other organisms have. Lamarckian evolution, on the other hand, depends on the inheritance of acquired characteristics. Giraffes, for example, got their long necks by stretching to eat leaves from tall trees, and stretched necks were inherited by their offspring, though Lamarck did not explain how this might be possible.

When the molecular structure of DNA was discovered in 1953, it became dogma in the teaching of biology that DNA and its coded information could not be altered in any way by the environment or a person’s way of life. The environment, it was known, could stimulate the expression of a gene. Having a light shone in one’s eyes or suffering pain, for instance, stimulates the activity of neurons and in doing so changes the activity of genes those neurons contain, producing instructions for making proteins or other molecules that play a central part in our bodies.

The structure of the DNA neighboring the gene provides a list of instructions, a gene program, that determines under what circumstances the gene is expressed. And it was held that these instructions could not be altered by the environment. Only mutations, which are errors introduced at random, could change the instructions or the information encoded in the gene itself and drive evolution through natural selection. Scientists discredited any Lamarckian claims that the environment can make lasting, perhaps heritable alterations in gene structure or function.

But new ideas closely related to Lamarck’s eighteenth century views have become central to our understanding of genetics. In the past fifteen years these ideas, which belong to a developing field of study called epigenetics, have been discussed in numerous articles and several books, including Nessa Carey’s 2012 study The Epigenetics Revolution and The Deepest Well, a recent work on childhood trauma by the physician Nadine Burke Harris.

The developing literature surrounding epigenetics has forced biologists to consider the possibility that gene expression could be influenced by some heritable environmental factors previously believed to have had no effect over it, like stress or deprivation. “The DNA blueprint,” Carey writes,

isn’t a sufficient explanation for all the sometimes wonderful, sometimes awful, complexity of life. If the DNA sequence was all that mattered, identical twins would always be absolutely identical in every way. Babies born to malnourished mothers would gain weight as easily as other babies who had a healthier start in life.

That might seem a commonsensical view. But it runs counter to decades of scientific thought about the independence of the genetic program from environmental influence. What findings have made it possible?

In 1975, two English biologists, Robin Holliday and John Pugh, and an American biologist, Arthur Riggs, independently suggested that methylation, a chemical modification of DNA that is heritable and can be induced by environmental influences, had an important part in controlling gene expression. How it did this was not understood, but the idea that through methylation the environment could, in fact, alter not only gene expression but also the genetic program rapidly took root in the scientific community.

As scientists came to better understand the function of methylation in altering gene expression, they realized that extreme environmental stress, the results of which had earlier seemed self-explanatory, could have additional biological effects on the organisms that suffered it. Experiments with laboratory animals have now shown that these outcomes are based on the transmission of acquired changes in genetic function. Childhood abuse, trauma, famine, and ethnic prejudice may, it turns out, have long-term consequences for the functioning of our genes.

These effects arise from a newly recognized genetic mechanism called epigenesis, which enables the environment to make long lasting changes in the way genes are expressed.

Epigenesis does not change the information coded in the genes or a person’s genetic makeup (the genes themselves are not affected) but instead alters the manner in which they are “read” by blocking access to certain genes and preventing their expression.

This mechanism can be the hidden cause of our feelings of depression, anxiety, or paranoia. Perhaps most surprising of all, this alteration could, in some cases, be passed on to future generations who have never directly experienced the stresses that caused their forebears’ depression or ill health.

Numerous clinical studies have shown that childhood trauma, arising from parental death or divorce, neglect, violence, abuse, lack of nutrition or shelter, or other stressful circumstances, can give rise to a variety of health problems in adults: heart disease, cancer, mood and dietary disorders, alcohol and drug abuse, infertility, suicidal behavior, learning deficits, and sleep disorders.

Since the publication in 2003 of an influential paper by Rudolf Jaenisch and Adrian Bird, we have started to understand the genetic mechanisms that explain why this is the case. The body and the brain normally respond to danger and frightening experiences by releasing a hormone, a glucocorticoid that controls stress. This hormone prepares us for various challenges by adjusting heart rate, energy production, and brain function; it binds to a protein called the glucocorticoid receptor in nerve cells of the brain.

Normally, this binding shuts off further glucocorticoid production, so that when one no longer perceives a danger, the stress response abates. However, as Gustavo Turecki and Michael Meaney note in a 2016 paper surveying more than a decade’s worth of findings about epigenetics, the gene for the receptor is inactive in people who have experienced childhood stress; as a result, they produce few receptors. Without receptors to bind to, glucocorticoids cannot shut off their own production, so the hormone keeps being released and the stress response continues, even after the threat has subsided.

“The term for this is disruption of feedback inhibition,” Harris writes. It is as if “the body’s stress thermostat is broken. Instead of shutting off this supply of ‘heat’ when a certain point is reached, it just keeps on blasting cortisol through your system.”
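The “broken thermostat” Harris describes is a negative feedback loop, and a toy numerical model makes the logic concrete. The sketch below is purely illustrative: the update rule, rate constants, and “receptor density” values are invented for this example, not drawn from the article or from physiological data.

```python
# Toy model of glucocorticoid feedback inhibition (illustrative only;
# all parameters here are invented for the sketch).
#
# Release of the "hormone" is throttled by receptor binding (feedback
# inhibition); a slow baseline clearance removes hormone regardless of
# receptors. Silencing the receptor removes the throttle, so the
# hormone plateaus at a much higher level under the same drive.

def steady_hormone(receptor_density, steps=2000, dt=0.1):
    """Iterate the toy dynamics and return the (near-)steady hormone level."""
    h = 0.0
    for _ in range(steps):
        release = 1.0 / (1.0 + receptor_density * h)  # feedback throttles release
        clearance = 0.05 * h                          # slow baseline clearance
        h += dt * (release - clearance)
    return h

normal = steady_hormone(receptor_density=1.0)    # intact feedback loop
silenced = steady_hormone(receptor_density=0.0)  # receptor gene shut down
```

In this parameterization, the intact loop settles at a low plateau while the silenced one climbs about five times higher: the thermostat keeps calling for heat because nothing reports the temperature back.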

It is now known that childhood stress can deactivate the receptor gene by an epigenetic mechanism, namely, by creating a physical barrier to the information for which the gene codes. What creates this barrier is DNA methylation, by which methyl groups known as methyl marks (composed of one carbon and three hydrogen atoms) are added to DNA.

DNA methylation is long-lasting and keeps chromatin, the DNA-protein complex that makes up the chromosomes containing the genes, in a highly folded structure that blocks access to select genes by the gene expression machinery, effectively shutting the genes down. The long-term consequences can include chronic inflammation, diabetes, heart disease, obesity, schizophrenia, and major depressive disorder.

Such epigenetic effects have been demonstrated in experiments with laboratory animals. In a typical experiment, rat or mouse pups are subjected to early-life stress, such as repeated maternal separation. Their behavior as adults is then examined for evidence of depression, and their genomes are analyzed for epigenetic modifications. Likewise, pregnant rats or mice can be exposed to stress or nutritional deprivation, and their offspring examined for behavioral and epigenetic consequences.

Experiments like these have shown that even animals not directly exposed to traumatic circumstances, those still in the womb when their parents were put under stress, can have blocked receptor genes. It is probably the transmission of glucocorticoids from mother to fetus via the placenta that alters the fetus in this way. In humans, prenatal stress affects each stage of the child’s maturation: for the fetus, a greater risk of preterm delivery, decreased birth weight, and miscarriage; in infancy, problems of temperament, attention, and mental development; in childhood, hyperactivity and emotional problems; and in adulthood, illnesses such as schizophrenia and depression.

What is the significance of these findings?

Until the mid-1970s, no one suspected that the way in which DNA was “read” could be altered by environmental factors, or that the nervous systems of people who grew up in stress-free environments would develop differently from those of people who did not. One’s development, it was thought, was guided only by one’s genetic makeup.

As a result of epigenesis, a child deprived of nourishment may continue to crave and consume large amounts of food as an adult, even when he or she is being properly nourished, leading to obesity and diabetes. A child who loses a parent or is neglected or abused may have an epigenetic basis for experiencing anxiety and depression and possibly schizophrenia.

Formerly, it had been widely believed that Darwinian evolutionary mechanisms, variation and natural selection, were the only means for introducing such long-lasting changes in brain function, a process that took place over generations. We now know that epigenetic mechanisms can do so as well, within the lifetime of a single person.

It is by now well established that people who suffer trauma directly during childhood or who experience their mother’s trauma indirectly as a fetus may have epigenetically based illnesses as adults. More controversial is whether epigenetic changes can be passed on from parent to child.

Methyl marks are stable when DNA is not replicating, but when it replicates, the methyl marks must be introduced into the newly replicated DNA strands to be preserved in the new cells. Researchers agree that this takes place when cells of the body divide, a process called mitosis, but it is not yet fully established under which circumstances marks are preserved when cell division yields sperm and egg, a process called meiosis, or when mitotic divisions of the fertilized egg form the embryo. Transmission at these two latter steps would be necessary for epigenetic changes to be transmitted in full across generations.

The most revealing instances for studies of intergenerational transmission have been natural disasters, famines, and atrocities of war, during which large groups have undergone trauma at the same time. These studies have shown that when women are exposed to stress in the early stages of pregnancy, they give birth to children whose stress response systems malfunction. Among the most widely studied of such traumatic events is the Dutch Hunger Winter. In 1944 the Germans prevented any food from entering the parts of Holland that were still occupied. The Dutch resorted to eating tulip bulbs to overcome their stomach pains. Women who were pregnant during this period, Carey notes, gave birth to a higher proportion of obese and schizophrenic children than one would normally expect. These children also exhibited epigenetic changes not observed in similar children, such as siblings, who had not experienced famine at the prenatal stage.

During the Great Chinese Famine (1958-1961), millions of people died, and children born to young women who experienced the famine were more likely to become schizophrenic, to have impaired cognitive function, and to suffer from diabetes and hypertension as adults. Similar studies of the 1932-1933 Ukrainian famine, in which many millions died, revealed an elevated risk of type II diabetes in people who were in the prenatal stage of development at the time. Although prenatal and early childhood stress both induce epigenetic effects and adult illnesses, it is not known if the mechanism is the same in both cases.

Whether epigenetic effects of stress can be transmitted over generations needs more research, both in humans and in laboratory animals. But recent comprehensive studies by several groups using advanced genetic techniques have indicated that epigenetic modifications are not restricted to the glucocorticoid receptor gene. They are much more extensive than had been realized, and their consequences for our development, health, and behavior may also be great.

It is as though nature employs epigenesis to make long-lasting adjustments to an individual’s genetic program to suit his or her personal circumstances, much as in Lamarck’s notion of “striving for perfection.”

In this view, the ill health arising from famine or other forms of chronic, extreme stress would constitute an epigenetic miscalculation on the part of the nervous system. Because the brain prepares us for adult adversity that matches the level of stress we suffer in early life, psychological disease and ill health persist even when we move to an environment with a lower stress level.

Once we recognize that there is an epigenetic basis for diseases caused by famine, economic deprivation, war related trauma, and other forms of stress, it might be possible to treat some of them by reversing those epigenetic changes. “When we understand that the source of so many of our society’s problems is exposure to childhood adversity,” Harris writes,

The solutions are as simple as reducing the dose of adversity for kids and enhancing the ability of caregivers to be buffers. From there, we keep working our way up, translating that understanding into the creation of things like more effective educational curricula and the development of blood tests that identify biomarkers for toxic stress, things that will lead to a wide range of solutions and innovations, reducing harm bit by bit, and then leap by leap.

Epigenetics has also made clear that the stress caused by war, prejudice, poverty, and other forms of childhood adversity may have consequences both for the persons affected and for their future unborn children, not only for social and economic reasons but also for biological ones.

The Epigenetics Revolution

Nessa Carey

Sometimes, when we read about biology, we could be forgiven for thinking that those three letters, DNA, explain everything. Here, for example, are just a few of the statements made on 26 June 2000, when researchers announced that the human genome had been sequenced:

Today we are learning the language in which God created life. US President Bill Clinton

We now have the possibility of achieving all we ever hoped for from medicine. UK Science Minister Lord Sainsbury

Mapping the human genome has been compared with putting a man on the moon, but I believe it is more than that. This is the outstanding achievement not only of our lifetime, but in terms of human history. Michael Dexter, The Wellcome Trust

From these quotations, and many others like them, we might well think that researchers could have relaxed a bit after June 2000 because most human health and disease problems could now be sorted out really easily. After all, we had the blueprint for humankind. All we needed to do was get a bit better at understanding this set of instructions, so we could fill in a few details. Unfortunately, these statements have proved at best premature. The reality is rather different.

We talk about DNA as if it’s a template, like a mould for a car part in a factory. In the factory, molten metal or plastic gets poured into the mould thousands of times and, unless something goes wrong in the process, out pop thousands of identical car parts.

But DNA isn’t really like that. It’s more like a script. Think of Romeo and Juliet, for example. In 1936 George Cukor directed Leslie Howard and Norma Shearer in a film version. Sixty years later Baz Luhrmann directed Leonardo DiCaprio and Claire Danes in another movie version of this play. Both productions used Shakespeare’s script, yet the two movies are entirely different. Identical starting points, different outcomes.

That’s what happens when cells read the genetic code that’s in DNA. The same script can result in different productions.

The implications of this for human health are very wide ranging, as we will see from the case studies we are going to look at in a moment. In all of them it’s really important to remember that nothing happened to the DNA blueprint of the people involved. Their DNA didn’t change (mutate), and yet their life histories altered irrevocably in response to their environments.

Audrey Hepburn was one of the 20th century’s greatest movie stars. Stylish, elegant and with a delicately lovely, almost fragile bone structure, her role as Holly Golightly in Breakfast at Tiffany’s has made her an icon, even to those who have never seen the movie. It’s startling to think that this wonderful beauty was created by terrible hardship. Audrey Hepburn was a survivor of an event in the Second World War known as the Dutch Hunger Winter. This ended when she was sixteen years old but the after effects of this period, including poor physical health, stayed with her for the rest of her life.

The Dutch Hunger Winter lasted from the start of November 1944 to the late spring of 1945. This was a bitterly cold period in Western Europe, creating further hardship in a continent that had been devastated by four years of brutal war. Nowhere was this worse than in the Western Netherlands, which at this stage was still under German control. A German blockade resulted in a catastrophic drop in the availability of food to the Dutch population. At one point the population was trying to survive on only about 30 per cent of the normal daily calorie intake. People ate grass and tulip bulbs, and burned every scrap of furniture they could get their hands on, in a desperate effort to stay alive. Over 20,000 people had died by the time food supplies were restored in May 1945.

The dreadful privations of this time also created a remarkable scientific study population. The Dutch survivors were a well defined group of individuals all of whom suffered just one period of malnutrition, all of them at exactly the same time. Because of the excellent healthcare infrastructure and record keeping in the Netherlands, epidemiologists have been able to follow the long term effects of the famine. Their findings were completely unexpected.

One of the first aspects they studied was the effect of the famine on the birth weights of children who had been in the womb during that terrible period. If a mother was well fed around the time of conception and malnourished only for the last few months of the pregnancy, her baby was likely to be born small. If, on the other hand, the mother suffered malnutrition for the first three months of the pregnancy only (because the baby was conceived towards the end of this terrible episode), but then was well fed, she was likely to have a baby with a normal body weight. The foetus ‘caught up’ in body weight.

That all seems quite straightforward, as we are all used to the idea that foetuses do most of their growing in the last few months of pregnancy. But epidemiologists were able to study these groups of babies for decades and what they found was really surprising. The babies who were born small stayed small all their lives, with lower obesity rates than the general population. For forty or more years, these people had access to as much food as they wanted, and yet their bodies never got over the early period of malnutrition. Why not? How did these early life experiences affect these individuals for decades? Why weren’t these people able to go back to normal, once their environment reverted to how it should be?

Even more unexpectedly, the children whose mothers had been malnourished only early in pregnancy had higher obesity rates than normal. Recent reports have shown a greater incidence of other health problems as well, including poorer performance in certain tests of mental activity. Even though these individuals had seemed perfectly healthy at birth, something had happened to their development in the womb that affected them for decades after. And it wasn’t just the fact that something had happened that mattered, it was when it happened. Events that take place in the first three months of development, a stage when the foetus is really very small, can affect an individual for the rest of their life.

Even more extraordinarily, some of these effects seem to be present in the children of this group, i.e. in the grandchildren of the women who were malnourished during the first three months of their pregnancy.

So something that happened in one pregnant population affected their children’s children. This raised the really puzzling question of how these effects were passed on to subsequent generations.

Let’s consider a different human story. Schizophrenia is a dreadful mental illness which, if untreated, can completely overwhelm and disable an affected person. Patients may present with a range of symptoms including delusions, hallucinations and enormous difficulties focusing mentally. People with schizophrenia may become completely incapable of distinguishing between the ‘real world’ and their own hallucinatory and delusional realm. Normal cognitive, emotional and societal responses are lost. There is a terrible misconception that people with schizophrenia are likely to be violent and dangerous. For the majority of patients this isn’t the case at all, and the people most likely to suffer harm because of this illness are the patients themselves. Individuals with schizophrenia are fifty times more likely to attempt suicide than healthy individuals.

Schizophrenia is a tragically common condition. It affects between 0.5 per cent and 1 per cent of the population in most countries and cultures, which means that there may be over fifty million people alive today who are suffering from this condition. Scientists have known for some time that genetics plays a strong role in determining if a person will develop this illness. We know this because if one of a pair of identical twins has schizophrenia, there is a 50 per cent chance that their twin will also have the condition. This is much higher than the 1 per cent risk in the general population.

Identical twins have exactly the same genetic code as each other. They share the same womb and usually they are brought up in very similar environments. When we consider this, it doesn’t seem surprising that if one of the twins develops schizophrenia, the chance that his or her twin will also develop the illness is very high. In fact, we have to start wondering why it isn’t higher. Why isn’t the figure 100 per cent? How is it that two apparently identical individuals can become so very different? If an individual has this devastating mental illness, will their identical twin suffer from it too? Flip a coin: heads they win, tails they lose. Variations in the environment are unlikely to account for this, and even if they did, how would these environmental effects have such profoundly different impacts on two genetically identical people?
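The scale of the figures quoted above is worth making explicit with a little arithmetic (a simple illustration using the 50 per cent and 1 per cent numbers from the text):

```python
# Schizophrenia risks quoted in the text
general_population_risk = 0.01   # roughly 1 per cent baseline
identical_twin_risk = 0.50       # if one identical twin is affected

# Sharing an entire genome multiplies the risk fifty-fold...
relative_risk = identical_twin_risk / general_population_risk

# ...yet half the time the genetically identical twin is unaffected,
# leaving a large share of the outcome unexplained by the DNA sequence alone
unexplained_share = 1 - identical_twin_risk
```

That unexplained half is exactly the gap the chapter is setting up: something beyond the genetic code must be at work.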

Here’s a third case study. A small child, less than three years old, is abused and neglected by his or her parents. Eventually, the state intervenes and the child is taken away from the biological parents and placed with foster or adoptive parents. These new carers love and cherish the child, doing everything they can to create a secure home, full of affection. The child stays with these new parents throughout the rest of its childhood and adolescence, and into young adulthood.

Sometimes everything works out well for this person. They grow up into a happy, stable individual indistinguishable from all their peers who had normal, non abusive childhoods. But often, tragically, it doesn’t work out this way. Children who have suffered from abuse or neglect in their early years grow up with a substantially higher risk of adult mental health problems than the general population. All too often the child grows up into an adult at high risk of depression, self-harm, drug abuse and suicide.

Once again, we have to ask ourselves why. Why is it so difficult to override the effects of early childhood exposure to neglect or abuse?

Why should something that happened early in life have effects on mental health that may still be obvious decades later?

In some cases, the adult may have absolutely no recollection of the traumatic events, and yet they may suffer the consequences mentally and emotionally for the rest of their lives.

These three case studies seem very different on the surface. The first is mainly about nutrition, especially of the unborn child. The second is about the differences that arise between genetically identical individuals. The third is about long term psychological damage as a result of childhood abuse.

But these stories are linked at a very fundamental biological level. They are all examples of epigenetics. Epigenetics is the new discipline that is revolutionising biology. Whenever two genetically identical individuals are non-identical in some way we can measure, this is called epigenetics. When a change in environment has biological consequences that last long after the event itself has vanished into distant memory, we are seeing an epigenetic effect in action.

Epigenetic phenomena can be seen all around us, every day. Scientists have been identifying examples like the ones described above for many years. When scientists talk about epigenetics they are referring to all the cases where the genetic code alone isn’t enough to describe what’s happening; there must be something else going on as well.

This is one of the ways that epigenetics is described scientifically, where things which are genetically identical can actually appear quite different to one another. But there has to be a mechanism that brings out this mismatch between the genetic script and the final outcome. These epigenetic effects must be caused by some sort of physical change, some alterations in the vast array of molecules that make up the cells of every living organism. This leads us to the other way of viewing epigenetics, the molecular description.

In this model, epigenetics can be defined as the set of modifications to our genetic material that change the ways genes are switched on or off, but which don’t alter the genes themselves.

Although it may seem confusing that the word ‘epigenetics’ can have two different meanings, it’s just because we are describing the same event at two different levels. It’s a bit like looking at the pictures in old newspapers with a magnifying glass, and seeing that they are made up of dots. If we didn’t have a magnifying glass we might have thought that each picture was just made in one solid piece and we’d probably never have been able to work out how so many new images could be created each day. On the other hand, if all we ever did was look through the magnifying glass, all we would see would be dots, and we’d never see the incredible image that they formed together and which we’d see if we could only step back and look at the big picture.

The revolution that has happened very recently in biology is that for the first time we are actually starting to understand how amazing epigenetic phenomena are caused. We’re no longer just seeing the large image, we can now also analyse the individual dots that created it.

Crucially, this means that we are finally starting to unravel the missing link between nature and nurture; how our environment talks to us and alters us, sometimes forever.

The ‘epi’ in epigenetics is derived from Greek and means at, on, to, upon, over or beside. The DNA in our cells is not some pure, unadulterated molecule. Small chemical groups can be added at specific regions of DNA. Our DNA is also smothered in special proteins. These proteins can themselves be covered with additional small chemicals. None of these molecular amendments changes the underlying genetic code. But adding these chemical groups to the DNA, or to the associated proteins, or removing them, changes the expression of nearby genes. These changes in gene expression alter the functions of cells, and the very nature of the cells themselves. Sometimes, if these patterns of chemical modifications are put on or taken off at a critical period in development, the pattern can be set for the rest of our lives, even if we live to be over a hundred years of age.
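A toy data structure makes the distinction concrete. In this purely illustrative sketch (the class and its fields are invented for the example, and the sequence shown is a placeholder), a methyl mark sits “on top of” the gene and switches expression off while leaving the underlying sequence untouched:

```python
class Gene:
    """Illustrative toy model: an epigenetic mark rides on top of the sequence."""
    def __init__(self, name, sequence):
        self.name = name
        self.sequence = sequence   # the genetic code itself, never altered here
        self.methylated = False    # an epigenetic mark added on top of it

    def is_expressed(self):
        # Methylation blocks the expression machinery without touching the code
        return not self.methylated

receptor = Gene("NR3C1", "ATGGAC")   # glucocorticoid receptor gene; toy sequence
before = receptor.sequence
receptor.methylated = True           # early-life stress adds the methyl mark
assert receptor.sequence == before   # the underlying code is unchanged
assert not receptor.is_expressed()   # but the gene is switched off
```

The point of the sketch is the asymmetry: the mark can be added or removed without the sequence ever changing, which is what “modifications that don’t alter the genes themselves” means in practice.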

There’s no debate that the DNA blueprint is a starting point. A very important starting point and absolutely necessary, without a doubt. But it isn’t a sufficient explanation for all the sometimes wonderful, sometimes awful, complexity of life. If the DNA sequence was all that mattered, identical twins would always be absolutely identical in every way. Babies born to malnourished mothers would gain weight as easily as other babies who had a healthier start in life. And as we shall see in Chapter 1, we would all look like big amorphous blobs, because all the cells in our bodies would be completely identical.

Huge areas of biology are influenced by epigenetic mechanisms, and the revolution in our thinking is spreading further and further into unexpected frontiers of life on our planet. Some of the other examples we’ll meet in this book include why we can’t make a baby from two sperm or two eggs, but have to have one of each. What makes cloning possible? Why is cloning so difficult? Why do some plants need a period of cold before they can flower? Since queen bees and worker bees are genetically identical, why are they completely different in form and function? Why are all tortoiseshell cats female?

Why is it that humans contain trillions of cells in hundreds of complex organs, and microscopic worms contain about a thousand cells and only rudimentary organs, but we and the worm have the same number of genes?

Scientists in both the academic and commercial sectors are also waking up to the enormous impact that epigenetics has on human health. It’s implicated in diseases from schizophrenia to rheumatoid arthritis, and from cancer to chronic pain. There are already two types of drugs that successfully treat certain cancers by interfering with epigenetic processes. Pharmaceutical companies are spending hundreds of millions of dollars in a race to develop the next generation of epigenetic drugs to treat some of the most serious illnesses afflicting the industrialised world. Epigenetic therapies are the new frontiers of drug discovery.

In biology, Darwin and Mendel came to define the 19th century as the era of evolution and genetics; Watson and Crick defined the 20th century as the era of DNA, and the functional understanding of how genetics and evolution interact. But in the 21st century it is the new scientific discipline of epigenetics that is unravelling so much of what we took as dogma and rebuilding it in an infinitely more varied, more complex and even more beautiful fashion.

The world of epigenetics is a fascinating one. It’s filled with remarkable subtlety and complexity, and in Chapters 3 and 4 we’ll delve deeper into the molecular biology of what’s happening to our genes when they become epigenetically modified. But like so many of the truly revolutionary concepts in biology, epigenetics has at its basis some issues that are so simple they seem completely self evident as soon as they are pointed out. Chapter 1 is the single most important example of such an issue. It’s the investigation which started the epigenetics revolution.

Notes on nomenclature

There is an international convention on the way that the names of genes and proteins are written, which we adhere to in this book.

Gene names and symbols are written in italics. The proteins encoded by the genes are written in plain text. The symbols for human genes and proteins are written in upper case. For other species, such as mice, the symbols are usually written with only the first letter capitalised.

This is summarised for a hypothetical gene (here called ABC) in the following table.

              Gene symbol    Protein symbol
Human         ABC            ABC
Mouse         Abc            Abc

Like all rules, however, there are a few quirks in this system and while these conventions apply in general we will encounter some exceptions in this book.

Chapter 1

An Ugly Toad and an Elegant Man

Like the toad, ugly and venomous, wears yet a precious jewel in his head. William Shakespeare

Humans are composed of about 50 to 70 trillion cells. That’s right, 50,000,000,000,000 cells. The estimate is a bit vague but that’s hardly surprising. Imagine we somehow could break a person down into all their individual cells and then count those cells, at a rate of one cell every second. Even at the lower estimate it would take us about a million and a half years, and that’s without stopping for coffee or losing count at any stage. These cells form a huge range of tissues, all highly specialised and completely different from one another. Unless something has gone very seriously wrong, kidneys don’t start growing out of the top of our heads and there are no teeth in our eyeballs.
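That “million and a half years” figure is easy to check with a quick calculation, using the lower estimate of 50 trillion cells and a counting rate of one cell per second:

```python
cells = 50 * 10**12                     # lower estimate: 50 trillion cells
seconds_per_year = 60 * 60 * 24 * 365   # ignoring leap years

# Counting one cell per second, without breaks
years_to_count = cells / seconds_per_year   # roughly 1.6 million years
```

At the upper estimate of 70 trillion cells the count would take over two million years, so “about a million and a half years, without stopping for coffee” is, if anything, generous.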

This seems very obvious but why don’t they? It’s actually quite odd, when we remember that every cell in our body was derived from the division of just one starter cell. This single cell is called the zygote. A zygote forms when one sperm merges with one egg.

A Zygote

This zygote splits in two; those two cells divide again and so on, to create the miraculous piece of work which is a full human body. As they divide the cells become increasingly different from one another and form specialised cell types. This process is known as differentiation. It’s a vital one in the formation of any multicellular organism.

If we look at bacteria down a microscope then pretty much all the bacteria of a single species look identical. Look at certain human cells in the same way (say, a food-absorbing cell from the small intestine and a neuron from the brain) and we would be hard pressed to say that they were even from the same planet. But so what? Well, the big ‘what’ is that these cells started out with exactly the same genetic material as one another. And we do mean exactly; this has to be the case, because they came from just one starter cell, that zygote. So the cells have become completely different even though they came from one cell with just one blueprint.

One explanation for this is that the cells are using the same information in different ways and that’s certainly true. But it’s not necessarily a statement that takes us much further forwards. In a 1960 adaptation of H. G. Wells’s The Time Machine, starring Rod Taylor as the time travelling scientist, there’s a scene where he shows his time machine to some learned colleagues (all male, naturally) and one asks for an explanation of how the machine works. Our hero then describes how the occupant of the machine will travel through time by the following mechanism:

In front of him is the lever that controls movement. Forward pressure sends the machine into the future. Backward pressure, into the past. And the harder the pressure, the faster the machine travels.

Everyone nods sagely at this explanation. The only problem is that this isn’t an explanation, it’s just a description. And that’s also true of that statement about cells using the same information in different ways: it doesn’t really tell us anything, it just re-states what we already knew in a different way.

What’s much more interesting is the exploration of how cells use the same genetic information in different ways. Perhaps even more important is how the cells remember and keep on doing it. Cells in our bone marrow keep on producing blood cells, cells in our liver keep on producing liver cells. Why does this happen? One possible and very attractive explanation is that as cells become more specialised they rearrange their genetic material, possibly losing genes they don’t require. The liver is a vital and extremely complicated organ. The website of the British Liver Trust states that the liver performs over 500 functions, including processing the food that has been digested by our intestines, neutralising toxins and creating enzymes that carry out all sorts of tasks in our bodies. But one thing the liver simply never does is transport oxygen around the body. That job is carried out by our red blood cells, which are stuffed full of a particular protein, haemoglobin. Haemoglobin binds oxygen in tissues where there’s lots available, like our lungs, and then releases it when the red blood cell reaches a tissue that needs this essential chemical, such as the tiny blood vessels in the tips of our toes. The liver is never going to carry out this function, so perhaps it just gets rid of the haemoglobin gene, which it simply never uses.

It’s a perfectly reasonable suggestion: cells could simply lose genetic material they aren’t going to use. As they differentiate, cells could jettison hundreds of genes they no longer need. There could of course be a slightly less drastic variation on this: maybe the cells shut down genes they aren’t using. And maybe they do this so effectively that these genes can never ever be switched on again in that cell, i.e. the genes are irreversibly inactivated. The key experiments that examined these eminently reasonable hypotheses (loss of genes, or irreversible inactivation) involved an ugly toad and an elegant man.

Turning back the biological clock

The work has its origins in experiments performed many decades ago in England by John Gurdon, first in Oxford and subsequently Cambridge. Now Professor Sir John Gurdon, he still works in a lab in Cambridge, albeit these days in a gleaming modern building that has been named after him. He’s an engaging, unassuming and striking man who, 40 years on from his ground breaking work, continues to publish research in a field that he essentially founded.

John Gurdon cuts an instantly recognisable figure around Cambridge. Now in his seventies, he is tall, thin and has a wonderful head of swept back blonde hair. He looks like the quintessential older English gentleman of American movies, and fittingly he went to school at Eton. There is a lovely story that John Gurdon still treasures, a school report from his biology teacher at that institution which says, ‘I believe Gurdon has ideas about becoming a scientist. In present showing, this is quite ridiculous.’ The teacher’s comments were based on his pupil’s dislike of mindless rote learning of unconnected facts. But as we shall see, for a scientist as wonderful as John Gurdon, memory is much less important than imagination.

In 1937 the Hungarian biochemist Albert Szent-Gyorgyi won the Nobel Prize for Physiology or Medicine, his achievements including the discovery of vitamin C. In a phrase that has various subtly different translations but one consistent interpretation he defined discovery as, ‘To see what everyone else has seen but to think what nobody else has thought’. It is probably the best description ever written of what truly great scientists do. And John Gurdon is truly a great scientist, and may well follow in Szent-Gyorgyi’s Nobel footsteps.

In 2009 he was a co-recipient of the Lasker Prize, which is to the Nobel what the Golden Globes are so often to the Oscars. John Gurdon’s work is so wonderful that when it is first described it seems so obvious that anyone could have done it. The questions he asked, and the ways in which he answered them, have that scientifically beautiful feature of being so elegant that they seem entirely self-evident.

John Gurdon used non-fertilised toad eggs in his work. Any of us who has ever kept a tank full of frogspawn and watched this jelly-like mass develop into tadpoles and finally tiny frogs, has been working, whether we thought about it in these terms or not, with fertilised eggs, i.e. ones into which sperm have entered and created a new complete nucleus. The eggs John Gurdon worked on were a little like these, but hadn’t been exposed to sperm.

There were good reasons why he chose to use toad eggs in his experiments. The eggs of amphibians are generally very big, are laid in large numbers outside the body and are see-through. All these features make amphibians very handy experimental species in developmental biology, as the eggs are technically relatively easy to handle. Certainly a lot better than a human egg, which is hard to obtain, fragile to handle, not transparent, and so small that we need a microscope just to see it.

John Gurdon worked on the African clawed toad (Xenopus laevis, to give it its official title), one of those John Malkovich ugly-handsome animals, and investigated what happens to cells as they develop and differentiate and age. He wanted to see if a tissue cell from an adult toad still contained all the genetic material it had started with, or if it had lost or irreversibly inactivated some as the cell became more specialised. The way he did this was to take a nucleus from the cell of an adult toad and insert it into an unfertilised egg that had had its own nucleus removed. This technique is called somatic cell nuclear transfer (SCNT), and will come up over and over again. ‘Somatic’ comes from the Greek word for ‘body’.

After he’d performed the SCNT, John Gurdon kept the eggs in a suitable environment (much like a child with a tank of frogspawn) and waited to see if any of these cultured eggs hatched into little toad tadpoles.

The experiments were designed to test the following hypothesis: ‘As cells become more specialised (differentiated) they undergo an irreversible loss/inactivation of genetic material.’ There were two possible outcomes to these experiments:


1. The hypothesis was correct and the ‘adult’ nucleus has lost some of the original blueprint for creating a new individual. Under these circumstances an adult nucleus will never be able to replace the nucleus in an egg and so will never generate a new healthy toad, with all its varied and differentiated tissues.


2. The hypothesis was wrong, and new toads can be created by removing the nucleus from an egg and replacing it with one from adult tissues.

Other researchers had started to look at this before John Gurdon decided to tackle the problem: two scientists called Briggs and King, using a different amphibian, the frog Rana pipiens. In 1952 they transplanted the nuclei from cells at a very early stage of development into an egg lacking its own original nucleus and they obtained viable frogs. This demonstrated that it was technically possible to transfer a nucleus from another cell into an ‘empty’ egg without killing the cell. However, Briggs and King then published a second paper using the same system but transferring a nucleus from a more developed cell type, and this time they couldn’t create any frogs. The difference in the cells used for the nuclei in the two papers seems astonishingly minor: just one day older, and no froglets. This supported the hypothesis that some sort of irreversible inactivation event had taken place as the cells differentiated. A lesser man than John Gurdon might have been put off by this. Instead he spent over a decade working on the problem.

The design of the experiments was critical. Imagine we have started reading detective stories by Agatha Christie. After we’ve read our first three we develop the following hypothesis: ‘The killer in an Agatha Christie novel is always the doctor.’ We read three more and the doctor is indeed the murderer in each. Have we proved our hypothesis? No. There’s always going to be the thought that maybe we should read just one more to be sure. And what if some are out of print, or unobtainable? No matter how many we read, we may never be entirely sure that we’ve read the entire collection. But that’s the joy of disproving hypotheses. All we need is one instance in which Poirot or Miss Marple reveals that the doctor was a man of perfect probity and the killer was actually the vicar, and our hypothesis is shot to pieces. And that is how the best scientific experiments are designed: to disprove, not to prove, an idea.

And that was the genius of John Gurdon’s work. When he performed his experiments what he was attempting was exceptionally challenging with the technology of the time. If he failed to generate toads from the adult nuclei this could simply mean his technique had something wrong with it. No matter how many times he did the experiment without getting any toads, this wouldn’t actually prove the hypothesis. But if he did generate live toads from eggs where the original nucleus had been replaced by the adult nucleus he would have disproved the hypothesis. He would have demonstrated beyond doubt that when cells differentiate, their genetic material isn’t irreversibly lost or changed. The beauty of this approach is that just one such toad would topple the entire theory. And topple it he did.

John Gurdon is incredibly generous in his acknowledgement of the collegiate nature of scientific research, and the benefits he obtained from being in dynamic laboratories and universities. He was lucky to start his work in a well set-up laboratory which had a new piece of equipment which produced ultraviolet light. This enabled him to kill off the original nuclei of the recipient eggs without causing too much damage, and also ‘softened up’ the cell so that he could use tiny glass hypodermic needles to inject donor nuclei.

Other workers in the lab had, in some unrelated research, developed a strain of toads which had a mutation with an easily detectable, but non-damaging effect. Like almost all mutations this was carried in the nucleus, not the cytoplasm. The cytoplasm is the thick liquid inside cells, in which the nucleus sits. So John Gurdon used eggs from one strain and donor nuclei from the mutated strain. This way he would be able to show unequivocally that any resulting toads had been coded for by the donor nuclei, and weren’t just the result of experimental error, as could happen if a few recipient nuclei had been left over after treatment.

John Gurdon spent around fifteen years, starting in the late 1950s, demonstrating that in fact nuclei from specialised cells are able to create whole animals if placed in the right environment, i.e. an unfertilised egg. The more differentiated/specialised the donor cell was, the less successful the process in terms of numbers of animals, but that’s the beauty of disproving a hypothesis: we might need a lot of toad eggs to start with, but we don’t need to end up with many live toads to make our case. Just one non-murderous doctor will do it, remember?

Sir John Gurdon showed us that although there is something in cells that can keep specific genes turned on or switched off in different cell types, whatever this something is, it can’t be loss or permanent inactivation of genetic material, because if he put an adult nucleus into the right environment (in this case, an ‘empty’ unfertilised egg) it forgot all about this memory of which cell type it came from. It went back to being a naive nucleus from an embryo and started the whole developmental process again.

Epigenetics is the ‘something’ in these cells. The epigenetic system controls how the genes in DNA are used, in some cases for hundreds of cell division cycles, and the effects are inherited when cells divide. Epigenetic modifications to the essential blueprint exist over and above the genetic code, on top of it, and program cells for decades. But under the right circumstances, this layer of epigenetic information can be removed to reveal the same shiny DNA sequence that was always there. That’s what happened when John Gurdon placed the nuclei from fully differentiated cells into the unfertilised egg cells.

Did John Gurdon know what this process was when he generated his new baby toads? No. Does that make his achievement any less magnificent? Not at all. Darwin knew nothing about genes when he developed the theory of evolution through natural selection. Mendel knew nothing about DNA when, in an Austrian monastery garden, he developed his idea of inherited factors that are transmitted ‘true’ from generation to generation of peas. It doesn’t matter. They saw what nobody else had seen and suddenly we all had a new way of viewing the world.

The epigenetic landscape

Oddly enough, a conceptual framework was already in existence when John Gurdon performed his work. Go to any conference with the word ‘epigenetics’ in the title and at some point one of the speakers will refer to something called ‘Waddington’s epigenetic landscape’.


The Epigenetics Revolution

by Nessa Carey


Stephen Hawking. His Life And Work – Kitty Ferguson.

The Story and Science of One of the Most Extraordinary, Celebrated and Courageous Figures of Our Time.



Stephen Hawking is one of the most remarkable figures of our time, a Cambridge genius who has earned international celebrity and become an inspiration to those who have witnessed his triumph over disability. This is Hawking’s life story by Kitty Ferguson, written with help from Hawking himself and his close associates.

Ferguson’s Stephen Hawking’s Quest for a Theory of Everything was a Sunday Times bestseller in 1992. She has now transformed that short book into a hugely expanded, carefully researched, up-to-the-minute biography giving a rich picture of Hawking’s life: his childhood, the heart-rending beginning of his struggle with motor neurone disease, his ever-increasing international fame, and his long personal battle for survival in pursuit of a scientific understanding of the universe. Throughout, Kitty Ferguson also summarizes and explains the cutting-edge science in which Hawking has been engaged.

Stephen Hawking is written with the clarity and simplicity for which all Kitty Ferguson’s books have been praised. The result is a captivating account of an extraordinary life and mind.


The quest for a Theory of Everything

Kitty Ferguson

IN THE CENTRE of Cambridge, England, there are a handful of narrow lanes that seem hardly touched by the twentieth or twenty-first centuries. The houses and buildings represent a mixture of eras, but a step around the corner from the wider thoroughfares into any of these little byways is a step back in time, into a passage leading between old college walls or a village street with a medieval church and churchyard or a malt house. Traffic noises from equally old but busier roads nearby are barely audible. There is near silence: birdsong, voices, footsteps. Scholars and townspeople have walked here for centuries.

When I wrote my first book about Stephen Hawking in 1990, I began the story in one of those little streets, Free School Lane. It runs off Bene’t Street, beside the church of St Bene’t’s with its eleventh-century bell tower. Around the corner, in the lane, flowers and branches still droop through the iron palings of the churchyard, as they did twenty years ago. Bicycles tethered there belie the antique feel of the place, but a little way along on the right is a wall of black, rough stones with narrow slit windows belonging to the fourteenth-century Old Court of Corpus Christi College, the oldest court in Cambridge. Turn your back to that wall and you will see, high up beside a gothic-style gateway, a plaque that reads, THE CAVENDISH LABORATORY. This gateway and the passage beyond are a portal to a more recent era, oddly tucked away in the medieval street.

There is no hint of the friary that stood on this site in the twelfth century or the gardens that were later planted on its ruins. Instead, bleak, factory-like buildings, almost oppressive enough to be a prison, tower over grey asphalt pavement. The situation improves further into the complex, and in the two decades since I first wrote about it some newer buildings have gone up, but the glass walls of these well-designed modern structures are still condemned to reflect little besides the grimness of their older neighbours.

For a century, until the University of Cambridge built the ‘New’ Cavendish Labs in 1974, this complex housed one of the most important centres of physics research in the world. Here, ‘J. J.’ Thomson discovered the electron, Ernest Rutherford probed the structure of the atom, and the list goes on and on. When I attended lectures here in the 1990s (for not everything moved to the New Cavendish in 1974), enormous chalkboards were still in use, hauled noisily up and down with crank-driven chain-pulley systems to make room for the endless strings of equations in a physics lecture.

The Cockcroft Lecture Room, part of this same site, is a much more up-to-date lecture room. Here, on 29 April 1980, scientists, guests and university dignitaries gathered in steep tiers of seats, facing a two-storey wall of chalkboard and slide screen, still well before the advent of PowerPoint. The occasion was the inaugural lecture of a new Lucasian Professor of Mathematics, 38-year-old mathematician and physicist Stephen William Hawking. He had been named to this illustrious chair the previous autumn.

The title announced for his lecture was a question: ‘Is the End in Sight for Theoretical Physics?’ Hawking startled his listeners by announcing that he thought it was. He invited them to join him in a sensational escape through time and space on a quest to find the Holy Grail of science: the theory that explains the universe and everything that happens in it, what some were calling the Theory of Everything.

Watching Stephen Hawking, silent in a wheelchair while one of his students read his lecture for the audience, no one unacquainted with him would have thought he was a promising choice to lead such an adventure.

Theoretical physics was for him the great escape from a prison more grim than any suggested by the Old Cavendish Labs. Beginning when he was a graduate student in his early twenties, he had lived with encroaching disability and the promise of an early death. Hawking has amyotrophic lateral sclerosis, known in America as Lou Gehrig’s disease after the New York Yankees’ first baseman, who died of it. The progress of the disease in Hawking’s case had been slow, but by the time he became Lucasian Professor he could no longer walk, write, feed himself, or raise his head if it tipped forward. His speech was slurred and almost unintelligible except to those who knew him best. For the Lucasian lecture, he had painstakingly dictated his text earlier, so that it could be read by the student.

Jane and Stephen Hawking in the 60s.

But Hawking certainly was and is no invalid. He is an active mathematician and physicist, whom some were even then calling the most brilliant since Einstein. The Lucasian Professorship is an extremely prestigious position in the University of Cambridge, dating from 1663. The second holder of the chair was Sir Isaac Newton.

It was typical of Hawking’s iconoclasm to begin this distinguished professorship by predicting the end of his own field. He said he thought there was a good chance the so-called Theory of Everything would be found before the close of the twentieth century, leaving little for theoretical physicists like himself to do.

Since that lecture, many people have come to think of Stephen Hawking as the standard bearer of the quest for that theory. However, the candidate he named for Theory of Everything was not one of his own theories but ‘N=8 supergravity’, a theory which many physicists at that time hoped might unify all the particles and the forces of nature. Hawking is quick to point out that his work is only one part of a much larger picture, involving physicists all over the world, and also part of a very old quest.

The longing to understand the universe must surely be as ancient as human consciousness. Ever since human beings first began to look at the night skies as well as at the enormous variety of nature around them, and considered their own existence, they’ve been trying to explain all this with myths, religion, and, later, mathematics and science. We may not be much nearer to understanding the complete picture than our remotest ancestors, but most of us like to think, as does Stephen Hawking, that we are.

Hawking’s life story and his science continue to be full of paradoxes. Things are often not what they seem. Pieces that should fit together refuse to do so. Beginnings may be endings; cruel circumstances can lead to happiness, although fame and success may not; two brilliant and highly successful scientific theories taken together yield nonsense; empty space isn’t empty; black holes aren’t black; the effort to unite everything in a simple explanation reveals, instead, a fragmented picture; and a man whose appearance inspires shock and pity takes us joyfully to where the boundaries of time and space ought to be but are not.

Anywhere we look in our universe, we find that reality is astoundingly complex and elusive, sometimes alien, not always easy to take, and often impossible to predict. Beyond our universe there may be an infinite number of others. The close of the twentieth century has come and gone, and no one has discovered the Theory of Everything. Where does that leave Stephen Hawking’s prediction? Can any scientific theory truly explain it all?


“Our goal is nothing less than a complete description of the universe we live in”

THE IDEA THAT all the amazing intricacy and variety we experience in the world and the cosmos may come down to something remarkably simple is not new or far-fetched. The sage Pythagoras and his followers in southern Italy in the sixth century BC studied the relationships between lengths of strings on a lyre and the musical pitches these produced, and realized that hidden behind the confusion and complexity of nature there is pattern, order, rationality. In the two and a half millennia since, our forebears have continued to find (often, like the Pythagoreans, to their surprise and awe) that nature is less complicated than it first appears.

Imagine, if you can, that you are a super-intelligent alien who has absolutely no experience of our universe: is there a set of rules so complete that by studying them you could discover exactly what our universe is like? Suppose someone handed you that rule book. Could it possibly be a short book?

For decades, many physicists believed that the rule book is not lengthy and contains a set of fairly simple principles, perhaps even just one principle that lies behind everything that has happened, is happening, and ever will happen in our universe. In 1980, Stephen Hawking made the brash claim that we would hold the rule book in our hands by the end of the twentieth century.

My family used to own a museum facsimile of an ancient board game. Archaeologists digging in the ruins of the city of Ur in Mesopotamia had unearthed an exquisite inlaid board with a few small carved pieces. It was obviously an elaborate game, but no one knew its rules. The makers of the facsimile had tried to deduce them from the design of the board and pieces, but those like ourselves who bought the game were encouraged to make our own decisions and discoveries about how to play it.

You can think of the universe as something like that: a magnificent, elegant, mysterious game. Certainly there are rules, but the rule book didn’t come with the game. The universe is no beautiful relic like the game found at Ur. Yes, it is old, but the game continues. We and everything we know about (and much we do not) are in the thick of the play. If there is a Theory of Everything, we and everything in the universe must be obeying its principles, even while we try to discover what they are.

You would expect the complete, unabridged rules for the universe to fill a vast library or super computer. There would be rules for how galaxies form and move, for how human bodies work and fail to work, for how humans relate to one another, for how subatomic particles interact, how water freezes, how plants grow, how dogs bark: intricate rules within rules within rules. How could anyone think this could be reduced to a few principles?

Richard Feynman, the American physicist and Nobel laureate, gave an excellent example of the way the reduction process happens. There was a time, he pointed out, when we had something we called motion and something else called heat and something else again called sound. ‘But it was soon discovered,’ wrote Feynman:

“after Sir Isaac Newton explained the laws of motion, that some of these apparently different things were aspects of the same thing. For example, the phenomena of sound could be completely understood as the motion of atoms in the air. So sound was no longer considered something in addition to motion. It was also discovered that heat phenomena are easily understandable from the laws of motion. In this way, great globs of physics theory were synthesized into a simplified theory.”

Life among the Small Pieces

All matter as we normally think of it in the universe (you and I, air, ice, stars, gases, microbes, this book) is made up of minuscule building blocks called atoms. Atoms in turn are made up of smaller objects, called particles, and a lot of empty space.

The most familiar matter particles are the electrons that orbit the nuclei of atoms and the protons and neutrons that are clustered in the nuclei. Protons and neutrons are made up of even tinier particles of matter called ‘quarks’. All matter particles belong to a class of particles called ‘fermions’, named for the great Italian physicist Enrico Fermi. They have a system of messages that pass among them, causing them to act and change in various ways. A group of humans might have a message system consisting of four different services: telephone, fax, e-mail and ‘snail mail’. Not all the humans would send and receive messages and influence one another by means of all four message services. You can think of the message system among the fermions as four such message services, called forces. There is another class of particles that carry these messages among the fermions, and sometimes among themselves as well: ‘messenger’ particles, more properly called ‘bosons’. Apparently every particle in the universe is either a fermion or a boson.

One of the four fundamental forces of nature is gravity. One way of thinking about the gravitational force holding us to the Earth is as ‘messages’ carried by bosons called gravitons between the particles of the atoms in your body and the particles of the atoms in the Earth, influencing these particles to draw closer to one another. Gravity is the weakest of the forces, but, as we’ll see later, it is a very long-range force and acts on everything in the universe. When it adds up, it can dominate all the other forces.

A second force, the electromagnetic force, is messages carried by bosons called photons among the protons in the nucleus of an atom, between the protons and the electrons nearby, and among electrons. The electromagnetic force causes electrons to orbit the nucleus. On the level of everyday experience, photons show up as light, heat, radio waves, microwaves and other waves, all known as electromagnetic radiation. The electromagnetic force is also long-range and much stronger than gravity, but it acts only on particles with an electric charge.

A third message service, the strong nuclear force, causes the nucleus of the atom to hold together.

A fourth, the weak nuclear force, causes radioactivity and plays a necessary role, in stars and in the early universe, in the formation of the elements.

The gravitational force, the electromagnetic force, the strong nuclear force and the weak nuclear force: the activities of those four forces are responsible for all messages among all fermions in the universe and for all interactions among them. Without the four forces, every fermion (every particle of matter) would exist, if it existed at all, in isolation, with no means of contacting or influencing any other, oblivious to every other. To put it bluntly, whatever doesn’t happen by means of one of the four forces doesn’t happen. If that is true, a complete understanding of the forces would give us an understanding of the principles underlying everything that happens in the universe. Already we have a remarkably condensed rule book.

Much of the work of physicists in the twentieth century was aimed at learning more about how the four forces of nature operate and how they are related. In our human message system, we might discover that telephone, fax and e-mail are not really so separate after all, but can be thought of as the same thing showing up in three different ways. That discovery would ‘unify’ the three message services. In a similar way, physicists have sought, with some success, to unify the forces. They hope ultimately to find a theory which explains all four forces as one showing up in different ways, a theory that may even unite both fermions and bosons in a single family. They speak of such a theory as a unified theory.

A theory explaining the universe, the Theory of Everything, must go several steps further. Of particular interest to Stephen Hawking, it must answer the question, what was the universe like at the instant of beginning, before any time whatsoever had passed? Physicists phrase that question: what are the ‘initial conditions’ or the ‘boundary conditions at the beginning of the universe’? Because this issue of boundary conditions has been and continues to be at the heart of Hawking’s work, it behooves us to spend a little time with it.

The Boundary Challenge

Suppose you put together a layout for a model railway, then position several trains on the tracks and set the switches and throttles controlling the train speeds as you want them, all before turning on the power. You have set up boundary conditions. For this session with your train set, reality is going to begin with things in precisely this state and not in any other. Where each train will be five minutes after you turn on the power, whether any train will crash with another, depends heavily on these boundary conditions.

Imagine that when you have allowed the trains to run for ten minutes, without any interference, a friend enters the room. You switch off the power. Now you have a second set of boundary conditions: the precise position of everything in the layout at the second you switched it off. Suppose you challenge your friend to try to work out exactly where all the trains started out ten minutes earlier. There would be a host of questions besides the simple matter of where the trains are standing and how the throttles and switches are set. How quickly does each of the trains accelerate and slow down? Do certain parts of the tracks offer more resistance than others? How steep are the gradients? Is the power supply constant? Is it certain there has been nothing to interfere with the running of the train set, something no longer evident?

The whole exercise would indeed be daunting. Your friend would be in something like the position of a modern physicist trying to work out how the universe began, what were the boundary conditions at the beginning of time.

Boundary conditions in science do not apply only to the history of the universe. They simply mean the lie of the land at a particular point in time, for instance the start of an experiment in a laboratory. However, unlike the situation with the train set or a lab experiment, when considering the universe, one is often not allowed to set up boundary conditions.

One of Hawking’s favourite questions is how many ways the universe could have begun and still ended up the way we observe it today, assuming that we have correct knowledge and understanding of the laws of physics and they have not changed. He is using ‘the way we observe the universe today’ as a boundary condition and also, in a more subtle sense, using the laws of physics and the assumption that they have not changed as boundary conditions. The answer he is after is the reply to the question, what were the boundary conditions at the beginning of the universe, or the ‘initial conditions of the universe’: the exact layout at the word go, including the minimal laws that had to be in place at that moment in order to produce, at a certain time in the future, the universe as we know it today? It is in considering this question that he has produced some of his most interesting work and surprising answers.

A unified description of the particles and forces, and knowledge of the boundary conditions for the origin of the universe, would be a stupendous scientific achievement, but it would not be a Theory of Everything. In addition, such a theory must account for values that are ‘arbitrary elements’ in all present theories.

Language Lesson

Arbitrary elements include such ‘constants of nature’ as the mass and charge of the electron and the velocity of light. We observe what these are, but no theory explains or predicts them. Another example: physicists know the strength of the electromagnetic force and the weak nuclear force. The electroweak theory is a theory that unifies the two, but it cannot tell us how to calculate the difference in strength between the two forces. The difference in strength is an ‘arbitrary element’, not predicted by the theory. We know what it is from observation, and so we put it into a theory ‘by hand’. This is considered a weakness in a theory.

When scientists use the word predict, they do not mean telling the future. The question ‘Does this theory predict the speed of light?’ isn’t asking whether the theory tells us what that speed will be next Tuesday. It means, would this theory make it possible for us to work out the speed of light if it were impossible to observe what that speed is? As it happens, no present theory does predict the speed of light. It is an arbitrary element in all theories.

One of Hawking’s concerns when he wrote A Brief History of Time was that there be a clear understanding of what is meant by a theory. A theory is not Truth with a capital T, not a rule, not fact, not the final word. You might think of a theory as a toy boat. To find out whether it floats, you set it on the water. You test it. When it flounders, you pull it out of the water and make some changes, or you start again and build a different boat, benefiting from what you’ve learned from the failure.

Some theories are good boats. They float a long time. We may know there are a few leaks, but for all practical purposes they serve us well. Some serve us so well, and are so solidly supported by experiment and testing, that we begin to regard them as truth. Scientists, keeping in mind how complex and surprising our universe is, are extremely wary about calling them that. Although some theories do have a lot of experimental success to back them up and others are hardly more than a glimmer in a theorist’s eye (brilliantly designed boats that have never been tried on the water), it is risky to assume that any of them is absolute, fundamental scientific ‘truth’.

It is important, however, not to dither around for ever, continuing to call into question well-established theories without having a good reason for doing so. For science to move ahead, it is necessary to decide whether some theories are dependable enough, and match observation sufficiently well, to allow us to use them as building blocks and proceed from there. Of course, some new thought or discovery might come along and threaten to sink the boat. We’ll see an example of that later in this book.

In A Brief History of Time Stephen Hawking wrote that a scientific theory is ‘just a model of the universe, or a restricted part of it, and a set of rules that relate quantities in the model to observations that we make. It exists only in our minds and does not have any other reality (whatever that may mean)’. The easiest way to understand this definition is to look at some examples.

There is a film clip showing Hawking teaching a class of graduate students, probably in the early 1980s, with the help of his graduate assistant. By this time Hawking’s ability to speak had deteriorated so seriously that it was impossible for anyone who did not know him well to understand him. In the clip, his graduate assistant interprets Hawking’s garbled speech to say, ‘Now it just so happens that we have a model of the universe here’, and places a large cardboard cylinder upright on the seminar table. Hawking frowns and mutters something that only the assistant can understand. The assistant apologetically picks up the cylinder and turns it over to stand on its other end. Hawking nods approval, to general laughter.

A ‘model’, of course, does not have to be something like a cardboard cylinder or a drawing that we can see and touch. It can be a mental picture or even a story. Mathematical equations or creation myths can be models.

Getting back to the cardboard cylinder, how does it resemble the universe? To make a full-fledged theory out of it, Hawking would have to explain how the model is related to what we actually see around us, to ‘observations’, or to what we might observe if we had better technology. However, just because someone sets a piece of cardboard on the table and tells how it is related to the actual universe does not mean anyone should accept this as the model of the universe. We are to consider it, not swallow it hook, line and sinker. It is an idea, existing ‘only in our minds’. The cardboard cylinder may turn out to be a useful model. On the other hand, some evidence may turn up to prove that it is not. We shall have found that we are part of a slightly different game from the one the model suggested we were playing. Would that mean the theory was ‘bad’? No, it may have been a very good theory, and everyone may have learned a great deal from considering it, testing it, and having to change it or discard it. The effort to shoot it down may have required innovative thinking and experiments that will lead to something more successful or pay off in other ways.

What is it then that makes a theory a good theory? Quoting Hawking again, it must ‘accurately describe a large class of observations on the basis of a model that contains only a few arbitrary elements, and it must make definite predictions about the results of future observations’.

For example, Isaac Newton’s theory of gravity describes a very large class of observations. It predicts the behaviour of objects dropped or thrown on Earth, as well as planetary orbits.

It’s important to remember, however, that a good theory does not have to arise entirely from observation. A good theory can be a wild theory, a great leap of imagination. ‘The ability to make these intuitive leaps is really what characterizes a good theoretical physicist,’ says Hawking. However, a good theory should not be at odds with things already observed, unless it gives convincing reasons for seeming to be at odds.

Superstring theory, one of the most exciting current theories, predicts more than three dimensions of space, a prediction that certainly seems inconsistent with observation. Theorists explain the discrepancy by suggesting the extra dimensions are curled up so small we are unable to recognize them.

We’ve already seen what Hawking means by his second requirement, that a theory contain only a few arbitrary elements.

The final requirement, according to Hawking, is that it must suggest what to expect from future observations. It must challenge us to test it. It must tell us what we will observe if the theory is correct. It should also tell us what observations would prove that it is not correct. For example, Albert Einstein’s theory of general relativity predicts that beams of light from distant stars bend a certain amount as they pass massive bodies like the sun. This prediction is testable. Tests have shown Einstein was correct.
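For reference, the deflection that general relativity predicts for a light ray passing a body of mass M at closest distance b is the standard textbook result (it is not spelled out in the text above):

```latex
\delta \approx \frac{4GM}{c^{2}\,b}
```

For light grazing the Sun this comes to about 1.75 seconds of arc, twice the value a Newtonian calculation gives, and it is this figure that the 1919 eclipse observations confirmed.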

Some theories, including most of Stephen Hawking’s, are impossible to test with our present technology, perhaps even with any conceivable future technology. They are tested with mathematics. They must be mathematically consistent with what we do know and observe. But we cannot observe the universe in its earliest stages to find out directly whether his ‘no-boundary proposal’ (to be discussed later) is correct. Although some tests were proposed for proving or disproving ‘wormholes’, Hawking does not think they would succeed. But he has told us what he thinks we will find if we ever do have the technology, and he is convinced that his theories are consistent with what we have observed so far. In some cases he has risked making some very specific predictions about the results of experiments and observations that push at the boundaries of our present capabilities.

If nature is perfectly unified, then the boundary conditions at the beginning of the universe, the most fundamental particles and the forces that govern them, and the constants of nature, are interrelated in a unique and completely compatible way, which we might be able to recognize as inevitable, absolute and self-explanatory. To reach that level of understanding would indeed be to discover the Theory of Everything, of Absolutely Everything, even the answer, perhaps, to the question of why the universe fits this description: to ‘know the Mind of God’, as Hawking termed it in A Brief History of Time, or ‘the Grand Design’, as he would phrase it less dramatically in a more recent book by that name.

Laying Down the Gauntlet

We are ready to list the challenges that faced any ‘Theory of Everything’ candidate when Hawking delivered his Lucasian Lecture in 1980. You’ll learn in due course how some requirements in this list have changed subtly since then.

– It must give us a model that unifies the forces and particles.

– It must answer the question, what were the ‘boundary conditions’ of the universe, the conditions at the very instant of beginning, before any time whatsoever passed?

– It must be ‘restrictive’, allowing few options. It should, for instance, predict precisely how many types of particles there are. If it leaves options, it must somehow account for the fact that we have the universe we have and not a slightly different one.

– It should contain few arbitrary elements. We would rather not have to peek too often at the actual universe for answers. Paradoxically, the Theory of Everything itself may be an arbitrary element. Few scientists expect it to explain why there should exist either a theory or anything at all for it to describe. It is not likely to answer Stephen Hawking’s question: ‘Why does the universe [or, for that matter, the Theory of Everything] go to all the bother of existing?’

– It must predict a universe like the universe we observe or else explain convincingly why there are discrepancies. If it predicts that the speed of light is ten miles per hour, or disallows penguins or pulsars, we have a problem. A Theory of Everything must find a way to survive comparison with what we observe.

– It should be simple, although it must allow for enormous complexity. The physicist John Archibald Wheeler of Princeton wrote:

“Behind it all is surely an idea so simple, so beautiful, so compelling that when in a decade, a century, or a millennium we grasp it, we will all say to each other, how could it have been otherwise? How could we have been so stupid for so long?”

The most profound theories, such as Newton’s theory of gravity and Einstein’s relativity theories, are simple in the way Wheeler described.

– It must solve the enigma of combining Einstein’s theory of general relativity (a theory that explains gravity) with quantum mechanics (the theory we use successfully when talking about the other three forces).

This is a challenge that Stephen Hawking has taken up. We introduce the problem here. You will understand it better after reading about the uncertainty principle of quantum mechanics in this chapter and about general relativity later.

Theory Meets Theory

Einstein’s theory of general relativity is the theory of the large and the very large: stars, planets, galaxies, for instance. It does an excellent job of explaining how gravity works on that level.

Quantum mechanics is the theory of the very small. It describes the forces of nature as messages among fermions (matter particles). Quantum mechanics also contains something extremely frustrating, the uncertainty principle: we can never know precisely both the position of a particle and its momentum (how it is moving) at the same time. In spite of this problem, quantum mechanics does an excellent job of explaining things on the level of the very small.

One way to combine these two great twentieth century theories into one unified theory would be to explain gravity, more successfully than has been possible so far, as an exchange of messenger particles, as we do with the other three forces. Another avenue is to rethink general relativity in the light of the uncertainty principle.

Explaining gravity as an exchange of messenger particles presents problems. When you think of the force holding you to the Earth as the exchange of gravitons (messenger particles of gravity) between the matter particles in your body and the matter particles that make up the Earth, you are describing the gravitational force in a quantum-mechanical way. But because all these gravitons are also exchanging gravitons among themselves, mathematically this is a messy business. We get infinities, mathematical nonsense.

Physical theories cannot really handle infinities. When they have appeared in other theories, theorists have resorted to something known as ‘renormalization’. Richard Feynman used renormalization when he developed a theory to explain the electromagnetic force, but he was far from pleased about it. ‘No matter how clever the word,’ he wrote, ‘it is what I would call a dippy process!’ It involves putting in other infinities and letting the infinities cancel each other out. It does sound dubious, but in many cases it seems to work in practice. The resulting theories agree with observation remarkably well.

Renormalization works in the case of electromagnetism, but it fails in the case of gravity. The infinities in the gravitational force are of a much nastier breed than those in the electromagnetic force. They refuse to go away. Supergravity, the theory Hawking spoke about in his Lucasian lecture, and superstring theory, in which the basic objects in the universe are not pointlike particles but tiny strings or loops of string, began to make promising inroads in the twentieth century; and later in this book we shall be looking at even more promising recent developments. But the problem is not completely solved.

On the other hand, what if we allow quantum mechanics to invade the study of the very large, the realm where gravity seems to reign supreme? What happens when we rethink what general relativity tells us about gravity in the light of what we know about the uncertainty principle, the principle that you can’t measure accurately the position and the momentum of a particle at the same time? Hawking’s work along these lines has had bizarre results: black holes aren’t black, and the boundary conditions may be that there are no boundaries.

While we are listing paradoxes, here’s another: empty space isn’t empty. Later in this book we’ll discuss how we arrive at that conclusion. For now be content to know that the uncertainty principle means that so-called empty space teems with particles and antiparticles. (The matter-antimatter used in science fiction is a familiar example.)

General relativity tells us that the presence of matter or energy makes spacetime curve, or warp. We’ve already mentioned one result of that curvature: the bending of light beams from distant stars as they pass a massive body like the sun.

Keep those two points in mind: (1) ‘Empty’ space is filled with particles and antiparticles, adding up to an enormous amount of energy. (2) The presence of this energy causes curvature of spacetime.

If both are true the entire universe ought to be curled up into a small ball. This hasn’t happened. When general relativity and quantum mechanics work together, what they predict seems to be dead wrong. Both general relativity and quantum mechanics are exceptionally good theories, two of the outstanding intellectual achievements of the twentieth century. They serve us magnificently not only for theoretical purposes but in many practical ways. Nevertheless, put together they yield infinities and nonsense. The Theory of Everything must somehow resolve that nonsense.

Predicting the Details

Once again imagine that you are an alien who has never seen our universe. With the Theory of Everything you ought nevertheless to be able to predict everything about it, right? It’s possible you can predict suns and planets and galaxies and black holes and quasars, but can you predict next year’s Derby winner? How specific can you be? Not very. The calculations necessary to study all the data in the universe are ludicrously far beyond the capacity of any imaginable computer. Hawking points out that although we can solve the equations for the movement of two bodies in Newton’s theory of gravity, we can’t solve them exactly for three bodies, not because Newton’s theory doesn’t work for three bodies but because the maths is too complicated. The real universe, needless to say, has more than three bodies in it.
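The three-body point can be made concrete. What follows is a minimal illustrative sketch, not anything from the text: the masses, positions and units are made up. Because Newton’s equations have no general closed-form solution for three bodies, in practice one approximates by stepping the equations of motion forward in small increments of time.

```python
import math

def accelerations(positions, masses, G=1.0):
    """Newtonian gravitational acceleration on each body from the others."""
    n = len(positions)
    accs = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r = math.hypot(dx, dy)
            # Inverse-square law: a = G * m_j * r_vec / |r|^3
            accs[i][0] += G * masses[j] * dx / r**3
            accs[i][1] += G * masses[j] * dy / r**3
    return accs

def step(positions, velocities, masses, dt):
    """One leapfrog (kick-drift-kick) time step."""
    a = accelerations(positions, masses)
    half_v = [[v[0] + 0.5 * dt * ai[0], v[1] + 0.5 * dt * ai[1]]
              for v, ai in zip(velocities, a)]
    positions = [[p[0] + dt * v[0], p[1] + dt * v[1]]
                 for p, v in zip(positions, half_v)]
    a = accelerations(positions, masses)
    velocities = [[v[0] + 0.5 * dt * ai[0], v[1] + 0.5 * dt * ai[1]]
                  for v, ai in zip(half_v, a)]
    return positions, velocities

# Three bodies in an arbitrary, invented starting configuration.
pos = [[1.0, 0.0], [-1.0, 0.0], [0.0, 0.5]]
vel = [[0.0, 0.4], [0.0, -0.4], [0.0, 0.0]]
masses = [1.0, 1.0, 0.001]

for _ in range(1000):
    pos, vel = step(pos, vel, masses, dt=0.001)

print(pos)  # a numerical approximation; no exact formula exists
```

Even this toy version only approximates the true trajectories; the point of the passage stands: for the real universe the computation is hopeless, not the theory.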

Nor can we predict our health, although we understand the principles that underlie medicine, the principles of chemistry and biology, extremely well. The problem again is that there are too many billions upon billions of details in a real-life system, even when that system is just one human body.

With the Theory of Everything in our hands we’d still be a staggeringly long way from predicting everything. Even if the underlying principles are simple and well understood, the way they work out is enormously complicated. ‘A minute to learn, the lifetime of the universe to master’, to paraphrase an advertising slogan. ‘Lifetime of the universe to master’ is a gross understatement.

Where does that leave us? What horse will win the Grand National next year is predictable with the Theory of Everything, but no computer can hold all the data or do the maths to make the prediction. Is that correct?

There’s a further problem. We must look again at the uncertainty principle of quantum mechanics.

The Fuzziness of the Very Small

At the level of the very small, the quantum level of the universe, the uncertainty principle also limits our ability to predict.

Think of all those odd, busy inhabitants of the quantum world, both fermions and bosons. They’re an impressive zoo of particles. Among the fermions there are electrons, protons and neutrons. Each proton or neutron is, in turn, made up of three quarks, which are also fermions. Then we have the bosons: photons (messengers of the electromagnetic force), gravitons (the gravitational force), gluons (the strong force), and Ws and Zs (the weak force). It would be helpful to know where all these and many others are, where they are going, and how quickly they are getting there. Is it possible to find out?

The diagram of an atom (fig 2.1) is the model proposed by New Zealander Ernest Rutherford at the Cavendish Labs in Cambridge early in the twentieth century. It shows electrons orbiting the nucleus of the atom as planets orbit the sun. We now know that things never really look like this on the quantum level. The orbits of electrons cannot be plotted as though electrons were planets. We do better to picture them swarming in a cloud around the nucleus. Why the blur?

The uncertainty principle makes life at the quantum level a fuzzy, imprecise affair, not only for electrons but for all the particles.

Regardless of how we go about trying to observe what happens, it is impossible to find out precisely both the momentum and the position of a particle at the same time. The more accurately we measure how the particle is moving, the less accurately we know its position, and vice versa.

It works like a seesaw: when the accuracy of one measurement goes up, the accuracy of the other must go down. We pin down one measurement only by allowing the other to become more uncertain.
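The seesaw has an exact mathematical form: Heisenberg’s uncertainty relation, a standard result of quantum mechanics quoted here for reference (it is not written out in the text):

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```

Here $\Delta x$ is the uncertainty in position, $\Delta p$ the uncertainty in momentum, and $\hbar$ the reduced Planck constant. Shrinking one factor forces the other to grow, so that the product never falls below $\hbar/2$.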

The best way to describe the activity of a particle is to study all the possible ways it might be moving and then calculate how likely one way is as opposed to another. It becomes a matter of probabilities. A particle has this probability to be moving that way or it has that probability to be here. Those probabilities are nevertheless very useful information.

It’s a little like predicting the outcome of elections. Election poll experts work with probabilities. When they deal with large enough numbers of voters, they come up with statistics that allow them to predict who will win the election and by what margin, without having to know how each individual will vote. When quantum physicists study a large number of possible paths that particles might follow, the probabilities of their moving thus and so or of being in one place rather than another become concrete information.
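The polling analogy can be sketched numerically. This is a toy illustration with invented numbers (an electorate in which 52 per cent back one candidate, and a poll of 10,000 voters): no individual vote is predicted, yet a large enough sample pins down the overall fraction.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

true_support = 0.52   # the "real" (invented) fraction backing candidate A
sample_size = 10_000  # number of voters polled

# Each poll response is uncertain individually...
poll = sum(random.random() < true_support for _ in range(sample_size))

# ...but in aggregate the estimate lands very close to the true fraction.
estimate = poll / sample_size
print(f"Polled support: {estimate:.3f}")  # close to 0.52, never exact
```

The statistics of many responses become concrete information, just as the probabilities computed over many possible particle paths do.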

Pollsters admit that interviewing an individual can influence a vote by causing the voter to become more aware of issues. Physicists have a similar dilemma. Probing the quantum level influences the answers they find.

Thus far the comparison between predicting elections and studying the quantum level seems a good one. Now it breaks down: on election day, each voter does cast a definite vote one way or another, secret perhaps but not uncertain. If pollsters placed hidden cameras in voting booths (and were not arrested), they could find out how each individual voted. It is not like that in quantum physics. Physicists have devised ingenious ways of sneaking up on particles, all to no avail. The world of elementary particles does not just seem uncertain because we haven’t been clever enough to find a successful way to observe it. It really is uncertain. No wonder Hawking, in his Lucasian lecture, called quantum mechanics ‘a theory of what we do not know and cannot predict’.

Taking this limitation into account, physicists have redefined the goal of science: the Theory of Everything will be a set of laws that make it possible to predict events up to the limit set by the uncertainty principle, and that means in many cases satisfying ourselves with statistical probabilities, not specifics.

Hawking sums up our problem. In answer to the question of whether everything is predetermined either by the Theory of Everything or by God, he says yes, he thinks it is. ‘But it might as well not be, because we can never know what is determined. If the theory has determined that we shall die by hanging, then we shall not drown. But you would have to be awfully sure that you were destined for the gallows to put to sea in a small boat during a storm.’ He regards the idea of free will as ‘a very good approximate theory of human behaviour’.

Is There Really a Theory of Everything?

Not all physicists believe there is a Theory of Everything, or, if there is, that it is possible for anyone to find it. Science may go on refining what we know by making discovery after discovery, opening boxes within boxes but never arriving at the ultimate box. Others argue that events are not entirely predictable but happen in a random fashion. Some believe God and human beings have far more freedom of give-and-take within this creation than a deterministic Theory of Everything would allow. They believe that as in the performance of a great piece of orchestral music, though the notes are written down, there may yet be enormous creativity in the playing of the notes that is not at all predetermined.

Whether a complete theory to explain the universe is within our reach or ever will be, there are those among us who want to make a try. Humans are intrepid beings with insatiable curiosity. Some, like Stephen Hawking, are particularly hard to discourage. One spokesman for those who are engaged in this science, Murray Gell-Mann, described the quest:

“It is the most persistent and greatest adventure in human history, this search to understand the universe, how it works and where it came from. It is difficult to imagine that a handful of residents of a small planet circling an insignificant star in a small galaxy have as their aim a complete understanding of the entire universe, a small speck of creation truly believing it is capable of comprehending the whole.”

The advertising slogan for the game Othello is ‘A minute to learn, a lifetime to master’.

‘Equal to anything!’

WHEN STEPHEN HAWKING was twelve years old, two of his schoolmates made a bet about his future. John McClenahan bet that Stephen ‘would never come to anything’; Basil King, that he would ‘turn out to be unusually capable’. The stake was a bag of sweets.

Young S. W. Hawking was no prodigy. Some reports claim he was brilliant in a haphazard way, but Hawking remembers that he was just another ordinary English schoolboy, slow learning to read, his handwriting the despair of his teachers. He ranked no more than halfway up in his school class, though he now says, in his defence, ‘It was a very bright class.’ Maybe someone might have predicted a career in science or engineering from the fact that Stephen was intensely interested in learning the secrets of how things such as clocks and radios work. He took them apart to find out, but he could seldom put them back together. Stephen was never well coordinated physically, and he was not keen on sports or other physical activities. He was almost always the last to be chosen for any sports team.

John McClenahan had good reason to think he would win the wager.

Basil King probably was just being a loyal friend or liked betting on long shots. Maybe he did see things about Stephen that teachers, parents and Stephen himself couldn’t see. He hasn’t claimed his bag of sweets, but it’s time he did. Because Stephen Hawking, after such an unexceptional beginning, is now one of the intellectual giants of our modern world and among its most heroic figures. How such transformations happen is a mystery that biographical details alone cannot explain. Hawking would have it that he is still ‘just a child who has never grown up. I still keep asking these how and why questions. Occasionally I find an answer.’

1942 – 1959

Stephen William Hawking was born during the Second World War, on 8 January 1942, in Oxford. It was a winter of discouragement and fear, not a happy time to be born. Hawking likes to recall that his birth was exactly three hundred years after the death of Galileo, who is called the father of modern science. But few people in January 1942 were thinking about Galileo.

Stephen’s parents, Frank and Isobel Hawking, were not wealthy. Frank’s very prosperous Yorkshire grandfather had over-extended himself buying farm land and then gone bankrupt in the great agricultural depression of the early twentieth century. His resilient wife, Frank’s grandmother and Stephen’s great-grandmother, saved the family from complete ruin by opening a school in their home. Her ability and willingness to take this unusual step are evidence that reading and education must already have been a high priority in the family.

Isobel, Stephen’s mother, was the second oldest of seven children. Her father was a family doctor in Glasgow. When Isobel was twelve, they moved to Devon.

It wasn’t easy for either family to scrape together money to send a child to Oxford, but in both cases they did. Taking on a financial burden of this magnitude was especially unusual in the case of Isobel’s parents, for few women went to university in the 1930s. Though Oxford had been admitting female students since 1878, it was only in 1920 that the university had begun granting degrees to women. Isobel’s studies ranged over an unusually wide curriculum in a university where students tended to be much more specialized than in an American liberal arts college or university. She studied philosophy, politics and economics.

Stephen’s father Frank was a meticulous, determined young man who kept a journal every day from the age of fourteen and would continue it until the end of his life. He was at Oxford earlier than Isobel, studying medical science with a speciality in tropical medicine. When the Second World War broke out he was in East Africa doing field research, and he intrepidly found his way overland to take ship for England and volunteer for military service. He was assigned instead to medical research.

Isobel held several jobs after graduation from Oxford, all of them beneath her ability and credentials as a university graduate. One was as an inspector of taxes. She so loathed it that she gave it up in disgust to become a secretary at a medical institute in Hampstead. There she met Frank Hawking. They were married in the early years of the war.

In January 1942 the Hawkings were living in Highgate, north London. In the London area hardly a night passed without air raids, and Frank and Isobel Hawking decided Isobel should go to Oxford to give birth to their baby in safety. Germany was not bombing Oxford or Cambridge, the two great English university towns, reputedly in return for a British promise not to bomb Heidelberg and Göttingen. In Oxford, the city familiar from her recent university days, Isobel spent the final week of her pregnancy first in a hotel and then, as the birth grew imminent and the hotel grew nervous, in hospital, but she was still able to go out for walks to fill her time. On one of those leisurely winter days, she happened into a bookshop and, with a book token, bought an astronomical atlas. She would later regard this as a rather prophetic purchase.

Not long after Stephen’s birth on 8 January his parents took him back to Highgate. Their home survived the war, although a V-2 rocket hit a few doors away when the Hawkings were absent, blowing out the back windows of their house and leaving glass shards sticking out of the opposite wall like little daggers. It had been a good moment to be somewhere else.

After the war the family lived in Highgate until 1950. Stephen’s sister Mary was born there in 1943 (when Stephen was less than two years old), and a second daughter, Philippa, arrived in 1946. The family would adopt another son, Edward, in 1955, when Stephen was a teenager. In Highgate Stephen attended the Byron House School, whose ‘progressive methods’ he would later blame for his not learning to read until after he left there.

When Dr Frank Hawking, beginning to be recognized as a brilliant leader in his field, became head of the Division of Parasitology at the National Institute for Medical Research, the family moved to St Albans.

Eccentric in St Albans

The Hawkings were a close family. Their home was full of good books and good music, often reverberating with the operas of Richard Wagner played at high volume on the record player. Frank and Isobel Hawking believed strongly in the value of education, a good bit of it occurring at home. Frank gave his children a grounding in, among other things, astronomy and surveying, and Isobel took them often to the museums in South Kensington, where each child had a favourite museum and none had the slightest interest in the others’ favourites. She would leave Stephen in the Science Museum and Mary in the Natural History Museum, and then stay with Philippa, too young to be left alone, at the Victoria and Albert. After a while she would collect them all again.

In St Albans the Hawkings were regarded as a highly intelligent, eccentric family. Their love of books extended to such compulsive reading habits that Stephen’s friends found it odd and a little rude of his family to sit at the dining table, uncommunicative, their noses buried in their books. Reports that the family car was a used hearse are false. For many years the Hawkings drove around in a succession of used London taxis of the black, boxlike sort. This set them apart not only because of the nature of the vehicle, but also because after the war cars of any kind were not easily available. Only families who were fairly wealthy had them at all. Frank Hawking installed a table in the back of the taxi, between the usual bench seat and the fold-down seats, so that Stephen and his siblings could play cards and games. The car and the game table were put to especially good use getting to their usual holiday location, a painted gypsy caravan and an enormous army tent set up in a field at Osmington Mills, in Dorset. The Hawking campsite was only a hundred yards from the beach. It was a rocky beach, not sand, but it was an interesting part of the coast, smuggler territory in a past age.

In the post-war years it was not unusual for families to live frugally with few luxuries, unable to afford home repairs, and, out of generosity or financial constraint, house more than two generations under one roof. But the Hawkings, though their house in St Albans was larger than many British homes, carried frugality and disrepair to an extreme. In this three-storey, strangely put-together redbrick dwelling, Frank kept bees in the cellar, and Stephen’s Scottish grandmother lived in the attic, emerging regularly to play the piano spectacularly well for local folk dances. The house was in dire need of work when the Hawkings moved in, and it stayed that way. According to Stephen’s adopted younger brother Edward, ‘It was a very large, dark house, really rather spooky, rather like a nightmare.’ The leaded stained glass at the front door must originally have been beautiful but was missing pieces. The front hall was lit only by a single bulb and its fine authentic William Morris wall covering had darkened. A greenhouse behind the rotting porch lost panes whenever there was a wind. There was no central heating, the carpeting was sparse, broken windows were not replaced. The books, packed two deep on shelves all over the house, added a modicum of insulation.

Frank Hawking would brook no complaints. One had only to put on more clothes in winter, he insisted. Frank himself was often away on research trips to Africa during the coldest months. Stephen’s sister Mary recalls thinking that fathers were ‘like migratory birds. They were there for Christmas and then they vanished until the weather got warm.’ She thought that fathers of her friends who didn’t disappear were ‘a bit odd’.

The house lent itself to imaginative escapades. Stephen and Mary competed in finding ways to get in, some of them so secret that Mary was never able to discover more than ten of the eleven that Stephen managed to use. As if one such house were not enough, Stephen had another imaginary one in an imaginary place he called Drane. It seemed he did not know where this was, only that it existed. His mother became a little frantic, so determined was he to take a bus to find it, but later, when they visited Kenwood House in Hampstead Heath, she heard him declare that this was it, the house he had seen in a dream.

‘Hawkingese’ was the name Stephen’s friends gave the Hawking ‘family dialect’. Frank Hawking himself had a stutter, and Stephen and his siblings spoke so rapidly at home that they also stumbled over their words and invented their own oral shorthand. That did not prevent Stephen from being, according to his mother, ‘always extremely conversational’. He was also ‘very imaginative, loved music and acting in plays’, also ‘rather lazy’ but ‘a self-educator from the start, like a bit of blotting paper, soaking it all up’. Part of the reason for his lack of distinction in school was that he could not be bothered with things he already knew or decided he had no need to know.

Stephen had a rather commanding nature in spite of being smaller than most of his classmates. He was well organized and capable of getting other people organized. He was also known as something of a comedian. Getting knocked around by larger boys didn’t bother him much, but he had his limits, and he could, when driven to it, turn rather fierce and daunting. His friend Simon Humphrey had a heftier build than Stephen, but Simon’s mother recalled that it was Stephen, not Simon, who on one memorable occasion swung around with his fists clenched to confront the much larger bullies who were teasing them. ‘That’s the sort of thing he did; he was equal to anything.’

The eight-year-old Stephen’s first school in St Albans was the High School for Girls, curiously named since its students included young children well below ‘high school’ age, and its Michael House admitted boys. A seven-year-old named Jane Wilde, in a class somewhat younger than Stephen’s, noticed the boy with ‘floppy golden brown hair’ as he sat ‘by the wall in the next door classroom’, but she didn’t meet him. She would later become his wife.

Stephen attended that school for only a few months, until Frank needed to stay in Africa longer than usual and Isobel accepted an invitation to take the children for four months to Majorca, off the east coast of Spain. Balmy, beautiful Majorca, the home of Isobel’s friend from her Oxford days, Beryl, and Beryl’s husband, the poet Robert Graves, was an enchanting place to spend the winter. Education was not entirely neglected, for there was a tutor for Stephen and the Graveses’ son William.

Back in St Albans after this idyllic hiatus, Stephen went for one year to Radlett, a private school, and then did well enough in his tests to qualify for a place at the more selective St Albans School, also a private school, in the shadow of the Cathedral. Though in his first year at St Albans he managed to rank no better than an astonishing third from the bottom of his class, his teachers were beginning to perceive that he was more intelligent than he was demonstrating in the classroom. His friends dubbed him ‘Einstein’, either because he seemed more intelligent than they or because they thought he was eccentric. Probably both. His friend Michael Church remembers that he had a sort of ‘overarching arrogance, some overarching sense of what the world was about’.

‘Einstein’ soon rose in ranking to about the middle of the class. He even won the Divinity prize one year. From Stephen’s earliest childhood, his father had read him stories from the Bible. ‘He was quite well versed in religious things,’ Isobel later told an interviewer. The family often enjoyed having theological debates, arguing quite happily for and against the existence of God.

Undeterred by a low class placing, ever since the age of eight or nine Stephen had been thinking more and more seriously about becoming a scientist. He was addicted to questioning how things worked and trying to find out. It seemed to him that in science he could find out the truth, not only about clocks and radios but also about everything else around him. His parents planned that at thirteen he would go to Westminster School. Frank Hawking thought his own advancement had suffered because of his parents’ poverty and the fact that he had not attended a prestigious school. Others with less ability but higher social standing had got ahead of him, or so he felt. Stephen was to have something better.

The Hawkings could not afford Westminster unless Stephen won a scholarship. Unfortunately, he was prone at this age to recurring bouts of a low fever, diagnosed as glandular fever, that sometimes was serious enough to keep him home from school in bed. As bad luck would have it, he was ill at the time of the scholarship examination. Frank’s hopes were dashed and Stephen continued at St Albans School, but he believes his education there was at least as good as the one he would have received at Westminster.

Stephen, age 14.

After the Hawkings adopted Edward in 1955, Stephen was no longer the only male sibling. Stephen accepted his new younger brother in good grace. He was, according to Stephen, ‘probably good for us. He was a rather difficult child, but one couldn’t help liking him.’

Continuing at St Albans School rather than heading off to Westminster had one distinct advantage. It meant being able to continue growing up in a little band of close friends who shared with Stephen such interests as the hazardous manufacture of fireworks in the dilapidated greenhouse and inventing board games of astounding complexity, and who relished long discussions on a wide range of subjects. Their game ‘Risk’ involved railways, factories, manufacturing, and its own stock exchange, and took days of concentrated play to finish. A feudal game had dynasties and elaborate family trees. According to Michael Church, there was something that particularly intrigued Stephen about conjuring up these worlds and setting down the laws that governed them. John McClenahan’s father had a workshop where he allowed John and Stephen to construct model aeroplanes and boats, and Stephen later remarked that he liked ‘to build working models that I could control. Since I began my Ph.D., this need has been met by my research into cosmology. If you understand how the universe operates, you control it in a way.’ In a sense, Hawking’s grown-up models of the universe stand in relation to the ‘real’ universe in the same way his childhood model aeroplanes and boats stood in relation to real aeroplanes and boats. They give an agreeable, comforting feeling of control while, in actuality, representing no control at all.

Stephen was fifteen when he learned that the universe was expanding. This shook him. ‘I was sure there must be some mistake,’ he says. ‘A static universe seemed so much more natural. It could have existed and could continue to exist for ever. But an expanding universe would change with time. If it continued to expand, it would become virtually empty. That was disturbing.’

Like many other teenagers of their generation, Stephen and his friends became fascinated with extrasensory perception (ESP). They tried to dictate the throw of dice with their minds. However, Stephen’s interest turned to disgust when he attended a lecture by someone who had investigated famous ESP studies at Duke University in the United States. The lecturer told his audience that whenever the experiments got results, the experimental techniques were faulty, and whenever the experimental techniques were not faulty, they got no results. Stephen concluded that ESP was a fraud. His scepticism about claims for psychic phenomena has not changed. To his way of thinking, people who believe such claims are stalled at the level where he was at the age of fifteen.

Ancestor of ‘Cosmos’

Probably the best of all the little group’s adventures and achievements and one that captured the attention and admiration of the entire town of St Albans was building a computer that they called LUCE (Logical Uniselector Computing Engine). Cobbled together out of recycled pieces of clocks and other mechanical and electrical items, including an old telephone switchboard, LUCE could perform simple mathematical functions. Unfortunately that teenage masterpiece no longer exists. Whatever remained of it was thrown away eventually when a new head of computing at St Albans went on a cleaning spree.

The most advanced version of LUCE was the product of Stephen’s and his friends’ final years of school before university. They were having to make hard choices about the future. Frank Hawking encouraged his son to follow him into medicine. Stephen’s sister Mary would do that, but Stephen found biology too imprecise to suit him. Biologists, he thought, observed and described things but didn’t explain them on a fundamental level. Biology also involved detailed drawings, and he wasn’t good at drawing. He wanted a subject in which he could look for exact answers and get to the root of things. If he’d known about molecular biology, his career might have been very different. At fourteen, particularly inspired by a teacher named Mr Tahta, he had decided that what he wanted to do was ‘mathematics, more mathematics, and physics’.

Stephen’s father insisted this was impractical. What jobs were there for mathematicians other than teaching? Moreover he wanted Stephen to attend his own college, University College, Oxford, and at ‘Univ’ one could not read mathematics. Stephen followed his father’s advice and began boning up on chemistry, physics and only a little maths, in preparation for entrance to Oxford. He would apply to Univ to study mainly physics and chemistry.

In 1959, during Stephen’s last year before leaving home for university, his mother Isobel and the three younger children accompanied Frank when he journeyed to India for an unusually lengthy research project. Stephen stayed in St Albans and lived for the year with the family of his friend Simon Humphrey. He continued to spend a great deal of time improving LUCE, though Dr Humphrey interrupted regularly to insist he write letters to his family, something Stephen on his own would have happily neglected. But the main task of that year had to be studying for scholarship examinations coming up in March. It was essential that Stephen perform extremely well in these examinations if there was to be even an outside chance of Oxford’s accepting him.

Students who rank no higher than halfway up in their school class seldom get into Oxford unless someone pulls strings behind the scenes. Stephen’s lacklustre performance in school gave Frank Hawking plenty of cause to think he had better begin pulling strings. Stephen’s headmaster at St Albans also had his doubts about Stephen’s chances of acceptance and a scholarship, and he suggested Stephen might wait another year. He was young to be applying to university. The two other boys planning to take the exams with him were a year older. However, both headmaster and father had underestimated Stephen’s intelligence and knowledge, and his capacity to rise to a challenge. He achieved nearly perfect marks in the physics section of the entrance examinations. His interview at Oxford with the Master of University College and the physics tutor, Dr Robert Berman, went so well there was no question but that he would be accepted to read physics and be given a scholarship. A triumphant Stephen joined his family in India for the end of their stay.

Not a Grey Man

In October 1959, aged seventeen, Hawking went up to Oxford to enter University College, his father’s college. ‘Univ’ is in the heart of Oxford, on the High Street. Founded in 1249, it is the oldest of the many colleges that together make up the University. Stephen would study natural science, with an emphasis on physics. By this time he had come to consider mathematics not as a subject to be studied for itself but as a tool for doing physics and learning how the universe behaves. He would later regret that he had not exerted more effort mastering that tool.

Oxford’s architecture, like Cambridge’s, is a magnificent hodge-podge of every style since the Middle Ages. Its intellectual and social traditions predate even its buildings and, like those of any great university, are a mix of authentic intellectual brilliance, pretentious fakery, innocent tomfoolery and true decadence. For a young man interested in any of these, Stephen’s new environment had much to offer. Nevertheless, for about a year and a half, he was lonely and bored. Many students in his year were considerably older than he, not only because he had sat his examinations early but because others had taken time off for national service. He was not inspired to relieve his boredom by exerting himself academically. He had discovered he could get by better than most by doing virtually no studying at all.

Contrary to their reputation, Oxford tutorials are often not one-to-one but two or three students with one tutor. A young man named Gordon Berry became Hawking’s tutorial partner. They were two of only four physics students who entered Univ that Michaelmas (autumn) term of 1959. This small group of newcomers, Berry, Hawking, Richard Bryan and Derek Powney, spent most of their time together, somewhat isolated from the rest of the College.

It wasn’t until he was halfway through his second year that Stephen began enjoying Oxford. When Robert Berman describes him, it’s difficult to believe he’s speaking of the same Stephen Hawking who seemed so ordinary a few years earlier and so bored the previous year. ‘He did, I think, positively make an effort to sort of come down to the other students’ level and, you know, be one of the boys. If you didn’t know about his physics and to some extent his mathematical ability, he wouldn’t have told you. He was very popular.’ Others who remember Stephen in his second and third years at Oxford describe him as lively, buoyant and adaptable. He wore his hair long, was famous for his wit, and liked classical music and science fiction.

The attitude among most Oxford students in those days, Hawking remembers, was ‘very antiwork’: ‘You were supposed either to be brilliant without effort, or to accept your limitations and get a fourth-class degree. To work hard to get a better class of degree was regarded as the mark of a grey man, the worst epithet in the Oxford vocabulary.’ Stephen’s freewheeling, independent spirit and casual attitude towards his studies fitted right in. In a typical incident one day in a tutorial, after reading a solution he had worked out, he crumpled up the paper disdainfully and propelled it across the room into the wastepaper basket.

The physics curriculum, at least for someone with Hawking’s abilities, could be navigated successfully without rising above this blasé approach. Hawking described it as ‘ridiculously easy. You could get through without going to any lectures, just by going to one or two tutorials a week. You didn’t need to remember many facts, just a few equations.’ You could also, it seems, get through without spending very much time doing experiments in the laboratory. Gordon and he found ways to use shortcuts in taking data and fake parts of the experiments. ‘We just didn’t apply ourselves,’ remembers Berry. ‘And Steve was right down there in not applying himself.’

Derek Powney tells the story of the four of them receiving an assignment having to do with electricity and magnetism. There were thirteen questions, and their tutor, Dr Berman, told them to finish as many as they could in the week before the next tutorial. At the end of the week Richard Bryan and Derek had managed to solve one and a half of the problems; Gordon only one. Stephen had not yet begun. On the day of the tutorial Stephen missed three morning lectures in order to work on the questions, and his friends thought he was about to get his comeuppance. His bleak announcement when he joined them at noon was that he had been able to solve only ten. At first they thought he was joking, until they realized he had done ten. Derek’s comment was that this was the moment Stephen’s friends recognized ‘that it was not just that we weren’t in the same street, we weren’t on the same planet’. ‘Even in Oxford, we must all have been remarkably stupid by his standards.’

His friends were not the only ones who sometimes found his intelligence impressive. Dr Berman and other dons were also beginning to recognize that Hawking had a brilliant mind, ‘completely different from his contemporaries’. ‘Undergraduate physics was simply not a challenge for him. He did very little work, really, because anything that was do-able he could do. It was only necessary for him to know something could be done, and he could do it without looking to see how other people did it. Whether he had any books I don’t know, but he didn’t have very many, and he didn’t take notes. I’m not conceited enough to think that I ever taught him anything.’ Another tutor called him the kind of student who liked finding mistakes in the textbooks better than working out the problems.

The Oxford physics course was scheduled in a way that made it easy not to see much urgent need for work. It was a three year course with no exams until the end of the third year. Hawking calculates he spent on the average about one hour per day studying: about one thousand hours in three years. ‘I’m not proud of this lack of work,’ he says. ‘I’m just describing my attitude at the time, which I shared with most of my fellow students: an attitude of complete boredom and feeling that nothing was worth making an effort for. One result of my illness has been to change all that: when you are faced with the possibility of an early death, it makes you realize that life is worth living, and that there are lots of things you want to do.’

One major explanation why Stephen’s spirits improved dramatically in the middle of his second year was that he and Gordon Berry joined the college Boat Club. Neither of them was a hefty hunk of the sort who make the best rowers. But both were light, wiry, intelligent and quick, with strong, commanding voices, and these are the attributes that college boat clubs look for when recruiting a coxswain (cox), the person who sits looking forward, facing the line of four or eight rowers, and steers the boat with handles attached to the rudder. The position of cox is definitely a position of control, something that Hawking has said appealed to him with model boats, aeroplanes and universes: a man of slight build commanding eight muscle-men.

Stephen exerted himself far more on the river, rowing and coxing for Univ, than he did at his studies. One sure way to be part of the ‘in’ crowd at Oxford was to be a member of your college rowing team. If intense boredom and a feeling that nothing was worth making an effort for were the prevailing attitudes elsewhere, all that changed on the river. Rowers, coxes and coaches regularly assembled at the boathouse at dawn, even when there was a crust of ice on the river, to perform arduous calisthenics and lift the racing shell into the water. The merciless practice went on in all weather, up and down the river, coaches bicycling along the towpath exhorting their crews. On race days emotions ran high and crowds of rowdy well-wishers sprinted along the banks of the river to keep up with their college boats. There were foggy race days when boats appeared and vanished like ghosts, and drenching race days when water filled the bottom of the boat. Boat club dinners in formal dress in the college hall lasted late and ended in battles of wine-soaked linen napkins.

All of it added up to a stupendous feeling of physical well-being, camaraderie, all-stops-out effort, and of living college life to the hilt. Stephen became a popular member of the boating crowd. At the level of intercollege competition he did well. He’d never before been good at a sport, and this was an exhilarating change. The College Boatsman of that era, Norman Dix, remembered him as an ‘adventurous type; you never knew quite what he was going to do’. Broken oars and damaged boats were not uncommon as Stephen steered tight corners and attempted to take advantage of narrow manoeuvring opportunities that other coxes avoided.

At the end of the third year, however, examinations suddenly loomed larger than any boat race. Hawking almost floundered. He’d settled on theoretical physics as his speciality. That meant a choice between two areas for graduate work: cosmology, the study of the very large; or elementary particles, the study of the very small. Hawking chose cosmology. ‘It just seemed that cosmology was more exciting, because it really did seem to involve the big question: Where did the universe come from?’

Fred Hoyle, the most distinguished British astronomer of his time, was at Cambridge. Stephen had become particularly enthusiastic about the idea of working with Hoyle when he took a summer course with one of Hoyle’s most outstanding graduate students, Jayant Narlikar. Stephen applied to do Ph.D. research at Cambridge and was accepted with the condition that he get a First from Oxford.

One thousand hours of study was meagre preparation for getting a First. However, an Oxford examination offers a choice from many questions and problems. Stephen was confident he could get through successfully by doing problems in theoretical physics and avoiding any questions that required knowledge of facts. As the examination day approached, his confidence faltered. He decided, as a fail-safe, to take the Civil Service exams and apply for a job with the Ministry of Works.

The night before his Oxford examinations Stephen was too nervous to sleep. The examination went poorly. He was to take the Civil Service exams the next morning, but he overslept and missed them. Now everything hung on his Oxford results.

As Stephen and his friends waited on tenterhooks for their results to be posted, only Gordon was confident he had done well in his examinations, well enough for a First, he believed. Gordon was wrong. He and Derek received Seconds, Richard a disappointing Third. Stephen ended up disastrously on the borderline between a First and a Second.

Faced with a borderline result, the examiners summoned Hawking for a personal interview, a ‘viva’. They questioned him about his plans. In spite of the tenseness of the situation, with his future hanging in the balance, Stephen managed to come up with the kind of remark for which he was famous among his friends: ‘If I get a First, I shall go to Cambridge. If I receive a Second, I will remain at Oxford. So I expect that you will give me a First.’ He got his First. Dr Berman said of the examiners: ‘They were intelligent enough to realize they were talking to someone far cleverer than most of themselves.’

That triumph notwithstanding, all was not well. Hawking’s adventures as a cox, his popularity, and his angst about his exams had pushed into the background a problem that he had first begun to notice that year and that refused to go away. ‘I seemed to be getting more clumsy, and I fell over once or twice for no apparent reason,’ he remembers. The problem had even invaded his halcyon existence on the river when he began to have difficulty sculling (rowing a one-man boat). During his final Oxford term, he tumbled down the stairs and landed on his head. His friends spent several hours helping him overcome a temporary loss of short- and long-term memory, insisted he go to a doctor to make sure no serious damage had been done, and encouraged him to take a Mensa intelligence test to prove to them and to himself that his mind was not affected. All seemed well, but they found it difficult to believe that his fall had been a simple accident.

There was indeed something amiss, though not as a result of his tumble and not with his mind. That summer, on a trip he and a friend took to Persia (now Iran), he became seriously ill, probably from a tourist stomach problem or a reaction to the vaccines required for the trip. It was a harrowing journey in other ways, more harrowing for his family back home than for Stephen. They lost touch with him for three weeks, during which time there was a serious earthquake in the area where he was travelling. Stephen, as it turned out, had been so ill and riding on such a bumpy bus that he didn’t notice the earthquake at all.

He finally got back home, depleted and unwell. Later there would be speculation about whether a non-sterile smallpox vaccination prior to the trip had caused his illness in Persia and also his ALS, but the latter had, in fact, begun earlier. Nevertheless, because of his illness in Persia and the increasingly troubling symptoms he was experiencing, Stephen arrived at Cambridge a more unsettled and weaker twenty-year-old than he had been at Oxford the previous spring. He moved into Trinity Hall for the Michaelmas term in the autumn of 1962.

During the summer before Stephen left for Cambridge, Jane Wilde saw him while she was out walking with her friends in St Albans. He was a ‘young man with an awkward gait, his head down, his face shielded from the world under an unruly mass of straight brown hair, immersed in his own thoughts, looking neither right nor left, lolloping along in the opposite direction’. Jane’s friend Diana King, sister of Stephen’s friend Basil King, astonished her friends by telling them that she had gone out with him. ‘He’s strange but very clever. He took me to the theatre once. He goes on Ban the Bomb marches.’



Stephen Hawking: His Life and Work

by Kitty Ferguson


Professor Stephen Hawking 1942-2018

Friends and colleagues from the University of Cambridge have paid tribute to Professor Stephen Hawking, who died on 14 March 2018 at the age of 76.

Widely regarded as one of the world’s most brilliant minds, he was known throughout the world for his contributions to science, his books, his television appearances, his lectures and through biographical films. He leaves three children and three grandchildren.

Professor Hawking broke new ground on the basic laws which govern the universe, including the revelation that black holes have a temperature and produce radiation, now known as Hawking radiation. At the same time, he also sought to explain many of these complex scientific ideas to a wider audience through popular books, most notably his bestseller A Brief History of Time.

He was awarded the CBE in 1982, was made a Companion of Honour in 1989, and was awarded the US Presidential Medal of Freedom in 2009. He was the recipient of numerous awards, medals and prizes, including the Copley Medal of the Royal Society, the Albert Einstein Award, the Gold Medal of the Royal Astronomical Society, the Fundamental Physics Prize, and the BBVA Foundation Frontiers of Knowledge Award for Basic Sciences. He was a Fellow of The Royal Society, a Member of the Pontifical Academy of Sciences, and a Member of the US National Academy of Sciences.

He achieved all this despite a decades-long battle against motor neurone disease, which was diagnosed while he was a student and which eventually led to his being confined to a wheelchair and to communicating via his instantly recognisable computerised voice. His determination in battling his condition made him a champion for those with a disability around the world.

Professor Hawking came to Cambridge in 1962 as a PhD student and rose to become the Lucasian Professor of Mathematics, a position once held by Isaac Newton, in 1979. In 2009, he retired from this position and was the Dennis Stanton Avery and Sally Tsui Wong-Avery Director of Research in the Department of Applied Mathematics and Theoretical Physics until his death. He was active scientifically and in the media until the end of his life.

Professor Stephen Toope, Vice-Chancellor of the University of Cambridge, paid tribute, saying, “Professor Hawking was a unique individual who will be remembered with warmth and affection not only in Cambridge but all over the world. His exceptional contributions to scientific knowledge and the popularisation of science and mathematics have left an indelible legacy. His character was an inspiration to millions. He will be much missed.”

Stephen William Hawking was born on January 8, 1942 in Oxford, although his family was living in north London at the time. In 1950, the family moved to St Albans, where he attended St Albans School. Despite the fact that he was always ranked at the lower end of his class by teachers, his school friends nicknamed him ‘Einstein’ and seemed to have encouraged his interest in science. In his own words, “physics and astronomy offered the hope of understanding where we came from and why we are here. I wanted to fathom the depths of the Universe.”

His ambition brought him a scholarship to University College Oxford to read Natural Science. There he studied physics and graduated with a first-class honours degree.

He then moved to Trinity Hall Cambridge and was supervised by Dennis Sciama at the Department of Applied Mathematics and Theoretical Physics for his PhD; his thesis was titled ‘Properties of Expanding Universes.’ In 2017, he made his PhD thesis freely available online via the University of Cambridge’s Open Access repository. There have been over a million attempts to download the thesis, demonstrating the enduring popularity of Professor Hawking and his academic legacy.

On completion of his PhD, he became a research fellow at Gonville and Caius College where he remained a fellow for the rest of his life. During his early years at Cambridge, he was influenced by Roger Penrose and developed the singularity theorems which show that the Universe began with the Big Bang.

An interest in singularities naturally led to an interest in black holes and his subsequent work in this area laid the foundations for the modern understanding of black holes. He proved that when black holes merge, the surface area of the final black hole must exceed the sum of the areas of the initial black holes, and he showed that this places limits on the amount of energy that can be carried away by gravitational waves in such a merger. He found that there were parallels to be drawn between the laws of thermodynamics and the behaviour of black holes. This eventually led, in 1974, to the revelation that black holes have a temperature and produce radiation, now known as Hawking radiation, a discovery which revolutionised theoretical physics.
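The area theorem described above, and the limit it places on gravitational-wave energy, can be stated compactly. The following is a standard textbook formulation (for the simplest, non-rotating case), not a quotation from Hawking's papers:

```latex
% Hawking's area theorem for the merger of two black holes:
A_{\text{final}} \;\ge\; A_1 + A_2 .

% For a non-rotating (Schwarzschild) black hole of mass M the
% horizon area is
A = \frac{16\pi G^2 M^2}{c^4},

% so the theorem implies M_f^2 \ge M_1^2 + M_2^2, which bounds
% the energy carried away by gravitational waves:
E_{\text{rad}} = (M_1 + M_2 - M_f)\,c^2
  \;\le\; \Bigl(M_1 + M_2 - \sqrt{M_1^2 + M_2^2}\,\Bigr)c^2 .
```

For two equal masses this caps the radiated energy at $(2-\sqrt{2})/2 \approx 29$ per cent of the initial mass-energy; real mergers radiate far less, consistent with the bound.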

He also realised that black holes must have an entropy, often described as a measure of how much disorder is present in a given system, equal to one quarter of the area of their event horizon: the ‘point of no return’, where the gravitational pull of a black hole becomes so strong that escape is impossible. Some forty-odd years later, the precise nature of this entropy is still a puzzle. However, these discoveries led to Hawking formulating the ‘information paradox’, which illustrates a fundamental conflict between quantum mechanics and our understanding of gravitational physics. This is probably the greatest mystery facing theoretical physicists today.
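The entropy and temperature referred to here have well-known closed forms, the Bekenstein–Hawking formulae, given below in conventional units for context:

```latex
% Bekenstein–Hawking entropy: one quarter of the horizon area A,
% measured in units of the Planck length squared, l_P^2 = \hbar G / c^3 :
S = \frac{k_B c^3 A}{4 \hbar G} = \frac{k_B A}{4\, l_P^2} .

% Hawking temperature of a black hole of mass M:
T = \frac{\hbar c^3}{8\pi G M k_B} .
```

The temperature is inversely proportional to the mass, so stellar-mass black holes are far colder than the cosmic microwave background; only very small black holes would radiate appreciably today.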

To understand black holes and cosmology requires one to develop a theory of quantum gravity. Quantum gravity is an unfinished project which is attempting to unify general relativity, the theory of gravitation and of space and time, with the ideas of quantum mechanics. Hawking’s work on black holes started a new chapter in this quest and most of his subsequent achievements centred on these ideas. Hawking recognised that quantum mechanical effects in the very early universe might provide the primordial gravitational seeds around which galaxies and other large-scale structures could later form. This theory of inflationary fluctuations, developed along with others in the early 1980s, is now supported by strong experimental evidence from the COBE, WMAP and Planck satellite observations of the cosmic microwave sky.

Another influential idea was Hawking’s ‘no boundary’ proposal which resulted from the application of quantum mechanics to the entire universe. This idea allows one to explain the creation of the universe in a way that is compatible with laws of physics as we currently understand them.

Professor Hawking’s influential books included The Large Scale Structure of Spacetime, with G F R Ellis; General Relativity: an Einstein centenary survey, with W Israel; Superspace and Supergravity, with M Rocek (1981); The Very Early Universe, with G Gibbons and S Siklos, and 300 Years of Gravitation, with W Israel.

However, it was his popular science books which took Professor Hawking beyond the academic world and made him a household name. The first of these, A Brief History of Time, was published in 1988 and became a surprise bestseller, remaining on the Sunday Times bestseller list for a record-breaking 237 weeks. Later popular books included Black Holes and Baby Universes, The Universe in a Nutshell, A Briefer History of Time, and My Brief History. He also collaborated with his daughter Lucy on a series of books for children about a character named George who has adventures in space.

In 2014, a film of his life, The Theory of Everything, was released. Based on the book by his first wife, Jane, the film follows the story of their life together, from their first meeting in Cambridge in 1964 through his subsequent academic successes and his increasing disability. The film was met with worldwide acclaim, and Eddie Redmayne, who played Stephen Hawking, won the Academy Award for Best Actor at the 2015 ceremony.

Travel was one of Professor Hawking’s pastimes. One of his first adventures was to be caught up in the 7.1-magnitude Bou’in-Zahra earthquake in Iran in 1962. In 1997 he visited the Antarctic. He plumbed the depths in a submarine, and in 2007 he experienced weightlessness during a zero-gravity flight, routine training for astronauts. On his return, he quipped, “Space, here I come.”

Writing years later on his website, Professor Hawking said:

“I have had motor neurone disease for practically all my adult life. Yet it has not prevented me from having a very attractive family and being successful in my work. I have been lucky that my condition has progressed more slowly than is often the case. But it shows that one need not lose hope.”

At a conference in Cambridge held in celebration of his 75th birthday in 2017, Professor Hawking said:

“It has been a glorious time to be alive and doing research into theoretical physics. Our picture of the Universe has changed a great deal in the last 50 years, and I’m happy if I’ve made a small contribution.”

And he said he wanted others to feel the passion he had for understanding the universal laws that govern us all.

“I want to share my excitement and enthusiasm about this quest. So remember to look up at the stars and not down at your feet. Try to make sense of what you see and wonder about what makes the universe exist. Be curious, and however difficult life may seem, there is always something you can do, and succeed at. It matters that you don’t just give up.”

How afraid of human cloning should we be? – Philip Ball. * Here’s why we’re still not cloning humans, 20 years after Dolly the sheep – Sharon Begley.

The creation of two monkeys brings the science of human cloning closer to reality. But that doesn’t mean it will happen.

Philip Ball

The cloning of macaque monkeys in China makes human reproductive cloning more conceivable. At the same time, it confirms how difficult it would be to clone a random adult – Adolf Hitler, say – from a piece of their tissue. And it changes nothing in the debate about whether such human cloning should ever happen.

Since the cloning of Dolly the sheep by scientists in Scotland in 1996, several other mammals have been cloned, including dogs, cats and pigs. But the same methods didn’t work so well for primates – like monkeys, and us. That’s why this latest step is significant. It shows that, with a bit of modification, the technique used for Dolly can create cloned, apparently healthy baby monkeys. The pair made this way by scientists at the Institute of Neuroscience in Shanghai have been christened Hua Hua and Zhong Zhong.

Crucially, the cute duo were cloned from the genetic material in cells of a macaque foetus, not from an adult monkey. This material – the chromosomes, housed in the cell’s nucleus – was extracted from the donor cell and placed inside an egg from an adult monkey, from which its own nucleus had first been removed. The egg was then stimulated to respond as if it had been fertilised, and the resulting embryo was grown in a surrogate mother’s womb.

The important additional step – not needed for Dolly and her ilk – was to add some molecules to the egg before implantation that could activate genes involved in embryo development. Without that encouragement, these genes don’t seem to “awaken” in primates, and so the embryo can’t develop. But it seems that, in adult cells, those genes can’t so easily be revived, which is what still prevents the successful cloning of adult monkeys. In contrast, Dolly was cloned from cells of an adult ewe.

The Chinese scientists want to clone monkeys to study the genetic factors behind Alzheimer’s disease. With a strain of genetically identical monkeys, they can deactivate individual genes thought to play a role in the disease and see what effect it has. Such biomedical use of primates is fraught with ethical issues of its own – it is of course the very closeness of the relationship to humans that makes such research more informative but also more disturbing.

But the research also reopens the debate about human reproductive cloning. No one can yet know if cloning of a human foetus would work this way, but it seems entirely possible. Human cloning for reproduction is banned in many countries (including the UK), and a declaration by the UN in 2005 called on all states to prohibit it as “incompatible with human dignity and the protection of human life”. Right now there is every reason to respect that advice on safety grounds alone. Hua Hua and Zhong Zhong were the only live births from six pregnancies, resulting from the implantation of 79 cloned embryos into 21 surrogates. Two baby macaques were in fact born from embryos cloned from adult cells, but both died – one from impaired body development, the other from respiratory failure.

My guess is that the success rate will improve – and that there will eventually be successful cloning from adult cells. That won’t obviate safety concerns for human cloning though, and it’s hard to see quite how the issue can ever be convincingly resolved short of actually giving it a try. That was how IVF began. Many people, including some eminent scientists, were convinced that it would lead to birth defects. But in the absence of a clear ethical framework, Robert Edwards and Patrick Steptoe were able to try it anyway in 1977. Their bold, even reckless move has now alleviated the pain of infertility for millions of people.

It’s hard to make any comparable case for human reproductive cloning – to argue that the potential benefits create a risk worth taking. To construct a scenario where cloning seems a valid option for reproduction takes a lot of ingenuity: say, where a heterosexual couple want a biological child but one of them is sure to pass on some complex genetic disorder and they object to sperm or egg donation. Even in those cases, advances in other reproductive technologies such as gene editing or the production of sperm or egg cells from other body cells seem likely to render recourse to cloning futile.

It’s not hard to think up invalid reasons for human cloning, of course – most obviously, the vanity of imagining that one is somehow creating a “copy” of oneself and thereby prolonging one’s life. That notion would not only be obnoxious but deluded. Which is not to say that it would prevent someone from giving it a go. The fantasist “human cloning company” Clonaid, run by the Raëlian cult, which spuriously claimed to have created the first cloned child in 2002, stated (with no apparent irony) in its publicity material that “a surprisingly large number” of the requests it had received “come from the Los Angeles/Hollywood area”.

Yet although human reproductive cloning would be foolish and lacking solid motivation, that doesn’t excuse some of the baseless reasons often advanced against it. Suggestions that a cloned child would be stigmatised, “diminished”, “handmade”, “unnatural”, “soulless” and the start of a slippery slope to Brave New World, echo many of the earlier objections to IVF. The cloning debate reveals more about our prejudices towards reproductive technologies in general than it does about our ability to make wise decisions about biomedical advances. A good case was never made with bad arguments.

The Guardian


Here’s why we’re still not cloning humans, 20 years after Dolly the sheep

Sharon Begley

Dolly, the first animal to be cloned from an adult of its species, was born in 1996 at the Roslin Institute in Scotland.

When her creators announced what they had done, it triggered warnings of rich people cloning themselves for spare parts, of tyrants cloning soldiers for armies, of bereaved parents cloning their dead child to produce a replacement – and promises that the technique would bring medical breakthroughs. Which raises some questions:
Why are there no human clones?
Because of scientific, ethical, and commercial reasons.

The scientists who created Dolly – named after Dolly Parton, naturally – removed the DNA from a sheep ovum, fused the ovum with a mammary epithelial cell from an adult “donor” sheep, and transplanted the result, now carrying DNA only from the donor, into a surrogate ewe. But that technique, called somatic cell nuclear transfer (SCNT), turned out not to be so easy in other species.

“I think no one realized how hard cloning would be in some species though relatively easy in others,” said legal scholar and bioethicist Hank Greely of Stanford University. “Cats: easy; dogs: hard; mice: easy; rats: hard; humans and other primates: very hard.”

There has also been no commercial motive for human cloning. Both the assisted reproduction (IVF) and pharmaceutical industries “immediately said they had no interest in human cloning,” said bioethicist George Annas of Boston University. “That was a big deal. All new technologies are driven by the profit motive,” absent which they tend to languish.

The Raelians (a cult that believes humans are the clones of aliens) claimed in 2002 that they had cloned a baby from a 31-year-old American woman, but for some reason the now 13-year-old “Eve” has never stepped forward to claim her place in history.

But surely someone has made money from Dolly-like cloning work?
Livestock cloning has become a commercial business, with ViaGen – part of biotech company Intrexon – cloning cattle, sheep, and pigs. It also clones pets. But it’s not a huge business.

In South Korea, biologist Woo Suk Hwang rebounded from scandal (in 2004, he fraudulently claimed to have cloned a human embryo) to clone hundreds of dogs, cows, pigs, and even coyotes. Price for Fido Redux: about $100,000, Nature reported.

While pet cloning “remains very expensive and very uncommon,” said Greely, “the world’s best polo pony team is made up of clones.” The thoroughbred racing industry bans clones, however.

Did Dolly start a revolution?

If you count by scientific publications, sure: There were only about 60 papers on somatic cell nuclear transfer in the decade before her birth, most of them describing (failed) attempts to use it to produce prized cattle and other commercial livestock, and 5,870 in the decade after, many of them reporting progress toward medical uses of SCNT.

Where are those medical breakthroughs?

They were premised on what’s called therapeutic cloning, to distinguish it from reproductive cloning. The idea is to take a cell from a patient, put its DNA into an ovum whose own DNA was removed, and get the ovum to begin dividing and multiplying in a lab dish, eventually producing specialized cells like neurons and pancreatic beta cells. Those cells could be used for basic research, such as to follow how a disease like ALS develops at the cellular level, or for therapy.

In 2013, a team led by reproductive biologist Shoukhrat Mitalipov of Oregon Health & Science University used somatic cell nuclear transfer to create a human cell line. There hasn’t been enough time since then for the rest of the therapeutic cloning promise to be realized.

2013? Why did it take so long?

Some animals turned out to be much harder to clone than others, and humans are really tough. It wasn’t until 2014 that scientists, led by Dieter Egli of the New York Stem Cell Foundation, used a variation on the Dolly recipe to create the first disease-specific cell lines from a patient, with type 1 diabetes. The donor’s DNA plus a DNA-free egg produced a line of cells Egli is using to grow insulin-producing beta cells that match the donor precisely, minimizing the chance of rejection. “Now you have cells that are genetically identical to the donor, which will allow us to make patient-specific cells for transplant,” Egli said. “We’re three-quarters of the way there, and that breakthrough is due to Dolly.”

So Dolly deserves the credit if such cells start to be used to cure diabetes and other diseases?

Sort of. Competing techniques, especially “reprogramming” adult cells so they can turn into (potentially) diabetes-curing beta cells and others, have diminished interest in SCNT, since it’s so much harder to pull off. But it was Dolly who showed not only that mammalian cloning can work, but also that “there is something in the egg that could take an adult cell [the sheep mammary cell] backwards in time and restore it to an embryonic state” able to become a whole new creature, said Dr. Robert Lanza, chief scientific officer of the Astellas Institute for Regenerative Medicine. “This is what spurred the discovery of iPS [induced pluripotent stem] cells,” the reprogrammed adult cells that might finally make stem-cell medicine a reality.

Did Dolly have effects outside medicine?

Yes, for endangered species. Lanza and his team adapted the Dolly-making technique to clone endangered species. The first, a gaur, was born in 2001, and their banteng (a species of wild ox) was born in 2003. Both died within days, but efforts are underway to clone such endangered species as the black-footed ferret, possibly the northern white rhino, and giant pandas, and also extinct animals such as the passenger pigeon and the mammoth, Lanza said: “We’re likely to see de-extinction become a reality in our lifetime.”

Where is Dolly now?

After developing a lung disease called jaagsiekte, she was euthanized on Feb. 14, 2003, stuffed, and put on display at the National Museum of Scotland in Edinburgh.

Physicists Just Found a Loophole in Graphene That Could Unlock Clean, Limitless Energy – Usman Abrar. 

By all measures, graphene shouldn’t exist. The fact it does comes down to a neat loophole in physics that sees an impossible 2D sheet of atoms act like a solid 3D material. New research has delved into graphene’s rippling, discovering a physical phenomenon on an atomic scale that could be exploited as a way to produce a virtually limitless supply of clean energy.

The team of physicists led by researchers from the University of Arkansas didn’t set out to discover a radical new way to power electronic devices. Their aim was far more humble – to simply watch how graphene shakes. We’re all familiar with the gritty black carbon-based material called graphite, which is commonly combined with a ceramic material to make the so-called ‘lead’ in pencils.

What we see as smears left by the pencil are actually stacked sheets of carbon atoms arranged in a ‘chicken wire’ pattern. Since these sheets aren’t bonded together, they slide easily over one another. For years scientists wondered if it was possible to isolate single sheets of graphite, leaving a 2-dimensional plane of carbon ‘chicken wire’ to stand on its own.

In 2004 a pair of physicists from the University of Manchester achieved the impossible, isolating sheets from a lump of graphite that were just an atom thick. To exist, the 2D material had to be cheating in some way, acting as a 3D material in order to provide some level of robustness. It turns out the ‘loophole’ was the random jiggling of atoms popping back and forth, giving the 2D sheet of graphene a handy third dimension.

In other words, graphene was possible because it wasn’t perfectly flat at all, but vibrated on an atomic level in such a way that its bonds didn’t spontaneously unravel. To accurately measure the level of this jiggling, physicist Paul Thibado recently led a team of graduate students in a simple study. They laid sheets of graphene across a supportive copper grid and observed the changes in the atoms’ positions using a scanning tunneling microscope. While they could record the bobbing of atoms in the graphene, the numbers didn’t really fit any expected model. They couldn’t reproduce the data they were collecting from one trial to the next.

Thibado pushed the experiment in a different direction, searching for a pattern by changing the way they looked at the data.

The team quickly found the sheets of graphene were buckling in a way not unlike the snapping back and forth of a bent piece of thin metal as it’s twisted from the sides. Patterns of small, random fluctuations combining to form sudden, dramatic shifts are known as Lévy flights. While they’ve been observed in complex systems in biology and climate, this was the first time they’d been seen on an atomic scale. By measuring the rate and scale of these graphene waves, Thibado figured it might be possible to harness the effect as an ambient-temperature power source.
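The statistical signature of a Lévy flight, many small steps punctuated by rare, enormous jumps, is easy to see in a toy simulation. This is an illustration of the general concept only, not the team's actual analysis; the Pareto tail exponent below is an arbitrary choice:

```python
import random

random.seed(0)
N = 10_000

# Gaussian step lengths: fluctuations cluster around a typical size
gauss_steps = [abs(random.gauss(0, 1)) for _ in range(N)]

# Heavy-tailed (Pareto) step lengths: the Levy-flight regime,
# where a handful of extreme jumps dwarf everything else
levy_steps = [random.paretovariate(1.5) for _ in range(N)]

def spread(steps):
    """Ratio of the largest step to the median step."""
    s = sorted(steps)
    return s[-1] / s[len(s) // 2]

print(f"Gaussian spread: {spread(gauss_steps):.1f}")  # stays small
print(f"Levy spread:     {spread(levy_steps):.1f}")   # orders of magnitude larger
```

The Gaussian walk never strays far from its typical step size, while the heavy-tailed walk is dominated by occasional dramatic shifts, which is exactly the small-fluctuations-plus-sudden-snaps pattern described above.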

So long as the graphene’s temperature allowed the atoms to shift around uncomfortably, it would continue to ripple and bend. Place electrodes to either side of sections of this buckling graphene, and you’d have a tiny shifting voltage.

By Thibado’s calculations, a single ten micron by ten micron piece of graphene could produce ten microwatts of power. It mightn’t sound impressive, but given you could fit more than 20,000 of these squares on the head of a pin, a small amount of graphene at room temperature could feasibly power something small like a wrist watch indefinitely. Better yet, it could power bioimplants that don’t need cumbersome batteries.
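Thibado's figures are easy to sanity-check. The sketch below takes the 10 micron by 10 micron, 10 microwatt patch at face value and assumes a pin head about 1.6 mm across (an assumption for illustration; the article gives no dimension):

```python
from math import pi

PATCH_SIDE_UM = 10.0         # 10 micron x 10 micron graphene patch
POWER_PER_PATCH_W = 10e-6    # ten microwatts each, per Thibado's estimate

pin_radius_um = 800.0        # assumed pin head ~1.6 mm in diameter
pin_area_um2 = pi * pin_radius_um**2

patches = pin_area_um2 / PATCH_SIDE_UM**2
total_power_W = patches * POWER_PER_PATCH_W

print(round(patches))            # roughly 20,000 patches, matching the article
print(f"{total_power_W:.2f} W")  # total power from a pin-head's worth of graphene
```

A pin-head's worth of patches works out to about 0.2 W, comfortably above the microwatt-scale draw of a quartz wristwatch, which is why the "power a watch indefinitely" claim is at least arithmetically plausible.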

As exciting as they are, these applications still need to be investigated. Fortunately Thibado is already working with scientists at the US Naval Research Laboratory to see if the concept has legs. For an impossible molecule, graphene has become something of a wonder material that has turned physics on its head. It’s already being touted as a building block for future conductors. Perhaps we’ll also be seeing it power the future of a new field of electronic devices as well.

This research was published in Physical Review Letters.

Sci-Tech Universe 

This Woman Is Said to Rival Einstein, and She’s Only 23 – Usman Abrar. 

At age 14, Sabrina Pasterski walked onto the MIT campus to request notarization of aircraft worthiness for her single-engine plane. She built it herself and had already flown the craft solo, so even within the bastion of brilliance that is MIT, people were interested. 

Nine years have passed, and now Pasterski is an MIT graduate and Harvard Ph.D. candidate in physics at age 23. (You can stay up to date with her many published papers and talks on her website.)

Pasterski focuses on understanding quantum gravity: explaining gravity within the context of quantum mechanics. She is also interested in black holes and spacetime. It’s probably no surprise that she’s known to NASA scientists, and that she has a standing job offer from Jeff Bezos’s Blue Origin.

Pasterski is exceptional in many ways, but she’s also part of a growing trend. In 1999, the number of people earning physics bachelor’s degrees in the U.S. was at its lowest point in four decades, with only 3,178 awarded that year. However, in 2015 things looked much different, according to the American Institute of Physics. That year, 8,081 bachelor’s degrees in physics were awarded — an all-time high. Physics doctorates also reached an all-time high of 1,860 in 2015. These numbers aren’t flukes or random spikes; the numbers for the previous two years were also high.

This trend is due in part to higher enrollment and lower attrition among female students. These women remain a minority in physics and astronomy, and many still face challenges such as impostor syndrome and a lack of mentoring. However, more female students in physics means more graduates overall and a more active scientific community in the U.S.

Sabrina Pasterski on YouTube 

A Strong Tradition

Sabrina Pasterski and other women in science today have benefited from being part of a proud tradition of standout female scientists. Marie Curie, the mother of modern physics, was the first woman in the history of science to win a Nobel Prize. She was the first woman in France to earn a doctorate for her scientific research, and she later became the first woman professor and lecturer at the Sorbonne in Paris. Curie’s work with radioactivity — a term she coined — transformed our understanding of the natural world, and she remains one of the most notable minds in science, regardless of gender.

Less famous — but no less significant to science — was Ada Lovelace. Intrigued by Charles Babbage’s idea for an “Analytical Engine,” a machine for computing, Lovelace published an extensive set of notes on the machine and developed an algorithm that would allow it to calculate the sequence of Bernoulli numbers. She saw the potential of the device and predicted that its algorithms could be put to many different uses. Ada was the first person to articulate the concept of a machine following rules to manipulate symbols for scientific and practical purposes, and she was posthumously recognized as the world’s first computer programmer.
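Lovelace's program computed Bernoulli numbers on a machine that was never built. As a modern illustration (not a reconstruction of her original algorithm), the same sequence falls out of the standard recurrence in a few lines of Python:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return [B_0, B_1, ..., B_n] as exact fractions, using the
    recurrence sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1)
    return B

print(bernoulli(8))
```

The output gives B_1 = -1/2, B_2 = 1/6, B_4 = -1/30, with the odd-index values beyond B_1 all zero, the same numbers Lovelace's Note G program was designed to produce.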

Rounding out this look back at female scientists is Dian Fossey, a conservation biologist who fought passionately to save mountain gorillas. Fossey studied the endangered gorillas in the mountain forests of Rwanda and learned to mimic their actions, behaviors, and sounds in order to approach them. She strongly opposed poaching, financed patrols to destroy traps, and helped arrest several poachers. In 1977, Fossey’s favorite gorilla, Digit, was killed defending his group against poachers. Fossey then became totally focused on preventing poaching: destroying gorilla traps, capturing and humiliating poachers, and even burning their camps. In December 1985, Fossey was found murdered in her camp in Rwanda. The case was never solved, although she is believed to have been killed by poachers.

Female scientists like Sabrina Pasterski are joining an amazing group and a proud tradition. Their work will inspire the scientists of tomorrow and change our understanding of the world — just as the work of historical female scientists did for them.

Sci-Tech Universe 


Sabrina Gonzalez Pasterski (born June 3, 1993) is an American physicist from Chicago, Illinois, who studies high-energy physics. She describes herself as “a proud first-generation Cuban-American & Chicago Public Schools alumna.” She completed her undergraduate studies at the Massachusetts Institute of Technology (MIT) and is currently a graduate student at Harvard University.

As a sophomore, Gonzalez Pasterski worked on the CMS experiment at the Large Hadron Collider. She is currently pursuing a Ph.D. in high-energy physics under the supervision of Andrew Strominger, who granted her academic freedom in the spring of 2015 following the 2014 discovery by Pasterski et al. of the “spin memory effect,” which may be used to detect the net effects of gravitational waves. She went on to complete the Pasterski-Strominger-Zhiboedov triangle for electromagnetism in a 2015 solo paper that Stephen Hawking cited in early 2016.


Further Research into Artificial Wombs Brings Us Closer to a Future Where Babies Grow Outside the Body – Dom Galeon. 


Around 15 million babies are born preterm or premature every year, according to the World Health Organization. This number is expected to rise, bringing more infants into the world before completing 37 weeks of gestation. How we are going to care for a growing number of premature infants is a real concern: preterm birth complications were responsible for almost a million deaths in 2015, making it the leading cause of death among children below 5 years of age.

Thankfully, there are a number of interventions that can help, many of which involve developing better incubation chambers, even artificial wombs and placentas where the premature infants can continue their growth outside the womb. One of these is an artificial womb developed by a combined team of researchers from the Women and Infants Research Foundation, the University of Western Australia, and Tohoku University Hospital, Japan.  

“Designing treatment strategies for extremely preterm infants is a challenge,” lead researcher Matt Kemp said in a press release. “At this gestational age the lungs are often too structurally and functionally under-developed for the baby to breathe easily.” Their work, published in the American Journal of Obstetrics & Gynecology, took a different approach. The key was treating the preterm infants not as babies, but as fetuses.


Their device and method successfully incubated healthy baby lambs in an ex-vivo uterine environment (EVE) for a one-week period. “At its core, our equipment is essentially a high-tech amniotic fluid bath combined with an artificial placenta. Put those together, and with careful maintenance what you’ve got is an artificial womb,” Kemp explained.

He added in the press release, “By providing an alternative means of gas exchange for the fetus, we hoped to spare the extremely preterm cardiopulmonary system from ventilation-derived injury, and save the lives of those babies whose lungs are too immature to breathe properly. The end goal is to provide preterm babies the chance to better develop their lungs and other important organs before being brought into the world.” It’s this approach that makes it revolutionary.

The scientists hope that this EVE therapy could soon help bring preterm human babies to term. “We now have a much better understanding of what works and what doesn’t, and although significant development is required, a life support system based around EVE therapy may provide an avenue to improve outcomes for extremely preterm infants.”



Caesar’s Last Breath. The Epic Story of the Air around us – Sam Kean. 

The ghosts of breaths past continue to flit around you every second of every hour, confronting you with every single yesterday.

Short of breathing from a tank, we can’t escape the air of those around us. We recycle our neighbors’ breaths all the time, even distant neighbors’. Just as light from distant stars can sparkle our irises, the remnants of a stranger’s breath from Timbuktu might come wafting in on the next breeze.

Our breaths entangle us with the historical past. Some of the molecules in your next breath might well be emissaries from 9/11 or the fall of the Berlin Wall, witnesses to World War I or the star-spangled banner over Fort McHenry. And if we extend our imagination far enough in space and time, we can conjure up some fascinating scenarios. For instance, is it possible that your next breath, this one, right here, might include some of the same air that Julius Caesar exhaled when he died?

How could something as ephemeral as a breath still linger? If nothing else, the atmosphere extends so far and wide that Caesar’s last gasp has surely been dissolved into nothingness by now, effaced into the æther. You can open a vein into the ocean, but you don’t expect a pint of blood to wash ashore two thousand years later.

Your lungs expel a half liter of air with every normal breath; a gasping Caesar probably exhaled a full liter, a volume equivalent to a balloon five inches wide. Now compare that balloon to the sheer size of the atmosphere. Depending on where you cut it off, the bulk of the atmosphere forms a shell around Earth about ten miles high. Given those dimensions, that shell has a volume of two billion cubic miles. Compared to the atmosphere at large, then, a one-liter breath represents just 0.00000000000000000001 percent of all the air on Earth. Talk about tiny: Imagine gathering together all of the hundred billion people who ever lived, you, me, every last Roman emperor and pope and Dr. Who. If we let those billions of people stand for the atmosphere, and reduce our population by that percentage, you’d have just 0.00000000001 “people” left, a speck of a few hundred cells, a last breath indeed. Compared to the atmosphere, Caesar’s gasp seems like a rounding error, a cipher, and the odds of encountering any of it in your next breath seem nil.

Consider how quickly gases spread around the planet. Within about two weeks, prevailing winds would have smeared Caesar’s last breath all around the world, in a band at roughly the same latitude as Rome, through the Caspian Sea, through southern Mongolia, through Chicago and Cape Cod. Within about two months, the breath would cover the entire Northern Hemisphere. And within a year or two, the entire globe.

The same holds true today, naturally: any breath or belch or exhaust fume anywhere on Earth will take roughly two weeks, two months, or a year or two to reach you, depending on your relative location.

While on some level (the human level) Caesar’s last breath does seem to have disappeared into the atmosphere, on a microscopic level his breath hasn’t disappeared at all, since the individual molecules that make it up still exist.

So in asking whether you just inhaled some of Caesar’s last breath, I’m really asking whether you inhaled any molecules he happened to expel at that moment.

One liter of air at any sort of reasonable temperature and pressure corresponds to approximately 25 sextillion (25,000,000,000,000,000,000,000) molecules.

When you crunch the numbers, you’ll find that roughly one particle of “Caesar air” will appear in your next breath. That number might drop a little depending on what assumptions you make, but it’s highly likely that you just inhaled some of the very atoms Caesar used to sound his cri de coeur contra Brutus. And it’s a certainty that, over the course of a day, you inhale thousands.
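Kean's back-of-the-envelope arithmetic can be checked in a few lines. This is a sketch under the rough assumptions stated in the text (a ten-mile, two-billion-cubic-mile atmospheric shell, a one-liter dying gasp, and a standard molar volume of gas):

```python
AVOGADRO = 6.022e23     # molecules per mole
MOLAR_VOLUME_L = 22.4   # liters per mole of gas at standard conditions
MILE_M = 1609.34        # meters per mile

# The text's atmosphere: ~2 billion cubic miles, converted to liters
atmosphere_L = 2e9 * MILE_M**3 * 1000

breath_L = 1.0          # Caesar's one-liter last gasp
molecules_per_breath = breath_L / MOLAR_VOLUME_L * AVOGADRO

# What fraction of the atmosphere is one breath?
fraction = breath_L / atmosphere_L

# Expected Caesar molecules in the liter you inhale next,
# assuming his breath is now mixed uniformly through the air
expected = molecules_per_breath * fraction

print(f"{molecules_per_breath:.2e} molecules per breath")  # ~2.7e22, i.e. tens of sextillions
print(f"{fraction * 100:.1e} percent of the atmosphere")   # ~1e-20 percent, as in the text
print(f"{expected:.1f} expected Caesar molecules")         # order one
```

The answer comes out at a few molecules per breath, which matches the book's "roughly one particle" order of magnitude, and a couple of thousand breaths per day is what makes inhaling thousands over a day a certainty.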

Nothing liquid or solid of Julius Caesar remains. But you and Julius are practically kissing cousins. To misquote a poet, the atoms belonging to his breath as good as belong to you.

You could pick anyone who suffered through an agonizing last breath: the masses at Pompeii, Jack the Ripper’s victims, soldiers who died during gas attacks in World War I. Or I could have picked anyone who died in bed, whose last breath was serene—the physics is identical. Heck, I could have picked Rin Tin Tin or Jumbo the giant circus elephant. Think of anything that ever breathed, from bacteria to blue whales, and some of his, her, or its last breath is either circulating inside you now or will be shortly.

Why not be more audacious? Why not go further and trace these air molecules to even bigger and wilder phenomena? Why not tell the full story of all the gases we inhale? Every milestone in Earth’s history, you see—from the first Hadean volcanic eruptions to the emergence of complex life—depended critically on the behavior and evolution of gases. Gases not only gave us our air, they reshaped our solid continents and transfigured our liquid oceans. The story of Earth is the story of its gases. Much the same can be said of human beings, especially in the past few centuries. When we finally learned to harness the raw physical power of gases, we could suddenly build steam engines and blast through billion-year-old mountains in seconds with explosives. Similarly, when we learned to exploit the chemistry of gases, we could finally make steel for skyscrapers and abolish pain in surgery and grow enough food to feed the world. Like Caesar’s last breath, that history surrounds you every second: every time the wind comes clattering through the trees, or a hot-air balloon soars overhead, or an unaccountable smell of lavender or peppermint or even flatulence wrinkles your nose, you’re awash in it. Put your hand in front of your mouth again and feel it: we can capture the world in a single breath.

This includes the formation of our very planet from a cloud of space gas 4.5 billion years ago. Later a proper atmosphere emerged on our planet, as volcanoes began expelling gases from deep inside Earth. The emergence of life then scrambled and remixed this original atmosphere, leading to the so-called oxygen catastrophe (which actually worked out pretty well for us animals). Overall the first section explains where air comes from and how gases behave in different situations.

Human beings have, well, harnessed the special talents of different gases over the past few centuries. We normally don’t think of air as having much mass or weight, but it does: if you drew an imaginary cylinder around the Eiffel Tower, the air inside it would weigh more than all the metal. And because air and other gases have weight, they can lift and push and even kill. Gases powered the Industrial Revolution and fulfilled humanity’s ancient dream of flying.

Our relationship with air has evolved in the past few decades. For one thing, we’ve changed the composition of what we breathe: the air you inhale now is not the same air your grandparents inhaled in their youth, and it’s markedly different from the air people breathed three hundred years ago.

You can survive without food, without solids, for weeks. You can survive without water, without liquids, for days. Without air, without gases, you’d last a few minutes at most. I’ll wager, though, that you spend the least amount of time thinking about what you’re breathing.

Caesar’s Last Breath aims to change that. Pure air is colorless and (ideally) odorless, and by itself it sounds like nothing. That doesn’t mean it’s mute, that it has no voice. It’s burning to tell its story. Here it is.

Caesar’s Last Breath: The Epic Story of the Air Around Us, by Sam Kean

get it from Amazon

How Color Vision Came to the Animals – Nick Stockton. 

ANIMALS ARE LIVING color. Wasps buzz with painted warnings. Birds shimmer their iridescent desires. Fish hide from predators with body colors that dapple like light across a rippling pond. And all this color on all these creatures happened because other creatures could see it.

The natural world is so showy, it’s no wonder scientists have been fascinated with animal color for centuries. Even today, the questions of how animals see, create, and use color are among the most compelling in biology.

Until the last few years, they were also at least partially unanswerable—because color researchers are only human, which means they can’t see the rich, vivid colors that other animals do. But now new technologies, like portable hyperspectral scanners and cameras small enough to fit on a bird’s head, are helping biologists see the unseen. And as described in a new Science paper, it’s a whole new world.

Visions of Life

The basics: Photons strike a surface—a rock, a plant, another animal—and that surface absorbs some photons, reflects others, refracts still others, all according to the molecular arrangement of pigments and structures. Some of those photons find their way into an animal’s eye, where specialized cells transmit the signals of those photons to the animal’s brain, which decodes them as colors and shapes.

It’s the brain that determines whether the colorful thing is a distinct and interesting form, different from the photons it received at the same time from the trees, sand, sky, lake, and so on. If it succeeds, it then has to decide whether this colorful thing is food, a potential mate, or maybe a predator. “The biology of color is all about these complex cascades of events,” says Richard Prum, an ornithologist at Yale University and co-author of the paper.

In the beginning, there was light and there was dark. That is, basic greyscale vision most likely evolved first, because animals that could anticipate the dawn or skitter away from a shadow are animals that live to breed. And the first eye-like structures—flat patches of photosensitive cells—probably didn’t resolve much more than that. It wasn’t enough. “The problem with using just light and dark is that the information is quite noisy, and one problem that comes up is determining where one object stops and another one starts,” says Innes Cuthill, a behavioral ecologist at the University of Bristol and coauthor of the new review.

Color adds context. And context on a scene is an evolutionary advantage. So, just like with smart phones, better resolution and brighter colors became competitive enterprises. For the resolution bit, the patch of light-sensing cells evolved over millions of years into a proper eye—first by recessing into a cup, then a cavity, and eventually a fluid-filled spheroid capped with a lens. For color, look deeper at those light-sensing cells. Wedged into their surfaces are proteins called opsins. Every time they get hit with a photon—a quantum piece of light itself—they transduce that signal into an electrical zap to the rudimentary animal’s rudimentary brain. The original light/dark opsin mutated into spin-offs that could detect specific ranges of wavelengths. Color vision was so important that it evolved independently multiple times in the animal kingdom—in mollusks, arthropods, and vertebrates.

In fact, primitive fish had four different opsins, to sense four spectra—red, green, blue, and ultraviolet light. That four-fold ability is called tetrachromacy, and the dinosaurs probably had it. Since dinosaurs are the ancestors of today’s birds, many modern birds are tetrachromats, too.

But modern mammals don’t see things that way. That’s probably because early mammals were small, nocturnal things that spent their first 100 million years running around in the dark, trying to keep from being eaten by tetrachromatic dinosaurs. “During that period the complicated visual system they inherited from their ancestors degraded,” says Prum. “We have a clumsy, retrofitted version of color vision. Fishes, and birds, and many lizards see a much richer world than we do.”

In fact, most mammals are dichromats, and see the world as greyish and slightly red-hued. Scientists believe that early primates regained three-color vision because spotting fresh fruit and immature leaves led to a more nutritious diet. But no matter how much you enjoy springtime or fall colors, the wildly varicolored world we humans live in now isn’t putting on a show for us. It’s mostly for bugs and birds. “Flowering plants of course have evolved to signal pollinators,” says Prum. “The fact that we find them beautiful is incidental, and the fact that we can see them at all is because of an overlap in the spectrums insects and birds can see and the ones we can see.”

Covered in Color

And as animals gained the ability to sense color, evolution kickstarted an arms race in displays—hues and patterns that aided in survival became signifiers of ace baby-making skills. Almost every expression of color in the natural world came about to signal, or obscure, a creature to something else.

For instance, “aposematism” is color used as a warning—the butterfly’s bright colors say “don’t eat me, you’ll get sick.” “Crypsis” is color used as camouflage. Color serves social purposes, too. Like, in mating. Did you know that female lions prefer brunets? Or that paper wasps can recognize each other’s faces? “Some wasps even have little black spots that act like karate belts, telling other wasps not to try and fight them,” says Elizabeth Tibbetts, an entomologist at the University of Michigan.

But animals display colors using two very different methods. The first is with pigments, colored substances created by cells called chromatophores (in reptiles, fish, and cephalopods), and melanocytes (in mammals and birds). They absorb most wavelengths of light and reflect just a few, limiting both their range and brilliance. For instance, most animals cannot naturally produce red; they synthesize it from plant chemicals called carotenoids.

The other way animals make color is with nanoscale structures. Insects and, to a lesser degree, birds are the masters of structural color. And compared to pigment, structure is fabulous. Structural coloration scatters light into vibrant, shimmering colors, like the iridescent bib on a Broad-tailed hummingbird, or the metallic carapace of a Golden scarab beetle. And scientists aren’t quite sure why iridescence evolved. Probably to signal mates, but still: Why?

Decoding the rainbow of life

The question of iridescence is similar to most questions scientists have about animal coloration. They understand what the colors do in broad strokes, but there’s still a lot of nuance to tease out. This is mostly because, until recently, they were limited to seeing the natural world through human eyes. “If you ask the question, what’s this color for, you should approach it the way animals see those colors,” says Tim Caro, a wildlife biologist at UC Davis and the organizing force behind the new paper. (Speaking of mysteries, Caro recently figured out why zebras have stripes.)

Take the peacock. “The male’s tail is beautiful, and it evolved to impress the female. But the female may be impressed in a different way than you or I,” Caro says. Humans tend to gaze at the shimmering eyes at the tip of each tail feather; peahens typically look at the base of the feathers, where they attach to the peacock’s rump. Why does the peahen find the base of the feathers sexy? No one knows. But until scientists strapped tiny cameras, spun off from the mobile phone industry, to the birds’ heads, they couldn’t even track the peahens’ gaze.

Another new tech: Advanced nanomaterials give scientists the ability to recreate the structures animals use to bend light into iridescent displays. By recreating those structures, scientists can figure out how genetically expensive they are to make.

Likewise, new magnification techniques have allowed scientists to look into an animal’s eye structure. You might have read about how mantis shrimp have not three or four but a whopping 12 different color receptors and how they see the world in psychedelic hyperspectral saturation. This isn’t quite true. Those color channels aren’t linked together—not like they are in other animals. The shrimp probably aren’t seeing 12 different, overlapping color spectra. “We are thinking maybe those color receptors are being turned on or off by some other, non-color, signal,” says Caro.

But perhaps the most important modern innovation in biological color research is getting all the different people from different disciplines together. “There are a lot of different sorts of people working on color,” says Caro. “Some behavioral biologists, some neurophysiologists, some anthropologists, some structural biologists, and so on.”

And these scientists are scattered all over the globe. He says the reason he brought everyone to Berlin is so they could finally synthesize all these sub-disciplines together, and move into a broader understanding of color in the world. The most important technology in understanding animal color vision isn’t a camera or a nanotech surface. It’s an airplane. Or the internet.


The Asteroid that finished the Dinosaurs. A grain of sand hitting a bowling ball. – Liz Dunphy. 

The asteroid impact that doomed the dinosaurs to extinction had such a devastating effect on Earth by pure chance, scientists say.

If it had struck 30 seconds later – or 30 seconds sooner – it would have caused far less damage and the dinosaurs would probably have survived.

As a result, man might never have become the planet’s dominant species, a BBC documentary reveals tonight, according to Daily Mail.

The asteroid struck 66 million years ago, 24 miles off the Yucatan Peninsula in Mexico, causing a crater 111 miles wide and 20 miles deep. Scientists who drilled into the crater found the rock was rich in sulphur compounds.

The impact of the asteroid vaporised this rock, filling the air with a cloud of dust similar to that created by a catastrophic volcanic eruption.

This blocked out the sun and cooled the planet dramatically – below freezing for a decade – wiping out most life.

Those dinosaurs not killed by fumes, molten rock falling from the sky or tsunamis would have starved as their food ran out.

Yet if the asteroid, which is estimated to have been nine miles across and travelling at 40,000 mph, had arrived a few seconds sooner or later, it could have landed in deep water in the Atlantic or Pacific.

That would have meant that mostly sea water would have been vaporised, causing far less harm. Instead, the effect of the impact of a comparatively tiny asteroid was magnified catastrophically.

Sean Gulick, professor of geophysics at the University of Texas at Austin, who organised the drilling with Professor Joanna Morgan, of Imperial College London, said: “That asteroid struck Earth in a very unfortunate place.”

Professor Morgan said research suggests 100 billion tons of sulphates were thrown into the atmosphere, adding: “That would be enough to cool the planet for a decade and wipe out most life.”

The asteroid’s impact was so huge that the blast led to the extermination of three quarters of all life on Earth, including most of the dinosaurs.

But this chance event allowed smaller mammals – and ultimately humans – the chance to thrive.

Had the asteroid crashed seconds earlier or later it would have hit the ocean, potentially causing much less vaporisation which may have allowed the dinosaurs to survive, scientists now believe.

Professor Joanna Morgan of Imperial College London has co-led a major new study with Sean Gulick, professor of geophysics at the University of Texas at Austin, into the impact of this earth-changing asteroid.

The results of this major study will be revealed in a new BBC documentary called The Night the Dinosaurs Died which will be screened in the UK tomorrow and is presented by Professors Alice Roberts and Ben Garrod.

In the study, researchers have drilled into the peak ring of the Chicxulub crater in the Gulf of Mexico where the asteroid hit.

Their research has unearthed insights into how impacts can help shape planets and possibly even provide habitat for new origins of life.

It also established a new understanding of how violent asteroid impacts cause a planet’s surface to behave like a fluid – previous scientific analysis suggested that such impacts deform the surface by melting most of the rock around the impact.

Prof Gulick said that the asteroid struck the earth at a very unfortunate place – a concentration of sulphur-rich rock which vaporised, catapulting a light-reflecting cloud into the air.

Prof Gulick explained that sulphate particles reflect light, effectively shading the earth from the sun, dramatically cooling the planet, limiting plant growth and ultimately cutting off food supplies.

This caused the decline and death of the dinosaurs, a group that had dominated earth for 150 million years.

According to Professor Joanna Morgan, the samples suggest that more than 100bn tons of sulphates were thrown into the atmosphere with extra soot from the fires that followed.

“That would be enough to cool the planet for a decade and wipe out most life,” Prof Morgan said as reported by The Times.

But this dark day for the dinosaurs provided an opportunity for mammals and ultimately humans to evolve.

“Just half a million years after the extinction of the dinosaurs, landscapes had filled with mammals of all shapes and sizes. Chances are, if it wasn’t for that asteroid we wouldn’t be here today,” scientist and BBC presenter Prof Alice Roberts told The Times.

Rock analysis has allowed scientists to calculate the size of the impact, which indicates that the asteroid was approximately nine miles wide and hit the planet at 40,000 mph.

Relative to the Earth, this would make the asteroid equivalent to a grain of sand hitting a bowling ball.
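The grain-of-sand comparison can be sanity-checked with simple arithmetic. In the sketch below, only the nine-mile asteroid figure comes from the article; the Earth diameter is a standard figure, and the sand-grain and bowling-ball sizes are my own rough assumptions.

```python
# Sanity check: is a nine-mile asteroid to Earth really like
# a grain of sand to a bowling ball?

EARTH_DIAMETER_MILES = 7_917   # mean diameter of Earth
ASTEROID_MILES = 9             # figure quoted in the article

BOWLING_BALL_MM = 218          # regulation ball diameter (assumption)
SAND_GRAIN_MM = 0.5            # a coarse grain of sand (assumption)

asteroid_ratio = ASTEROID_MILES / EARTH_DIAMETER_MILES
sand_ratio = SAND_GRAIN_MM / BOWLING_BALL_MM

print(f"asteroid/Earth: 1 in {1/asteroid_ratio:,.0f}")  # roughly 1 in 880
print(f"sand/ball:      1 in {1/sand_ratio:,.0f}")      # roughly 1 in 440
```

The two ratios land within a factor of about two of each other, so as an order-of-magnitude image the comparison holds up.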


The 30 seconds that sentenced dinosaurs to their doom: New BBC documentary reveals the moment an asteroid NINE-MILES long hit the earth and wiped out an entire species. 

Daily Mail

Scientists Are Attempting to Unlock the Secret Potential of the Human Brain – Philip Perry. 

Sometimes, it occurs when a person suffers a nearly fatal accident or life-threatening situation. In other cases, a person is born with a developmental disorder, such as autism. But a slim margin of each group develops remarkable capabilities, such as picturing advanced mathematical figures in one’s head, recalling things perfectly, or drawing whole cityscapes from memory alone. This is known as savant syndrome. Of course, it’s exceedingly rare. But how does it work? And do we all hide spectacular capabilities deep within our brain?

“I noticed the light bouncing off a car window in the form of an arc, and the concept came to life. It clicked for me, because the circle I saw was subdivided by light rays, and I realized each ray was really a representation of pi.”

He’d acquired an exceedingly rare condition. Only about 70 people in the world have so far been identified with acquired savant syndrome. There are two ways for savant syndrome to occur: through an injury that causes brain damage, or through a developmental disorder, such as autism.


Vaccination, it’s science, there is no other side – Dr Michelle Dickinson. 

When a high percentage of people are vaccinated, there are too few susceptible people left to infect.

Science is the field of study concerned with discovering how the world works through observation and experiment.

Although anybody can do science, professional scientific researchers follow a scientific method which allows them to explain occurrences using a logical, consistent, systematic method of investigation.

This involves collecting large amounts of data from well thought out experiments and analysing that data to arrive at a well-tested, well documented, theory that is supported by the evidence.

The theory is then subjected to critique by other experts and only if approved by them is it allowed to be published in a peer reviewed journal for others to read and learn from.

As a person who reads and writes peer reviewed journal articles, I’ll admit that they can be difficult to understand, are often filled with specialist jargon, and are not usually available to the public without having to pay a fee.

This makes obtaining and analysing scientific data difficult and expensive.

What is easy to obtain and analyse is scientific information from websites and documentaries which are deliberately designed to be simple to understand, easy to access and contain memorable, shareable sound bites.

Websites, social media posts and documentaries however do not have to follow any of the rules of peer reviewed scientific method, and instead can make incredible ‘scientific’ claims based on anecdotal stories beautifully packaged into believable emotive narratives.

I mention this as the controversial anti-vaccination film Vaxxed: From Cover-Up to Catastrophe is touring New Zealand.

The movie, directed by Andrew Wakefield, the former British doctor who was struck off the medical register over an unethical study, claims to give the other side to the vaccination argument.

Let’s be clear – the whole point of peer reviewed scientific method is that there is no other side.

Science presents all sides, that’s the beauty of science, it’s transparent and open about its evidence based conclusions.

Experiments carried out over hundreds of studies by scientists all over the world involving more than 15 million children conclude clearly that vaccines are not linked to autism.

For those that don’t want to trawl through all of the peer reviewed scientific studies that have shown this, the Cochrane systematic review of research on the MMR vaccine gives a great public summary.

Despite this, there are still hundreds of websites claiming that vaccinations are dangerous, an issue emphasised this week at Grantlea Downs School in Timaru.

The school’s board of trustees (I couldn’t find out how many of its members were familiar with the scientific method) decided not to allow their students to receive the free vaccine against Human Papilloma Virus (HPV) on site.

The vaccine is highly effective in preventing infection with the HPV types responsible for about 90 per cent of HPV-caused cancers, and school-based vaccination programmes are the most convenient way for children to get protected against HPV.

Convenient vaccination programmes are important because they work on herd immunity, a form of immunity that occurs when the vaccination of a significant portion of a population provides protection for individuals who have not developed immunity due to being too young or too ill to be vaccinated.

When a high percentage of the population is protected, there are too few susceptible people left to infect and diseases become difficult to spread.
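The mechanism described above has a standard quantitative form: the herd-immunity threshold is 1 − 1/R0, where R0 is a disease's basic reproduction number. This is a textbook epidemiology sketch, not something from the article, and the R0 figures below are commonly quoted rough values, not precise measurements.

```python
# Minimal sketch of the standard herd-immunity threshold, 1 - 1/R0.
# R0 = average number of people one infected person infects in a
# fully susceptible population.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune so each infection
    causes, on average, less than one new infection."""
    return 1.0 - 1.0 / r0

# Commonly cited rough R0 values (assumptions for illustration):
for disease, r0 in [("measles", 15.0), ("mumps", 5.0), ("influenza", 1.5)]:
    print(f"{disease}: ~{herd_immunity_threshold(r0):.0%} immune needed")
```

The formula makes the article's point concrete: the more contagious the disease (higher R0), the higher the vaccination coverage needed before transmission chains fizzle out, which is why even modest drops in childhood vaccination rates can let highly contagious diseases like measles return.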

Anti-immunisation websites and movies create fear with cherry-picked science, and reductions in childhood vaccinations will allow disease transmission chains to rebuild, meaning herd immunity will no longer be effective.

I’m all about freedom of choice, but if you are going to put other people at risk, you should have a really good reason. A movie or website isn’t one of them.

Dr Michelle Dickinson, also known as Nanogirl, is an Auckland University nanotechnologist who is passionate about getting Kiwis hooked on science.

NZ Herald

The Kiwi who split the atom. 1917: Sir Ernest Rutherford, our greatest scientist’s biggest breakthrough.

Ernest Rutherford and Hans Geiger in the physics laboratory at Manchester University.

“I have broken the machine and touched the ghost of matter.”

So proclaimed Sir Ernest Rutherford a century ago, in the same year he became the first person to split the atom.

By that point, the Nelson-born godfather of modern atomic physics had already received a Nobel Prize in Chemistry (in 1908) and been a star scientist at Cambridge, McGill and Manchester universities.

His greatest triumphs came in three landmark discoveries, which forever changed modern science and created the field of nuclear physics.

In the first, for which he received his Nobel Prize, he conducted a clever experiment using an air-tight glass tube and radioactive radium emanation to prove that alpha particles are helium ions.

In doing so, Rutherford effectively had, said John Campbell in the Dictionary of New Zealand Biography, “unravelled the mysteries of radioactivity, showing that some heavy atoms spontaneously decay into slightly lighter, and chemically different, atoms”.

“This discovery of the natural transmutation of elements first brought him to world attention.”

Later, Rutherford and his young student, Ernest Marsden – who would become a world-renowned physicist in his own right – conducted an experiment that allowed Rutherford to deduce that nearly all of the mass of an atom was concentrated in a nucleus a thousand times smaller than the atom itself.

This gave birth to the nuclear model of the atom – and later formed the basis for explaining the stable orbits of electrons within it.

In his third and most famous discovery, in 1917, Rutherford succeeded in splitting the atom itself, becoming the first human to create a nuclear reaction.

Albert Einstein called Rutherford a “second Newton” – but the famed scientist wasn’t so different from other ingenious Kiwi innovators.

Of his knack for unorthodox solutions to experiments, Rutherford noted his early years in New Zealand: “We don’t have the money, so we have to think.”

Jamie Morton, NZ Herald

Black humour is sign of high intelligence, study suggests | Science | The Guardian.  

I’ve always known that 🙂

Who needs Mensa? If you want to find out if someone has a high IQ, just tell them a string of sick jokes and then gauge their reaction.

A new study in the journal Cognitive Processing has found that intelligence plays a key role in the appreciation of black humour – as well as several other factors, notably a person’s aggression levels.

A team of researchers, led by Ulrike Willinger at the Medical University of Vienna, asked 156 people, who had an average age of 33 and included 76 women, to rate their comprehension and enjoyment of 12 darkly humorous cartoons taken from The Black Book by the renowned German cartoonist Uli Stein.

Examples include a cartoon depicting a morgue where a physician lifts a cover sheet off a body. A woman confirms: “Sure, that’s my husband – anyway, which washing powder did you use to get that so white?”

Participants were also tested for verbal and non-verbal IQ and asked about their mood, aggression and educational background.

The group with the highest sick humour appreciation and comprehension scored the highest in verbal and non-verbal IQ tests, were better educated, and scored lower for aggression and bad mood.

The Guardian

NASA Discovers Planet Covered in Cannabis, Scientists Shocked. 

“We always think young people aren’t interested by anything but it’s false. Young people love smoking pot,” says David Charbonneau, astronomer at the Harvard-Smithsonian Center for Astrophysics. “Chlorophyll concentration analyses generated by Kepler lead us to believe that the level of THC in these marijuana plants is 3000% higher than the plants found on Earth. If that doesn’t motivate young people to explore space, I don’t know what will”. The Joint Blog

Research suggests being lazy is a sign of high intelligence. 

New research seems to prove the theory that brainy people spend more time lazing around than their active counterparts. 

Findings from a US-based study seem to support the idea that people with a high IQ get bored less easily, leading them to spend more time engaged in thought. More active people, by contrast, may need to stimulate their minds with external activities, either to escape their thoughts or because they get bored quickly. The Independent

The occipito-temporal region doesn’t get much interest.

The brain bank is the collection of donated brains at Auckland University, a remarkable asset for researchers seeking ways to treat devastating neurological disorders such as Huntington’s disease, Parkinson’s, Alzheimer’s, stroke and motor neurone disease.

No one can predict what parts of the brain will be needed in the future, so they are all stored with great care and respect. NZ Herald