Epigenetics: The Evolution Revolution – Israel Rosenfield and Edward Ziff * The Epigenetics Revolution – Nessa Carey.

So something that happened in one pregnant population affected their children’s children. This raised the really puzzling question of how these effects were passed on to subsequent generations.

These effects arise from a newly recognized genetic mechanism called epigenesis, which enables the environment to make long-lasting changes in the way genes are expressed.

That’s what happens when cells read the genetic code that’s in DNA. The same script can result in different productions.

Why is it that humans contain trillions of cells in hundreds of complex organs, and microscopic worms contain about a thousand cells and only rudimentary organs, but we and the worm have the same number of genes?

We are finally starting to unravel the missing link between nature and nurture: how our environment talks to us and alters us, sometimes forever.

Israel Rosenfield and Edward Ziff

At the end of the eighteenth century, the French naturalist Jean-Baptiste Lamarck noted that life on earth had evolved over long periods of time into a striking variety of organisms. He sought to explain how they had become more and more complex. Living organisms not only evolved, Lamarck argued; they did so very slowly, “little by little and successively.” In Lamarckian theory, animals became more diverse as each creature strove toward its own “perfection,” hence the enormous variety of living things on earth. Man is the most complex life form, therefore the most perfect, and is even now evolving.

In Lamarck’s view, the evolution of life depends on variation and the accumulation of small, gradual changes. These are also at the center of Darwin’s theory of evolution, yet Darwin wrote that Lamarck’s ideas were “veritable rubbish.” Darwinian evolution is driven by genetic variation combined with natural selection, the process whereby some variations give their bearers better reproductive success in a given environment than other organisms have. Lamarckian evolution, on the other hand, depends on the inheritance of acquired characteristics. Giraffes, for example, got their long necks by stretching to eat leaves from tall trees, and stretched necks were inherited by their offspring, though Lamarck did not explain how this might be possible.

When the molecular structure of DNA was discovered in 1953, it became dogma in the teaching of biology that DNA and its coded information could not be altered in any way by the environment or a person’s way of life. The environment, it was known, could stimulate the expression of a gene. Having a light shone in one’s eyes or suffering pain, for instance, stimulates the activity of neurons and in doing so changes the activity of genes those neurons contain, producing instructions for making proteins or other molecules that play a central part in our bodies.

The structure of the DNA neighboring the gene provides a list of instructions, a gene program, that determines under what circumstances the gene is expressed. And it was held that these instructions could not be altered by the environment. Only mutations, which are errors introduced at random, could change the instructions or the information encoded in the gene itself and drive evolution through natural selection. Scientists discredited any Lamarckian claims that the environment can make lasting, perhaps heritable alterations in gene structure or function.

But new ideas closely related to Lamarck’s eighteenth-century views have become central to our understanding of genetics. In the past fifteen years these ideas, which belong to a developing field of study called epigenetics, have been discussed in numerous articles and several books, including Nessa Carey’s 2012 study The Epigenetics Revolution and The Deepest Well, a recent work on childhood trauma by the physician Nadine Burke Harris.

The developing literature surrounding epigenetics has forced biologists to consider the possibility that gene expression could be influenced by some heritable environmental factors previously believed to have had no effect on it, like stress or deprivation. “The DNA blueprint,” Carey writes,

isn’t a sufficient explanation for all the sometimes wonderful, sometimes awful, complexity of life. If the DNA sequence was all that mattered, identical twins would always be absolutely identical in every way. Babies born to malnourished mothers would gain weight as easily as other babies who had a healthier start in life.

That might seem a commonsensical view. But it runs counter to decades of scientific thought about the independence of the genetic program from environmental influence. What findings have made this new view possible?

In 1975, two English biologists, Robin Holliday and John Pugh, and an American biologist, Arthur Riggs, independently suggested that methylation, a chemical modification of DNA that is heritable and can be induced by environmental influences, had an important part in controlling gene expression. How it did this was not understood, but the idea that through methylation the environment could, in fact, alter not only gene expression but also the genetic program rapidly took root in the scientific community.

As scientists came to better understand the function of methylation in altering gene expression, they realized that extreme environmental stress, the results of which had earlier seemed self-explanatory, could have additional biological effects on the organisms that suffered it. Experiments with laboratory animals have now shown that these outcomes are based on the transmission of acquired changes in genetic function. Childhood abuse, trauma, famine, and ethnic prejudice may, it turns out, have long-term consequences for the functioning of our genes.

These effects arise from a newly recognized genetic mechanism called epigenesis, which enables the environment to make long-lasting changes in the way genes are expressed.

Epigenesis does not change the information coded in the genes or a person’s genetic makeup (the genes themselves are not affected); instead, it alters the manner in which they are “read” by blocking access to certain genes and preventing their expression.

This mechanism can be the hidden cause of our feelings of depression, anxiety, or paranoia. Perhaps most surprising of all, this alteration could, in some cases, be passed on to future generations who have never directly experienced the stresses that caused their forebears’ depression or ill health.

Numerous clinical studies have shown that childhood trauma, arising from parental death or divorce, neglect, violence, abuse, lack of nutrition or shelter, or other stressful circumstances, can give rise to a variety of health problems in adults: heart disease, cancer, mood and dietary disorders, alcohol and drug abuse, infertility, suicidal behavior, learning deficits, and sleep disorders.

Since the publication in 2003 of an influential paper by Rudolf Jaenisch and Adrian Bird, we have started to understand the genetic mechanisms that explain why this is the case. The body and the brain normally respond to danger and frightening experiences by releasing a hormone, a glucocorticoid that controls stress. This hormone prepares us for various challenges by adjusting heart rate, energy production, and brain function; it binds to a protein called the glucocorticoid receptor in nerve cells of the brain.

Normally, this binding shuts off further glucocorticoid production, so that when one no longer perceives a danger, the stress response abates. However, as Gustavo Turecki and Michael Meaney note in a 2016 paper surveying more than a decade’s worth of findings about epigenetics, the gene for the receptor is inactive in people who have experienced childhood stress; as a result, they produce few receptors. Without receptors to bind to, glucocorticoids cannot shut off their own production, so the hormone keeps being released and the stress response continues, even after the threat has subsided.

“The term for this is disruption of feedback inhibition,” Harris writes. It is as if “the body’s stress thermostat is broken. Instead of shutting off this supply of ‘heat’ when a certain point is reached, it just keeps on blasting cortisol through your system.”
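To see how a silenced receptor gene produces the “broken thermostat” Harris describes, here is a minimal toy simulation of the feedback loop just described. It is only a sketch of the logic; the function and its parameter values are invented for illustration and are not drawn from the studies cited.

```python
# A toy model of feedback inhibition (illustrative only; parameter values
# are arbitrary and are not taken from the studies discussed above).
def stress_hormone_levels(receptor_level, steps=10, release=1.0, clearance=0.1):
    """Hormone level over time under a constant stressor.

    receptor_level: fraction of functional glucocorticoid receptors (0 to 1).
    Hormone bound to receptors feeds back and shuts off further release;
    with few receptors, release continues and the level keeps climbing.
    """
    level = 0.0
    history = []
    for _ in range(steps):
        feedback = receptor_level * level          # inhibition signal
        production = max(release - feedback, 0.0)  # release shuts down as feedback rises
        level = (level + production) * (1 - clearance)  # slow metabolic clearance
        history.append(round(level, 1))
    return history

print("Normal receptors:  ", stress_hormone_levels(receptor_level=1.0))
print("Silenced receptors:", stress_hormone_levels(receptor_level=0.1))
```

With functional receptors the hormone level settles almost immediately; with most receptors silenced it keeps climbing toward a plateau roughly five times higher, mirroring a stress response that does not shut down.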

It is now known that childhood stress can deactivate the receptor gene by an epigenetic mechanism, namely, by creating a physical barrier to the information for which the gene codes. What creates this barrier is DNA methylation, by which methyl groups known as methyl marks (composed of one carbon and three hydrogen atoms) are added to DNA.

DNA methylation is long-lasting and keeps chromatin, the DNA-protein complex that makes up the chromosomes containing the genes, in a highly folded structure that blocks access to select genes by the gene expression machinery, effectively shutting the genes down. The long-term consequences are chronic inflammation, diabetes, heart disease, obesity, schizophrenia, and major depressive disorder.

Such epigenetic effects have been demonstrated in experiments with laboratory animals. In a typical experiment, rat or mouse pups are subjected to early-life stress, such as repeated maternal separation. Their behavior as adults is then examined for evidence of depression, and their genomes are analyzed for epigenetic modifications. Likewise, pregnant rats or mice can be exposed to stress or nutritional deprivation, and their offspring examined for behavioral and epigenetic consequences.

Experiments like these have shown that even animals not directly exposed to traumatic circumstances, those still in the womb when their parents were put under stress, can have blocked receptor genes. It is probably the transmission of glucocorticoids from mother to fetus via the placenta that alters the fetus in this way. In humans, prenatal stress affects each stage of the child’s maturation: for the fetus, a greater risk of preterm delivery, decreased birth weight, and miscarriage; in infancy, problems of temperament, attention, and mental development; in childhood, hyperactivity and emotional problems; and in adulthood, illnesses such as schizophrenia and depression.

What is the significance of these findings?

Until the mid-1970s, no one suspected that the way in which the DNA was “read” could be altered by environmental factors, or that the nervous systems of people who grew up in stress-free environments would develop differently from those of people who did not. One’s development, it was thought, was guided only by one’s genetic makeup.

As a result of epigenesis, a child deprived of nourishment may continue to crave and consume large amounts of food as an adult, even when he or she is being properly nourished, leading to obesity and diabetes. A child who loses a parent or is neglected or abused may have an epigenetic basis for experiencing anxiety and depression and possibly schizophrenia.

Formerly, it had been widely believed that Darwinian evolutionary mechanisms, variation and natural selection, were the only means for introducing such long-lasting changes in brain function, a process that took place over generations. We now know that epigenetic mechanisms can do so as well, within the lifetime of a single person.

It is by now well established that people who suffer trauma directly during childhood or who experience their mother’s trauma indirectly as a fetus may have epigenetically based illnesses as adults. More controversial is whether epigenetic changes can be passed on from parent to child.

Methyl marks are stable when DNA is not replicating, but when it replicates, the methyl marks must be introduced into the newly replicated DNA strands to be preserved in the new cells. Researchers agree that this takes place when cells of the body divide, a process called mitosis, but it is not yet fully established under which circumstances marks are preserved when cell division yields sperm and egg, a process called meiosis, or when mitotic divisions of the fertilized egg form the embryo. Transmission at these two latter steps would be necessary for epigenetic changes to be transmitted in full across generations.

The most revealing instances for studies of intergenerational transmission have been natural disasters, famines, and atrocities of war, during which large groups have undergone trauma at the same time. These studies have shown that when women are exposed to stress in the early stages of pregnancy, they give birth to children whose stress response systems malfunction. Among the most widely studied of such traumatic events is the Dutch Hunger Winter. In 1944 the Germans prevented any food from entering the parts of Holland that were still occupied. The Dutch resorted to eating tulip bulbs to overcome their stomach pains. Women who were pregnant during this period, Carey notes, gave birth to a higher proportion of obese and schizophrenic children than one would normally expect. These children also exhibited epigenetic changes not observed in similar children, such as siblings, who had not experienced famine at the prenatal stage.

During the Great Chinese Famine (1958-1961), millions of people died, and children born to young women who experienced the famine were more likely to become schizophrenic, to have impaired cognitive function, and to suffer from diabetes and hypertension as adults. Similar studies of the 1932-1933 Ukrainian famine, in which many millions died, revealed an elevated risk of type II diabetes in people who were in the prenatal stage of development at the time. Although prenatal and early childhood stress both induce epigenetic effects and adult illnesses, it is not known if the mechanism is the same in both cases.

Whether epigenetic effects of stress can be transmitted over generations needs more research, both in humans and in laboratory animals. But recent comprehensive studies by several groups using advanced genetic techniques have indicated that epigenetic modifications are not restricted to the glucocorticoid receptor gene. They are much more extensive than had been realized, and their consequences for our development, health, and behavior may also be great.

It is as though nature employs epigenesis to make long-lasting adjustments to an individual’s genetic program to suit his or her personal circumstances, much as in Lamarck’s notion of “striving for perfection.”

In this view, the ill health arising from famine or other forms of chronic, extreme stress would constitute an epigenetic miscalculation on the part of the nervous system. Because the brain prepares us for adult adversity that matches the level of stress we suffer in early life, psychological disease and ill health persist even when we move to an environment with a lower stress level.

Once we recognize that there is an epigenetic basis for diseases caused by famine, economic deprivation, war-related trauma, and other forms of stress, it might be possible to treat some of them by reversing those epigenetic changes. “When we understand that the source of so many of our society’s problems is exposure to childhood adversity,” Harris writes,

the solutions are as simple as reducing the dose of adversity for kids and enhancing the ability of caregivers to be buffers. From there, we keep working our way up, translating that understanding into the creation of things like more effective educational curricula and the development of blood tests that identify biomarkers for toxic stress, things that will lead to a wide range of solutions and innovations, reducing harm bit by bit, and then leap by leap.

Epigenetics has also made clear that the stress caused by war, prejudice, poverty, and other forms of childhood adversity may have consequences both for the persons affected and for their future unborn children, not only for social and economic reasons but also for biological ones.

The Epigenetics Revolution

Nessa Carey

DNA.
Sometimes, when we read about biology, we could be forgiven for thinking that those three letters explain everything. Here, for example, are just a few of the statements made on 26 June 2000, when researchers announced that the human genome had been sequenced:

‘Today we are learning the language in which God created life.’ – US President Bill Clinton

‘We now have the possibility of achieving all we ever hoped for from medicine.’ – UK Science Minister Lord Sainsbury

‘Mapping the human genome has been compared with putting a man on the moon, but I believe it is more than that. This is the outstanding achievement not only of our lifetime, but in terms of human history.’ – Michael Dexter, The Wellcome Trust

From these quotations, and many others like them, we might well think that researchers could have relaxed a bit after June 2000 because most human health and disease problems could now be sorted out really easily. After all, we had the blueprint for humankind. All we needed to do was get a bit better at understanding this set of instructions, so we could fill in a few details. Unfortunately, these statements have proved at best premature. The reality is rather different.

We talk about DNA as if it’s a template, like a mould for a car part in a factory. In the factory, molten metal or plastic gets poured into the mould thousands of times and, unless something goes wrong in the process, out pop thousands of identical car parts.

But DNA isn’t really like that. It’s more like a script. Think of Romeo and Juliet, for example. In 1936 George Cukor directed Leslie Howard and Norma Shearer in a film version. Sixty years later Baz Luhrmann directed Leonardo DiCaprio and Claire Danes in another movie version of this play. Both productions used Shakespeare’s script, yet the two movies are entirely different. Identical starting points, different outcomes.

That’s what happens when cells read the genetic code that’s in DNA. The same script can result in different productions.

The implications of this for human health are very wide-ranging, as we will see from the case studies we are going to look at in a moment. In all these case studies it’s really important to remember that nothing happened to the DNA blueprint of the people involved. Their DNA didn’t change (mutate), and yet their life histories altered irrevocably in response to their environments.

Audrey Hepburn was one of the 20th century’s greatest movie stars. Stylish, elegant and with a delicately lovely, almost fragile bone structure, her role as Holly Golightly in Breakfast at Tiffany’s has made her an icon, even to those who have never seen the movie. It’s startling to think that this wonderful beauty was created by terrible hardship. Audrey Hepburn was a survivor of an event in the Second World War known as the Dutch Hunger Winter. This ended when she was sixteen years old, but the after-effects of this period, including poor physical health, stayed with her for the rest of her life.

The Dutch Hunger Winter lasted from the start of November 1944 to the late spring of 1945. This was a bitterly cold period in Western Europe, creating further hardship in a continent that had been devastated by four years of brutal war. Nowhere was this worse than in the Western Netherlands, which at this stage was still under German control. A German blockade resulted in a catastrophic drop in the availability of food to the Dutch population. At one point the population was trying to survive on only about 30 per cent of the normal daily calorie intake. People ate grass and tulip bulbs, and burned every scrap of furniture they could get their hands on, in a desperate effort to stay alive. Over 20,000 people had died by the time food supplies were restored in May 1945.

The dreadful privations of this time also created a remarkable scientific study population. The Dutch survivors were a well-defined group of individuals, all of whom suffered just one period of malnutrition, all of them at exactly the same time. Because of the excellent healthcare infrastructure and record keeping in the Netherlands, epidemiologists have been able to follow the long-term effects of the famine. Their findings were completely unexpected.

One of the first aspects they studied was the effect of the famine on the birth weights of children who had been in the womb during that terrible period. If a mother was well fed around the time of conception and malnourished only for the last few months of the pregnancy, her baby was likely to be born small. If, on the other hand, the mother suffered malnutrition for the first three months of the pregnancy only (because the baby was conceived towards the end of this terrible episode), but then was well fed, she was likely to have a baby with a normal body weight. The foetus ‘caught up’ in body weight.

That all seems quite straightforward, as we are all used to the idea that foetuses do most of their growing in the last few months of pregnancy. But epidemiologists were able to study these groups of babies for decades and what they found was really surprising. The babies who were born small stayed small all their lives, with lower obesity rates than the general population. For forty or more years, these people had access to as much food as they wanted, and yet their bodies never got over the early period of malnutrition. Why not? How did these early life experiences affect these individuals for decades? Why weren’t these people able to go back to normal, once their environment reverted to how it should be?

Even more unexpectedly, the children whose mothers had been malnourished only early in pregnancy had higher obesity rates than normal. Recent reports have shown a greater incidence of other health problems as well, including poorer performance in certain tests of mental activity. Even though these individuals had seemed perfectly healthy at birth, something had happened to their development in the womb that affected them for decades after. And it wasn’t just the fact that something had happened that mattered; it was when it happened. Events that take place in the first three months of development, a stage when the foetus is really very small, can affect an individual for the rest of their life.

Even more extraordinarily, some of these effects seem to be present in the children of this group, i.e. in the grandchildren of the women who were malnourished during the first three months of their pregnancy.

So something that happened in one pregnant population affected their children’s children. This raised the really puzzling question of how these effects were passed on to subsequent generations.

Let’s consider a different human story. Schizophrenia is a dreadful mental illness which, if untreated, can completely overwhelm and disable an affected person. Patients may present with a range of symptoms including delusions, hallucinations and enormous difficulties focusing mentally. People with schizophrenia may become completely incapable of distinguishing between the ‘real world’ and their own hallucinatory and delusional realm. Normal cognitive, emotional and societal responses are lost. There is a terrible misconception that people with schizophrenia are likely to be violent and dangerous. For the majority of patients this isn’t the case at all, and the people most likely to suffer harm because of this illness are the patients themselves. Individuals with schizophrenia are fifty times more likely to attempt suicide than healthy individuals.

Schizophrenia is a tragically common condition. It affects between 0.5 per cent and 1 per cent of the population in most countries and cultures, which means that there may be over fifty million people alive today who are suffering from this condition. Scientists have known for some time that genetics plays a strong role in determining if a person will develop this illness. We know this because if one of a pair of identical twins has schizophrenia, there is a 50 per cent chance that their twin will also have the condition. This is much higher than the 1 per cent risk in the general population.
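As a rough sanity check on that figure of over fifty million people (assuming, purely for illustration, a world population of about seven billion, a number not given in the text):

```python
# Back-of-the-envelope check of the prevalence figure; the world population
# used here is an assumption for illustration, not a value from the text.
world_population = 7_000_000_000
for prevalence in (0.005, 0.01):   # 0.5 per cent and 1 per cent
    print(f"{prevalence:.1%}: about {world_population * prevalence / 1e6:.0f} million people")
```

That gives roughly 35 to 70 million people, so the book’s figure sits toward the upper end of the stated prevalence range.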

Identical twins have exactly the same genetic code as each other. They share the same womb and usually they are brought up in very similar environments. When we consider this, it doesn’t seem surprising that if one of the twins develops schizophrenia, the chance that his or her twin will also develop the illness is very high. In fact, we have to start wondering why it isn’t higher. Why isn’t the figure 100 per cent? How is it that two apparently identical individuals can become so very different? An individual has a devastating mental illness, but will their identical twin suffer from it too? Flip a coin: heads they win, tails they lose. Variations in the environment are unlikely to account for this, and even if they did, how would these environmental effects have such profoundly different impacts on two genetically identical people?

Here’s a third case study. A small child, less than three years old, is abused and neglected by his or her parents. Eventually, the state intervenes and the child is taken away from the biological parents and placed with foster or adoptive parents. These new carers love and cherish the child, doing everything they can to create a secure home, full of affection. The child stays with these new parents throughout the rest of its childhood and adolescence, and into young adulthood.

Sometimes everything works out well for this person. They grow up into a happy, stable individual, indistinguishable from all their peers who had normal, non-abusive childhoods. But often, tragically, it doesn’t work out this way. Children who have suffered from abuse or neglect in their early years grow up with a substantially higher risk of adult mental health problems than the general population. All too often the child grows up into an adult at high risk of depression, self-harm, drug abuse and suicide.

Once again, we have to ask ourselves why. Why is it so difficult to override the effects of early childhood exposure to neglect or abuse?

Why should something that happened early in life have effects on mental health that may still be obvious decades later?

In some cases, the adult may have absolutely no recollection of the traumatic events, and yet they may suffer the consequences mentally and emotionally for the rest of their lives.

These three case studies seem very different on the surface. The first is mainly about nutrition, especially of the unborn child. The second is about the differences that arise between genetically identical individuals. The third is about long-term psychological damage as a result of childhood abuse.

But these stories are linked at a very fundamental biological level. They are all examples of epigenetics. Epigenetics is the new discipline that is revolutionising biology. Whenever two genetically identical individuals are non-identical in some way we can measure, this is called epigenetics. When a change in environment has biological consequences that last long after the event itself has vanished into distant memory, we are seeing an epigenetic effect in action.

Epigenetic phenomena can be seen all around us, every day. Scientists have been identifying examples of epigenetics, just like the ones described above, for many years. When scientists talk about epigenetics they are referring to all the cases where the genetic code alone isn’t enough to describe what’s happening; there must be something else going on as well.

This is one of the ways that epigenetics is described scientifically, where things which are genetically identical can actually appear quite different to one another. But there has to be a mechanism that brings out this mismatch between the genetic script and the final outcome. These epigenetic effects must be caused by some sort of physical change, some alterations in the vast array of molecules that make up the cells of every living organism. This leads us to the other way of viewing epigenetics, the molecular description.

In this model, epigenetics can be defined as the set of modifications to our genetic material that change the ways genes are switched on or off, but which don’t alter the genes themselves.
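To make this molecular definition concrete, here is a small, purely illustrative sketch in Python; the class, the gene name and the sequence value are invented for illustration and are not taken from the book.

```python
# A purely illustrative sketch: the gene's sequence is never altered, while a
# separate epigenetic mark decides whether the gene is switched on or off.
from dataclasses import dataclass

@dataclass
class Gene:
    name: str
    sequence: str             # the underlying DNA code; never changed below
    methylated: bool = False  # an epigenetic mark layered on top of the code

    def is_expressed(self) -> bool:
        # A methyl mark blocks access to the gene, so it is not read
        return not self.methylated

receptor = Gene(name="glucocorticoid receptor", sequence="ATG...")
print(receptor.is_expressed())  # True: the gene is switched on
receptor.methylated = True      # an environmental event adds a mark
print(receptor.is_expressed())  # False: same sequence, but switched off
```

The essential point is that the on/off state lives in a layer added on top of the sequence, and can be changed without touching the sequence itself.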

Although it may seem confusing that the word ‘epigenetics’ can have two different meanings, it’s just because we are describing the same event at two different levels. It’s a bit like looking at the pictures in old newspapers with a magnifying glass, and seeing that they are made up of dots. If we didn’t have a magnifying glass we might have thought that each picture was just made in one solid piece and we’d probably never have been able to work out how so many new images could be created each day. On the other hand, if all we ever did was look through the magnifying glass, all we would see would be dots, and we’d never see the incredible image that they formed together and which we’d see if we could only step back and look at the big picture.

The revolution that has happened very recently in biology is that for the first time we are actually starting to understand how amazing epigenetic phenomena are caused. We’re no longer just seeing the large image, we can now also analyse the individual dots that created it.

Crucially, this means that we are finally starting to unravel the missing link between nature and nurture: how our environment talks to us and alters us, sometimes forever.

The ‘epi’ in epigenetics is derived from Greek and means at, on, to, upon, over or beside. The DNA in our cells is not some pure, unadulterated molecule. Small chemical groups can be added at specific regions of DNA. Our DNA is also smothered in special proteins. These proteins can themselves be covered with additional small chemicals. None of these molecular amendments changes the underlying genetic code. But adding these chemical groups to the DNA, or to the associated proteins, or removing them, changes the expression of nearby genes. These changes in gene expression alter the functions of cells, and the very nature of the cells themselves. Sometimes, if these patterns of chemical modifications are put on or taken off at a critical period in development, the pattern can be set for the rest of our lives, even if we live to be over a hundred years of age.

There’s no debate that the DNA blueprint is a starting point. A very important starting point and absolutely necessary, without a doubt. But it isn’t a sufficient explanation for all the sometimes wonderful, sometimes awful, complexity of life. If the DNA sequence was all that mattered, identical twins would always be absolutely identical in every way. Babies born to malnourished mothers would gain weight as easily as other babies who had a healthier start in life. And as we shall see in Chapter 1, we would all look like big amorphous blobs, because all the cells in our bodies would be completely identical.

Huge areas of biology are influenced by epigenetic mechanisms, and the revolution in our thinking is spreading further and further into unexpected frontiers of life on our planet. Some of the other examples we’ll meet in this book include why we can’t make a baby from two sperm or two eggs, but have to have one of each. What makes cloning possible? Why is cloning so difficult? Why do some plants need a period of cold before they can flower? Since queen bees and worker bees are genetically identical, why are they completely different in form and function? Why are all tortoiseshell cats female?

Why is it that humans contain trillions of cells in hundreds of complex organs, and microscopic worms contain about a thousand cells and only rudimentary organs, but we and the worm have the same number of genes?

Scientists in both the academic and commercial sectors are also waking up to the enormous impact that epigenetics has on human health. It’s implicated in diseases from schizophrenia to rheumatoid arthritis, and from cancer to chronic pain. There are already two types of drugs that successfully treat certain cancers by interfering with epigenetic processes. Pharmaceutical companies are spending hundreds of millions of dollars in a race to develop the next generation of epigenetic drugs to treat some of the most serious illnesses afflicting the industrialised world. Epigenetic therapies are the new frontiers of drug discovery.

In biology, Darwin and Mendel came to define the 19th century as the era of evolution and genetics; Watson and Crick defined the 20th century as the era of DNA, and the functional understanding of how genetics and evolution interact. But in the 21st century it is the new scientific discipline of epigenetics that is unravelling so much of what we took as dogma and rebuilding it in an infinitely more varied, more complex and even more beautiful fashion.

The world of epigenetics is a fascinating one. It’s filled with remarkable subtlety and complexity, and in Chapters 3 and 4 we’ll delve deeper into the molecular biology of what’s happening to our genes when they become epigenetically modified. But like so many of the truly revolutionary concepts in biology, epigenetics has at its basis some issues that are so simple they seem completely self-evident as soon as they are pointed out. Chapter 1 is the single most important example of such an issue. It’s the investigation which started the epigenetics revolution.

Notes on nomenclature

There is an international convention on the way that the names of genes and proteins are written, which we adhere to in this book.

Gene names and symbols are written in italics. The proteins encoded by the genes are written in plain text. The symbols for human genes and proteins are written in upper case. For other species, such as mice, the symbols are usually written with only the first letter capitalised.

This is summarised below for a hypothetical gene, written here with the symbol ABC:

                              Gene symbol        Protein symbol
Human                         ABC (in italics)   ABC (plain text)
Other species, e.g. mouse     Abc (in italics)   Abc (plain text)

Like all rules, however, this system has a few quirks, and while these conventions apply in general we will encounter some exceptions in this book.

Chapter 1

An Ugly Toad and an Elegant Man

Like the toad, ugly and venomous, wears yet a precious jewel in his head. William Shakespeare

Humans are composed of about 50 to 70 trillion cells. That’s right, 50,000,000,000,000 cells. The estimate is a bit vague but that’s hardly surprising. Imagine we somehow could break a person down into all their individual cells and then count those cells, at a rate of one cell every second. Even at the lower estimate it would take us about a million and a half years, and that’s without stopping for coffee or losing count at any stage. These cells form a huge range of tissues, all highly specialised and completely different from one another. Unless something has gone very seriously wrong, kidneys don’t start growing out of the top of our heads and there are no teeth in our eyeballs.
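That ‘million and a half years’ figure is easy to check, assuming one cell counted per second with no breaks:

```python
# Rough check of the counting-time claim: one cell per second, non-stop,
# for the lower estimate of 50 trillion cells.
cells = 50_000_000_000_000          # 50 trillion (lower estimate)
seconds_per_year = 60 * 60 * 24 * 365
years = cells / seconds_per_year
print(f"{years:,.0f} years")        # roughly 1.6 million years
```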

This seems very obvious but why don’t they? It’s actually quite odd, when we remember that every cell in our body was derived from the division of just one starter cell. This single cell is called the zygote. A zygote forms when one sperm merges with one egg.


This zygote splits in two; those two cells divide again and so on, to create the miraculous piece of work which is a full human body. As they divide the cells become increasingly different from one another and form specialised cell types. This process is known as differentiation. It’s a vital one in the formation of any multicellular organism.

If we look at bacteria down a microscope then pretty much all the bacteria of a single species look identical. Look at certain human cells in the same way (say, a food-absorbing cell from the small intestine and a neuron from the brain) and we would be hard pressed to say that they were even from the same planet. But so what? Well, the big ‘what’ is that these cells started out with exactly the same genetic material as one another. And we do mean exactly; this has to be the case, because they came from just one starter cell, that zygote. So the cells have become completely different even though they came from one cell with just one blueprint.

One explanation for this is that the cells are using the same information in different ways and that’s certainly true. But it’s not necessarily a statement that takes us much further forwards. In a 1960 adaptation of H. G. Wells’s The Time Machine, starring Rod Taylor as the time travelling scientist, there’s a scene where he shows his time machine to some learned colleagues (all male, naturally) and one asks for an explanation of how the machine works. Our hero then describes how the occupant of the machine will travel through time by the following mechanism:

In front of him is the lever that controls movement. Forward pressure sends the machine into the future. Backward pressure, into the past. And the harder the pressure, the faster the machine travels.

Everyone nods sagely at this explanation. The only problem is that this isn’t an explanation, it’s just a description. And that’s also true of that statement about cells using the same information in different ways: it doesn’t really tell us anything, it just re-states what we already knew in a different way.

What’s much more interesting is the exploration of how cells use the same genetic information in different ways. Perhaps even more important is how the cells remember and keep on doing it. Cells in our bone marrow keep on producing blood cells, cells in our liver keep on producing liver cells. Why does this happen? One possible and very attractive explanation is that as cells become more specialised they rearrange their genetic material, possibly losing genes they don’t require. The liver is a vital and extremely complicated organ. The website of the British Liver Trust states that the liver performs over 500 functions, including processing the food that has been digested by our intestines, neutralising toxins and creating enzymes that carry out all sorts of tasks in our bodies. But one thing the liver simply never does is transport oxygen around the body. That job is carried out by our red blood cells, which are stuffed full of a particular protein, haemoglobin. Haemoglobin binds oxygen in tissues where there’s lots available, like our lungs, and then releases it when the red blood cell reaches a tissue that needs this essential chemical, such as the tiny blood vessels in the tips of our toes. The liver is never going to carry out this function, so perhaps it just gets rid of the haemoglobin gene, which it simply never uses.

It’s a perfectly reasonable suggestion: cells could simply lose genetic material they aren’t going to use. As they differentiate, cells could jettison hundreds of genes they no longer need. There could of course be a slightly less drastic variation on this: maybe the cells shut down genes they aren’t using. And maybe they do this so effectively that these genes can never ever be switched on again in that cell, i.e. the genes are irreversibly inactivated. The key experiments that examined these eminently reasonable hypotheses (loss of genes, or irreversible inactivation) involved an ugly toad and an elegant man.

Turning back the biological clock

The work has its origins in experiments performed many decades ago in England by John Gurdon, first in Oxford and subsequently in Cambridge. Now Professor Sir John Gurdon, he still works in a lab in Cambridge, albeit these days in a gleaming modern building that has been named after him. He’s an engaging, unassuming and striking man who, 40 years on from his ground-breaking work, continues to publish research in a field that he essentially founded.

John Gurdon cuts an instantly recognisable figure around Cambridge. Now in his seventies, he is tall, thin and has a wonderful head of swept-back blonde hair. He looks like the quintessential older English gentleman of American movies, and fittingly he went to school at Eton. There is a lovely story that John Gurdon still treasures: a school report from his biology teacher at that institution which says, ‘I believe Gurdon has ideas about becoming a scientist. In present showing, this is quite ridiculous.’ The teacher’s comments were based on his pupil’s dislike of mindless rote learning of unconnected facts. But as we shall see, for a scientist as wonderful as John Gurdon, memory is much less important than imagination.

In 1937 the Hungarian biochemist Albert Szent-Gyorgyi won the Nobel Prize for Physiology or Medicine, his achievements including the discovery of vitamin C. In a phrase that has various subtly different translations but one consistent interpretation, he defined discovery as, ‘To see what everyone else has seen but to think what nobody else has thought’. It is probably the best description ever written of what truly great scientists do. And John Gurdon is truly a great scientist, and may well follow in Szent-Gyorgyi’s Nobel footsteps.

In 2009 he was a co-recipient of the Lasker Prize, which is to the Nobel what the Golden Globes are so often to the Oscars. John Gurdon’s work is so wonderful that when it is first described it seems so obvious that anyone could have done it. The questions he asked, and the ways in which he answered them, have that scientifically beautiful feature of being so elegant that they seem entirely self-evident.

John Gurdon used non-fertilised toad eggs in his work. Any of us who has ever kept a tank full of frogspawn and watched this jelly-like mass develop into tadpoles and finally tiny frogs, has been working, whether we thought about it in these terms or not, with fertilised eggs, i.e. ones into which sperm have entered and created a new complete nucleus. The eggs John Gurdon worked on were a little like these, but hadn’t been exposed to sperm.

There were good reasons why he chose to use toad eggs in his experiments. The eggs of amphibians are generally very big, are laid in large numbers outside the body and are see-through. All these features make amphibians a very handy experimental species in developmental biology, as the eggs are technically relatively easy to handle. Certainly a lot better than a human egg, which is hard to obtain, very fragile to handle, is not transparent and is so small that we need a microscope just to see it.

John Gurdon worked on the African clawed toad (Xenopus laevis, to give it its official title), one of those John Malkovich ugly-handsome animals, and investigated what happens to cells as they develop and differentiate and age. He wanted to see if a tissue cell from an adult toad still contained all the genetic material it had started with, or if it had lost or irreversibly inactivated some as the cell became more specialised. The way he did this was to take a nucleus from the cell of an adult toad and insert it into an unfertilised egg that had had its own nucleus removed. This technique is called somatic cell nuclear transfer (SCNT), and will come up over and over again. ‘Somatic’ comes from the Greek word for ‘body’.

After he’d performed the SCNT, John Gurdon kept the eggs in a suitable environment (much like a child with a tank of frogspawn) and waited to see if any of these cultured eggs hatched into little toad tadpoles.

The experiments were designed to test the following hypothesis: ‘As cells become more specialised (differentiated) they undergo an irreversible loss/inactivation of genetic material.’ There were two possible outcomes to these experiments:

Either

The hypothesis was correct and the ‘adult’ nucleus has lost some of the original blueprint for creating a new individual. Under these circumstances an adult nucleus will never be able to replace the nucleus in an egg and so will never generate a new healthy toad, with all its varied and differentiated tissues.

Or

The hypothesis was wrong, and new toads can be created by removing the nucleus from an egg and replacing it with one from adult tissues.

Other researchers had started to look at this before John Gurdon decided to tackle the problem: two scientists called Briggs and King, using a different amphibian, the frog Rana pipiens. In 1952 they transplanted the nuclei from cells at a very early stage of development into an egg lacking its own original nucleus and they obtained viable frogs. This demonstrated that it was technically possible to transfer a nucleus from another cell into an ‘empty’ egg without killing the cell. However, Briggs and King then published a second paper using the same system but transferring a nucleus from a more developed cell type, and this time they couldn’t create any frogs. The difference in the cells used for the nuclei in the two papers seems astonishingly minor: just one day older and no froglets. This supported the hypothesis that some sort of irreversible inactivation event had taken place as the cells differentiated. A lesser man than John Gurdon might have been put off by this. Instead he spent over a decade working on the problem.

The design of the experiments was critical. Imagine we have started reading detective stories by Agatha Christie. After we’ve read our first three we develop the following hypothesis: ‘The killer in an Agatha Christie novel is always the doctor.’ We read three more and the doctor is indeed the murderer in each. Have we proved our hypothesis? No. There’s always going to be the thought that maybe we should read just one more to be sure. And what if some are out of print, or unobtainable? No matter how many we read, we may never be entirely sure that we’ve read the entire collection. But that’s the joy of disproving hypotheses. All we need is one instance in which Poirot or Miss Marple reveal that the doctor was a man of perfect probity and the killer was actually the vicar, and our hypothesis is shot to pieces. And that is how the best scientific experiments are designed: to disprove, not to prove, an idea.

And that was the genius of John Gurdon’s work. When he performed his experiments what he was attempting was exceptionally challenging with the technology of the time. If he failed to generate toads from the adult nuclei this could simply mean his technique had something wrong with it. No matter how many times he did the experiment without getting any toads, this wouldn’t actually prove the hypothesis. But if he did generate live toads from eggs where the original nucleus had been replaced by the adult nucleus he would have disproved the hypothesis. He would have demonstrated beyond doubt that when cells differentiate, their genetic material isn’t irreversibly lost or changed. The beauty of this approach is that just one such toad would topple the entire theory, and topple it he did.

John Gurdon is incredibly generous in his acknowledgement of the collegiate nature of scientific research, and the benefits he obtained from being in dynamic laboratories and universities. He was lucky to start his work in a well set-up laboratory which had a new piece of equipment which produced ultraviolet light. This enabled him to kill off the original nuclei of the recipient eggs without causing too much damage, and also ‘softened up’ the cell so that he could use tiny glass hypodermic needles to inject donor nuclei.

Other workers in the lab had, in some unrelated research, developed a strain of toads which had a mutation with an easily detectable, but non-damaging effect. Like almost all mutations this was carried in the nucleus, not the cytoplasm. The cytoplasm is the thick liquid inside cells, in which the nucleus sits. So John Gurdon used eggs from one strain and donor nuclei from the mutated strain. This way he would be able to show unequivocally that any resulting toads had been coded for by the donor nuclei, and weren’t just the result of experimental error, as could happen if a few recipient nuclei had been left over after treatment.

John Gurdon spent around fifteen years, starting in the late 1950s, demonstrating that in fact nuclei from specialised cells are able to create whole animals if placed in the right environment, i.e. an unfertilised egg. The more differentiated/specialised the donor cell was, the less successful the process in terms of numbers of animals, but that’s the beauty of disproving a hypothesis: we might need a lot of toad eggs to start with, but we don’t need to end up with many live toads to make our case. Just one non-murderous doctor will do it, remember?

Sir John Gurdon showed us that although there is something in cells that can keep specific genes turned on or switched off in different cell types, whatever this something is, it can’t be loss or permanent inactivation of genetic material, because if he put an adult nucleus into the right environment (in this case an ‘empty’ unfertilised egg) it forgot all about this memory of which cell type it came from. It went back to being a naive nucleus from an embryo and started the whole developmental process again.

Epigenetics is the ‘something’ in these cells. The epigenetic system controls how the genes in DNA are used, in some cases for hundreds of cell division cycles, and the effects are inherited when cells divide. Epigenetic modifications to the essential blueprint exist over and above the genetic code, on top of it, and program cells for decades. But under the right circumstances, this layer of epigenetic information can be removed to reveal the same shiny DNA sequence that was always there. That’s what happened when John Gurdon placed the nuclei from fully differentiated cells into the unfertilised egg cells.

Did John Gurdon know what this process was when he generated his new baby toads? No. Does that make his achievement any less magnificent? Not at all. Darwin knew nothing about genes when he developed the theory of evolution through natural selection. Mendel knew nothing about DNA when, in an Austrian monastery garden, he developed his idea of inherited factors that are transmitted ‘true’ from generation to generation of peas. It doesn’t matter. They saw what nobody else had seen and suddenly we all had a new way of viewing the world.

The epigenetic landscape

Oddly enough, there was a conceptual framework that was in existence when John Gurdon performed his work. Go to any conference with the word ‘epigenetics’ in the title and at some point one of the speakers will refer to something called ‘Waddington’s epigenetic landscape’.

from The Epigenetics Revolution by Nessa Carey

