Category Archives: Genomics

Mental Illness, Why Some and not Others? Gene-Environment Interaction and Differential Susceptibility – Scott Barry Kaufman * Gene-Environment Interaction in Psychological Traits and Disorders – Danielle M. Dick * Differential Susceptibility to Environmental Influences – Jay Belsky.

“Whether your story is about having met with emotional pain or physical pain, the important thing is to take the lid off of those feelings. When you keep your emotions repressed, that’s when the body starts to try to get your attention. Because you aren’t paying attention. Our childhood is stored up in our bodies, and one day, the body will present its bill.”

Bernie Siegel MD

In recent years, numerous studies have shown the importance of gene-environment interactions in psychological development, but here’s the thing: we’re not just finding that the environment matters in determining whether mental illness exists. What we’re discovering is far more interesting and nuanced: some of the very same genes that under certain environmental conditions are associated with some of the lowest lows of humanity are, under supportive conditions, associated with the highest highs of human flourishing.

Evidence that adverse rearing environments exert negative effects particularly on children and adults presumed “vulnerable” for temperamental or genetic reasons may actually reflect something else: heightened susceptibility to the negative effects of risky environments and to the beneficial effects of supportive environments. Putatively vulnerable children and adults are especially susceptible to both positive and negative environmental effects.

Children rated highest on externalizing behavior problems by teachers across the primary school years were those who had experienced the harshest discipline prior to kindergarten entry and who had been characterized by their mothers at age 5 as having been negatively reactive infants.

Susceptibility factors are the moderators of the relation between the environment and developmental outcome. Could it be that negativity actually reflects a highly sensitive nervous system, on which experience registers powerfully: negatively when not regulated by the caregiver, but positively when coregulation occurs?

Referred to by some scientists as the “differential susceptibility hypothesis”, these findings shouldn’t be underestimated. They are revolutionary, and suggest a serious rethinking of the role of genes in the manifestation of our psychological traits and of mental “illness”. Instead of all of our genes coding for particular psychological traits, it appears we have a variety of genetic variants that are associated with sensitivity to the environment, for better and worse.

The classically known epigenetic modifications (cell specialization, X inactivation, genomic imprinting) all occur early in development and are stable. The discovery that epigenetic modifications continue to occur across development, and can be reversible and more dynamic, has represented a major paradigm shift in our understanding of environmental regulation of gene expression.

Gene: Unit of heredity; a stretch of DNA that codes for a protein.
GxE: Gene-environment Interaction.
Epigenetics: Modifications to the genome that do not involve a change in nucleotide sequence.
Heritability: The proportion of total phenotypic variance that can be accounted for by genetic factors.
Logistic Regression: A statistical method for analyzing a dataset in which one or more independent variables determine an outcome, measured with a dichotomous variable (one with only two possible outcomes). The dependent variable is binary, i.e., it contains only data coded as 1 (TRUE, success, pregnant, etc.) or 0 (FALSE, failure, non-pregnant, etc.).
Transcription Factor: In molecular biology, a transcription factor (TF) (or sequence-specific DNA-binding factor) is a protein that controls the rate of transcription of genetic information from DNA to messenger RNA, by binding to a specific DNA sequence. The function of TFs is to regulate – turn on and off – genes in order to make sure that they are expressed in the right cell at the right time and in the right amount throughout the life of the cell and the organism.
Nucleotide: Organic molecules that are the building blocks of DNA and RNA. They also have functions related to cell signaling, metabolism, and enzyme reactions.
MZ: Monozygotic. Of twins derived from a single ovum (egg), and so identical.
DZ: Dizygotic. Of twins derived from two separate ova (eggs). Fraternal twin or nonidentical twin.
DNA: Deoxyribonucleic Acid.
RNA: Ribonucleic acid is a polymeric molecule essential in various biological roles in coding, decoding, regulation, and expression of genes. RNA and DNA are nucleic acids, and, along with lipids, proteins and carbohydrates, constitute the four major macromolecules essential for all known forms of life.
Polymorphism: A location in a gene that comes in multiple forms.
Allele: Natural variation in the genetic sequence; can be a change in a single nucleotide or longer stretches of DNA.
GWAS: Genome-wide Association Study.
ORs: Odds Ratios.
Phenotype: The observed outcome under study; can be the manifestation of both genetic and/or environmental factors.
Dichotomy: A division or contrast between two things that are or are represented as being opposed or entirely different.
Chromosome: A single piece of coiled DNA containing many genes, regulatory elements, and other nucleotide sequences.
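A few of the statistical terms in this glossary fit together: in a logistic regression with a single binary predictor, the exponentiated coefficient equals the odds ratio (OR) that can be computed directly from a 2×2 table. A minimal sketch in Python; the counts are invented purely for illustration:

```python
import math

# Hypothetical counts: a dichotomous outcome (affected vs. unaffected)
# cross-tabulated against a binary exposure (e.g., risk-allele carrier).
exposed   = {"affected": 30, "unaffected": 70}
unexposed = {"affected": 10, "unaffected": 90}

# Odds of the outcome in each group.
odds_exposed   = exposed["affected"] / exposed["unaffected"]      # 30/70
odds_unexposed = unexposed["affected"] / unexposed["unaffected"]  # 10/90

# The odds ratio: how much the odds of the outcome differ by exposure.
odds_ratio = odds_exposed / odds_unexposed

# In a logistic regression  logit P(Y=1) = b0 + b1*X  with binary X,
# exp(b1) equals this same odds ratio, so b1 = log(OR).
b1 = math.log(odds_ratio)

print(round(odds_ratio, 3))  # 3.857
print(round(b1, 3))          # ≈ 1.35
```

With real data one would fit the model by maximum likelihood (e.g., with a statistics package) rather than from a table, but for a single binary predictor the two computations agree.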

Gene-Environment Interaction and Differential Susceptibility

Scott Barry Kaufman

Only a few genetic mutations have been discovered so far that demonstrate differential susceptibility effects. Most of the genes that have been discovered contribute to the production of the neurotransmitters dopamine and serotonin. Both of these biological systems contribute heavily to many aspects of engagement with the world, positive emotions, anxiety, depression, and mood fluctuations. So far, the evidence suggests (though it is still tentative) that certain genetic variants are associated with anxiety and depression under harsh and abusive conditions, but that the very same variants are associated with the lowest levels of anxiety, depression, and fear under supportive, nurturing conditions. There hasn’t been much research looking at differential susceptibility effects in other systems that involve learning and exploration, however.

Enter a brand new study

Rising superstar Rachael Grazioplene and colleagues focused on the cholinergic system, a biological system crucially involved in neural plasticity and learning. Situations that activate the cholinergic system involve “expected uncertainty”, such as going to a country you’ve never been to before and knowing that you’re going to face things you’ve never faced before. This stands in contrast to “unexpected uncertainty”, which occurs when your expectations are violated, such as thinking you’re going to a family-friendly Cirque du Soleil show in Las Vegas only to realize you’ve actually gotten a ticket to an all-male dance revue called “Thunder from Down Under” (I have no idea where that example came from). Those sorts of experiences are more strongly related to the neurotransmitter norepinephrine.

Since the cholinergic system is most active in situations when a person can predict that learning is possible, this makes the system a prime candidate for the differential susceptibility effect. As the researchers note, unpredictable and novel environments could function as either threats or incentive rewards. When the significance of the environment is uncertain, both caution and exploration are adaptive. Therefore, traits relating to anxiety or curiosity should be influenced by cholinergic genetic variants, with developmental experiences determining whether individuals find expected uncertainty either more threatening or more promising.

To test their hypothesis, they focused on a polymorphism in the CHRNA4 gene, which builds a certain kind of neural receptor that the neurotransmitter binds to. These acetylcholine receptors are distributed throughout the brain, and are especially involved in the functioning of dopamine in the striatum. Genetic differences in the CHRNA4 gene seem to change the sensitivity of the brain’s acetylcholine system because small structural changes in these receptors make acetylcholine binding more or less likely. Previous studies have shown associations between variation in the CHRNA4 gene and neuroticism as well as laboratory tests of attention and working memory.

The researchers looked at the functioning of this gene among a group of 614 children aged 8–13 enrolled in a week-long day camp. Half of the children in the day camp were selected because they had been maltreated (sexual maltreatment), whereas the other half were carefully selected to come from the same socioeconomic status but to have experienced no maltreatment. This study provides the ideal experimental design and environmental conditions to test the differential susceptibility effect. Not only were the backgrounds of the children clearly defined, they were also dramatically different from each other. Additionally, all children engaged in the same novel learning environment, an environment well suited for cholinergic functioning. What did they find?

Individuals with the T/T variant of the CHRNA4 gene who were maltreated showed higher levels of anxiety (Neuroticism) than those carrying the C allele; they appeared more likely to learn fearful responses under conditions of uncertainty. In contrast, those with the T/T genotype who were not maltreated were low in anxiety (Neuroticism) and high in curiosity (Openness to Experience). What’s more, this effect was independent of age, race, and sex.

In supportive environments, then, the T/T allele (which is much rarer in the general population than the C allele) may be beneficial, bringing out lower levels of anxiety and increased curiosity in response to situations containing expected uncertainty.

These results are certainly exciting, but a few important caveats are in order. For one thing, the T/T genotype is very rare in the general population, which makes it all the more important for future studies to attempt to replicate these findings. Also, we’re talking vanishingly small effects here. The CHRNA4 variant only explained at most 1% of the variation in neuroticism and openness to experience. So we shouldn’t go around trying to predict individual people’s futures based on knowledge of a single gene and a single environment.

Scientifically speaking, though, this level of prediction is expected, given that all of our psychological dispositions are massively polygenic (consisting of many interacting genes). Both gene-gene and gene-environment interactions must be taken into account.
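The “massively polygenic” point can be made concrete with a toy simulation (every number below is invented): summing many small, independent allele effects yields a smooth, roughly bell-shaped trait distribution, and any single variant then accounts for only a sliver of the total variance, on the same order as the ~1% figure discussed above.

```python
import random

random.seed(1)

N_GENES = 500      # hypothetical number of trait-relevant variants
N_PEOPLE = 2000
ALLELE_FREQ = 0.5  # simplifying assumption: every variant is common

def trait_score(n_genes=N_GENES):
    # Each variant contributes 0, 1, or 2 copies of a tiny-effect allele,
    # and the trait is simply the sum of all those tiny contributions.
    return sum(
        (random.random() < ALLELE_FREQ) + (random.random() < ALLELE_FREQ)
        for _ in range(n_genes)
    )

scores = [trait_score() for _ in range(N_PEOPLE)]

mean = sum(scores) / N_PEOPLE
variance = sum((s - mean) ** 2 for s in scores) / N_PEOPLE

# With 500 equal-effect loci, any one locus explains ~1/500 (0.2%) of the
# genetic variance, so single-gene predictions are bound to be weak.
print(round(mean), round(variance), 1 / N_GENES)
```

This ignores gene-gene and gene-environment interactions entirely, which is exactly why it understates how complicated real prediction is.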

Indeed, recent research found that the more sensitivity (“plasticity”) genes relating to the dopamine and serotonin systems adolescent males carried, the less self-regulation they displayed under unsupportive parenting conditions. In line with the differential susceptibility effect, the reverse was also found: higher levels of self-regulation were displayed by the adolescent males carrying more sensitivity genes when they were reared under supportive parenting conditions.

The findings by Grazioplene and colleagues add to a growing literature on acetylcholine’s role in the emergence of schizophrenia and mood disorders. As the researchers note, these findings, while small in effect, may have clinical relevance, since maltreatment is a known risk factor for many psychiatric disorders. Children with the T/T genotype of CHRNA4 rs1044396 may be more likely to learn fearful responses in harsh and abusive environments, but children with the very same genotype may be more likely to display curiosity and engagement in response to uncertainty under normal or supportive conditions.

While it’s profoundly difficult to predict the developmental trajectory of any single individual, this research suggests we can influence the odds that people will retreat within themselves or unleash the fundamentally human drive to explore and create.

Gene-Environment Interaction in Psychological Traits and Disorders

Danielle M. Dick

There has been an explosion of interest in studying gene-environment interactions (GxE) as they relate to the development of psychopathology. In this article, I review different methodologies to study gene-environment interaction, providing an overview of methods from animal and human studies and illustrations of gene-environment interactions detected using these various methodologies. Gene-environment interaction studies that examine genetic influences as modeled latently (e.g., from family, twin, and adoption studies) are covered, as well as studies of measured genotypes.

Importantly, the explosion of interest in gene-environment interactions has raised a number of challenges, including difficulties with differentiating various types of interactions, power, and the scaling of environmental measures, which have profound implications for detecting gene-environment interactions. Taking research on gene-environment interactions to the next level will necessitate close collaborations between psychologists and geneticists so that each field can take advantage of the knowledge base of the other.


Gene-environment interaction (GxE) has become a hot topic of research, with an exponential increase in interest in this area in the past decade. Consider that PubMed lists only 24 citations for “gene environment interaction” prior to the year 2000, but nearly four times that many in the first half of the year 2010 alone! The projected publications on gene-environment interaction for 2008–2010 are on track to constitute more than 40% of the total number of publications on gene-environment interaction indexed in PubMed.

Where does all this interest stem from? It may, in part, reflect a merging of interests from fields that were traditionally at odds with one another. Historically, there was a perception that behavior geneticists focused on genetic influences on behavior at the expense of studying environmental influences and that developmental psychologists focused on environmental influences and largely ignored genetic factors. Although this criticism is not entirely founded on the part of either field, methodological and ideological differences between these respective fields meant that genetic and environmental influences were traditionally studied in isolation.

More recently, there has been recognition on the part of both of these fields that both genetic and environmental influences are critical components to developmental outcome and that it is far more fruitful to attempt to understand how these factors come together to impact psychological outcomes than to argue about which one is more important. As Kendler and Eaves argued in their article on the joint effect of genes and environments, published more than two decades ago:

It is our conviction that a complete understanding of the etiology of most psychiatric disorders will require an understanding of the relevant genetic risk factors, the relevant environmental risk factors, and the ways in which these two risk factors interact. Such understanding will only arise from research in which the important environmental variables are measured in a genetically informative design. Such research will require a synthesis of research traditions within psychiatry that have often been at odds with one another in the past. This interaction between the research tradition that has focused on the genetic etiology of psychiatric illness and that which has emphasized environmental causation will undoubtedly be to the benefit of both. (Kendler & Eaves 1986, p. 288)

The PubMed data showing an exponential increase in published papers on gene-environment interaction suggest that that day has arrived. This has been facilitated by the rapid advances that have taken place in the field of genetics, making the incorporation of genetic components into traditional psychological studies a relatively easy and inexpensive endeavor. But with this surge of interest in gene-environment interaction, a number of new complications have emerged, and the study of gene-environment interaction faces new challenges, including a recent backlash against studying gene-environment interaction (Risch et al. 2009). Addressing these challenges will be critical to moving research on gene-environment interaction forward in a productive way.

In this article, I first review different study designs for detecting gene-environment interaction, providing an overview of methods from animal and human studies. I cover gene-environment interaction studies that examine genetic influences as modeled latently as well as studies of measured genotypes. In the study of latent gene-environment interaction, specific genotypes are not measured, but rather genetic influence is inferred based on observed correlations between people who have different degrees of genetic and environmental sharing. Thus, latent gene-environment interaction studies examine the aggregate effects of genes rather than any one specific gene.

Molecular genetic studies, in contrast, have generally focused on one specific gene of interest at a time. Relevant examples of gene-environment interaction across these different methodologies are provided, though these are meant to be more illustrative than exhaustive, intended to introduce the reader to relevant studies and findings generated across these various designs.

Subsequently I review more conceptual issues surrounding the study of gene-environment interaction, covering the nature of gene-environment interaction effects as well as the challenges facing the study of gene-environment interaction, such as difficulties with differentiating various types of interactions, and how issues such as the scaling of environmental measures can have profound implications for studying gene-environment interaction.

I include an overview of epigenetics, a relatively new area of study that provides a potential biological mechanism by which the environment can moderate gene expression and affect behavior.

Finally, I conclude with recommendations for future directions and how we can take research on gene-environment interaction to the next level.


It is important to first address some aspects of terminology surrounding the study of gene-environment interaction. In lay terms, the phrase gene-environment interaction is often used to mean that both genes and environments are important. In statistical terms, this does not necessarily indicate an interaction but could be consistent with an additive model, in which there are main effects of the environment and main effects of genes.

But in a statistical sense an interaction is a very specific thing, referring to a situation in which the effect of one variable cannot be understood without taking into account the other variable. Their effects are not independent. When we refer to gene-environment interaction in a statistical sense, we are referring to a situation in which the effect of genes depends on the environment and/or the effect of the environment depends on genotype. We note that these two alternative conceptualizations of gene-environment interaction are indistinguishable statistically. It is this statistical definition of gene-environment interaction that is the primary focus of this review (except where otherwise noted).
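The distinction between an additive model and a statistical interaction can be sketched numerically. In the toy example below (all means are invented), the genotype effect is identical in every environment under the additive model, while under GxE interaction the genotype effect depends on the environment:

```python
# Hypothetical mean symptom scores in a 2x2 genotype-by-environment design.
# Keys: (genotype risk, environment).
additive = {
    ("low",  "supportive"): 2.0, ("low",  "adverse"): 5.0,
    ("high", "supportive"): 4.0, ("high", "adverse"): 7.0,
}
interactive = {
    ("low",  "supportive"): 2.0, ("low",  "adverse"): 3.0,
    ("high", "supportive"): 2.0, ("high", "adverse"): 8.0,
}

def interaction_contrast(means):
    """Difference between the genotype effect in the adverse environment
    and the genotype effect in the supportive environment. Zero means the
    additive model holds; nonzero means the effect of genes depends on the
    environment (a GxE interaction in the statistical sense)."""
    effect_adverse = means[("high", "adverse")] - means[("low", "adverse")]
    effect_support = means[("high", "supportive")] - means[("low", "supportive")]
    return effect_adverse - effect_support

print(interaction_contrast(additive))     # 0.0 -> genes and environment merely add up
print(interaction_contrast(interactive))  # 5.0 -> genotype effect depends on environment
```

In a regression framework this contrast corresponds to the coefficient on a genotype-by-environment product term; the numbers here are illustrative only.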

It is also important to note that genetic and environmental influences are not necessarily independent factors. That is to say that although some environmental influences may be largely random, such as experiencing a natural disaster, many environmental influences are not entirely random (Kendler et al. 1993).

This phenomenon is called gene-environment correlation.

Three specific ways by which genes may exert an effect on the environment have been delineated (Plomin et al. 1977, Scarr & McCartney 1983):

(a) Passive gene-environment correlation refers to the fact that among biologically related relatives (i.e., nonadoptive families), parents provide not only their children’s genotypes but also their rearing environment. Therefore, the child’s genotype and home environment are correlated.

(b) Evocative gene-environment correlation refers to the idea that individuals’ genotypes influence the responses they receive from others. For example, a child who is predisposed to having an outgoing, cheerful disposition might be more likely to receive positive attention from others than a child who is predisposed to timidity and tears. A person with a grumpy, abrasive temperament is more likely to evoke unpleasant responses from coworkers and others with whom he/she interacts than is a cheerful, friendly person. Thus, evocative gene-environment correlation can influence the way an individual experiences the world.

(c) Active gene-environment correlation refers to the fact that an individual actively selects certain environments and takes away different things from his/her environment, and these processes are influenced by an individual’s genotype. Therefore, an individual predisposed to high sensation seeking may be more prone to attend parties and meet new people, thereby actively influencing the environments he/she experiences.

Evidence exists in the literature for each of these processes. The important point is that many sources of behavioral influence that we might consider “environmental” are actually under a degree of genetic influence (Kendler & Baker 2007), so often genetic and environmental influences do not represent independent sources of influence. This also makes it difficult to determine whether the genes or the environment is the causal agent. If, for example, individuals are genetically predisposed toward sensation seeking, and this makes them more likely to spend time in bars (a gene-environment correlation), and this increases their risk for alcohol problems, are the predisposing sensation-seeking genes or the bar environment the causal agent?

In actuality, the question is moot; they both played a role. It is much more informative to try to understand the pathways of risk than to ask whether the genes or the environment was the critical factor. Though this review focuses on gene-environment interaction, it is important for the reader to be aware that this is but one process by which genetic and environmental influences are intertwined. Additionally, gene-environment correlation must be taken into account when studying gene-environment interaction, a point that is mentioned again later in this review. Excellent reviews covering the nature and importance of gene-environment correlation also exist (Kendler 2011).


Animal Research

Perhaps the most straightforward method for detecting gene-environment interaction is found in animal experimentation: Different genetic strains of animals can be subjected to different environments to directly test for gene-environment interaction. The key advantage of animal studies is that environmental exposure can be made random to genotype, eliminating gene-environment correlation and associated problems with interpretation.

The most widely cited example of this line of research is Cooper and Zubek’s 1958 experiment, in which rats were selectively bred to perform differently in a maze-running experiment (Cooper & Zubek 1958). Under standard environmental conditions, one group of rats consistently performed with few errors (“maze bright”), while a second group committed many errors (“maze dull”). These selectively bred rats were then exposed to various environmental conditions: an enriched condition, in which rats were reared in brightly colored cages with many moveable objects, or a restricted condition, in which there were no colors or toys. The enriched condition had no effect on the maze bright rats, but it substantially improved the performance of the maze dull rats, such that there was no difference between the groups.

Conversely, the restrictive environment did not affect the performance of the maze dull rats, but it substantially diminished the performance of the maze bright rats, again yielding no difference between the groups and demonstrating a powerful gene-environment interaction.

A series of experiments conducted by Henderson on inbred strains of mice, in which environmental enrichment was manipulated, also provides evidence for gene-environment interaction on several behavioral tasks (Henderson 1970, 1972). These studies laid the foundation for many future studies, which collectively demonstrate that environmental variation can have considerable differential impact on outcome depending on the genetic make-up of the animal (Wahlsten et al. 2003).

However, animal studies are not without their limitations. Gene-environment interaction effects detected in animal studies are still subject to the problem of scale (Mather & Jinks 1982), as discussed in greater detail later in this review.

Human Research

Traditional behavior genetic designs

Demonstrating gene-environment interaction in humans has been considerably more difficult, because ethical constraints require researchers to make use of natural experiments, so environmental exposures are not random. Three traditional study designs have been used to demonstrate genetic influence on behavior: family studies, adoption studies, and twin studies. These designs have also been used to detect gene-environment interaction, and each is discussed in turn.

Family studies

Demonstration that a behavior aggregates in families is the first step in establishing a genetic basis for a disorder (Hewitt & Turner 1995). Decreasing similarity with decreasing degrees of relatedness lends support to genetic influence on a behavior (Gottesman 1991). This is a necessary, but not sufficient, condition for heritability. Similarity among family members is due both to shared genes and shared environment; family studies cannot tease apart these two sources of variance to determine whether familiality is due to genetic or common environmental causes (Sherman et al. 1997).

However, family studies provide a powerful method for identifying gene-environment interaction. By comparing high-risk children, identified as such by the presence of psychopathology in their parents, with a control group of low-risk individuals, it is possible to test the effects of environmental characteristics on individuals varying in genetic risk (Cannon et al. 1990).

In a high-risk study of Danish children with schizophrenic mothers and matched controls, institutional rearing was associated with an elevated risk of schizophrenia only among those children with a genetic predisposition (Cannon et al. 1990). When these subjects were further classified on genetic risk as having one or two affected parents, a significant interaction emerged between degree of genetic risk and birth complications in predicting ventricle enlargement: The relationship between obstetric complications and ventricular enlargement was greater in the group of individuals with one affected parent as compared to controls, and greater still in the group of individuals with two affected parents (Cannon et al. 1993). Another study also found that among individuals at high risk for schizophrenia, experiencing obstetric complications was related to an earlier hospitalization (Malaspina et al. 1999).

Another creative method has made use of the natural experiment of family migration to demonstrate gene-environment interaction: The high rate of schizophrenia among African-Caribbean individuals who emigrated to the United Kingdom is presumed to result from gene-environment interaction. Parents and siblings of first-generation African-Caribbean probands have risks of schizophrenia similar to those for white individuals in the area. However, the siblings of second-generation African-Caribbean probands have markedly elevated rates of schizophrenia, suggesting that the increase in schizophrenia rates is due to an interaction between genetic predispositions and stressful environmental factors encountered by this population (Malaspina et al. 1999, Moldin & Gottesman 1997).

Although family studies provide a powerful design for demonstrating gene-environment interaction, there are limitations to their utility. High-risk studies are very expensive to conduct because they require the examination of individuals over a long period of time. Additionally, a large number of high-risk individuals must be studied in order to obtain a sufficient number of individuals who eventually become affected, due to the low base rate of most mental disorders. Because of these limitations, few examples of high-risk studies exist.

Adoption studies

Adoption and twin studies are able to clarify the extent to which similarity among family members is due to shared genes versus shared environment. In their simplest form, adoption studies involve comparing the extent to which adoptees resemble their biological relatives, with whom they share genes but not family environment, with the extent to which adoptees resemble their adoptive relatives, with whom they share family environment but not genes.

Adoption studies have been pivotal in advancing our understanding of the etiology of many disorders and drawing attention to the importance of genetic factors. For example, Heston’s historic adoption study was critical in dispelling the myth of schizophrenogenic mothers in favor of a genetic transmission explaining the familiality of schizophrenia (Heston & Denney 1967).

Furthermore, adoption studies provide a powerful method of detecting gene-environment interactions and have been called the human analogue of strain-by-treatment animal studies (Plomin & Hershberger 1991). The genotype of adopted children is inferred from their biological parents, and the environment is measured in the adoptive home. Individuals thought to be at genetic risk for a disorder, but reared in adoptive homes with different environments, are compared to each other and to control adoptees.

This methodology has been employed by a number of research groups to document gene-environment interactions in a variety of clinical disorders: In a series of Iowa adoption studies, Cadoret and colleagues demonstrated that a genetic predisposition to alcohol abuse predicted major depression in females only among adoptees who also experienced a disturbed environment, as defined by psychopathology, divorce, or legal problems among the adoptive parents (Cadoret et al. 1996).

In another study, depression scores and manic symptoms were found to be higher among individuals with a genetic predisposition and a later age of adoption (suggesting a more transient and stressful childhood) than among those with only a genetic predisposition (Cadoret et al. 1990).

In an adoption study of Swedish men, mild and severe alcohol abuse were more prevalent only among men who had both a genetic predisposition and more disadvantaged adoptive environments (Cloninger et al. 1981).

The Finnish Adoptive Family Study of Schizophrenia found that high genetic risk was associated with increased risk of schizophrenic thought disorder only when combined with communication deviance in the adoptive family (Wahlberg et al. 1997).

Additionally, the adoptees had a greater risk of psychological disturbance, defined as neuroticism, personality disorders, and psychoticism, when the adoptive family environment was disturbed (Tienari et al. 1990).

These studies have demonstrated that genetic predispositions for a number of psychiatric disorders interact with environmental influences to manifest disorder.

However, adoption studies suffer from a number of methodological limitations. Adoptive parents and biological parents of adoptees are often not representative of the general population. Adoptive parents tend to be socioeconomically advantaged and to have lower rates of mental problems, due to the extensive screening procedures conducted by adoption agencies (Kendler 1993). Biological parents of adoptees tend to be atypical as well, but in the opposite direction. Additionally, selective placement by adoption agencies confounds the clear-cut separation between genetic and environmental effects by matching adoptees and adoptive parents on demographics such as race and religion. An increasing number of adoptions also allow contact between the biological parents and adoptive children, further eroding the traditional separation of genetic and environmental influences that made adoption studies useful for genetically informative research.

Finally, greater contraceptive use is making adoption increasingly rare (Martin et al. 1997). Accordingly, this research strategy has become increasingly challenging, though a number of current adoption studies continue to make important contributions to the field (Leve et al. 2010; McGue et al. 1995, 1996).

Twin studies

Twins provide a number of ways to study gene-environment interaction. One such method is to study monozygotic twins reared apart (MZA). MZAs provide a unique opportunity to study the influence of different environments on identical genotypes. In the Swedish Adoption/Twin Study of Aging, data from 99 pairs of MZAs were tested for interactions between childhood rearing and adult personality (Bergeman et al. 1988).

Several significant interactions emerged. In some cases, the environment had a stronger impact on individuals genetically predisposed to be low on a given trait (based on the cotwin’s score). For example, individuals high in extraversion expressed the trait regardless of the environment; however, individuals predisposed to low extraversion had even lower scores in the presence of a controlling family.

In other traits, the environment had a greater impact on individuals genetically predisposed to be high on the trait: Individuals predisposed to impulsivity were even more impulsive in a conflictual family environment; individuals low on impulsivity were not affected.

Finally, some environments influenced both individuals who were high and low on a given trait, but in opposite directions: Families that were more involved masked genetic differences between individuals predisposed toward high or low neuroticism, but greater genetic variation emerged in less controlling families.

The implementation of population-based twin studies, the inclusion of measured environments in twin studies, and advances in biometrical modeling techniques for twin data made it possible to study gene-environment interaction within the framework of the classic twin study. Traditional twin studies involve comparisons of monozygotic (MZ) and dizygotic (DZ) twins reared together. MZ twins share all of their genetic variation, whereas DZ twins share on average 50% of their genetic make-up; however, both types of twins are age-matched siblings sharing their family environments. This allows heritability, or the proportion of variance attributed to additive genetic effects, to be estimated by (a) doubling the difference between the MZ correlation and the DZ correlation, for quantitative traits, or (b) comparing concordance rates between MZs and DZs, for qualitative disorders (McGue & Bouchard 1998).
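Estimate (a) can be sketched directly. A minimal numerical example, using hypothetical twin correlations (the values below are invented for illustration):

```python
def ace_estimates(r_mz, r_dz):
    """Falconer-style ACE decomposition from twin correlations:
    a2 = additive genetic variance (heritability), c2 = shared
    environment, e2 = nonshared environment. Assumes a simple
    additive model with no interaction or assortative mating."""
    a2 = 2 * (r_mz - r_dz)   # double the MZ-DZ correlation difference
    c2 = r_mz - a2           # equivalently, 2*r_dz - r_mz
    e2 = 1 - r_mz            # variance not shared even by MZ twins
    return a2, c2, e2

# Hypothetical correlations: r_MZ = 0.60, r_DZ = 0.35
# yields roughly a2 = 0.50, c2 = 0.10, e2 = 0.40
a2, c2, e2 = ace_estimates(0.60, 0.35)
```

Biometrical model-fitting, discussed next, estimates the same quantities by maximum likelihood rather than this simple algebra, but the logic is the same.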

Biometrical model-fitting made it possible for researchers to address increasingly sophisticated research questions by allowing one to statistically specify predictions made by various hypotheses and to compare models testing competing hypotheses. By modeling data from subjects who vary on exposure to a specified environment, one could test whether there is differential expression of genetic influences in different environments.

Early examples of gene-environment interaction in twin models necessitated “grouping” environments to fit multiple group models. The basic idea was simple: Fit models to data for people in environment 1 and environment 2 separately and then test whether there were significant differences in the importance of genetic and environmental factors across the groups using basic structural equation modeling techniques. In an early example of gene-environment interaction, data from the Australian twin register were used to test whether the relative importance of genetic effects on alcohol consumption varied as a function of marital status, and in fact they did (Heath et al. 1989).

Having a marriage-like relationship reduced the impact of genetic influences on drinking: Among the younger sample of twins, genetic liability accounted for only half as much variance in drinking among married women (31%) as among unmarried women (60%). A parallel effect was found among the adult twins: Genetic effects accounted for less than 60% of the variance in married respondents but more than 76% in unmarried respondents (Heath et al. 1989).

In an independent sample of Dutch twins, religiosity was also shown to moderate genetic and environmental influences on alcohol use initiation in females (with nonsignificant trends in the same direction for males): In females without a religious upbringing, genetic influences accounted for 40% of the variance in alcohol use initiation compared to 0% in religiously raised females. Shared environmental influences were far more important in the religious females (Koopmans et al. 1999).

In data from our population-based Finnish twin sample, we also found that regional residency moderates the impact of genetic and environmental influences on alcohol use. Genetic effects played a larger role in longitudinal drinking patterns from late adolescence to early adulthood among individuals residing in urban settings, whereas common environmental effects exerted a greater influence across this age range among individuals in rural settings (Rose et al. 2001).

When one has pairs discordant for exposure, it is also possible to ask about genetic correlation between traits displayed in different environments.

One obvious limitation of modeling gene-environment interaction in this way was that it constrained investigation to environments that fell into natural groupings (e.g., married/unmarried; urban/rural) or forced investigators to create groups based on environments that may actually be more continuous in nature (e.g., religiosity). In the first extension of this work to quasi-continuous environmental moderation, we developed a model that allowed genetic and environmental influences to vary as a function of a continuous environmental moderator and used this model to follow up on the urban/rural interaction reported previously (Dick et al. 2001).
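The core idea of such continuous-moderation models can be sketched as follows. Each path coefficient (genetic, shared-environmental, nonshared-environmental) is allowed to vary linearly with the moderator, so heritability becomes a function of the environment; the linear form follows the general parameterization used in these models, but the parameter values below are invented for illustration:

```python
def moderated_h2(m, a0, ax, c0, cx, e0, ex):
    """Heritability at moderator value m, where each path coefficient
    varies linearly with m (a = a0 + ax*m, etc.), as in continuous
    gene-environment interaction twin models."""
    a, c, e = a0 + ax * m, c0 + cx * m, e0 + ex * m
    va, vc, ve = a * a, c * c, e * e        # variance components
    return va / (va + vc + ve)              # standardized genetic variance

# Invented parameters in which the genetic path grows with the moderator
# (say, urbanicity) while the shared-environment path shrinks; heritability
# rises substantially across the moderator's range:
for m in (0.0, 0.5, 1.0):
    print(round(moderated_h2(m, a0=0.4, ax=0.4, c0=0.6, cx=-0.4, e0=0.5, ex=0.0), 2))
```

In a real analysis these paths are estimated by maximum likelihood from raw twin data rather than chosen by hand, but the sketch shows how modest linear moderation of paths can produce the large differences in genetic variance between environmental extremes described below.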

We believed it likely that the urban/rural moderation effect reflected a composite of different processes at work. Accordingly, we expanded the analyses to incorporate more specific information about neighborhood environments, using government-collected information about the specific municipalities in which the twins resided (Dick et al. 2001). We found that genetic influences were stronger in environments characterized by higher rates of migration in and out of the municipality; conversely, shared environmental influences predominated in local communities characterized by little migration.

We also found that genetic predispositions were stronger in communities composed of a higher percentage of young adults slightly older than our age-18 Finnish twins and in regions where there were higher alcohol sales.

Further, the magnitude of genetic moderation observed in these models that allowed for variation as a function of a quasi-continuous environmental moderator was striking, with nearly a fivefold difference in the magnitude of genetic effects between environmental extremes in some cases.

The publication of a paper the following year (Purcell 2002) that provided straightforward scripts for continuous gene-environment interaction models using the most widely used program for twin analyses, Mx (Neale 2000), led to a surge of papers studying gene-environment interaction in the twin literature. These scripts also offered the advantage of being able to take into account gene-environment correlation in the context of gene-environment interaction. This was an important advance because previous examples of gene-environment interaction in twin models had been limited to environments that showed no evidence of genetic effects so as to avoid the confounding of gene-environment interaction with gene-environment correlation.

Using these models, we have demonstrated that genetic influences on adolescent substance use are enhanced in environments with lower parental monitoring (Dick et al. 2007c) and in the presence of substance-using friends (Dick et al. 2007b). Similar effects have been demonstrated for more general externalizing behavior: Genetic influences on antisocial behavior were higher in the presence of delinquent peers (Button et al. 2007) and in environments characterized by high parental negativity (Feinberg et al. 2007), low parental warmth (Feinberg et al. 2007), and high paternal punitive discipline (Button et al. 2008).

Further, in an extension of the socioregional-moderating effects observed on age-18 alcohol use, we found a parallel moderating role of these socioregional variables on age-14 behavior problems in girls in a younger Finnish twin sample. Genetic influences assumed greater importance in urban settings, communities with greater migration, and communities with a higher percentage of slightly older adolescents.

Other psychological outcomes have also yielded significant evidence of gene-environment interaction effects in the twin literature. For example, a moderating effect, parallel to that reported for alcohol consumption above, has been reported for depression symptoms (Heath et al. 1998) in females. A marriage-like relationship reduced the influence of genetic liability to depression symptoms, paralleling the effect found for alcohol consumption: Genetic factors accounted for 29% of the variance in depression scores among married women, but for 42% of the variance in young unmarried females and 51% of the variance in older unmarried females (Heath et al. 1998).

Life events were also found to moderate the impact of factors influencing depression in females (Kendler et al. 1991). Genetic and/or shared environmental influences were significantly more important in influencing depression in high-stress than in low-stress environments, as defined by a median split on a life-event inventory, although there was insufficient power to determine whether the moderating influence was on genetic or environmental effects.

More than simply accumulating examples of moderation of genetic influence by environmental factors, efforts have been made to integrate this work into theoretical frameworks surrounding the etiology of different clinical conditions. This is critical if science is to advance beyond individual observations to testable broad theories.

A 2005 review paper by Shanahan and Hofer suggested four processes by which social context may moderate the relative importance of genetic effects (Shanahan & Hofer 2005).

The environment may (a) trigger or (b) compensate for a genetic predisposition, (c) control the expression of a genetic predisposition, or (d) enhance a genetic predisposition (referring to the accentuation of "positive" genetic predispositions).

These processes are not mutually exclusive and can represent different ends of a continuum. For example, the interaction between genetic susceptibility and life events may represent a situation whereby the experience of life events triggers a genetic susceptibility to depression. Conversely, “protective” environments, such as marriage-like relationships and low stress levels, can buffer against or reduce the impact of genetic predispositions to depressive problems.

Many different processes are likely involved in the gene-environment interactions observed for substance use and antisocial behavior. For example, family environment and peer substance use/delinquency likely constitute a spectrum of risk or protection, and family/friend environments that are at the “poor” extreme may trigger genetic predispositions toward substance use and antisocial behavior, whereas positive family and friend relationships may compensate for genetic predispositions toward substance use and antisocial behavior.

Social control also appears to be a particularly relevant process in substance use, as it is likely that being in a marriage-like relationship and/or being raised with a religious upbringing exert social norms that constrain behavior and thereby reduce genetic predispositions toward substance use.

Further, the availability of the substance also serves as a level of control over the ability to express genetic predispositions and, accordingly, over the degree to which genetic influences will be apparent on an outcome at the population level. In a compelling illustration of this effect, Boardman and colleagues used twin data from the National Survey of Midlife Development in the United States and found a significant reduction in the importance of genetic influences on regular smoking following legislation prohibiting smoking in public places (Boardman et al. 2010).

Molecular analyses

All of the analyses discussed thus far use latent, unmeasured indices of genetic influence to detect the possible presence of gene-environment interaction. This is largely because it was possible to test for the presence of latent genetic influence in humans (via comparisons of correlations between relatives with different degrees of genetic sharing) long before molecular genetics yielded the techniques necessary to identify specific genes influencing complex psychological disorders.

However, recent advances have made the collection of deoxyribonucleic acid (DNA) and the resultant genotyping relatively cheap and straightforward. Additionally, the publication of high-profile papers brought gene-environment interaction to the forefront of mainstream psychology. In a pair of papers published in Science in 2002 and 2003, Caspi and colleagues analyzed data from a prospective, longitudinal New Zealand birth cohort followed from birth through adulthood.

In the 2002 paper, they reported that a functional polymorphism in the gene encoding the neurotransmitter-metabolizing enzyme monoamine oxidase A (MAOA) moderated the effect of maltreatment: Males who carried the genotype conferring high levels of MAOA expression were less likely to develop antisocial problems when exposed to maltreatment (Caspi et al. 2002). In the 2003 paper, they reported that a functional polymorphism in the promoter region of the serotonin transporter gene (5-HTT) was found to moderate the influence of stressful life events on depression. Individuals carrying the short allele of the 5-HTT promoter polymorphism exhibited more depressive symptoms, diagnosable depression, and suicidality in relation to stressful life events than did individuals homozygous for the long allele (Caspi et al. 2003).

Both studies were significant in demonstrating that genetic variation can moderate individuals’ sensitivity to environmental events.

These studies sparked a multitude of reports that aimed to replicate, or to further extend and explore, the findings of the original papers, resulting in huge literatures surrounding each reported gene-environment interaction in the years since the original publications (e.g., Edwards et al. 2009, Enoch et al. 2010, Frazzetto et al. 2007, Kim-Cohen et al. 2006, McDermott et al. 2009, Prom-Wormley et al. 2009, Vanyukov et al. 2007, Weder et al. 2009). It is beyond the scope of this review to detail these studies; however, of note was the publication in 2009 of a highly publicized meta-analysis of the interaction between 5-HTT, stressful life events, and risk of depression that concluded there was "no evidence that the serotonin transporter genotype alone or in interaction with stressful life events is associated with an elevated risk of depression in men alone, women alone, or in both sexes combined" (Risch et al. 2009). Further, the authors were critical of the rapid embracing of gene-environment interaction and the substantial resources that have been devoted to this research.

The paper stimulated considerable backlash against the study of gene-environment interactions, and the pendulum appeared to be swinging back in the other direction. However, a recent review by Caspi and colleagues, entitled "Genetic Sensitivity to the Environment: The Case of the Serotonin Transporter Gene and Its Implications for Studying Complex Diseases and Traits," highlighted the fact that evidence for the involvement of 5-HTT in stress sensitivity comes from at least four different types of studies: observational studies in humans, experimental neuroscience studies, studies in nonhuman primates, and studies of 5-HTT mutations in rodents (Caspi et al. 2010).

Further, the authors made the distinction between different cultures of evaluating gene-environment interactions: a purely statistical (theory-free) approach that relies wholly on meta-analysis (e.g., such as that taken by Risch et al. 2009) versus a construct-validity (theory-guided) approach that looks for a nomological network of convergent evidence, such as the approach that they took.

It is likely that this distinction also reflects differences in training and emphasis across fields. The most cutting-edge genetic strategies at any given point, though they have changed drastically and rapidly over the past several decades, have generally involved atheoretical methods for gene identification (Neale et al. 2008). This was true of early linkage analyses, in which ~400 to 1,000 markers were scanned across the genome to search for chromosomal regions shared by affected family members, suggesting that a gene in such a region harbored risk for the particular outcome under study. This allowed geneticists to search for genes without having to know anything about the underlying biology, the idea being that identifying risk genes would itself be informative about etiological processes, an appealing property given that our understanding of the biology of most psychiatric conditions is limited.

Although it is now recognized that linkage studies were underpowered to detect genes of small effect, such as those now thought to be operating in psychiatric conditions, this atheoretical approach was retained in the next generation of gene-finding methods that replaced linkage, the implementation of genome-wide association studies (GWAS) (Cardon 2006). GWAS also have the general framework of scanning markers located across the entire genome in an effort to detect association between genetic markers and disease status; however, in GWAS over a million markers (or more, on the newest genetic platforms) are analyzed.

The next technique on the horizon is sequencing, in which entire stretches of DNA are read to determine the exact base-pair sequence of a given region (McKenna et al. 2010).

From linkage to sequencing, common across all these techniques is an atheoretical framework for finding genes that necessarily involves conducting very large numbers of tests. Accordingly, there has been great emphasis in the field of genetics on correction for multiple testing (van den Oord 2007). In addition, the estimated magnitude of effect of genetic variants thought to influence complex behavioral outcomes has been continually shifted downward as studies that were sufficiently powered to detect effect sizes previously thought to be reasonable have failed to generate positive findings (Manolio et al. 2009). GWAS have led the field to believe that genes influencing complex behavioral outcomes likely have odds ratios (ORs) on the order of 1.1. This has led to a need for incredibly large sample sizes, requiring meta-analytic GWAS efforts with several tens of thousands of subjects (Landi et al. 2009, Lindgren et al. 2009).
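The multiple-testing burden is why genome-wide significance thresholds are so stringent. A minimal sketch of the arithmetic, assuming (as is conventional) roughly one million effectively independent tests:

```python
# Family-wise error control across a genome-wide scan: a Bonferroni
# correction of alpha = 0.05 over ~1,000,000 markers yields the
# stringent per-test threshold conventionally used in GWAS.
n_tests = 1_000_000
alpha = 0.05
per_test_threshold = alpha / n_tests   # the familiar 5e-8 cutoff
```

A marker must therefore reach p < 5 × 10⁻⁸ to be declared genome-wide significant, which, combined with odds ratios near 1.1, drives the enormous sample sizes described above.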

It is important to note that there has been increasing attention to the topic of gene-environment interaction from geneticists (Engelman et al. 2009). This likely reflects, in part, frustration and difficulty with identifying genes that impact complex psychiatric outcomes. Several hypotheses have been put forth as possible explanations for the failure to robustly detect genes involved in psychiatric outcomes, including a genetic model involving far more genes, each of very small effect, than was previously recognized, and failure to pay adequate attention to rare variants, copy number variants, and gene-environment interaction (Manolio et al. 2009).

Accordingly, gene-environment interaction is being discussed far more in the area of gene finding than in years past; however, these discussions often involve atheoretical approaches and center on methods to adequately detect gene-environment interaction in the presence of extensive multiple testing (Gauderman 2002, Gauderman et al. 2010). The papers by Risch et al. (2009) and Caspi et al. (2010) on the interaction between 5-HTT, life stress, and depression highlight the conceptual, theoretical, and practical differences that continue to exist between the fields of genetics and psychology surrounding the identification of gene-environment interaction effects.


An important consideration in the study of gene-environment interaction is the nature, or shape, of the interaction that one hypothesizes. There are two primary types: fan-shaped interactions and crossover interactions.

One type of interaction is the fan-shaped interaction. In this type of interaction, the influence of genotype is greater in one environmental context than in another. This is the kind of interaction that is hypothesized by a diathesis-stress framework, whereby genetic influences become more apparent, i.e., are more strongly related to outcome, in the presence of negative environmental conditions. There is a reduced (or no) association of genotype with outcome in the absence of exposure to particular environmental conditions.

The literature surrounding depression and life events would be an example of a hypothesized fan-shaped interaction: When life stressors are encountered, genetically vulnerable individuals are more prone to developing depression, whereas in the absence of life stressors, these individuals may be no more likely to develop depression. In essence, it is only when adverse environmental conditions are experienced that the genes “come on-line.”

Gene-environment interactions in the area of adolescent substance use are also hypothesized to be fan-shaped, where some environmental conditions will allow greater opportunity to express genetic predispositions, allowing for more variation by genotype, and other environments will exert social control in such a way as to curb genetic expression (Shanahan & Hofer 2005), leading to reduced genetic variance.

Twin analyses yielding evidence of genetic influences being more or less important in different environmental contexts are generally suggestive of fan-shaped interactions. Changes in the overall heritability do not necessarily dictate that any one specific susceptibility gene will operate in a parallel manner; however, a change in heritability suggests that at least a good portion of the involved genes (assuming many genes of approximately equal and small effect) must be operating in that manner for a difference in heritability by environment to be detectable.

The diathesis-stress model has largely been the dominant model in psychiatry. Gene-finding efforts have focused on the search for vulnerability genes, and gene-environment interaction has been discussed in the context of these genetic effects becoming more or less important under particular environmental conditions.

[Figure: Different types of gene-environment interactions.]

More recently, an alternative framework has been proposed by Belsky and colleagues, the differential susceptibility hypothesis, in which the same individuals who are most adversely affected by negative environments may also be those who are most likely to benefit from positive environments. Rather than searching for “vulnerability genes” influencing psychiatric and behavioral outcomes, they propose the idea of “plasticity genes,” or genes involved in responsivity to environmental conditions (Belsky et al. 2009).

Belsky and colleagues reviewed the literatures surrounding gene-environment interactions associated with three widely studied candidate genes, MAOA, 5-HTT, and DRD4, and suggested that the results provide evidence for differential susceptibility associated with these genes (Belsky et al. 2009).

Their hypothesis is closely related to the concept of biological sensitivity to context (Ellis & Boyce 2008). The idea of biological sensitivity to context has its roots in evolutionary developmental biology, whereby selection pressures should favor genotypes that support a range of phenotypes in response to environmental conditions because this flexibility would be beneficial from the perspective of survival of the species. However, biological sensitivity to context has the potential for both positive effects under more highly supportive environmental conditions and negative effects in the presence of more negative environmental conditions. This theory has been most fully developed and discussed in the context of stress reactivity (Boyce & Ellis 2005), where it has been demonstrated that highly reactive children show disproportionate rates of morbidity when raised in adverse environments, but particularly low rates when raised in low-stress, highly supportive environments (Ellis et al. 2005). In these studies, high reactivity was defined by response to different laboratory challenges, and the authors noted that the underlying cellular mechanisms that would produce such responses are currently unknown, though genetic factors are likely to play a role (Ellis & Boyce 2008).

Although fan-shaped and crossover interactions are theoretically different, in practice they can be quite difficult to differentiate. One can imagine several "variations on the theme" for both fan-shaped and crossover interactions. In general, for a fan-shaped interaction, a main effect of genotype will be present as well as a main effect of the environment. There is a main effect of genotype at both environmental extremes; it is simply far stronger in environment 5 (far right side of the graph) than in environment 1 (far left side). But one could imagine a fan-shaped interaction in which there was no genotypic effect at one extreme (e.g., the lines converge to the same phenotypic mean at environment 1).

Further, fan-shaped interactions can differ in the slope of the lines for each genotype, which indicate how much the environment is modifying genetic effects. In the crossover interaction shown above, the lines cross at environment 3 (i.e., in the middle). But crossover interactions can vary in the location of the crossover. It is possible that crossing over only occurs at the environmental extreme.
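The two shapes can be expressed with a simple linear genotype-by-environment model (all coefficients below are invented for illustration): a fan emerges when the interaction term steepens one genotype's slope, and a crossover emerges when the genotype effect reverses sign across the environmental range.

```python
def phenotype(g, e, b_g, b_e, b_ge, b0=0.0):
    """Expected phenotype under a linear genotype-by-environment model."""
    return b0 + b_g * g + b_e * e + b_ge * g * e

# Fan-shaped: both genotypes respond to the environment, but the risk
# genotype (g = 1) responds more steeply, so the lines diverge.
fan = [(phenotype(0, e, 0.5, 1.0, 1.5),
        phenotype(1, e, 0.5, 1.0, 1.5)) for e in range(1, 6)]

# Crossover: the genotype effect reverses sign across the range; the
# lines cross where b_g + b_ge * e = 0, here at environment 3.
cross = [(phenotype(0, e, 1.5, 1.0, -0.5),
          phenotype(1, e, 1.5, 1.0, -0.5)) for e in range(1, 6)]
```

Changing b_ge shifts where (and whether) the lines cross, which is exactly why the location of the crossover, and whether it occurs within the measured environmental range at all, varies across studies.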

As previously noted, the crossing over of the genotypic groups in the Caspi et al. publications of the interactions between the 5-HTT gene, life events, and depression (Caspi et al. 2003) and between MAOA, maltreatment, and antisocial behavior (Caspi et al. 2002) occurred at the extreme low ends of the environmental measures, and the degree of crossing over was quite modest. Rather, the shape of the interactions (and the way the interactions were conceptualized in the papers) was largely fan-shaped, whereby certain genotypic groups showed stronger associations with outcome as a function of the environmental stressor.

Also, in both cases, the genetic variance was far greater under one environmental extreme than the other, rather than being approximately equivalent at both ends of the distribution, but with genotypic effects in opposite directions. In general, it is assumed that main effects of genotype will not be detected in crossover interactions, but this will actually depend on the frequency of the different levels of the environment. This is also true of fan-shaped interactions, but to a lesser degree.

Evaluating the relative importance, or frequency of existence, of each type of interaction is complicated by the fact that there is far more power to detect crossover interactions than fan-shaped interactions. Knowing that most of our genetic studies are likely underpowered, we would expect a preponderance of crossover effects to be detected as compared to fan-shaped effects purely as a statistical artifact. Further, even when a crossover effect is observed, power considerations can make it difficult to determine if it is “real.” For example, an interaction observed in our data between the gene CHRM2, parental monitoring, and adolescent externalizing behavior yielded consistent evidence for a gene-environment interaction, with a crossing of the observed regression lines. However, the mean differences by genotype were not significant at either end of the environmental continuum, so it is unclear whether the crossover reflected true differential susceptibility or simply overfitting of the data across the environmental levels containing the majority of the observations, which contributed to a crossing over of the regression lines at one environmental extreme (Dick et al. 2011).

Larger studies would have greater power to make these differentiations; however, there is the unfortunate paradox that the samples with the greatest depth of phenotypic information, allowing for more complex tests about risk associated with particular genes, usually have much smaller sample sizes due to the trade-off necessary to collect the rich phenotypic information. This is an important issue for gene-environment interaction studies in general: Most have been underpowered, and this raises concerns about the likelihood that detected effects are true positives. There are several freely available programs to estimate power (Gauderman 2002, Purcell et al. 2003), and it is critical that papers reporting gene-environment interaction effects (or a lack thereof) include information about the power of their sample in order to interpret the results.

Another widely contested issue is whether gene-environment interactions should be examined only when main effects of genotype are detected. Perhaps not surprisingly, this is the approach most commonly advocated by statistical geneticists (Risch et al. 2009) and the one recommended by the Psychiatric GWAS Consortium (Psychiatr. GWAS Consort. Steer. Comm. 2008). However, this strategy could preclude the detection of crossover interaction effects as well as gene-environment interactions that occur in the presence of relatively low-frequency environments. In addition, if genetic effects are conditional on environmental exposure, main effects of genotype could vary across samples; that is to say, a genetic effect could be detected in one sample and fail to replicate in another if the samples differ on environmental exposure.

Another issue with the detection and interpretation of gene-environment interaction effects involves the range of environments being studied. For example, if we assume that the five levels of the environment shown above represent the true full range of environments that exist, a study that only included individuals from environments 3–5 would conclude that there is a fan-shaped gene-environment interaction. Belsky and colleagues (2009) have suggested this may be particularly problematic in the psychiatric literature because only in rare exceptions (Bakermans-Kranenburg & van Ijzendoorn 2006, Taylor et al. 2006) has the measured environment included both positive and negative ends of the spectrum. Rather, the absence of environmental stressors has usually constituted the "low" end of the environment, e.g., the absence of life stressors (Caspi et al. 2003) or the absence of maltreatment (Caspi et al. 2002). This could lead individuals to conclude there is a fan-shaped interaction because they are essentially failing to measure, with reference to the figure above, environments 0–3, which represent the positive end of the environmental continuum.

One can imagine a number of other incorrect conclusions that could be drawn about the nature of gene-environment interaction effects as a result of a restricted range of environmental measures. For example, in Figure B, measurement of individuals from environments 0–3 would lead one to conclude that genetic effects play a stronger role at lower levels of environmental exposure, whereas measurement of individuals from environments 3–5 would lead one to conclude that genetic effects play a stronger role at higher levels of exposure to the same environmental variable. In Figure A, if measurement were limited to environments 0–3, then, depending on sample size, there might be inadequate power to detect deviation from a purely additive genetic model, e.g., the slopes of the genotypic lines may not differ significantly.

It is also important to note that not only are there several scenarios that would lead one to make incorrect conclusions about the nature of a gene-environment interaction effect, there are also scenarios that would lead one to conclude that a gene-environment interaction exists when it actually does not. Several of these are detailed in a sobering paper by my colleague Lindon Eaves, in which significant evidence for gene-environment interaction was detected quite frequently using standard regression methods even when the simulated data reflected strictly additive models (Eaves 2006). This was particularly problematic when using logistic regression with a dichotomous diagnosis as the outcome. The problem was further exaggerated when selected samples were analyzed.

An additional complication with evaluating gene-environment interactions in psychology is that often our environmental measures don’t have absolute scales of measurement. For example, what is the “real” metric for measuring a construct like parent-child bonding, or maltreatment, or stress? This becomes critical because fan-shaped interactions are very sensitive to scaling. Often a transformation of the scale scores will make the interaction disappear. What does it mean if the raw variable shows an interaction but the log transformation of the scale scores does not? Is the interaction real? Is one metric for measuring the environment a better reflection of the “real” nature of the environment than another?
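A tiny numeric sketch (entirely hypothetical numbers of my own, not taken from any study discussed here) shows just how scale-dependent an interaction can be: an outcome that is multiplicative in gene and environment shows a nonzero interaction on the raw scale, and that interaction vanishes after a log transformation.

```python
import math

def outcome(g, e):
    """Hypothetical raw-score outcome, multiplicative in genotype g and
    environment e (both coded 0/1)."""
    return math.exp(0.5 * g + 0.3 * e)

def interaction(f):
    """Difference-in-differences across g in {0,1}, e in {0,1};
    zero means the effects are additive on that scale."""
    return (f(1, 1) - f(1, 0)) - (f(0, 1) - f(0, 0))

raw_gxe = interaction(outcome)                               # nonzero: apparent GxE
log_gxe = interaction(lambda g, e: math.log(outcome(g, e)))  # ~0: it disappears
print(raw_gxe, log_gxe)
```

Neither scale is privileged by the data themselves, which is precisely why the choice of metric for the environment and outcome matters so much.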

Many of the environments of interest to psychologists do not have true metrics, such as those that exist for measures such as height, weight, or other physiological variables. This is an issue for the study of gene-environment interaction. It becomes even more problematic when you consider that logistic regression is the method commonly used to test for gene-environment interactions with dichotomous disease status outcomes. Logistic regression involves a logarithmic transformation of the probability of being affected. By definition, this changes the nature of the relationship between the variables being modeled. This compounds problems associated with gene-environment interactions being scale dependent.


An enduring question remains in the study of gene-environment interaction: how does the environment “get under the skin”? Stated in another way:

What are the biological processes by which exposure to environmental events could affect outcome?

Epigenetics is one candidate mechanism. Excellent recent reviews on this topic exist (Meaney 2010, Zhang & Meaney 2010), and I provide a brief overview here.

It is important to note, however, that although epigenetics is increasingly discussed in the context of gene-environment interaction, it does not relate directly to gene-environment interaction in the statistical sense, as differentiated previously in this review. That is to say that epigenetic processes likely tell us something about the biological mechanisms by which the environment can affect gene expression and impact behavior, but they are not informative in terms of distinguishing between additive versus interactive environmental effects.

Although variability exists in defining the term, epigenetics generally refers to modifications to the genome that do not involve a change in nucleotide sequence. To understand this concept, let us review a bit about basic genetics.

The expression of a gene is influenced by transcription factors (proteins), which bind to specific sequences of DNA. It is through the binding of transcription factors that genes can be turned on or off. Epigenetic mechanisms involve changes to how readily transcription factors can access the DNA. Several different types of epigenetic changes are known to exist that involve different types of chemical changes that can regulate DNA transcription.

One epigenetic process that affects transcription-factor binding is DNA methylation. DNA methylation involves the addition of a methyl group (CH3) onto a cytosine (one of the four bases that make up DNA). This leads to gene silencing because methylated DNA hinders the binding of transcription factors.

A second major regulatory mechanism is related to the configuration of DNA. DNA is wrapped around clusters of histone proteins to form nucleosomes, which together are organized into chromatin. When the chromatin is tightly condensed, it is difficult for transcription factors to reach the DNA, and the gene is silenced. In contrast, when the chromatin is open, the gene can be activated and expressed. Accordingly, modifications to the histone proteins that form the core of the nucleosome can affect the initiation of transcription by affecting how readily transcription factors can access the DNA and bind to their appropriate sequences.

Epigenetic modifications of the genome have long been known to exist. For example, all cells in the body share the same DNA; accordingly, there must be a mechanism whereby different genes are active in liver cells than, for example, brain cells. The process of cell specialization involves silencing certain portions of the genome in a manner specific to each cell. DNA methylation is a mechanism known to be involved in cell specialization.

Another well-known example of DNA methylation involves X-inactivation in females. Because females carry two copies of the X chromosome while males carry one, one copy is inactivated in each female cell to balance gene dosage. This silencing of one copy of the X chromosome involves DNA methylation.

Genomic imprinting is another long established principle known to involve DNA methylation. In genomic imprinting the expression of specific genes is determined by the parent of origin. For example, the copy of the gene inherited from the mother is silenced, while the copy inherited from the father is active (or vice versa). The silent copy is inactive through processes involving DNA methylation. These changes all involve epigenetic processes parallel to those currently attracting so much attention.

However, the difference is that these known epigenetic modifications (cell specialization, X inactivation, genomic imprinting) all occur early in development and are stable.

The discovery that epigenetic modifications continue to occur across development, and can be reversible and more dynamic, has represented a major paradigm shift in our understanding of environmental regulation of gene expression.

Animal studies have yielded compelling evidence that early environmental manipulations can be associated with long-term effects that persist into adulthood. For example, maternal licking and grooming in rats is known to have long-term influences on stress response and cognitive performance in their offspring (Champagne et al. 2008, Meaney 2010). Further, a series of studies conducted in macaque monkeys demonstrates that early rearing conditions can result in long-term increased aggression, more reactive stress response, altered neurotransmitter functioning, and structural brain changes (Stevens et al. 2009). These findings parallel research in humans that suggests that early life experiences can have long-term effects on child development (Loman & Gunnar 2010). Elegant work in animal models suggests that epigenetic changes may be involved in these associations (Meaney 2010, Zhang & Meaney 2010).

Evaluating epigenetic changes in humans is more difficult because epigenetic marks can be tissue specific. Access to human brain tissue is limited to postmortem studies of donated brains, which are generally unique and unrepresentative samples and must be interpreted in the context of those limitations. Nonetheless, a recent study of human brain samples from the Quebec Suicide Brain Bank found evidence of increased DNA methylation of the glucocorticoid receptor (NR3C1) exon 1F promoter in hippocampal samples from suicide victims compared with controls, but only if suicide was accompanied by a history of childhood maltreatment (McGowan et al. 2009). Importantly, this paralleled epigenetic changes originally observed in rat brain at the ortholog of this locus.

Another line of evidence suggesting epigenetic changes that may be relevant in humans is the observation of increasing discordance in epigenetic marks in MZ twins across time. This is significant because MZ twins have identical genotypes, and therefore, differences between them are attributed to environmental influences. In a study by Fraga and colleagues (2005), MZ twins were found to be epigenetically indistinguishable during the early years of life, but older MZ twins exhibited remarkable differences in their epigenetic profiles. These findings suggest that epigenetic changes may be a mechanism by which environmental influences contribute to the differences in outcome observed for a variety of psychological traits of interest between genetically identical individuals.

The above studies complement a growing literature demonstrating differences in gene expression in humans as a function of environmental experience. One of the first studies to analyze the relationship between social factors and human gene expression compared healthy older adults who differed in the extent to which they felt socially connected to others (Cole et al. 2007). Using expression profiles obtained from blood cells, a number of genes were identified that showed systematically different levels of expression in people who reported feeling lonely and distant from others.

Interestingly, these effects were concentrated among genes that are involved in immune response.

The results provide a biological mechanism that could explain why socially isolated individuals show heightened vulnerability to diseases and illnesses related to immune function.

Importantly, they demonstrate that our social worlds can exert biologically significant effects on gene expression in humans (for a more extensive review, see Cole 2009).


This review has attempted to provide an overview of the study of gene-environment interaction, starting with early animal studies documenting gene-environment interaction, to demonstrations of similar effects in family, adoption, and twin studies.

Advances in twin modeling and the relative ease with which gene-environment interaction can now be modeled have led to a significant increase in the number of twin studies documenting the changing importance of genetic influence across environmental contexts. There is now widespread documentation of gene-environment interaction effects across many clinical disorders (Thapar et al. 2007).

These findings have led to more integrated etiological models of the development of clinical outcomes. Further, since it is now relatively straightforward and inexpensive to collect DNA and conduct genotyping, there has been a surge of studies testing for gene-environment interaction with specific candidate genes.

Psychologists have embraced the incorporation of genetic components into their studies, and geneticists who focus on gene finding are now paying attention to the environment in an unprecedented way. However, now that the initial excitement surrounding gene-environment interaction has begun to wear off, a number of challenges involved in the study of gene-environment interaction are being recognized.

These include difficulties with interpreting interaction effects (or the lack thereof), due to issues surrounding the measurement and scaling of the environment, and statistical concerns surrounding modeling gene-environment interactions and the nature of their effects.

So where do we go from here? Individuals who jumped on the gene-environment interaction bandwagon are now discovering that studying this process is harder than it first appeared. But there is good reason to believe that gene-environment interaction is a very important process in the development of clinical disorders. So rather than abandon ship, I would suggest that as a field, we just need to proceed with more caution.


– Gene-environment interaction refers to the phenomenon whereby the effect of genes depends on the environment, or the effect of the environment depends on genotype. There is now widespread documentation of gene-environment interaction effects across many clinical disorders, leading to more integrated etiological models of the development of clinical outcomes.

– Twin, family, and adoption studies provide methods to study gene-environment interaction with genetic effects modeled latently, meaning that genes are not directly measured, but rather genetic influence is inferred based on correlations across relatives. Advances in genotyping technology have contributed to a proliferation of studies testing for gene-environment interaction with specific measured genes. Each of these designs has its own strengths and limitations.

– Two types of gene-environment interaction have been discussed in greatest detail in the literature: fan-shaped interactions, in which the influence of genotype is greater in one environmental context than in another; and crossover interactions, in which the same individuals who are most adversely affected by negative environments may also be those who are most likely to benefit from positive environments. Distinguishing between these types of interactions poses a number of challenges.

– The range of environments studied and the lack of a true metric for many environmental measures of interest create difficulties for studying gene-environment interactions. Issues surrounding power, and the use of logistic regression and selected samples, further compound the difficulty of studying gene-environment interactions. These issues have not received adequate attention by many researchers in this field.

– Epigenetic processes may tell us something about the biological mechanisms by which the environment can affect gene expression and impact behavior. The growing literature demonstrating differences in gene expression in humans as a function of environmental experience demonstrates that our social worlds can exert biologically significant effects on gene expression in humans.

– Much of the current work on gene-environment interactions does not take advantage of the state of the science in genetics or psychology; advancing this area of study will require close collaborations between psychologists and geneticists.

Differential Susceptibility to Environmental Influences

Jay Belsky

Evidence that adverse rearing environments exert negative effects particularly on children and adults presumed “vulnerable” for temperamental or genetic reasons may actually reflect something else: heightened susceptibility to the negative effects of risky environments and to the beneficial effects of supportive environments.

Building on Belsky's (Belsky & Pluess) evolutionarily inspired differential susceptibility hypothesis, which stipulates that some individuals, including children, are more affected, both for better and for worse, by their environmental exposures and developmental experiences, recent research consistent with this claim is reviewed. It reveals that in many cases, including both observational field studies and experimental intervention studies, putatively vulnerable children and adults are especially susceptible to both positive and negative environmental effects. In addition to reviewing relevant evidence, unknowns in the differential-susceptibility equation are highlighted.


Most students of child development probably do not presume that all children are equally susceptible to rearing (or other environmental) effects; a long history of research on interactions between parenting and temperament, or parenting-by-temperament interactions, clearly suggests otherwise. Nevertheless, most work still focuses on effects of environmental exposures and developmental experiences presumed to apply equally to all children (so-called main effects of parenting, of poverty, or of being reared by a depressed mother), thus failing to consider interaction effects, which reflect the fact that whether, how, and how much these contextual conditions influence the child may depend on the child's temperament or some other characteristic of individuality.

Research on parenting-by-temperament interactions is based on the premise that what proves effective for some individuals in fostering the development of some valued outcome, or preventing some problematic one, may simply not do so for others. Commonly tested are diathesis-stress hypotheses derived from multiple-risk/transactional frameworks, in which individual characteristics that make children "vulnerable" to adverse experiences, placing them "at risk" of developing poorly, are mainly influential when there is at the same time some contributing risk from the environmental context (Zuckerman, 1999).

Diathesis refers to the latent weakness or vulnerability that a child or adult may carry (e.g., a difficult temperament, a particular gene) but which does not manifest itself, thereby undermining well-being, unless the individual is exposed to conditions of risk or stress.

After highlighting some research consistent with a diathesis-stress or dual-risk perspective, I raise questions, on the basis of other findings, about how the first set of data has been interpreted, advancing the evolutionarily inspired proposition that some children, for temperamental or genetic reasons, are actually more susceptible to both (a) the adverse effects of unsupportive parenting and (b) the beneficial effects of supportive rearing.

Finally, I draw conclusions and highlight some “unknowns in the differential-susceptibility equation.”

Diathesis-Stress, Dual-Risk and Vulnerability

The view that infants and toddlers manifesting high levels of negative emotion are at special risk of problematic development when they experience poor quality rearing is widespread.

Evidence consistent with this view can be found in the work of Morrell and Murray, who showed that it was only highly distressed and irritable 4-month-old boys who experienced coercive and rejecting mothering at this age who continued to show evidence, 5 months later, of emotional and behavioural dysregulation. Relatedly, Belsky, Hsieh, and Crnic observed that infants who scored high in negative emotionality at 12 months of age and who experienced the least supportive mothering and fathering across their second and third years of life scored highest on externalizing problems at 36 months of age. And Deater-Deckard and Dodge reported that:

Children rated highest on externalizing behavior problems by teachers across the primary school years were those who experienced the most harsh discipline prior to kindergarten entry and who were characterized by mothers at age 5 as being negatively reactive infants.

The adverse consequences of the co-occurrence of a child risk factor (i.e., a diathesis, e.g., negative emotionality) and problematic parenting are also evident in Caspi and Moffitt's groundbreaking research on gene-by-environment (GxE) interaction. Young men followed from early childhood were most likely to manifest high levels of antisocial behavior when they had both (a) a history of child maltreatment and (b) a particular variant of the MAOA gene, a gene previously linked to aggressive behaviour. Such results led Rutter, like others, to speak of "vulnerable individuals," a concept that also applies to children putatively at risk for compromised development due to their behavioral attributes. But is "vulnerability" the best way to conceptualize the kind of person-environment interactions under consideration?

Beyond Diathesis-Stress, Dual-Risk and Vulnerability

Working from an evolutionary perspective, Belsky (Belsky & Pluess) theorized that children, especially within a family, should vary in their susceptibility to both adverse and beneficial effects of rearing influence. Because the future is uncertain, in ancestral times, just like today, parents could not know for certain (consciously or unconsciously) what rearing strategies would maximise reproductive fitness, that is, the dispersion of genes in future generations, the ultimate goal of Darwinian evolution.

To protect against all children being steered, inadvertently, in a parental direction that proved disastrous at some later point in time, developmental processes were selected to vary children’s susceptibility to rearing (and other environmental influences).

In what follows, I review evidence consistent with this claim which highlights early negative emotionality and particular candidate genes as “plasticity factors” making individuals more susceptible to both supportive and unsupportive environments, that is, “for better and for worse”.

Negative Emotionality as Plasticity Factor

The first evidence which Belsky could point to consistent with his differential susceptibility hypothesis concerned early negative emotionality. Children scoring high on this supposed “risk factor”, particularly in the early years, appeared to benefit disproportionately from supportive rearing environments.

Feldman, Greenbaum, and Yirmiya found, for example, that 9-month-olds scoring high on negativity who experienced low levels of synchrony in mother-infant interaction manifested more noncompliance during clean-up at age two than other children did. When such infants experienced mutually synchronous mother-infant interaction, however, they displayed greater self-control than did children manifesting much less negativity as infants. Subsequently, Kochanska, Aksan, and Joy observed that highly fearful 15-month-olds experiencing high levels of power-assertive paternal discipline were most likely to cheat in a game at 38 months, yet when cared for in a supportive manner such negatively emotional, fearful toddlers manifested the most rule-compatible conduct.

In the time since Belsky and Pluess reviewed evidence like that just cited, highlighting the role of negative emotionality as a “plasticity factor”, even more evidence to this effect has emerged in the case of children. Consider in this regard work linking (1) maternal empathy and anger with externalizing problems; (2) mutual responsiveness observed in the mother-child dyad with effortful control; (3) intrusive maternal behavior and poverty with executive functioning; and (4) sensitive parenting with social, emotional and cognitive-academic development.

Experimental studies designed to test Belsky’s differential susceptibility hypothesis are even more suggestive than the longitudinal correlational evidence just cited. Blair discovered that it was highly negative infants who benefited most in terms of both reduced levels of externalizing behavior problems and enhanced cognitive functioning from a multi-faceted infant-toddler intervention program whose data he reanalyzed. Thereafter, Klein Velderman, Bakermans-Kranenburg, Juffer, and van Ijzendoorn found that experimentally induced changes in maternal sensitivity exerted greater impact on the attachment security of highly negatively reactive infants than it did on other infants. In both experiments, environmental influences on “vulnerable” children were for better instead of for worse.

As it turns out, there is ever-growing experimental evidence that early negative emotionality is a plasticity factor. Consider findings showing that infants who score relatively low on irritability as newborns fail to benefit from an otherwise security-promoting intervention, and that infants who show few, if any, mild perinatal adversities (known to be related to limited negative emotionality) fail to benefit from computer-based instruction otherwise found to promote preschoolers' phonemic awareness and early literacy.

In other words, only the putatively "vulnerable", those manifesting or likely to manifest high levels of negativity, experienced developmental enhancement as a function of the interventions cited. Similar results emerge among older children: Scott and O'Connor's parenting intervention resulted in the most positive change in conduct among emotionally dysregulated children (i.e., those who lose their temper, are angry, touchy).

Genes as Plasticity Factors

Perhaps nowhere has the diathesis-stress framework informed person-by-environment interaction research more than in the study of GxE interaction. Recent studies involving measured genes and measured environments also document both for-better and for-worse environmental effects, in the case of susceptible individuals as it turns out. Here I consider evidence pertaining to two specific candidate genes before turning attention to research examining multiple genes at the same time.


One of the most widely studied genetic polymorphisms in research involving measured genes and measured environments pertains to a particular allele (or variant) of the dopamine receptor gene DRD4. Because the dopaminergic system is engaged in attentional, motivational, and reward mechanisms, and because one variant of this polymorphism, the 7-repeat allele, has been linked to lower dopamine reception efficiency, van Ijzendoorn and Bakermans-Kranenburg predicted this allele would moderate the association between maternal unresolved loss or trauma and infant attachment disorganization. Having the 7-repeat DRD4 allele substantially increased risk for disorganization in children exposed to maternal unresolved loss/trauma, as expected, consistent with the diathesis-stress framework; yet when children with this supposed "vulnerability gene" were raised by mothers who had no unresolved loss, they displayed significantly less disorganization than agemates without the allele, regardless of mothers' unresolved loss status.

Similar results emerged when the interplay between DRD4 and observed parental insensitivity in predicting externalizing problems was studied in a group of 47 twins. Children carrying the 7-repeat DRD4 allele raised by insensitive mothers displayed more externalizing behaviors than children without the DRD4 7-repeat (irrespective of maternal sensitivity), whereas children with the 7-repeat allele raised by sensitive mothers showed the lowest levels of externalizing problem behavior.

Such results suggest that conceptualizing the 7-repeat DRD4 allele exclusively in risk-factor terms is misguided, as this variant of the gene seems to heighten susceptibility to a wide variety of environments, with supportive and risky contexts promoting, respectively, positive and negative functioning.

In the time since I last reviewed such differential susceptibility related evidence, ever more GxE findings pertaining to DRD4 (and other polymorphisms) have appeared consistent with the notion that there are individual differences in developmental plasticity. Consider in this regard recent differential susceptibility related evidence showing heightened or exclusive susceptibility of individuals carrying the 7-repeat allele when the environmental predictor and developmental outcome were, respectively, (a) maternal positivity and prosocial behavior; (b) early nonfamilial childcare and social competence; (c) contextual stress and support and adolescent negative arousal; (d) childhood adversity and young-adult persistent alcohol dependence; and (e) newborn risk status (i.e., gestational age, birth weight for gestational age, length of stay in NICU) and observed maternal sensitivity.

Especially noteworthy, perhaps, are the results of a meta-analysis of GxE research involving dopamine-related genes showing that children eight and younger respond to positive and negative developmental experiences and environmental exposures in a manner consistent with differential susceptibility.

As in the case of negative emotionality, intervention research also underscores the propensity of 7-repeat carriers of the DRD4 gene to benefit disproportionately from supportive environments. Kegel, Bus, and van Ijzendoorn tested and found support for the hypothesis that it would be DRD4-7R carriers who would benefit from specially designed computer games promoting phonemic awareness and, thereby, early literacy in their randomized controlled trial (RCT). Other such RCT results point in the same direction with regard to DRD4-7R, including research on African American teenagers in which substance use was the outcome examined.


Perhaps the most studied polymorphism in research on GxE interactions is 5-HTTLPR, a polymorphism in the promoter region of the serotonin transporter gene. Most research distinguishes those who carry one or two short alleles (s/s, s/l) from those homozygous for the long allele (l/l). The short allele has generally been associated with reduced expression of the serotonin transporter molecule, which is involved in the reuptake of serotonin from the synaptic cleft and thus considered to be related to depression, either directly or in the face of adversity. Indeed, the short allele has often been conceptualized as a "depression gene".

Caspi and associates were the first to show that 5-HTTLPR moderates effects of stressful life events during early adulthood on depressive symptoms, as well as on the probability of suicide ideation/attempts and of a major depressive episode at age 26 years. Individuals with two s alleles proved most adversely affected, whereas effects on l/l genotypes were weaker or entirely absent. Of special significance, however, is that s/s individuals scored best on the outcomes just mentioned when stressful life events were absent, though not by very much.

Multiple research groups have attempted to replicate Caspi et al.'s findings of increased vulnerability to depression in response to stressful life events for individuals with one or more copies of the s allele, with many succeeding, but certainly not all. The data presented in quite a number of studies indicate, however, that individuals carrying short alleles (s/s, s/l) did not just function most poorly when exposed to many stressors, but best, showing the fewest problems, when encountering few or none. Calling explicit attention to such a pattern of results, Taylor and associates reported that young adults homozygous for the short allele (s/s) manifested greater depressive symptomatology than individuals with other allelic variants when exposed to early adversity (i.e., a problematic child-rearing history), as well as many recent negative life events, yet the fewest symptoms when they experienced a supportive early environment or recent positive experiences. The same for-better-and-for-worse pattern of results concerning depression is evident in Eley et al.'s research on adolescent girls who were and were not exposed to risky family environments.

The effect of 5-HTTLPR in moderating environmental influences in a manner consistent with differential susceptibility is not restricted to depression and its symptoms. It also emerges in studies of anxiety and of ADHD, particularly ADHD that persists into adulthood. In all these cases, whether the predictor was emotional abuse in childhood or a generally adverse childrearing environment, it proved to be those individuals carrying short alleles who responded to developmental or concurrent experiences in a for-better-and-for-worse manner, depending on the nature of the experience in question.

Since last reviewing such 5-HTTLPR-related GXE research consistent with differential susceptibility, ever more evidence in line with the just cited work has emerged. Consider in this regard evidence showing for-better-and-for-worse results in the case of those carrying one or more short alleles of 5-HTTLPR when the rearing predictor and child outcome were, respectively, (a) maternal responsiveness and child moral internalization, (b) child maltreatment and children’s antisocial behavior, and (c) supportive parenting and children’s positive affect.

Differential susceptibility related findings also emerged (among male African American adolescents) when (d) perceived racial discrimination was used to predict conduct problems; (e) when life events were used to predict neuroticism and (f) the life satisfaction of young adults; and (g) when retrospectively reported childhood adversity was used to explain aspects of impulsivity among college students (e.g., pervasive influence of feelings, feelings trigger action). Especially noteworthy are the results of a recent meta-analysis of GxE findings pertaining to children under 18 years of age, showing that short-allele carriers are more susceptible to the effects of both positive and negative developmental experiences and environmental exposures, at least in the case of Caucasians.

As was the case with DRD4, there is also evidence from intervention studies documenting differential susceptibility. Consider in this regard Drury and associates’ data showing that it was only children growing up in Romanian orphanages who carried 5-HTTLPR short alleles who benefited from being randomly assigned to high-quality foster care, in terms of reductions in the display of indiscriminate friendliness. Eley and associates also documented intervention benefits restricted to short-allele carriers in their study of cognitive behavior therapy for children suffering from severe anxiety, but their design included only treated children (i.e., it did not involve a randomly assigned control group).

Polygenetic Plasticity

Most GxE research, like that just considered, has focused on one or another polymorphism, like DRD4 or 5-HTTLPR. In recent years, however, work has emerged focusing on multiple polymorphisms and thus reflecting the operation of epistatic (i.e., GxG) interactions, as well as GxGxE ones.

One can distinguish polygenetic GxE research in terms of the basis used for creating multigene composites. One strategy involves identifying genes which show main effects and then compositing only these to then test an interaction with some environmental parameter. Another approach is to composite genes for a secondary, follow-up analysis that have been found in a first round of inquiry to generate significant GxE interactions.

When Cicchetti and Rogosch applied this approach using four different polymorphisms, they found that as the number of sensitivity-to-the-environment alleles increased, so did the degree to which maltreated and non-maltreated low-income children differed on a composite measure of resilient functioning in a for-better-and-for-worse manner.

A third approach, which has now been used successfully a number of times to chronicle differential susceptibility, involves compositing a set of genes selected on an a priori basis before evaluating GxE. Consider in this regard evidence indicating that 2-gene composites moderate links (a) between sexual abuse and adolescent depression/anxiety and somatic symptoms, (b) between perceived racial discrimination and risk-related cognitions reflecting a fast vs. slow life-history strategy, (c) between contextual stress/support and aggression in young adulthood, and (d) between social class and post-partum depression.

Of note, too, is evidence that a 3-gene composite moderates the relation between a hostile, demoralizing community and family environment and aggression in early adulthood, and that a 5-gene composite moderates the relation between parenting and adolescent self-control.
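The compositing logic behind these multi-gene studies can be sketched as a "plasticity gradient": count putative plasticity alleles across several loci and let the strength of the environment-outcome relation scale with that count. The loci, allele labels, and effect sizes below are illustrative assumptions, not results from the studies cited.

```python
# Hypothetical sketch of a polygenic plasticity index. Gene names and
# per-allele effects are illustrative only.

def plasticity_index(genotype):
    """Count plasticity alleles across several (hypothetical) loci."""
    plasticity_alleles = {"5-HTTLPR": "s", "DRD4": "7R", "BDNF": "Met"}
    return sum(genotype.get(locus, "").count(allele)
               for locus, allele in plasticity_alleles.items())

def predicted_outcome(environment, genotype, per_allele_slope=0.5):
    """Environmental effect grows with the number of plasticity alleles."""
    return per_allele_slope * plasticity_index(genotype) * environment

low  = {"5-HTTLPR": "l/l", "DRD4": "4R/4R", "BDNF": "Val/Val"}   # 0 alleles
high = {"5-HTTLPR": "s/s", "DRD4": "7R/4R", "BDNF": "Met/Val"}   # 4 alleles

supportive, adverse = +1.0, -1.0
# High-plasticity children move furthest in BOTH directions: better
# outcomes under support, worse outcomes under adversity.
assert predicted_outcome(supportive, high) > predicted_outcome(supportive, low)
assert predicted_outcome(adverse, high) < predicted_outcome(adverse, low)
```

Under this kind of model, children with zero plasticity alleles are essentially "fixed" (flat slope), while each additional allele steepens the environment-outcome line, which is one way to read the gradient the compositing studies imply.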

Given research already reviewed, it is probably not surprising that there is also work examining genetically moderated intervention effects focusing on multi-gene composites rather than single candidate genes. Consider in this regard Drury et al.’s findings showing that even though the brain-derived neurotrophic factor (BDNF) polymorphism did not by itself operate as a plasticity factor when it came to distinguishing those who did and did not benefit from the aforementioned foster-care intervention implemented with institutionalized children in Romania, the already-noted moderating effect of 5-HTTLPR was amplified if a child carried Met rather than Val alleles of BDNF along with short 5-HTTLPR alleles. In other words, the more plasticity alleles children carried, the more their indiscriminate friendliness declined over time when assigned to foster care and the more it increased if they remained institutionalized.

Consider next Brody, Chen, and Beach’s confirmed prediction that the more GABAergic and dopaminergic genes African American teens carried, the more protected they were from increasing their alcohol use over time when enrolled in a whole-family prevention program. Such results once again call attention to the benefits of moving beyond single polymorphisms when it comes to operationalizing the plasticity phenotype. They also indicate that even if a single gene may not by itself moderate an intervention (or other environmental) effect, it could still play a role in determining the degree to which an individual benefits. These are insights future investigators and interventionists should keep in mind when seeking to illuminate “what works for whom?”

Unknowns in the Differential Susceptibility Equation

The notion of differential susceptibility, derived as it is from evolutionary theorizing, has gained great attention in recent years, including a special section in the journal Development and Psychopathology.

Although research summarized here suggests that the concept has utility, there are many “unknowns,” several of which are highlighted in this concluding section.

Domain General or Domain Specific?

Is it the case that some children, perhaps those who begin life as highly negatively emotional, are more susceptible both to a wide variety of rearing influences and with respect to a wide variety of developmental outcomes, as is presumed in the use of concepts like “fixed” and “plastic” strategists, with the latter being highly malleable and the former hardly at all? Boyce and Ellis contend that a general psychobiological reactivity makes some children especially vulnerable to stress and thus to general health problems. Or is it the case, as Belsky wonders and Kochanska, Aksan, and Joy argue, that different children are susceptible to different environmental influences (e.g., nurturance, hostility) and with respect to different outcomes? Pertinent to this idea are findings of Caspi and Moffitt indicating that different genes differentially moderated the effect of child maltreatment on antisocial behavior (MAO-A) and on depression (5HTT).

Continuous Versus Discrete Plasticity?

The central argument that children vary in their susceptibility to rearing influences raises the question of how to conceptualize differential susceptibility: categorically (some children highly plastic and others not so at all) or continuously (some children simply more malleable than others)? It may even be that plasticity is discrete for some environment-outcome relations, with some individuals affected and others not at all (e.g., gender-specific effects), but that plasticity is more continuous for other susceptibility factors (e.g., in the case of the increasing vulnerability to stress of parents with decreasing dopaminergic efficiency). Certainly the work which composites multiple genotypes implies that there is a “plasticity gradient”, with some children higher and some lower in plasticity.


Susceptibility factors are the moderators of the relation between the environment and developmental outcome, but they do not elucidate the mechanism of differential influence.

Several (non-mutually exclusive) explanations have been advanced for the heightened susceptibility of negatively emotional infants. Suomi posits that the timidity of “uptight” infants affords them extensive opportunity to learn by watching, a view perhaps consistent with Bakermans-Kranenburg and van IJzendoorn’s aforementioned findings pertaining to DRD4, given the link between the dopamine system and attention. Kochanska et al. contend that the ease with which anxiety is induced in fearful children makes them highly responsive to parental demands.

And Belsky speculates that negativity actually reflects a highly sensitive nervous system on which experience registers powerfully negatively when not regulated by the caregiver but positively when coregulation occurs, a point of view somewhat related to Boyce and Ellis’ proposal that susceptibility may reflect prenatally programmed hyper-reactivity to stress.


Epigenetics: The Evolution Revolution – Israel Rosenfield and Edward Ziff * The Epigenetics Revolution – Nessa Carey.

So something that happened in one pregnant population affected their children’s children. This raised the really puzzling question of how these effects were passed on to subsequent generations.

These effects arise from a newly recognized genetic mechanism called epigenesis, which enables the environment to make long lasting changes in the way genes are expressed.

That’s what happens when cells read the genetic code that’s in DNA. The same script can result in different productions.

Why is it that humans contain trillions of cells in hundreds of complex organs, and microscopic worms contain about a thousand cells and only rudimentary organs, but we and the worm have the same number of genes?

We are finally starting to unravel the missing link between nature and nurture; how our environment talks to us and alters us, sometimes forever.

Israel Rosenfield and Edward Ziff

At the end of the eighteenth century, the French naturalist Jean-Baptiste Lamarck noted that life on earth had evolved over long periods of time into a striking variety of organisms. He sought to explain how they had become more and more complex. Living organisms not only evolved, Lamarck argued; they did so very slowly, “little by little and successively.” In Lamarckian theory, animals became more diverse as each creature strove toward its own “perfection,” hence the enormous variety of living things on earth. Man is the most complex life form, therefore the most perfect, and is even now evolving.

In Lamarck’s view, the evolution of life depends on variation and the accumulation of small, gradual changes. These are also at the center of Darwin’s theory of evolution, yet Darwin wrote that Lamarck’s ideas were “veritable rubbish.” Darwinian evolution is driven by genetic variation combined with natural selection, the process whereby some variations give their bearers better reproductive success in a given environment than other organisms have. Lamarckian evolution, on the other hand, depends on the inheritance of acquired characteristics. Giraffes, for example, got their long necks by stretching to eat leaves from tall trees, and stretched necks were inherited by their offspring, though Lamarck did not explain how this might be possible.

When the molecular structure of DNA was discovered in 1953, it became dogma in the teaching of biology that DNA and its coded information could not be altered in any way by the environment or a person’s way of life. The environment, it was known, could stimulate the expression of a gene. Having a light shone in one’s eyes or suffering pain, for instance, stimulates the activity of neurons and in doing so changes the activity of genes those neurons contain, producing instructions for making proteins or other molecules that play a central part in our bodies.

The structure of the DNA neighboring the gene provides a list of instructions, a gene program, that determines under what circumstances the gene is expressed. And it was held that these instructions could not be altered by the environment. Only mutations, which are errors introduced at random, could change the instructions or the information encoded in the gene itself and drive evolution through natural selection. Scientists discredited any Lamarckian claims that the environment can make lasting, perhaps heritable alterations in gene structure or function.

But new ideas closely related to Lamarck’s eighteenth-century views have become central to our understanding of genetics. In the past fifteen years these ideas, which belong to a developing field of study called epigenetics, have been discussed in numerous articles and several books, including Nessa Carey’s 2012 study The Epigenetics Revolution and The Deepest Well, a recent work on childhood trauma by the physician Nadine Burke Harris.

The developing literature surrounding epigenetics has forced biologists to consider the possibility that gene expression could be influenced by some heritable environmental factors previously believed to have had no effect on it, like stress or deprivation. “The DNA blueprint,” Carey writes,

isn’t a sufficient explanation for all the sometimes wonderful, sometimes awful, complexity of life. If the DNA sequence was all that mattered, identical twins would always be absolutely identical in every way. Babies born to malnourished mothers would gain weight as easily as other babies who had a healthier start in life.

That might seem a commonsensical view. But it runs counter to decades of scientific thought about the independence of the genetic program from environmental influence. What findings have made it possible?

In 1975, two English biologists, Robin Holliday and John Pugh, and an American biologist, Arthur Riggs, independently suggested that methylation, a chemical modification of DNA that is heritable and can be induced by environmental influences, had an important part in controlling gene expression. How it did this was not understood, but the idea that through methylation the environment could, in fact, alter not only gene expression but also the genetic program rapidly took root in the scientific community.

As scientists came to better understand the function of methylation in altering gene expression, they realized that extreme environmental stress, the results of which had earlier seemed self explanatory, could have additional biological effects on the organisms that suffered it. Experiments with laboratory animals have now shown that these outcomes are based on the transmission of acquired changes in genetic function. Childhood abuse, trauma, famine, and ethnic prejudice may, it turns out, have long term consequences for the functioning of our genes.

These effects arise from a newly recognized genetic mechanism called epigenesis, which enables the environment to make long lasting changes in the way genes are expressed.

Epigenesis does not change the information coded in the genes or a person’s genetic makeup (the genes themselves are not affected), but instead alters the manner in which they are “read”, by blocking access to certain genes and preventing their expression.

This mechanism can be the hidden cause of our feelings of depression, anxiety, or paranoia. What is perhaps most surprising of all, this alteration could, in some cases, be passed on to future generations who have never directly experienced the stresses that caused their forebears’ depression or ill health.

Numerous clinical studies have shown that childhood trauma, arising from parental death or divorce, neglect, violence, abuse, lack of nutrition or shelter, or other stressful circumstances, can give rise to a variety of health problems in adults: heart disease, cancer, mood and dietary disorders, alcohol and drug abuse, infertility, suicidal behavior, learning deficits, and sleep disorders.

Since the publication in 2003 of an influential paper by Rudolf Jaenisch and Adrian Bird, we have started to understand the genetic mechanisms that explain why this is the case. The body and the brain normally respond to danger and frightening experiences by releasing a hormone, a glucocorticoid that controls stress. This hormone prepares us for various challenges by adjusting heart rate, energy production, and brain function; it binds to a protein called the glucocorticoid receptor in nerve cells of the brain.

Normally, this binding shuts off further glucocorticoid production, so that when one no longer perceives a danger, the stress response abates. However, as Gustavo Turecki and Michael Meaney note in a 2016 paper surveying more than a decade’s worth of findings about epigenetics, the gene for the receptor is inactive in people who have experienced childhood stress; as a result, they produce few receptors. Without receptors to bind to, glucocorticoids cannot shut off their own production, so the hormone keeps being released and the stress response continues, even after the threat has subsided.

“The term for this is disruption of feedback inhibition,” Harris writes. It is as if “the body’s stress thermostat is broken. Instead of shutting off this supply of ‘heat’ when a certain point is reached, it just keeps on blasting cortisol through your system.”
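The broken-thermostat metaphor describes a negative feedback loop whose sensor has been disabled. As a toy illustration only, with made-up parameters and no claim to physiological accuracy, one can model cortisol level as a discrete-time process in which receptor-mediated feedback suppresses further release; with few receptors, the feedback barely engages and the level settles far higher.

```python
# Toy discrete-time sketch of glucocorticoid feedback inhibition.
# All parameters are illustrative, not physiological values.

def cortisol_trace(receptor_density, steps=50,
                   release=1.0, feedback_gain=0.9, clearance=0.2):
    """Each step: stress-driven release, minus clearance, minus
    receptor-mediated suppression of further release."""
    level, trace = 0.0, []
    for _ in range(steps):
        suppression = feedback_gain * receptor_density * level
        level = max(0.0, level + release - suppression - clearance * level)
        trace.append(level)
    return trace

healthy = cortisol_trace(receptor_density=1.0)    # intact feedback
silenced = cortisol_trace(receptor_density=0.05)  # methylated receptor gene

# With few receptors, the "thermostat" barely engages: cortisol settles
# at a much higher steady state, mimicking disrupted feedback inhibition.
assert silenced[-1] > healthy[-1]
```

In this sketch the healthy trace converges to a low equilibrium (release balanced by suppression plus clearance), while the silenced trace converges several-fold higher, which is the qualitative point of Harris’s thermostat image.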

It is now known that childhood stress can deactivate the receptor gene by an epigenetic mechanism, namely, by creating a physical barrier to the information for which the gene codes. What creates this barrier is DNA methylation, by which methyl groups known as methyl marks (composed of one carbon and three hydrogen atoms) are added to DNA.

DNA methylation is long-lasting and keeps chromatin, the DNA-protein complex that makes up the chromosomes containing the genes, in a highly folded structure that blocks access to select genes by the gene expression machinery, effectively shutting the genes down. The long-term consequences are chronic inflammation, diabetes, heart disease, obesity, schizophrenia, and major depressive disorder.

Such epigenetic effects have been demonstrated in experiments with laboratory animals. In a typical experiment, rat or mouse pups are subjected to early-life stress, such as repeated maternal separation. Their behavior as adults is then examined for evidence of depression, and their genomes are analyzed for epigenetic modifications. Likewise, pregnant rats or mice can be exposed to stress or nutritional deprivation, and their offspring examined for behavioral and epigenetic consequences.

Experiments like these have shown that even animals not directly exposed to traumatic circumstances, those still in the womb when their parents were put under stress, can have blocked receptor genes. It is probably the transmission of glucocorticoids from mother to fetus via the placenta that alters the fetus in this way. In humans, prenatal stress affects each stage of the child’s maturation: for the fetus, a greater risk of preterm delivery, decreased birth weight, and miscarriage; in infancy, problems of temperament, attention, and mental development; in childhood, hyperactivity and emotional problems; and in adulthood, illnesses such as schizophrenia and depression.

What is the significance of these findings?

Until the mid-1970s, no one suspected that the way in which the DNA was “read” could be altered by environmental factors, or that the nervous systems of people who grew up in stress-free environments would develop differently from those of people who did not. One’s development, it was thought, was guided only by one’s genetic makeup.

As a result of epigenesis, a child deprived of nourishment may continue to crave and consume large amounts of food as an adult, even when he or she is being properly nourished, leading to obesity and diabetes. A child who loses a parent or is neglected or abused may have a genetic basis for experiencing anxiety and depression and possibly schizophrenia.

Formerly, it had been widely believed that Darwinian evolutionary mechanisms, variation and natural selection, were the only means for introducing such long lasting changes in brain function, a process that took place over generations. We now know that epigenetic mechanisms can do so as well, within the lifetime of a single person.

It is by now well established that people who suffer trauma directly during childhood or who experience their mother’s trauma indirectly as a fetus may have epigenetically based illnesses as adults. More controversial is whether epigenetic changes can be passed on from parent to child.

Methyl marks are stable when DNA is not replicating, but when it replicates, the methyl marks must be introduced into the newly replicated DNA strands to be preserved in the new cells. Researchers agree that this takes place when cells of the body divide, a process called mitosis, but it is not yet fully established under which circumstances marks are preserved when cell division yields sperm and egg, a process called meiosis, or when mitotic divisions of the fertilized egg form the embryo. Transmission at these two latter steps would be necessary for epigenetic changes to be transmitted in full across generations.

The most revealing instances for studies of intergenerational transmission have been natural disasters, famines, and atrocities of war, during which large groups have undergone trauma at the same time. These studies have shown that when women are exposed to stress in the early stages of pregnancy, they give birth to children whose stress response systems malfunction. Among the most widely studied of such traumatic events is the Dutch Hunger Winter. In 1944 the Germans prevented any food from entering the parts of Holland that were still occupied. The Dutch resorted to eating tulip bulbs to overcome their stomach pains. Women who were pregnant during this period, Carey notes, gave birth to a higher proportion of obese and schizophrenic children than one would normally expect. These children also exhibited epigenetic changes not observed in similar children, such as siblings, who had not experienced famine at the prenatal stage.

During the Great Chinese Famine (1958-1961), millions of people died, and children born to young women who experienced the famine were more likely to become schizophrenic, to have impaired cognitive function, and to suffer from diabetes and hypertension as adults. Similar studies of the 1932-1933 Ukrainian famine, in which many millions died, revealed an elevated risk of type II diabetes in people who were in the prenatal stage of development at the time. Although prenatal and early childhood stress both induce epigenetic effects and adult illnesses, it is not known if the mechanism is the same in both cases.

Whether epigenetic effects of stress can be transmitted over generations needs more research, both in humans and in laboratory animals. But recent comprehensive studies by several groups using advanced genetic techniques have indicated that epigenetic modifications are not restricted to the glucocorticoid receptor gene. They are much more extensive than had been realized, and their consequences for our development, health, and behavior may also be great.

It is as though nature employs epigenesis to make long lasting adjustments to an individual’s genetic program to suit his or her personal circumstances, much as in Lamarck’s notion of “striving for perfection.”

In this view, the ill health arising from famine or other forms of chronic, extreme stress would constitute an epigenetic miscalculation on the part of the nervous system. Because the brain prepares us for adult adversity that matches the level of stress we suffer in early life, psychological disease and ill health persist even when we move to an environment with a lower stress level.

Once we recognize that there is an epigenetic basis for diseases caused by famine, economic deprivation, war related trauma, and other forms of stress, it might be possible to treat some of them by reversing those epigenetic changes. “When we understand that the source of so many of our society’s problems is exposure to childhood adversity,” Harris writes,

the solutions are as simple as reducing the dose of adversity for kids and enhancing the ability of caregivers to be buffers. From there, we keep working our way up, translating that understanding into the creation of things like more effective educational curricula and the development of blood tests that identify biomarkers for toxic stress, things that will lead to a wide range of solutions and innovations, reducing harm bit by bit, and then leap by leap.

Epigenetics has also made clear that the stress caused by war, prejudice, poverty, and other forms of childhood adversity may have consequences both for the persons affected and for their future unborn children, not only for social and economic reasons but also for biological ones.

The Epigenetics Revolution

Nessa Carey

Sometimes, when we read about biology, we could be forgiven for thinking that those three letters, DNA, explain everything. Here, for example, are just a few of the statements made on 26 June 2000, when researchers announced that the human genome had been sequenced:

Today we are learning the language in which God created life. US President Bill Clinton

We now have the possibility of achieving all we ever hoped for from medicine. UK Science Minister Lord Sainsbury

Mapping the human genome has been compared with putting a man on the moon, but I believe it is more than that. This is the outstanding achievement not only of our lifetime, but in terms of human history. Michael Dexter, The Wellcome Trust

From these quotations, and many others like them, we might well think that researchers could have relaxed a bit after June 2000 because most human health and disease problems could now be sorted out really easily. After all, we had the blueprint for humankind. All we needed to do was get a bit better at understanding this set of instructions, so we could fill in a few details. Unfortunately, these statements have proved at best premature. The reality is rather different.

We talk about DNA as if it’s a template, like a mould for a car part in a factory. In the factory, molten metal or plastic gets poured into the mould thousands of times and, unless something goes wrong in the process, out pop thousands of identical car parts.

But DNA isn’t really like that. It’s more like a script. Think of Romeo and Juliet, for example. In 1936 George Cukor directed Leslie Howard and Norma Shearer in a film version. Sixty years later Baz Luhrmann directed Leonardo DiCaprio and Claire Danes in another movie version of this play. Both productions used Shakespeare’s script, yet the two movies are entirely different. Identical starting points, different outcomes.

That’s what happens when cells read the genetic code that’s in DNA. The same script can result in different productions.

The implications of this for human health are very wide-ranging, as we will see from the case studies we are going to look at in a moment. In all of these cases it’s really important to remember that nothing happened to the DNA blueprint of the people involved. Their DNA didn’t change (mutate), and yet their life histories altered irrevocably in response to their environments.

Audrey Hepburn was one of the 20th century’s greatest movie stars. Stylish, elegant and with a delicately lovely, almost fragile bone structure, her role as Holly Golightly in Breakfast at Tiffany’s has made her an icon, even to those who have never seen the movie. It’s startling to think that this wonderful beauty was created by terrible hardship. Audrey Hepburn was a survivor of an event in the Second World War known as the Dutch Hunger Winter. This ended when she was sixteen years old but the after effects of this period, including poor physical health, stayed with her for the rest of her life.

The Dutch Hunger Winter lasted from the start of November 1944 to the late spring of 1945. This was a bitterly cold period in Western Europe, creating further hardship in a continent that had been devastated by four years of brutal war. Nowhere was this worse than in the Western Netherlands, which at this stage was still under German control. A German blockade resulted in a catastrophic drop in the availability of food to the Dutch population. At one point the population was trying to survive on only about 30 per cent of the normal daily calorie intake. People ate grass and tulip bulbs, and burned every scrap of furniture they could get their hands on, in a desperate effort to stay alive. Over 20,000 people had died by the time food supplies were restored in May 1945.

The dreadful privations of this time also created a remarkable scientific study population. The Dutch survivors were a well defined group of individuals all of whom suffered just one period of malnutrition, all of them at exactly the same time. Because of the excellent healthcare infrastructure and record keeping in the Netherlands, epidemiologists have been able to follow the long term effects of the famine. Their findings were completely unexpected.

One of the first aspects they studied was the effect of the famine on the birth weights of children who had been in the womb during that terrible period. If a mother was well fed around the time of conception and malnourished only for the last few months of the pregnancy, her baby was likely to be born small. If, on the other hand, the mother suffered malnutrition for the first three months of the pregnancy only (because the baby was conceived towards the end of this terrible episode), but then was well fed, she was likely to have a baby with a normal body weight. The foetus ‘caught up’ in body weight.

That all seems quite straightforward, as we are all used to the idea that foetuses do most of their growing in the last few months of pregnancy. But epidemiologists were able to study these groups of babies for decades and what they found was really surprising. The babies who were born small stayed small all their lives, with lower obesity rates than the general population. For forty or more years, these people had access to as much food as they wanted, and yet their bodies never got over the early period of malnutrition. Why not? How did these early life experiences affect these individuals for decades? Why weren’t these people able to go back to normal, once their environment reverted to how it should be?

Even more unexpectedly, the children whose mothers had been malnourished only early in pregnancy had higher obesity rates than normal. Recent reports have shown a greater incidence of other health problems as well, including poorer performance on certain tests of mental activity. Even though these individuals had seemed perfectly healthy at birth, something had happened to their development in the womb that affected them for decades after. And it wasn’t just the fact that something had happened that mattered, it was when it happened. Events that take place in the first three months of development, a stage when the foetus is really very small, can affect an individual for the rest of their life.

Even more extraordinarily, some of these effects seem to be present in the children of this group, i.e. in the grandchildren of the women who were malnourished during the first three months of their pregnancy.

So something that happened in one pregnant population affected their children’s children. This raised the really puzzling question of how these effects were passed on to subsequent generations.

Let’s consider a different human story. Schizophrenia is a dreadful mental illness which, if untreated, can completely overwhelm and disable an affected person. Patients may present with a range of symptoms including delusions, hallucinations and enormous difficulties focusing mentally. People with schizophrenia may become completely incapable of distinguishing between the ‘real world’ and their own hallucinatory and delusional realm. Normal cognitive, emotional and societal responses are lost. There is a terrible misconception that people with schizophrenia are likely to be violent and dangerous. For the majority of patients this isn’t the case at all, and the people most likely to suffer harm because of this illness are the patients themselves. Individuals with schizophrenia are fifty times more likely to attempt suicide than healthy individuals.

Schizophrenia is a tragically common condition. It affects between 0.5 per cent and 1 per cent of the population in most countries and cultures, which means that there may be over fifty million people alive today who are suffering from this condition. Scientists have known for some time that genetics plays a strong role in determining if a person will develop this illness. We know this because if one of a pair of identical twins has schizophrenia, there is a 50 per cent chance that their twin will also have the condition. This is much higher than the 1 per cent risk in the general population.

Identical twins have exactly the same genetic code as each other. They share the same womb and usually they are brought up in very similar environments. When we consider this, it doesn’t seem surprising that if one of the twins develops schizophrenia, the chance that his or her twin will also develop the illness is very high. In fact, we have to start wondering why it isn’t higher. Why isn’t the figure 100 per cent? How is it that two apparently identical individuals can become so very different? An individual has a devastating mental illness, but will their identical twin suffer from it too? Flip a coin: heads they win, tails they lose. Variations in the environment are unlikely to account for this, and even if they did, how would these environmental effects have such profoundly different impacts on two genetically identical people?
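The arithmetic behind this puzzle can be laid out explicitly, using only the figures quoted in the passage (roughly 50 per cent twin concordance against roughly 1 per cent population prevalence):

```python
# Back-of-envelope sketch of the twin figures quoted in the passage.
concordance = 0.50  # risk to the co-twin of an affected identical twin
prevalence = 0.01   # baseline risk in the general population

# Sharing all of your DNA with an affected person raises risk ~50-fold,
# so genes clearly matter a great deal ...
assert round(concordance / prevalence) == 50

# ... and yet half of co-twins never develop the illness despite
# identical genomes and a largely shared environment. That unexplained
# half is the gap that epigenetic differences are invoked to fill.
assert 1 - concordance == 0.5
```

The coin-flip image in the text is exactly this second line: conditional on an affected co-twin, the outcome is still close to a 50/50 split.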

Here’s a third case study. A small child, less than three years old, is abused and neglected by his or her parents. Eventually, the state intervenes and the child is taken away from the biological parents and placed with foster or adoptive parents. These new carers love and cherish the child, doing everything they can to create a secure home, full of affection. The child stays with these new parents throughout the rest of its childhood and adolescence, and into young adulthood.

Sometimes everything works out well for this person. They grow up into a happy, stable individual indistinguishable from all their peers who had normal, non-abusive childhoods. But often, tragically, it doesn’t work out this way. Children who have suffered abuse or neglect in their early years grow up with a substantially higher risk of adult mental health problems than the general population. All too often the child grows up into an adult at high risk of depression, self-harm, drug abuse and suicide.

Once again, we have to ask ourselves why. Why is it so difficult to override the effects of early childhood exposure to neglect or abuse?

Why should something that happened early in life have effects on mental health that may still be obvious decades later?

In some cases, the adult may have absolutely no recollection of the traumatic events, and yet they may suffer the consequences mentally and emotionally for the rest of their lives.

These three case studies seem very different on the surface. The first is mainly about nutrition, especially of the unborn child. The second is about the differences that arise between genetically identical individuals. The third is about long-term psychological damage as a result of childhood abuse.

But these stories are linked at a very fundamental biological level. They are all examples of epigenetics. Epigenetics is the new discipline that is revolutionising biology. Whenever two genetically identical individuals are non-identical in some way we can measure, this is called epigenetics. When a change in environment has biological consequences that last long after the event itself has vanished into distant memory, we are seeing an epigenetic effect in action.

Epigenetic phenomena can be seen all around us, every day, and scientists have recognised examples like the ones described above for many years. When scientists talk about epigenetics they are referring to all the cases where the genetic code alone isn’t enough to describe what’s happening; there must be something else going on as well.

This is one of the ways that epigenetics is described scientifically, where things which are genetically identical can actually appear quite different to one another. But there has to be a mechanism that brings out this mismatch between the genetic script and the final outcome. These epigenetic effects must be caused by some sort of physical change, some alterations in the vast array of molecules that make up the cells of every living organism. This leads us to the other way of viewing epigenetics, the molecular description.

In this model, epigenetics can be defined as the set of modifications to our genetic material that change the ways genes are switched on or off, but which don’t alter the genes themselves.

Although it may seem confusing that the word ‘epigenetics’ can have two different meanings, it’s just because we are describing the same event at two different levels. It’s a bit like looking at the pictures in old newspapers with a magnifying glass, and seeing that they are made up of dots. If we didn’t have a magnifying glass we might have thought that each picture was just made in one solid piece and we’d probably never have been able to work out how so many new images could be created each day. On the other hand, if all we ever did was look through the magnifying glass, all we would see would be dots, and we’d never see the incredible image that they formed together and which we’d see if we could only step back and look at the big picture.

The revolution that has happened very recently in biology is that for the first time we are actually starting to understand how amazing epigenetic phenomena are caused. We’re no longer just seeing the large image, we can now also analyse the individual dots that created it.

Crucially, this means that we are finally starting to unravel the missing link between nature and nurture; how our environment talks to us and alters us, sometimes forever.

The ‘epi’ in epigenetics is derived from Greek and means at, on, to, upon, over or beside. The DNA in our cells is not some pure, unadulterated molecule. Small chemical groups can be added at specific regions of DNA. Our DNA is also smothered in special proteins. These proteins can themselves be covered with additional small chemicals. None of these molecular amendments changes the underlying genetic code. But adding these chemical groups to the DNA, or to the associated proteins, or removing them, changes the expression of nearby genes. These changes in gene expression alter the functions of cells, and the very nature of the cells themselves. Sometimes, if these patterns of chemical modifications are put on or taken off at a critical period in development, the pattern can be set for the rest of our lives, even if we live to be over a hundred years of age.

There’s no debate that the DNA blueprint is a starting point. A very important starting point and absolutely necessary, without a doubt. But it isn’t a sufficient explanation for all the sometimes wonderful, sometimes awful, complexity of life. If the DNA sequence was all that mattered, identical twins would always be absolutely identical in every way. Babies born to malnourished mothers would gain weight as easily as other babies who had a healthier start in life. And as we shall see in Chapter 1, we would all look like big amorphous blobs, because all the cells in our bodies would be completely identical.

Huge areas of biology are influenced by epigenetic mechanisms, and the revolution in our thinking is spreading further and further into unexpected frontiers of life on our planet. Some of the other examples we’ll meet in this book include the following: Why can’t we make a baby from two sperm or two eggs, but have to have one of each? What makes cloning possible? Why is cloning so difficult? Why do some plants need a period of cold before they can flower? Since queen bees and worker bees are genetically identical, why are they completely different in form and function? Why are all tortoiseshell cats female?

Why is it that humans contain trillions of cells in hundreds of complex organs, and microscopic worms contain about a thousand cells and only rudimentary organs, but we and the worm have the same number of genes?

Scientists in both the academic and commercial sectors are also waking up to the enormous impact that epigenetics has on human health. It’s implicated in diseases from schizophrenia to rheumatoid arthritis, and from cancer to chronic pain. There are already two types of drugs that successfully treat certain cancers by interfering with epigenetic processes. Pharmaceutical companies are spending hundreds of millions of dollars in a race to develop the next generation of epigenetic drugs to treat some of the most serious illnesses afflicting the industrialised world. Epigenetic therapies are the new frontiers of drug discovery.

In biology, Darwin and Mendel came to define the 19th century as the era of evolution and genetics; Watson and Crick defined the 20th century as the era of DNA, and the functional understanding of how genetics and evolution interact. But in the 21st century it is the new scientific discipline of epigenetics that is unravelling so much of what we took as dogma and rebuilding it in an infinitely more varied, more complex and even more beautiful fashion.

The world of epigenetics is a fascinating one. It’s filled with remarkable subtlety and complexity, and in Chapters 3 and 4 we’ll delve deeper into the molecular biology of what’s happening to our genes when they become epigenetically modified. But like so many of the truly revolutionary concepts in biology, epigenetics has at its basis some issues that are so simple they seem completely self-evident as soon as they are pointed out. Chapter 1 is the single most important example of such an issue. It’s the investigation which started the epigenetics revolution.

Notes on nomenclature

There is an international convention on the way that the names of genes and proteins are written, which we adhere to in this book.

Gene names and symbols are written in italics. The proteins encoded by the genes are written in plain text. The symbols for human genes and proteins are written in upper case. For other species, such as mice, the symbols are usually written with only the first letter capitalised.

This is summarised for a hypothetical gene in the following table.

              Gene symbol (italicised)    Protein symbol
Human         XYZ                         XYZ
Mouse         Xyz                         Xyz

Like all rules, however, there are a few quirks in this system and while these conventions apply in general we will encounter some exceptions in this book.

Chapter 1

An Ugly Toad and an Elegant Man

“Like the toad, ugly and venomous, wears yet a precious jewel in his head.”

William Shakespeare

Humans are composed of about 50 to 70 trillion cells. That’s right, 50,000,000,000,000 cells. The estimate is a bit vague but that’s hardly surprising. Imagine we somehow could break a person down into all their individual cells and then count those cells, at a rate of one cell every second. Even at the lower estimate it would take us about a million and a half years, and that’s without stopping for coffee or losing count at any stage. These cells form a huge range of tissues, all highly specialised and completely different from one another. Unless something has gone very seriously wrong, kidneys don’t start growing out of the top of our heads and there are no teeth in our eyeballs.
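That “million and a half years” figure is easy to verify with back-of-envelope arithmetic; the following sketch (not from the book) simply divides the lower cell estimate by the number of seconds in a year.

```python
# Back-of-envelope check: counting 50 trillion cells, one per second, non-stop.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365  # 31,536,000 seconds

cells = 50_000_000_000_000  # lower estimate: 50 trillion cells
years = cells / SECONDS_PER_YEAR
print(f"{years:,.0f}")  # roughly 1.59 million years of continuous counting
```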

This seems very obvious but why don’t they? It’s actually quite odd, when we remember that every cell in our body was derived from the division of just one starter cell. This single cell is called the zygote. A zygote forms when one sperm merges with one egg.

A Zygote

This zygote splits in two; those two cells divide again and so on, to create the miraculous piece of work which is a full human body. As they divide the cells become increasingly different from one another and form specialised cell types. This process is known as differentiation. It’s a vital one in the formation of any multicellular organism.

If we look at bacteria down a microscope then pretty much all the bacteria of a single species look identical. Look at certain human cells in the same way (say, a food-absorbing cell from the small intestine and a neuron from the brain) and we would be hard pressed to say that they were even from the same planet. But so what? Well, the big ‘what’ is that these cells started out with exactly the same genetic material as one another. And we do mean exactly; this has to be the case, because they came from just one starter cell, that zygote. So the cells have become completely different even though they came from one cell with just one blueprint.

One explanation for this is that the cells are using the same information in different ways, and that’s certainly true. But it’s not necessarily a statement that takes us much further forwards. In the 1960 film adaptation of H. G. Wells’s The Time Machine, starring Rod Taylor as the time-travelling scientist, there’s a scene where he shows his time machine to some learned colleagues (all male, naturally) and one asks for an explanation of how the machine works. Our hero then describes how the occupant of the machine will travel through time by the following mechanism:

In front of him is the lever that controls movement. Forward pressure sends the machine into the future. Backward pressure, into the past. And the harder the pressure, the faster the machine travels.

Everyone nods sagely at this explanation. The only problem is that this isn’t an explanation, it’s just a description. And that’s also true of that statement about cells using the same information in different ways: it doesn’t really tell us anything, it just re-states what we already knew in a different way.

What’s much more interesting is the exploration of how cells use the same genetic information in different ways. Perhaps even more important is how the cells remember and keep on doing it. Cells in our bone marrow keep on producing blood cells, cells in our liver keep on producing liver cells. Why does this happen? One possible and very attractive explanation is that as cells become more specialised they rearrange their genetic material, possibly losing genes they don’t require. The liver is a vital and extremely complicated organ. The website of the British Liver Trust states that the liver performs over 500 functions, including processing the food that has been digested by our intestines, neutralising toxins and creating enzymes that carry out all sorts of tasks in our bodies. But one thing the liver simply never does is transport oxygen around the body. That job is carried out by our red blood cells, which are stuffed full of a particular protein, haemoglobin. Haemoglobin binds oxygen in tissues where there’s lots available, like our lungs, and then releases it when the red blood cell reaches a tissue that needs this essential chemical, such as the tiny blood vessels in the tips of our toes. The liver is never going to carry out this function, so perhaps it just gets rid of the haemoglobin gene, which it simply never uses.

It’s a perfectly reasonable suggestion: cells could simply lose genetic material they aren’t going to use. As they differentiate, cells could jettison hundreds of genes they no longer need. There could of course be a slightly less drastic variation on this: maybe the cells shut down genes they aren’t using. And maybe they do this so effectively that these genes can never ever be switched on again in that cell, i.e. the genes are irreversibly inactivated. The key experiments that examined these eminently reasonable hypotheses (loss of genes, or irreversible inactivation) involved an ugly toad and an elegant man.

Turning back the biological clock

The work has its origins in experiments performed many decades ago in England by John Gurdon, first in Oxford and subsequently Cambridge. Now Professor Sir John Gurdon, he still works in a lab in Cambridge, albeit these days in a gleaming modern building that has been named after him. He’s an engaging, unassuming and striking man who, 40 years on from his ground-breaking work, continues to publish research in a field that he essentially founded.

John Gurdon cuts an instantly recognisable figure around Cambridge. Now in his seventies, he is tall, thin and has a wonderful head of swept-back blonde hair. He looks like the quintessential older English gentleman of American movies, and fittingly he went to school at Eton. There is a lovely story that John Gurdon still treasures, a school report from his biology teacher at that institution which says, ‘I believe Gurdon has ideas about becoming a scientist. On his present showing, this is quite ridiculous.’ The teacher’s comments were based on his pupil’s dislike of mindless rote learning of unconnected facts. But as we shall see, for a scientist as wonderful as John Gurdon, memory is much less important than imagination.

In 1937 the Hungarian biochemist Albert Szent-Gyorgyi won the Nobel Prize for Physiology or Medicine, his achievements including the discovery of vitamin C. In a phrase that has various subtly different translations but one consistent interpretation, he defined discovery as, ‘To see what everyone else has seen but to think what nobody else has thought’. It is probably the best description ever written of what truly great scientists do. And John Gurdon is truly a great scientist, and may well follow in Szent-Gyorgyi’s Nobel footsteps.

In 2009 he was a co-recipient of the Lasker Prize, which is to the Nobel what the Golden Globes are so often to the Oscars. John Gurdon’s work is so wonderful that when it is first described it seems so obvious that anyone could have done it. The questions he asked, and the ways in which he answered them, have that scientifically beautiful feature of being so elegant that they seem entirely self-evident.

John Gurdon used non-fertilised toad eggs in his work. Any of us who has ever kept a tank full of frogspawn and watched this jelly-like mass develop into tadpoles and finally tiny frogs, has been working, whether we thought about it in these terms or not, with fertilised eggs, i.e. ones into which sperm have entered and created a new complete nucleus. The eggs John Gurdon worked on were a little like these, but hadn’t been exposed to sperm.

There were good reasons why he chose to use toad eggs in his experiments. The eggs of amphibians are generally very big, are laid in large numbers outside the body and are see-through. All these features make amphibians a very handy experimental species in developmental biology, as the eggs are technically relatively easy to handle. Certainly a lot better than a human egg, which is hard to obtain, very fragile to handle, is not transparent and is so small that we need a microscope just to see it.

John Gurdon worked on the African clawed toad (Xenopus laevis, to give it its official title), one of those John Malkovich ugly-handsome animals, and investigated what happens to cells as they develop and differentiate and age. He wanted to see if a tissue cell from an adult toad still contained all the genetic material it had started with, or if it had lost or irreversibly inactivated some as the cell became more specialised. The way he did this was to take a nucleus from the cell of an adult toad and insert it into an unfertilised egg that had had its own nucleus removed. This technique is called somatic cell nuclear transfer (SCNT), and will come up over and over again. ‘Somatic’ comes from the Greek word for ‘body’.

After he’d performed the SCNT, John Gurdon kept the eggs in a suitable environment (much like a child with a tank of frogspawn) and waited to see if any of these cultured eggs hatched into little toad tadpoles.

The experiments were designed to test the following hypothesis: ‘As cells become more specialised (differentiated) they undergo an irreversible loss/inactivation of genetic material.’ There were two possible outcomes to these experiments:


1. The hypothesis was correct, and the ‘adult’ nucleus has lost some of the original blueprint for creating a new individual. Under these circumstances an adult nucleus will never be able to replace the nucleus in an egg, and so will never generate a new healthy toad, with all its varied and differentiated tissues.

2. The hypothesis was wrong, and new toads can be created by removing the nucleus from an egg and replacing it with one from adult tissues.

Other researchers had started to look at this before John Gurdon decided to tackle the problem: two scientists called Briggs and King, using a different amphibian, the frog Rana pipiens. In 1952 they transplanted the nuclei from cells at a very early stage of development into an egg lacking its own original nucleus and they obtained viable frogs. This demonstrated that it was technically possible to transfer a nucleus from another cell into an ‘empty’ egg without killing the cell. However, Briggs and King then published a second paper using the same system but transferring a nucleus from a more developed cell type, and this time they couldn’t create any frogs. The difference between the cells used for the nuclei in the two papers seems astonishingly minor: just one day older, and no froglets. This supported the hypothesis that some sort of irreversible inactivation event had taken place as the cells differentiated. A lesser man than John Gurdon might have been put off by this. Instead he spent over a decade working on the problem.

The design of the experiments was critical. Imagine we have started reading detective stories by Agatha Christie. After we’ve read our first three we develop the following hypothesis: ‘The killer in an Agatha Christie novel is always the doctor.’ We read three more and the doctor is indeed the murderer in each. Have we proved our hypothesis? No. There’s always going to be the thought that maybe we should read just one more to be sure. And what if some are out of print, or unobtainable? No matter how many we read, we may never be entirely sure that we’ve read the entire collection. But that’s the joy of disproving hypotheses. All we need is one instance in which Poirot or Miss Marple reveals that the doctor was a man of perfect probity and the killer was actually the vicar, and our hypothesis is shot to pieces. And that is how the best scientific experiments are designed: to disprove, not to prove, an idea.

And that was the genius of John Gurdon’s work. When he performed his experiments what he was attempting was exceptionally challenging with the technology of the time. If he failed to generate toads from the adult nuclei this could simply mean his technique had something wrong with it. No matter how many times he did the experiment without getting any toads, this wouldn’t actually prove the hypothesis. But if he did generate live toads from eggs where the original nucleus had been replaced by the adult nucleus he would have disproved the hypothesis. He would have demonstrated beyond doubt that when cells differentiate, their genetic material isn’t irreversibly lost or changed. The beauty of this approach is that just one such toad would topple the entire theory, and topple it he did.

John Gurdon is incredibly generous in his acknowledgement of the collegiate nature of scientific research, and the benefits he obtained from being in dynamic laboratories and universities. He was lucky to start his work in a well set-up laboratory which had a new piece of equipment which produced ultraviolet light. This enabled him to kill off the original nuclei of the recipient eggs without causing too much damage, and also ‘softened up’ the cell so that he could use tiny glass hypodermic needles to inject donor nuclei.

Other workers in the lab had, in some unrelated research, developed a strain of toads which had a mutation with an easily detectable, but non-damaging effect. Like almost all mutations this was carried in the nucleus, not the cytoplasm. The cytoplasm is the thick liquid inside cells, in which the nucleus sits. So John Gurdon used eggs from one strain and donor nuclei from the mutated strain. This way he would be able to show unequivocally that any resulting toads had been coded for by the donor nuclei, and weren’t just the result of experimental error, as could happen if a few recipient nuclei had been left over after treatment.

John Gurdon spent around fifteen years, starting in the late 1950s, demonstrating that in fact nuclei from specialised cells are able to create whole animals if placed in the right environment, i.e. an unfertilised egg. The more differentiated/specialised the donor cell was, the less successful the process in terms of numbers of animals, but that’s the beauty of disproving a hypothesis: we might need a lot of toad eggs to start with, but we don’t need to end up with many live toads to make our case. Just one non-murderous doctor will do it, remember?

Sir John Gurdon showed us that although there is something in cells that can keep specific genes turned on or switched off in different cell types, whatever this something is, it can’t be loss or permanent inactivation of genetic material, because if he put an adult nucleus into the right environment (in this case an ‘empty’ unfertilised egg), it forgot all about this memory of which cell type it came from. It went back to being a naive nucleus from an embryo and started the whole developmental process again.

Epigenetics is the ‘something’ in these cells. The epigenetic system controls how the genes in DNA are used, in some cases for hundreds of cell-division cycles, and the effects are inherited when cells divide. Epigenetic modifications to the essential blueprint exist over and above the genetic code, on top of it, and program cells for decades. But under the right circumstances, this layer of epigenetic information can be removed to reveal the same shiny DNA sequence that was always there. That’s what happened when John Gurdon placed the nuclei from fully differentiated cells into the unfertilised egg cells.

Did John Gurdon know what this process was when he generated his new baby toads? No. Does that make his achievement any less magnificent? Not at all. Darwin knew nothing about genes when he developed the theory of evolution through natural selection. Mendel knew nothing about DNA when, in an Austrian monastery garden, he developed his idea of inherited factors that are transmitted ‘true’ from generation to generation of peas. It doesn’t matter. They saw what nobody else had seen and suddenly we all had a new way of viewing the world.

The epigenetic landscape

Oddly enough, a conceptual framework was already in existence when John Gurdon performed his work. Go to any conference with the word ‘epigenetics’ in the title and at some point one of the speakers will refer to something called ‘Waddington’s epigenetic landscape’.


The Epigenetics Revolution

by Nessa Carey

Hacking DNA: The Story of CRISPR, Ken Thompson, and the Gene Drive – Geoff Ralston. 

The very nature of the human race is about to change. This change will be radical and rapid beyond anything in our species’ history. A chapter of our story just ended and the next chapter has begun.

This revolution in what it means to be human will be enabled by a new genetic technology that goes by the innocuous sounding name CRISPR, pronounced “crisper”. Many readers will already have seen this term in the news, and can expect much more of it in the mainstream media soon. CRISPR is an acronym for Clustered Regularly Interspaced Short Palindromic Repeats and is to genomics what vi (Unix’s visual text editor) is to software. It is an editing technology which gives unprecedented power to genetic engineers: it turns them into genetic hackers. Before CRISPR, genetic engineering was slow, expensive, and inaccurate. With CRISPR, genome editing is cheap, accurate, and repeatable.

… Y Combinator