This study examined the processes giving rise to moral hypocrisy, a phenomenon in which individuals judge their own transgressions to be less morally objectionable than the same transgressions enacted by others.
Two alternative models of the source of hypocrisy were compared to determine whether hypocrisy results from automatic or voluntary biases. Findings demonstrated not only that participants viewed their own transgressions as significantly more “fair” than the same transgressions enacted by others, but also that this bias was eliminated under conditions of cognitive constraint.
These findings support the view that hypocrisy stems from volitionally guided justifications, and thereby suggest that at a more basic level humans possess a negative response to violations of fairness norms whether enacted by themselves or others.
“It’s vile. It’s more sad than anything else, to see someone with such potential throw it all down the drain because of a sexual addiction.” Then-Congressman Mark Foley on Bill Clinton, 1998; Foley resigned amidst allegations of sexual misconduct in 2006.
Moral hypocrisy refers to a fundamental bias in moral judgment in which individuals evaluate a moral transgression enacted by themselves to be less objectionable than an identical transgression enacted by others. Of high import for intergroup relations, this asymmetric leniency has been shown to extend to others as a function of their relation to the self: a transgression enacted by a member of an ingroup is perceived to be of equal acceptability to the same transgression enacted by the self, but to be more acceptable than the identical behavior enacted by an outgroup member or non-affiliated other.
Although at first blush this finding may seem somewhat unsurprising for groups characterized by long-standing conflict (e.g., Israeli vs. Palestinian factions), its value lies in its demonstration among emergent groups. That is, moral hypocrisy readily arises even when using minimal groups, thereby attesting to the deep-seated nature of the bias.
Given both its apparent elemental status and practical import, moral hypocrisy stands as a phenomenon quite worthy of further investigation. At present, the existence of moral hypocrisy is clear but the mechanisms that underlie it remain clouded. Accordingly, the present experiment focuses on examining the process(es) by which moral hypocrisy emerges.
Uncovering the hypocritical mind
To elicit hypocrisy, we developed a paradigm in which individuals faced a dilemma representing a conflict between self-interest and the interest of another. In this paradigm, to be described in more detail below, some participants were required to divide a resource (i.e., expended time and energy) between themselves and another, and could do so either fairly (i.e., through a random allocation procedure) or unfairly (i.e., through personal selection of the preferred option). They were later asked to evaluate the morality, or fairness, of their actions. Other participants viewed a separate individual, who was a confederate, acting in an unfair manner toward another (i.e., selecting the better option for herself) and subsequently evaluated the morality of this act. We defined hypocrisy as the discrepancy between the fairness judgments for this same transgression when committed by the self or by the other.
By modeling hypocrisy as discrepant moral judgments, we might expect that its underlying mechanisms would operate in a fashion similar to that of any other moral evaluation. Recent research in the psychology of morality has begun to converge on a dual process model of moral judgment. According to this view, an intuitive process is theorized to work in tandem with more domain general, consciously guided processes to mediate decision making. Processes at both levels are sensitive, to differing degrees, to morally relevant events or principles (e.g., cause no direct harm, utility, self protection), with the eventual decision output representing some confluence of the processes. We believe that moral hypocrisy can be understood within this framework.
Conceptualizing hypocrisy within a dual process model, however, leads to competing predictions regarding precisely how these two classes of processes interact to produce the phenomenon. More specifically, hypocrisy could be driven by a discrepancy in automatic intuitions in response to one’s own versus another’s transgressions; that is, individuals might display an automatic positivity bias for their own transgressions relative to others’, with higher order processes simply functioning to create post hoc justifications for “gut level” decisions. Alternatively, hypocrisy might be driven by differential activation of higher order cognitive processes geared toward justification and rationalization of one’s own transgressions. That is, although individuals might have negative automatic reactions to both their own and others’ transgressions, they may engage in more consciously motivated reasoning when judging their own transgressions in order to maintain a positive self view.
Distinguishing between these two competing explanations has important practical implications for developing strategies geared toward curbing this disturbingly familiar phenomenon. Indeed, deciding whether intuitions should be fostered or overcome hinges upon whether or not people have automatic aversions to their own as well as others’ violations of fairness norms.
Two alternative models
As noted, there is reason to believe hypocrisy could emerge in two ways based on a dual process model of moral judgment. Mounting evidence suggests that humans may have evolved an intuitive aversion to violations of equity, with similar aversions evidenced by certain primate species. It has also been hypothesized that humans have evolved specific social emotions designed to foster cooperation and trust with others, suggesting an important role for emotional responses designed to inhibit self-serving behavior, and thereby to avoid negative social consequences. Accordingly, violations of fairness stand as a strong candidate to engender a spontaneous and immediate negative reaction regardless of the enactor, suggesting that hypocrisy might emerge from more deliberative processes.
Similarly, several lines of research suggest that higher order processes might be employed to rationalize and justify a self-enacted transgression. In this case, the intuitive system would favor a more “moral” judgment in accord with a basic fairness norm (i.e., showing self-interest is not appropriate), but conscious control systems might work to generate a more “immoral” judgment (i.e., showing self-interest is permissible) that nevertheless may serve to protect one’s self-image. However, when judging another’s transgression, higher order processes should not temper the intuitive response, as the motive for self-image preservation is not relevant.
Alternatively, recent findings demonstrate that disruption of brain regions involved in cognitive control can decrease aversion to inequity within the context of economic games, suggesting that automatic reactions might be geared toward engendering self-serving, as opposed to fair, behavior. Indeed, this finding aligns with much research suggesting that humans possess an automatic positivity bias with respect to evaluations involving the self. For instance, tests of implicit self esteem consistently reveal a seemingly ubiquitous generalized positive evaluation of self.
In a similar vein, much work has suggested that exaggerated perceptions of mastery and unrealistic optimism are characteristic of normal human thought. Taken in combination with recent research demonstrating that both motivational states and chronic views regarding one’s abilities are capable of influencing low level automatic processes, these findings suggest that chronic views of oneself as a moral individual, as well as motives to appear as such, might lead to positively biased spontaneous evaluations of one’s own transgressions relative to those of others.
If it is the case that the intuitive system does not generate an immediate aversion, or at least generates a lesser one, to an individual’s own transgressions, then hypocrisy might simply arise as a result of discrepant, spontaneous evaluative responses. According to this view, the intuitive system would favor a more “moral” judgment in accord with a basic fairness norm when contemplating others’ transgressions, but favor a more “immoral” judgment in accord with an automatic positivity bias when contemplating one’s own. Put simply, individuals might not be as sensitive to transgressions that bring themselves immediate benefits. If true, these intuitions would work in concert with higher order processes, which would serve to provide post hoc explanations for the behavior.
The present experiment
The present experiment seeks to disentangle these competing explanations. If hypocrisy derives from competition between a negative affective response to any violation of fairness and conscious efforts aimed at justifying the behavior when enacted by oneself, then hypocrisy should disappear when efforts aimed at conscious control are constrained. However, if hypocrisy arises because of discrepant automatic intuitions generated in response to one’s own versus another’s transgressions, then constraining conscious control should have no effect on judgments of the morality of one’s own transgressions.
To examine this question, we used a factorial design, crossing judgments of self and other transgressions with a manipulation of cognitive constraint: a 2 (Enactor: Self vs. Other) × 2 (Constraint: Control vs. Cognitive Load). In the control conditions, we expected to replicate the usual hypocrisy effect identified by Valdesolo and DeSteno (2007).
Participants who acted immorally (i.e., violated the fairness norm) should judge their own fairness transgression to be less objectionable than the same transgression enacted by another. Of import, however, we also expected that reduced ability for controlled processing would alter the relative causal force of processes contributing to judgment, directly addressing the nature of the dual mechanisms underlying hypocrisy. If the manipulation of cognitive constraint has no influence on judgments of participants’ own transgressions, it would suggest that the model is one wherein hypocrisy arises from biased automatic intuitions. However, if increased cognitive constraint results in more “moral” judgments of participants’ own transgressions (i.e., one’s own actions are judged to be more unfair) and thereby attenuates hypocrisy, these findings would suggest that hypocrisy arises from discrepant volitional efforts aimed at justifying transgressions when enacted by the self relative to others.
We expected that the manipulations of cognitive constraint would not influence participants’ judgments of the confederate‘s transgressions, as motivated reasoning processes should not be engaged when judging violations committed by neutral others. Consequently, conditions involving the judgments of others will function not only as a baseline for computation of the hypocrisy measure, but also to show that any effects of the manipulations do not represent global influences on moral decision making (e.g., increased cognitive constraint decreases the perceived fairness of any actions, whether enacted by the self or another).
Ninety-one individuals (58 females, 33 males) participated and were randomly assigned to one of four experimental conditions.
As the load condition procedures constitute minor variants of the control condition procedures, the procedures for the two primary control conditions will be described in detail, with descriptions of the other conditions limited to noting the small differences in design. In all conditions, participants judged the fairness of an identical action, which served as the primary dependent variable. As noted, we employed a 2×2 design, crossing judgments of self and other transgressions with cognitive load. Presentation of all materials and data collection were accomplished using Medialab software.
Condition 1: judging one’s own transgression
Upon entering the lab, a participant was seated at an individual workstation, given a brief introduction to the experiment, and told to begin the computerized tasks. The instructions explained that the experimenters were examining performance on two different types of tasks, and that any participant would only complete one of the tasks. The first task (i.e., the green task) consisted of a brief survey combined with a short photo hunt that would take 10 min to complete. The second task (i.e., the red task) consisted of a series of math and logic problems combined with a longer and somewhat tedious mental rotation task that would take 45 min to complete. Following the task descriptions, participants were informed by the experimenter that the research team was also evaluating a new participant assignment protocol meant to reduce experimenter bias. Therefore, certain participants would be randomly selected to make condition assignments for themselves and others.
Participants then read the following instructions:
In order for the experimenters to remain blind to condition assignments, you must assign either yourself or the next participant to the green condition and the other of you to the red condition. Some people feel that giving both individuals an equal chance is the fairest way to assign the tasks.
If you would like to use a randomizer to assign conditions, please move to the computer behind you and follow the instructions. The decision is entirely up to you. You can assign yourself and the other participant however you choose. The other participant does not and will not know that you are assigning conditions.
The randomizer was a computer program designed to assign the participant to the “red” condition following a few demonstration trials conducted by the experimenter in which it alternated between conditions to guard against participant suspicion. The experimenter then left the room to allow the participant to make his or her choice.
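The logic of this rigged randomizer can be sketched as follows. This is an illustrative reconstruction only: the function and parameter names are ours, and the original Medialab implementation is not described in the article.

```python
def make_rigged_randomizer(n_demo_trials=4):
    """Mimic the rigged randomizer described in the procedure:
    demonstration trials run by the experimenter alternate between
    conditions (to appear genuinely random), after which the
    participant's own trial always yields the undesirable "red" task.
    The number of demonstration trials is an assumption."""
    state = {"calls": 0}

    def randomizer():
        state["calls"] += 1
        if state["calls"] <= n_demo_trials:
            # Demonstration trials alternate to guard against suspicion.
            return "green" if state["calls"] % 2 else "red"
        # The participant's actual draw is fixed to "red".
        return "red"

    return randomizer
```

Under these assumptions, the first four calls alternate green/red and every call thereafter returns "red", matching the deception described above.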
After assigning conditions to themselves and another, participants responded to questions regarding the assignment procedure, which were presented as a way to collect opinions on the new protocol. Embedded in a small set of distractors was the target question: “How fairly did you act?” Participants responded to this question on a 7-point scale ranging from “extremely unfairly” to “extremely fairly.” The session was then terminated and participants were debriefed.
Condition 2: judging another’s transgressions
In this condition, participants’ primary task involved evaluating the actions of another individual who completed a procedure identical to the one completed by the participant in Condition 1. Here, participants were informed that their role was to act as an impartial observer, providing feedback to experimenters regarding use of the new assignment protocol by other participants. These other participants were in fact confederates.
To accomplish this goal, participants were informed that they would be seated in the room with an individual taking part in an experiment and therefore able to observe his actions and responses to the experimental protocol through the use of a yoked computer. That is, participants would be able to see on their screen what the other participant was reading and selecting in real time. Participants received the following instructions on their screen:
Your computer is connected to the adjacent computer. Another participant will be completing an experiment on that computer and you will be asked to follow along and observe on your screen everything that he reads and does. Note that the other participant will be unaware that this is happening. After approximately 5 min of observing, you will be asked to rate the new assignment protocol in terms of clarity and design as well as answer some questions concerning the performance of the participant.
Participants were asked if they understood their task, and if so to click the mouse to connect the two computers. From this point on, they were presumably observing the other participant’s screen and were asked not to touch their computer until it disconnected and automatically moved them along to the evaluations.
After the computers had “connected,” the participant waited in her seat while the experimenter brought in the second participant (i.e., the confederate). The confederate was told that all instructions would be on the computer and to begin the experiment by clicking the mouse. The confederate then simultaneously clicked his mouse as well as a second mouse surreptitiously connected to the back of the participant’s computer. The mouse clicks set off a timed presentation which created the illusion that the participant was observing, on her own monitor, the confederate go through the instructions and assign himself the “green” condition and a future participant the “red” condition without using the randomizer. After observing the confederate’s choice, the participant’s computer “disconnected” and brought her to an evaluation section where, embedded in a set of distractors, she answered the following target question: “How fairly did the participant act?” using the same scale as in Condition 1.
Cognitive constraint conditions
Condition 3: judging one’s own transgression
Condition 3 was a replication of Condition 1 with the exception that participants made fairness judgments under cognitive load. The load manipulation came directly after participants assigned tasks to themselves and the other, thereby affecting only moral judgment and not behavior. Cognitive load was manipulated using a digit string memory task. Participants were told that the experimenters were interested in how people make judgments when they are distracted. To simulate distraction, they would be asked to remember a string of digits at the same time that they were responding to a series of questions. Participants were told that a string of seven digits would appear on the screen before each question. They would then have to answer the question within 10 s, immediately after which they would have to recall the digit string that had preceded the question. Participants were also told that it was extremely important to provide the most accurate answers possible for questions comprising the assignment evaluation measure. The primary dependent variable consisted of the fairness question presented and scaled as in Condition 1 and embedded in the series of distractor questions completed under load.
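The structure of a single load trial can be sketched as below. This is a minimal illustration of the trial parameters stated in the text (seven digits, a 10-second response window, then recall); the function and field names are our own, not taken from the original experiment scripts.

```python
import random

DIGITS_PER_STRING = 7   # seven-digit memory load, as in the procedure
RESPONSE_WINDOW_S = 10  # seconds allowed to answer each question

def build_load_trial(question, rng=random):
    """Return one cognitive-load trial: a digit string to hold in
    memory, the question to answer within the response window, and
    the recall target the participant must reproduce afterwards."""
    digits = "".join(str(rng.randint(0, 9)) for _ in range(DIGITS_PER_STRING))
    return {
        "memorize": digits,
        "question": question,
        "response_window_s": RESPONSE_WINDOW_S,
        "recall_target": digits,  # recalled immediately after answering
    }
```

For example, `build_load_trial("How fairly did you act?")` yields the target fairness question wrapped in a fresh seven-digit memory load.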
Condition 4: judgments of another’s transgressions
Condition 4 mirrored Condition 2 with the exception that participants made judgments under cognitive load, using the same load manipulation as in Condition 3.
Participants in conditions involving judgments of their own transgressions were removed from analysis if they did not commit a transgression. That is, only those participants who assigned themselves the “green” (i.e., preferable) condition and who did not use the randomizer were included in the analysis. As in previous research, those who immediately acted either altruistically or in accord with the fairness norm were a small minority. This group consisted of 7 (8%) participants spread almost equally across the two relevant conditions (i.e., Conditions 1 and 3).
Moving to the full factorial design, an ANOVA confirmed the predicted interaction between the Enactor and Cognitive Constraint factors (see Fig. 1). As expected, moral hypocrisy emerged in the control conditions; the same fairness transgression was judged to be substantially more moral when enacted by the self than when enacted by another. However, constraints on effortful correction (i.e., cognitive load) resulted in the disappearance of the hypocrisy effect: participants experiencing load judged their own transgressions to be as unfair as the same behavior when enacted by another. Indeed, a planned contrast revealed that judgments of one’s own actions in the control condition (i.e., Condition 1) significantly exceeded judgments in any of the other three conditions, which showed no reliable differences among themselves.
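To make the logic of this analysis concrete, the hypocrisy discrepancy and the predicted Enactor × Constraint interaction contrast can be computed from cell means as sketched below. The ratings here are invented for illustration only (they are not the study's data); on the 7-point scale, higher values mean "more fair".

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical fairness ratings (1 = extremely unfairly, 7 = extremely
# fairly), fabricated purely to illustrate the predicted pattern.
cells = {
    ("self", "control"): [5, 6, 5, 4],   # lenient toward own transgression
    ("other", "control"): [2, 3, 2, 3],  # harsh toward the confederate
    ("self", "load"): [3, 2, 3, 2],      # leniency disappears under load
    ("other", "load"): [2, 3, 2, 2],
}
m = {cell: mean(ratings) for cell, ratings in cells.items()}

# Hypocrisy = self-minus-other discrepancy within each constraint level.
hypocrisy_control = m[("self", "control")] - m[("other", "control")]
hypocrisy_load = m[("self", "load")] - m[("other", "load")]

# The predicted interaction: the discrepancy shrinks under cognitive load.
interaction = hypocrisy_control - hypocrisy_load
```

With these illustrative numbers, the self/control cell mean exceeds every other cell, so the hypocrisy discrepancy is large in the control conditions and near zero under load, which is exactly the interaction pattern the planned contrast tests.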
The present study provides strong evidence that moral hypocrisy is governed by a dual process model of moral judgment wherein a prepotent negative reaction to the thought of a fairness transgression operates in tandem with higher order processes to mediate decision making. Hypocrisy readily emerged under normal processing conditions, but disappeared under conditions of cognitive constraint. Inhibiting control prevented a tamping down or override of the intuitive aversive response to the transgression.
Of import, these findings rule out the possibility that hypocrisy derives from differences in automatic affective reactions towards one’s own and others’ transgressions. Rather, when contemplating one’s own transgression, motives of rationalization and justification temper the initial negative response and lead to more lenient judgments. Motivated reasoning processes are not engaged when judging others’ violations, rendering the prepotent negative response more causally powerful and leading to harsher judgments.
These findings are also noteworthy for demonstrating that controlled processing need not always function to “correct” more basic, intuitive responses, but rather can be subject to less admirable motives such as the protection of self image. Indeed, they show that the interplay between intuitive and volitional moral reasoning is sensitive not only to abstract moral principles but also to more selfish motivations, as evidenced by the overwhelming majority of participants who acted unfairly when assigning tasks.
Despite this disconcerting result, the unearthing of a prepotent negative response to one’s own transgressions, and conversely the absence of an automatic positivity bias, reveals an adventitious relationship between moral judgment and hypocrisy. The detection of a low level sensitivity to fairness transgressions, even at the cost of one’s own potential short-term gain, adds to the growing body of evidence dispelling theories which describe morality as a tenuous and fragile “veneer” laid over a core of selfish impulses. Instead, it seems likely that humans have evolved strong intuitions which, though selected to promote long-term self-interest via reciprocal altruism, can represent moment-to-moment instances of pure selfless concern.
Yet our hypocritical behavior belies this intuition. In light of such findings, future work should aim to further define the conditions which temper hypocrisy, and ultimately suggest ways in which humans can better translate moral feelings into moral actions.