Basic Instinct

Genetic determinism in its purest form may be scientifically incoherent, but it is worth investigating the fears that often surround it.


Scapegoating the Naturalistic Fallacy

A common reaction to hypotheses put forward by evolutionary psychologists is to worry that many human behaviors could be excused or justified by deflecting blame onto evolved psychological adaptations. The evolutionary psychologist’s stock response is often to counter that someone has committed the naturalistic fallacy. Closely related to David Hume’s “is-ought” problem, the naturalistic fallacy is a type of appeal to nature: a fallacious assumption that something is desirable simply by virtue of being natural. For example, claiming that organic foods are inherently better or healthier than those treated with artificial products or processes would be an appeal to nature. But more specifically, the naturalistic fallacy is the idea that what is natural is morally good — or at least cannot be morally bad. To claim, for example, that we couldn’t condemn violence if humans were violent by nature would be to commit the naturalistic fallacy.

It should be obvious upon even cursory consideration that equating the natural with the desirable is fallacious. A clear example is the contrast between diseases and vaccines: smallpox is natural, but no one will jump to its defense on this account. Correspondingly, a sane person would recognize that the fact that the smallpox vaccine is artificial does not make it worse than the disease itself. But the ease with which anti-vaccine propaganda has spread should speak to the allure of such appeals to nature: we clearly have some bias toward seeing value in things in their “natural” state. To challenge an argument on the grounds of having committed an appeal to nature is to identify a flaw in the criteria used to determine something’s value or worth — and in the case of the naturalistic fallacy, its moral worth.

Given that moral fears about evolutionary psychology are so often diagnosed as instances of the naturalistic fallacy, one might expect that such fears are typically grounded in concerns about value judgments. However, I believe this might be a misdiagnosis. If you listen closely to the objections of many people — particularly those with progressive or egalitarian ideals — their concerns actually seem slightly oblique to whether or not we can derive merit or moral value from claims about human nature. Let’s take a common worry as an example: the possibility of a lawyer using evolutionary psychological arguments to excuse or diminish the actions of a rapist.

If the prosecutor were to identify the defense attorney’s argument as a naturalistic fallacy, the prosecutor’s underlying concern would seem to be that the defense will persuade the jury to view the moral merit of rape in light of its naturalness. But is this so? Even if the defense attorney could convince the jury that rape is “natural” for humans, I doubt that such an argument would genuinely persuade most jurors to abandon the belief that rape is morally wrong. Instead, I think the prosecutor’s implicit concern is that the defense may influence the jurors’ opinions about the rapist’s moral responsibility, rather than the merit or moral value of his actions. Even if this sounds obvious in retrospect, I think it is a distinction that many counterarguments have missed, leaving them aimed at the wrong target.

Thus, it is in the relationship between evolutionary psychology and moral responsibility that we must seek our answers to this objection. In the example above, if we use the framework I’m arguing for, the intent of the defense attorney’s tactic seems readily apparent: to reduce the rapist’s punishment by making it appear that he had insufficient control over his actions, and is therefore less morally responsible for them than jurors would otherwise assume. A key unspoken assumption here is that, if a behavior is “evolved,” it is somehow more causally deterministic than other behaviors — or at least less subject to self-control. This assumption is profoundly misguided, but I think it is bound up with some genuine concerns. To unravel them, we need to look a bit more closely at developmental biology and the concept of genetic determinism.

Genetic Determinism & Control

Genetic determinism is, roughly speaking, the idea that genes directly control our behavior. Despite its tenacious hold on our collective consciousness, it is incredibly vague as a concept. In the past, as in my introductory chapter on natural selection, I’ve tried to undermine the specter of genetic determinism by illustrating the category error of treating “genes” and “environment” or “nature” and “nurture” as two comparable factors that can compete as causal explanations for any given behavior. Similarly, I’ve attacked the idea of biological essentialism (the idea that organisms have some unchangeable “essence” or natural state) by illustrating the logical incoherency of treating the effects of genetic variation as “fixed” in any meaningful sense — at least any more so than the effects of environmental variables.

One particular argument I like to cite is the fact that genes can have no effects in a vacuum, and that changing relevant aspects of the environment will change the resultant phenotype. This approach seems especially helpful for debunking eugenics (and its more surreptitious cousin, “dysgenics”). I’ve pointed to quotes like the following from Dawkins’ The Extended Phenotype to undercut the way people talk about having genes “for” certain phenotypic effects:

It is a fundamental truism, of logic more than of genetics, that the phenotypic ‘effect’ of a gene is a concept that has meaning only if the context of environmental influences is specified, environment being understood to include all the other genes in the genome. A gene ‘for’ A in environment X may well turn out to be a gene for B in environment Y. It is simply meaningless to speak of an absolute, context-free, phenotypic effect of a given gene. …changes in the environment may change the very nature of the phenotypic character we set out to explain.

(Dawkins, 1987, p. 60)

While all of these points are true and extremely important, deep down, I’ve always been unsatisfied with them as answers to the thornier questions about our responsibility for, or control over, specific behaviors. While changing the relevant environment will change the phenotype, this glosses over the fact that many environmental changes would simply terminate the developmental process in question — or even terminate the organism as a whole. It’s certainly true that any genetic influence could be undone by some environmental change, but that change could turn out to be the removal of oxygen from the environment, or something similarly indispensable to the organism in question.

Ultimately, the idea that genes can have no effects in a vacuum seems like an overly broad, somewhat facile response to the deep concerns about moral responsibility and self-control that lurk within these discussions. I think it would be intellectually dishonest to try to end the argument here, because these points don’t really reach the heart of the fears people have about genetic determinism and biological essentialism. Whether or not the properties we care about are “genetically determined” or “essential” to an organism, it seems that some genetic factors in our development are fundamentally out of our control, in a way that matters to us.

In her excellent book The Genetic Lottery: Why DNA Matters for Social Equality (2021), behavioral geneticist Paige Harden extends Dawkins’ famous analogy of genes as cooking recipes. She points out that, even though variation in “ingredients” and “cooking environment” can drastically alter the results of a “recipe”:

Nevertheless, recipes do constrain your final product. Beginning with a recipe for lemon chicken will not yield, say, chocolate chip cookies. Errors in a recipe can result in a slightly less appetizing dish (not enough salt) or in total disaster (a cup of salt instead of a cup of sugar). In the same way, mutations in DNA sequence can result in slightly altered proteins or in entirely non-functional ones. And some recipes are more tolerant of error, deviation, and substitution than others. Just as making spaghetti Bolognese does not require the same exacting attention to weight and temperature and timing as making chocolate soufflé, some genes are more intolerant to mutation than others.

(Harden, 2021, p. 48)

I think this passage highlights some distinctions that are underemphasized in casual discussions of genetic determinism. We are quick to remind students and laypeople that a phenotype cannot be “more” or “less” genetic, but it doesn’t come as naturally to us to talk about the directness of genetic effects, or how closely any given gene is tied to fundamental developmental processes, and so on. To again quote Dawkins, “all genetic effects are ‘byproducts’ except protein molecules” (Dawkins, 1987, p. 300); but I have never seen a rigorous discussion of degrees of directness in genetic control of behavior. I find this absence frustrating and surprising, because this seems to be exactly what laypeople are getting at when they talk about “more” or “less” genetic influences. It feels like we’re jumping to attack a strawman, letting the steelman slip by to later chip away at the minds of our audiences.

In order to strike at the genetic determinist steelman, I need to quickly establish an account of moral responsibility that is consistent enough with developmental biology to make sense out of these arguments. As I argued in my post on free will, I find it difficult to make sense out of the concepts of moral desert and responsibility in light of the logical consequences of physicalism. However, I think some accounts are more defensible than others, especially in terms of accurately modeling the way we tend to arrive at intuitions about moral responsibility. In my opinion, the best descriptivist account of moral responsibility is R. Jay Wallace’s Responsibility and the Moral Sentiments (1994). Contrary to the common interpretation of control or free will as requiring access to alternative possibilities, Wallace’s “broadly Kantian” account suggests that an agent’s degree of self-control and moral responsibility are predicated on their “power to grasp and apply moral reasons, and to regulate [their] behavior by the light of such reasons” (Wallace, 1994, loc 1960) — what Wallace collectively refers to as the agent’s powers of reflective self-control.

It would take the length of his book and then some to defend this idea, but suffice it to say that I think his account provides a useful framework for my purposes here. It seems effective for assessing moral responsibility more rigorously in the grey areas where our intuitive moral reasoning cannot lead us to a conclusive answer or a consensus. So, if we want to truly assess the idea of genetic determinism, I think we need to understand the relevance of genes to the powers of reflective self-control. But before we can do that, we need to know what kind of effects genes can have on cognition and behavior, and how such effects are manifested. If the powers of reflective self-control are key to moral responsibility, I think it behooves us to investigate the biological basis of our cognition more thoroughly.

I think people assume that behaviors rooted in primarily “biological” causes are less under the control of the organism; and consequently, that they could provide reasonable exemptions from moral responsibility. But then the question becomes: why should genetic or evolutionary explanations of behavior imply reduced self-control?

Many things can be out of an individual’s control, but I’ve long had an intuition that the animating force behind most of the fear of fields like evolutionary psychology is a deep discomfort with the idea of innateness. The fear, roughly, is that we have inescapable psychological dispositions that are somehow imbued in us as organisms. Psychological fields that treat humans as organisms with genetically inherited behavioral dispositions have perpetually been dogged by moral concerns that seem to manifest a deep perceived connection between innateness and lack of self-control. So, following from the question I started with, a more productive question to interrogate might be: When, if ever, does “innateness” imply inevitability or reduced self-control?

Innateness

Before we can tackle the question of self-control, we need to get a handle on the concept of innateness itself — a task that turns out to be harder than it sounds. The precise meaning of the term “innate,” along with related ones like “instinctual,” has been hotly debated for at least a century, with many proposed technical definitions. And from the very beginning, a host of scientists and philosophers have argued (compellingly) that the concept is muddled at best, and vacuous at worst (e.g., Griffiths, Machery, & Linquist, 2009).

As Mameli and Bateson point out in their 2006 paper “Innateness and the Sciences,” most of our common intuitions about “innate” biological or psychological properties — that they are present from birth, “essential” to an organism, or “natural” — are incoherent from anything but a folk-biological perspective. Since a phenotype is inherently the product of the interaction between genes and the environment, no phenotypic trait can be “present from birth” in any sense that sets it apart from environmentally influenced traits. Neither can an organism be said to have an “essence,” given that its DNA (the only component that could in principle be considered “essential”) has no intrinsic qualities beyond its chemical composition. And conceptions of what is “natural” have just as many problems, having been argued about for centuries (see Sober, 1980).

But as I argued earlier, these counterpoints seem to do little to alleviate our concerned interest in the concept. Laypeople and scientists alike continue to frequently use the word “innate” or its synonyms (Griffiths, Machery, & Linquist, 2009), which makes it seem like it might be pointing to something. Many of the descriptions we give of innate traits are incoherent, but that doesn’t mean the concept can’t be defined coherently. So what, if anything, is innateness?

The problem in answering this question is that innateness doesn’t seem to be one thing. As Mameli and Bateson point out, the word “innate” is used to refer to a huge variety of concepts, which they refer to as i-properties. They catalogued 26 potential scientific definitions for “innate,” and then winnowed them down by eliminating or reformulating definitions that were unclear, internally incoherent, or which led to largely counterintuitive results — ones that clashed too much with our usage or intuitions about innate traits. In the end, they concluded that only eight of the definitions they considered were scientifically useful, concrete, and consistent with at least some folk examples of innate traits.

Their final list of scientifically cogent i-properties was as follows:

  1. Reliably developing at a particular stage in the life cycle.
  2. Only modifiable by evolutionarily abnormal environmental conditions.
  3. Not being the product of a developmentally plastic adaptation.
  4. Necessary for the development and functioning of other adaptive aspects of the organism — i.e., genetically entrenched.
  5. Possessing or accompanied by mechanisms to buffer the trait’s development against a wide range of environmental disruptions — i.e., developmentally canalized.
  6. Possessing or accompanied by mechanisms to prevent the trait’s post-developmental modification by a wide range of environmental disruptions — i.e., post-developmentally canalized.
  7. Species-typical.
  8. A Darwinian adaptation that “has been selected for in virtue of the existence of additive genetic variation for this phenotype.”

As Mameli and Bateson illustrate, none of these i-properties fully captures the jumbled folk concept of innateness. Mameli even points out that, despite being referred to by the same word, most of these i-properties may not even be correlated with one another in the first place (Mameli, 2008). As such, many traits will be classified as innate according to some of these definitions and non-innate according to others, and each definition excludes at least some prototypical examples of supposedly innate phenotypic traits. For example, while grooming behavior in rats meets all eight definitions, female sex in turtles fails on (ii), (iii), and (vii); the phenylketonuria trait fails on the last four; and, perhaps most counterintuitively, the human belief that water is a liquid meets senses (ii), (vi), and (vii). Consequently, a trait’s possession of one i-property does not imply its possession of another; and correspondence with these definitions does not perfectly predict an assessment of a trait’s innateness.
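To make the non-correlation concrete, here is a minimal sketch in Python — my own illustration, not anything from Mameli or Bateson — that simply encodes the example traits above as sets of the i-properties they satisfy and checks whether having one i-property ever guarantees having another. The profiles are my reading of the examples in the preceding paragraph; the function and trait labels are hypothetical conveniences.

```python
# A toy encoding of the example traits discussed above, numbered (1)-(8) as in
# the list of i-properties. Profiles follow the text: rat grooming meets all
# eight; female sex in turtles fails (ii), (iii), and (vii); phenylketonuria
# fails the last four; the belief that water is a liquid meets only (ii), (vi), (vii).
TRAIT_PROFILES = {
    "grooming behavior in rats":     {1, 2, 3, 4, 5, 6, 7, 8},
    "female sex in turtles":         {1, 4, 5, 6, 8},
    "phenylketonuria":               {1, 2, 3, 4},
    "belief that water is a liquid": {2, 6, 7},
}

def implies(p: int, q: int) -> bool:
    """True if every example trait possessing i-property p also possesses i-property q."""
    return all(q in props for props in TRAIT_PROFILES.values() if p in props)

# Even on this tiny sample, possession of one i-property does not entail another.
failures = [(p, q) for p in range(1, 9) for q in range(1, 9)
            if p != q and not implies(p, q)]
print(f"{len(failures)} of 56 ordered pairs of i-properties fail to co-occur")
print("does (2) imply (1)?", implies(2, 1))  # False: the water belief lacks (1)
```

This is only meant to dramatize the point: on the examples we actually have, any inference from one i-property to another has to be earned empirically rather than assumed because the properties happen to share a name.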

While it could be argued that the concept of innateness has a “prototypicality” criterion based on meeting all eight criteria, this would not alleviate many of the incoherencies often introduced by the concept. One of the key problems with all of these properties sharing a name is that many people — even scientists — transition between different usages of “innate” without noticing. Because the folk concept of innateness subsumes many different properties, evidence for one is taken (subliminally) to imply that the same trait shares another i-property, with no evidence to support the implied claim (Griffiths & Linquist, 2022). It is possible that they are statistically correlated, but we have little to no evidence to either support or refute these correlations.

But despite this confusion, these definitions are individually coherent and tractable enough that we can reasonably ask, for each one, how it might relate to the powers of reflective self-control. I think some insight might be gleaned if we compare two predispositions: one that seems more inevitable, like the patellar reflex, and one that seems more flexible, like speaking. Which of the following criteria does each of these behavioral predispositions fulfill?

  1. seen to reliably develop at a particular stage in the life cycle?
  2. only modifiable by evolutionarily abnormal environmental conditions?
  3. not produced by a developmentally plastic adaptation?
  4. genetically entrenched?
  5. developmentally canalized?
  6. post-developmentally canalized?
  7. species-typical?
  8. a Darwinian adaptation?

Our patellar reflex and spoken communication are both reliably developing (i), species-typical (vii) adaptations (viii), and I expect they would both qualify as genetically entrenched (iv). I think an argument could be made that they are both developmentally and post-developmentally canalized; but whether or not speaking in particular is considered canalized, or whether it is “only modifiable by abnormal conditions,” depends heavily on how you define the trait in question. If you’re talking about the general capacity to communicate through spoken language, it certainly meets criteria (v), (vi), and (ii). On the other hand, if you consider the specific language spoken to be a “modification,” it would meet none of the three.

But in either case, I think the crux of the matter has the most to do with definition (iii): whether or not the developmental process involved was plastic — that is, “evolved to produce different phenotypes in response to different environmental circumstances” (Mameli & Bateson, 2006, p. 167). Very close to Mameli and Bateson’s third definition of innateness is one provided by Mallon and Weinberg (2006): according to them, “for a trait to be innate, it must normally be invariantly acquired and must not have been acquired by means of a process that normally produces variant traits” (pp. 338-339).

In describing processes that “normally produce variant traits,” they define a spectrum of developmental processes from “closed” to “open” — a spectrum that seems to be loosely synonymous with developmental plasticity. A more “closed” developmental process is one with a narrower range of possible phenotypic outputs, sensitive only to variation within the “normal” range of environmental conditions (e.g., human body plan development); whereas a more “open” process is one with a wider range of potential phenotypes, produced in response to specific environmental variations (e.g., human language acquisition). They refer to this definition of innateness as closed process invariantism — or, more specifically, they define an innate trait as one that is invariantly produced by a closed developmental process.
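To make the closed/open spectrum a bit more tangible, here is a toy formalization of my own — not Mallon and Weinberg’s — that treats a developmental process as a function from environmental conditions to phenotypic outcomes and gauges its “openness” by how many distinct outcomes it produces across the normal range of environments. The function name, the example processes, and the environment labels are all hypothetical.

```python
from typing import Callable, Iterable

def openness(process: Callable[[str], str], normal_environments: Iterable[str]) -> int:
    """Count the distinct phenotypic outcomes a process yields across normal environments."""
    return len({process(env) for env in normal_environments})

# A relatively "closed" process: essentially one outcome under any normal environment.
def body_plan_development(env: str) -> str:
    return "species-typical human body plan"

# A relatively "open" process: the outcome tracks the linguistic environment.
def language_acquisition(env: str) -> str:
    return f"native fluency in the language of {env}"

normal_envs = ["an English-speaking home", "a Mandarin-speaking home", "a Yoruba-speaking home"]

print(openness(body_plan_development, normal_envs))  # 1 -> closed end of the spectrum
print(openness(language_acquisition, normal_envs))   # 3 -> open end of the spectrum
```

On this caricature, an innate trait in the sense of closed process invariantism would be one whose developmental process keeps that count at (or very near) one across the whole normal range of environments.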

Though the focus on developmental process might seem unintuitive at first, it does seem to capture many ideas about innate traits; and I think it begins to hit the nail more squarely on the head. Unlike the other i-properties, it tracks the real options available among a trait’s possible phenotypic outcomes. And I think the idea of options might give us the final clue to the origin of the fear of genetic determinism — and like many major insights, it reveals something that was hiding in plain sight.

When it comes to the relationship between innateness and self-control, I think this clarification reveals a crucial facet of the fear surrounding evolutionary and genetic explanations of behavior that I had previously overlooked. A trait that develops from a closed process is one that is not directly sensitive to a variety of environmental inputs — which means it is likely to be less sensitive to potential alterations. Perhaps the reason the idea of innateness seems to undermine moral responsibility is the bare assumption that such behaviors would not be sensitive to deliberate human intervention or prevention. In other words, while I thought the fear of inevitability in our behavior could be explained through innateness, in the end, I think we must look no further than inevitability itself to locate the source of our trepidation.

Options & Inevitability

Dan Dennett provided a highly relevant clarification of the concept of inevitability in Elbow Room: The Varieties of Free Will Worth Wanting (first published in 1984; quoted here from the 2015 edition):

We categorize [some] projected things as things that will happen unless we take certain steps, and others as things that will happen because we will take certain steps, and some of them as things that will happen no matter what steps we take. We call the latter “inevitable,” because nothing we do makes any difference to them, and hence it is pointless to deliberate about them. Now that is what “inevitable” means. It does not mean “causally necessary” or “determined,” and it is not implied by those terms.

(Dennett, 2015, p. 139, bold added)

According to Dennett, inevitability has less to do with causation or determinism than what we can do to change something. In other words, it deprives us of real options for prevention or improvement. And this is how the conflation between terms like “evolved,” “genetic,” and “innate” begins. A highly closed developmental process, like the development of the body plan, does not provide options: it provides no points of intervention or prevention. It either produces the one, narrowly defined phenotype it sets out to produce, or it fails in its task. By contrast, an open developmental process like language acquisition provides countless options: within a limited number of syntactical and acoustical constraints, you can push it in practically any direction.

I think it is not innateness itself that scares us: it’s the idea of being limited to only one possibility — and that possibility being our worst nightmare. When people express fears about lawyers using genes or evolved predispositions to get their clients off the hook for heinous behavior, they are expressing the fear that there is no possibility for a better outcome. And I think this can be traced back to the conflation of “evolved” or “genetically predisposed” with “innate” in the sense of closed process invariantism. But it is absolutely essential to dispel this myth.

To say that a behavior or predisposition is “evolved” is to say that we have some genetically inherited apparatus that was selected for over the eons because it produced a particular behavioral phenotype — under certain relevant environmental conditions. In principle, this says nothing about the trait’s flexibility or inevitability: the mechanism that produces it could be narrowly designed to produce that one phenotype, but it could equally well be developmentally plastic. And when it comes to behavior, the developmental process for some small feature could even terminate without causing the collapse of any other major adaptations or systems. It has also long been recognized that many traits induced by specific variations in the environment are just as indelible as those induced by specific variations in genes. As far as we know, we can no more undo the developmental damage and behavioral sequelae of lead poisoning than we can remove some deleterious allele from all the cells in a person’s body.

To complement this idea, I will borrow a lengthy quote from Wallace: even though his discussion focused on the broader topic of causal determinism, I think his analysis can be applied equally well to genetic determinism (or environmental determinism, for that matter).

Whether or not this thesis [determinism] is true would seem to have no bearing on the question of whether or not people possess the powers of reflective self-control. Those powers are matters of broadly psychological capacity or competence, like the power to speak a given language, or to add and subtract large numbers, or to read and play music on the piano. It would be very strange to suppose that determinism per se would deprive people of psychological capacities of this sort — as if the confirmation of determinism would give us reason to conclude that Jane Austen lacked the competence to write in English, or that Maria Callas had no capacity to sing. Similarly, determinism would seem irrelevant to the question of whether people have the general psychological abilities I have referred to as powers of reflective self-control. The only reasons we might have to deny that people possess such general psychological powers would equally be reasons for questioning all intentional explanations of human activity.

(Wallace, 1994, loc 3923, bold added)

In other words, the source of a behavior should have no specific implications for the degree to which it can be controlled: behavioral predispositions guided by genetic variation are not intrinsically any more or less inevitable than those caused by environmental variables.

Language acquisition is, in many ways, prototypical of the cluster of traits associated with innateness: it is species-typical, reliably developing, arguably canalized and genetically entrenched, and inarguably a spectacular Darwinian adaptation. And I think most people would agree that humans have an “innate” capacity to learn language — especially in light of our utter failure to induce, even in adults of other ape species, the sort of language acquisition we observe in five-year-old humans. But language acquisition is also the poster child of developmentally plastic adaptations. It could not possibly be conceived of as an instance of closed process invariantism, because it is one of the most open developmental processes we have ever observed. If there were one feature of our species that met all the non-essential facets of innateness while completely missing the center of the bullseye, it would be language.

And this is possibly the most hopeful message I could have expected to take away from this investigation. In the end, evidence for an adaptation predisposing a certain behavior is nowhere near enough to suggest that it develops through a relatively closed process. We have no reason to think an evolved predisposition to rape (if it existed, which I don’t think it does in humans) wouldn’t be specifically sensitive to environmental shifts, like variations in social norms and mores — and I would argue that it would actually be a perfect candidate for social intervention.

For inspiration, we need look no further than our predisposition to prefer sweeter foods over others. Though most of us have a strong inclination toward sweets, the vast majority of us do not indulge every impulse to eat whatever dessert we happen upon. Children have more difficulty controlling these impulses, but this likely has more to do with children’s generally weaker or less-developed powers of reflective self-control than with the strength of the predisposition itself. Adults, on the other hand, are able to assess the potential harms of indulging their impulses and to inhibit themselves accordingly. Regardless of the origin of such a predisposition, it is obviously and meaningfully within the influence of our powers of reflective self-control.

And whatever degree of effort we exert to protect our own wellbeing from overindulgence in unhealthy foods, I think it would be unconscionable to accept anything less when another person’s safety is at stake. I would hope this goes without saying, but until proven otherwise, such impulses should be presumed resistible enough that the prospect of greater harm to a potential victim than to the agent should always be sufficient to inhibit the behavior. We don’t accept violent impulses as excuses for physical assaults, even if we may have an evolved predisposition to respond violently to certain nonviolent offenses — because (no irony intended) we live in a society.

When someone erroneously describes the existence of evolved mechanisms that produce immoral behavior as a “hard truth” that we must accept, I think I can now confidently rebut them. People would be no more or less responsible for their behavior whether it were ultimately caused by “environmental” factors or “genetic” ones (as if the two could be disentangled in the first place). Saying that a behavior stems from an evolutionary adaptation should no more predispose us to blame or praise the actor than if it stemmed from an early environmental event out of their control, like trauma. Genetic factors are no less luck-dependent, and ultimately no more controllable, than environmental ones. Discussions of genetic and environmental contributions to behavior cannot be used to settle issues of free will and moral responsibility, and we should not look to genetic and evolutionary studies for answers to how and when we should hold people responsible for their actions.


References

Dawkins, R. (1987). The extended phenotype: The long reach of the gene. Oxford University Press. Kindle Edition.

Dennett, D. C. (2015). Elbow room: The varieties of free will worth wanting (new ed.). MIT Press.

Griffiths, P., Machery, E., & Linquist, S. (2009). The vernacular concept of innateness. Mind & Language, 24(5), 605-630.

Harden, K. P. (2021). The genetic lottery: Why DNA matters for social equality. Princeton University Press. Kindle Edition.

Mallon, R., & Weinberg, J. M. (2006). Innateness as closed process invariance. Philosophy of Science, 73(3), 323-344.

Mameli, M. (2008). On innateness: The clutter hypothesis and the cluster hypothesis. The Journal of Philosophy, 105(12), 719-736.

Mameli, M., & Bateson, P. (2006). Innateness and the sciences. Biology and Philosophy, 21, 155-188.

Sober, E. (1980). Evolution, population thinking, and essentialism. Philosophy of Science, 47(3), 350-383.

Wallace, R. J. (1994). Responsibility and the moral sentiments. Harvard University Press. Kindle Edition.

