
Reference Manual on Scientific Evidence: Third Edition (2011)

Chapter: Reference Guide on Neuroscience--Henry T. Greely and Anthony D. Wagner


I. Introduction

Science’s understanding of the human brain is increasing exponentially. We know almost infinitely more than we did 30 years ago; however, we know almost nothing compared with what we are likely to know 30 years from now. The results of advances in understanding human brains—and of the minds they generate—are already beginning to appear in courtrooms. If, as neuroscience indicates, our mental states are produced by physical states of our brain, our increased ability to discern those physical states will have huge implications for the law. Lawyers already are introducing neuroimaging evidence as relevant to questions of individual responsibility, such as claims of insanity or diminished responsibility, either on issues of liability or of sentencing. In May 2010, parties in two cases sought to introduce neuroimaging in court as evidence of honesty; we are also beginning to see efforts to use it to prove that a person is in pain. These and other uses of neuroscience are almost certain to increase with our growing knowledge of the human brain as well as continued technological advances in accurately and precisely measuring the brain. This chapter strives to give judges some background knowledge about neuroscience and the strengths and weaknesses of its possible applications in litigation in order to help them become better prepared for these cases.1

The chapter begins with a brief overview of the structure and function of the human brain. It then describes some of the tools neuroscientists use to understand the brain—tools likely to produce findings that parties will seek to introduce in court. Next, it discusses a number of fundamental issues that must be considered when interpreting neuroscientific findings. Finally, after discussing, in general, the issues raised by neuroscience-based evidence, the chapter concludes by analyzing a few illustrative situations in which neuroscientific evidence is likely to appear in court in the future.

II. The Human Brain

This abbreviated and simplified discussion of the human brain describes the cellular basis of the nervous system, the structure of the brain, and finally our current understanding of how the brain works. More detailed, but still accessible, information about the human brain can be found in academic textbooks and in popular books for general audiences.2

1. The Law and Neuroscience Project, funded by the John D. and Catherine T. MacArthur Foundation, is preparing a book about law and neuroscience for judges, which should be available by 2011. A Primer on Neuroscience (Stephen Morse & Adina Roskies eds., forthcoming 2011). The Project has already published a pamphlet written by neuroscientists for judges, with brief discussions of issues relevant to law and neuroscience. A Judge’s Guide to Neuroscience: A Concise Introduction (M.S. Gazzaniga & J.S. Rakoff eds., 2010). One early book on a broad range of issues in law and neuroscience also deserves mention: Neuroscience and the Law: Brain, Mind, and the Scales of Justice (Brent Garland ed., 2004).



A. Cells

Like most of the human body, the nervous system is made up of cells. Adult humans contain somewhere between 50 trillion and 100 trillion human cells. Each of those cells is both individually alive and part of a larger living organism.

Each cell in the body (with rare exceptions) contains each person’s entire complement of human genes—his or her genome. The genes, found on very long molecules of deoxyribonucleic acid (DNA) that make up a human’s 46 chromosomes, work by leading the cells to make other molecules, notably proteins and ribonucleic acid (RNA). We now believe that there are about 23,000 human genes. Cells are different from each other not because they contain different genes but because they turn on and off different sets of genes. All human cells seem to use the same group of several thousand “housekeeping” genes that run the cell’s basic machinery, but skin cells, kidney cells, and brain cells differ in which other genes they use. Scientists count different numbers of “types” of human cells, with estimates ranging from a few hundred to a few thousand (depending largely on how narrowly or broadly one defines a cell type).

The most important cells in the nervous system are called neurons. Neurons pass messages from one neuron to another in a complex way that appears to be responsible for brain function, conscious or otherwise.

Neurons (Figure 1) come in many sizes, shapes, and subtypes (with their own names), but they generally have three features: a cell body (or “soma”), short extensions called dendrites, and a longer extension called an axon. The cell body contains the nucleus of the cell, which in turn contains the 46 chromosomes with the cell’s DNA. The dendrites and axons both reach out to make connections with other neurons. The dendrites generally receive information from other neurons; the axons send information.

Communication between neurons occurs at areas called synapses (Figure 2), where two neurons almost meet. At a synapse, the two neurons will come within less than a micrometer (a millionth of a meter) of each other, with the presynaptic side, on the axon, separated from the postsynaptic side, on the dendrite, by a gap called the synaptic cleft.

2. The Society for Neuroscience, the very large scholarly society that covers a wide range of brain science, has published a brief and useful primer about the human brain called Brain Facts. The most recent edition, published in 2008, is available free at www.sfn.org/index.aspx?pagename=brainfacts.

Some particularly interesting books about various aspects of the brain written for a popular audience include Oliver W. Sacks, The Man Who Mistook His Wife for a Hat and Other Clinical Tales (1990); Antonio R. Damasio, Descartes’ Error: Emotion, Reason, and the Human Brain (1994); Daniel L. Schacter, Searching for Memory: The Brain, the Mind, and the Past (1996); Joseph E. LeDoux, The Emotional Brain: The Mysterious Underpinnings of Emotional Life (1996); Christopher D. Frith, Making Up the Mind: How the Brain Creates Our Mental World (2007); and Sandra Aamodt & Sam Wang, Welcome to Your Brain: Why You Lose Your Car Keys But Never Forget How to Drive and Other Puzzles of Everyday Life (2008).


Figure 1.   Schematic of the typical structure of a neuron.


Source: Quasar Jarosz at en.wikipedia.

At synapses, when the axon (on the presynaptic side) “fires” (becomes active), it releases molecules, known as neurotransmitters, into the synaptic cleft. Some of those molecules are picked up by special receptors on the dendrite on the postsynaptic side of the cleft. More than 100 different neurotransmitters have been identified; among the best known are dopamine, serotonin, glutamate, and acetylcholine.

At the postsynaptic side of the cleft, neurotransmitters binding to the receptors can have a wide range of effects. Sometimes they cause the receiving (postsynaptic) neuron to “fire,” sometimes they suppress (inhibit) the postsynaptic neuron from firing, and sometimes they seem to do neither. The response of the receiving neuron is a complicated summation of the various messages it receives from multiple neurons that converge, through synapses, on its dendrites.

A neuron that does fire does so by generating an electrical current that flows down (away from the cell body) the length of its axon. We normally think of electrical current as flowing in things like copper wiring. In that case, free electrons move down the wire. The electrical currents of neurons are more complicated. Molecules with a positive or negative electrical charge (ions) move through the neuron’s membrane and create differences in the electrical charge between the inside and outside of the neuron, with the current traveling along the axon, rather like a fire brigade passing buckets of water in only one direction down the line.


Figure 2.   Synapse. Communication between neurons occurs at the synapse, where the sending (presynaptic) and receiving (postsynaptic) neurons meet. When the presynaptic neuron fires, it releases neurotransmitters into the synaptic cleft, which bind to receptors on the postsynaptic neuron.


Source: Carlson, Neil R., Foundations of Physiological Psychology (with Neuroscience Animations and Student Study Guide CD-ROM), 6th ed. © 2005. Printed and electronically reproduced by permission of Pearson Education, Inc., Upper Saddle River, New Jersey.


Firing occurs in milliseconds. This process of moving ions in and out of the cell membrane requires that the cell use large amounts of energy. When the current reaches the end of the axon, it may or may not cause the axon to release neurotransmitters into the synaptic cleft. This complicated part-electrical, part-chemical system is how information passes from one neuron to another.

The axons of human neurons are all microscopically narrow, but they vary enormously in length. Some are micrometers long; others, such as neurons running from the base of the spinal cord to the toes, are several feet long. Longer axons tend to be coated with a fatty substance called myelin. Myelin helps insulate the axon and thus increases the strength and efficiency of the electrical signal, much like the insulation wrapped around a copper wire. (The destruction of this myelin sheathing is the cause of multiple sclerosis.) Axons coated with myelin appear white; thus areas of the nervous system that have many myelin-coated axons are referred to as “white matter.” Cell bodies, by contrast, look gray, and so areas with many cell bodies and relatively few axons make up our “gray matter.” White matter can roughly be thought of as the wiring that connects gray matter to the rest of the body or to other areas of gray matter.

What we call nerves are really bundles of neurons. For example, we all have nerves that run down our arms to our fingers. Some of those nerves consist of neurons that pass messages from the fingers, up the arm, to other neurons in the spinal cord that then pass the messages on to the brain, where they are analyzed and experienced. This is how we feel things with our fingers. Other nerves are bundles of neurons that pass messages from the brain through the spinal cord to nerves that run down the arms to the fingers, telling them when and how to move.

Neurons can connect with other neurons or with other kinds of cells. Neurons that control body movements ultimately connect to muscle cells—these are called motor neurons. Neurons that feed information into the brain start with specialized sensory cells (i.e., cells specialized for detecting different types of stimuli—light, touch, heat, pain, and more) that fire in response to the appropriate stimulus. Their firings ultimately lead, directly or through other neurons, into the brain. These are sensory neurons. These neurons send information only in one direction—motor neurons ultimately from the brain, sensory neurons to the brain. The paralysis caused by, for example, severe damage to the spinal cord both prevents the legs from receiving messages to move that would come from the brain through the motor neurons and keeps the brain from receiving messages from sensory neurons in the legs about what the legs are experiencing. The break in the spinal column prevents the messages from getting through, just as a break in a local telephone line will keep two parties from connecting. (There are, unfortunately, not yet any human equivalents to wireless service.)

Estimates of the number of cells in a human brain vary widely, from a few hundred billion to several trillion. These cells include those that make up blood vessels and various connective tissues in the brain, but most of them are specialized brain cells. About 80 billion to 100 billion of these brain cells are neurons.


The other cells (and the source of most of the uncertainty about the number of cells) are principally another class of cells referred to generally as glial cells. Glial cells play many important roles in the brain, including, for example, producing and maintaining the myelin sheaths that insulate axons and serving as a special immune system for the brain. The full importance of glial cells is still being discovered; emerging data suggest that they may play a larger role in mental processes than merely serving as “support staff.” At this point, however, we concentrate on neurons, the brain structures they form, and how those structures work.

B. Brain Structure

Anatomists refer to the brain, the spinal cord, and a few other nerves directly connecting to the brain as the central nervous system. All the other nerves are part of the peripheral nervous system. This reference guide does not focus on the peripheral nervous system, despite its importance in, for example, assessing some aspects of personal injuries. We also, less fairly, ignore the central nervous system other than the brain, even though the spinal cord, in particular, plays an important role in modulating messages going into and coming out of the brain.

The average adult human brain (Figure 3) weighs about 3 pounds and fills a volume of about 1300 cubic centimeters. If liquid, it would nearly fill two standard wine bottles. Living brains have a consistency roughly like that of gelatin. Despite the softness of brains, they are made up of regular shapes and structures that are generally consistent from person to person.

Figure 3.   Lateral (left) and mid-sagittal (right) views of the human brain.


Source: Courtesy of Anthony Wagner.


Just as every nondamaged or nondeformed human face has two eyes, two ears, one nose, and one mouth with standard numbers of various kinds of teeth, every normal brain has the same set of identifiable structures, both large and small.

Neuroscientists have long worked to describe and define particular regions of the brain. In some ways this is like describing parcels of land in property documents, and, like property descriptions, several different methods are used. At the largest scale, the brain is often divided into three parts: the brain stem, the cerebellum, and the cerebrum.3

The brain stem is found near the bottom of the brain and is, in some ways, effectively an extension of the spinal cord. Its various parts play crucial roles in controlling the body’s autonomic functioning, such as heart rate and digestion. The brain stem also contains important regions that regulate processing in the cerebrum. For example, the substantia nigra and ventral tegmental area in the brain stem consist of critical neurons that generate the neurotransmitter dopamine. While the substantia nigra is crucial for motor control, the ventral tegmental area is important for learning about rewards. The loss of neurons in the substantia nigra is at the core of the movement problems of Parkinson’s disease.

The cerebellum, which is about the size and shape of a squashed tennis ball, is tucked away in the back of the skull. It plays a major role in fine motor control and seems to keep a library of learned motor skills, such as riding a bicycle. It was long thought that damage to the cerebellum had little to no effect on a person’s personality or cognitive abilities, but resulted primarily in unsteady gait, difficulty in making precise movements, and problems in learning movements. More recent studies of patients with cerebellar damage and functional brain imaging studies of healthy individuals indicate that the cerebellum also plays a role in more cognitive functions, including supporting aspects of working memory, attention, and language.

The cerebrum is the largest part of the human brain, making up about 85% of its volume. The cerebrum is found at the front, top, and much of the back of the human brain. The human brain differs from the brains of other mammals mainly because it has a vastly enlarged cerebrum.

There are several different ways to identify parts of, or locations in, the cerebrum. First, the cerebrum is divided into two hemispheres—the famous left and right brain. These two hemispheres are connected by tracts of white matter—of axons—most notably the large connection called the corpus callosum. Oddly, the right hemisphere of the brain generally receives messages from and controls the movements of the left side of the body, while the left hemisphere receives messages from and controls the movements of the right side of the body.

Each hemisphere of the cerebrum is divided into four lobes (Figure 4): the frontal lobe in the front of the cerebrum (behind the forehead), the parietal lobe at the top and toward the back, the temporal lobe on the side (just behind and above the ears), and the occipital lobe at the back.

3. The brain also is sometimes divided into the forebrain, midbrain, and hindbrain. This classification is useful for some purposes, particularly in describing the history and development of the vertebrate brain, but it does not entirely correspond to the categorization of cerebrum, brain stem, and cerebellum, and it is not used in this reference guide.


Figure 4.   Lobes of a hemisphere. Each hemisphere of the brain consists of four lobes—the frontal, parietal, temporal, and occipital lobes.


Source: http://commons.wikimedia.org/wiki/File:Gray728.svg. This image is in the public domain because its copyright has expired. This applies worldwide.

Thus, one could describe a particular region as lying in the left frontal lobe—the frontal lobe of the left hemisphere.

The surface of the cerebrum consists of the cortex, which is a sheet of gray matter a few millimeters thick. The cortex is not a smooth sheet in humans, but rather is heavily folded with valleys, called sulci (“sulcus” in the singular), and bulges, called gyri (“gyrus”). The sulci and gyri have their own names, and so a location can be described as in the inferior frontal gyrus in the left frontal lobe. These folds allow the surface area of the cortex, as well as the total volume of the cortex, to be much greater than in other mammals, while still allowing it to fit inside our skulls, similar to the way the many folds of a car’s radiator give it a very large surface area (for radiating away heat) in a relatively small space.

The cerebral cortex is extraordinarily large in humans compared with other species and is clearly centrally involved in much of what makes our brains special, but the cerebrum contains many other important subcortical structures that we share with other vertebrates. Some of the more important areas include the thalamus, the hypothalamus, the basal ganglia, and the amygdala. These areas all connect widely, with the cortex, with each other, and with other parts of the brain to form complex networks.

The functions of all these areas are many, complex, and not fully understood, but some facts are known. The thalamus seems to act as a main relay that carries information to and from the cerebral cortex, particularly for vision, hearing, touch, and proprioception (one’s sense of the position of the parts of one’s body). It also is, importantly, involved in sleep, wakefulness, and consciousness.


The hypothalamus has a wide range of functions, including the regulation of body temperature, hunger, thirst, and fatigue. The basal ganglia are a group of regions in the brain that are involved in motor control and learning, among other things. They seem to be strongly involved in selecting movements, as well as in learning through reinforcement (as a result of rewards). The amygdala appears to be important in emotional processing, including how we attach emotional significance to particular stimuli.

In addition, many other parts of the brain, in the cortex or elsewhere, have their own special names, usually with Latin or Greek roots that may or may not seem descriptive today. The hippocampus, for example, is named for the Greek word for seahorse. For most of us, these names will have no obvious rhyme or reason, but merely must be learned as particular structures in the brain—the superior colliculus, the tegmentum, the globus pallidus, the substantia nigra, the cingulate cortex, and more. All of these structures come in pairs, with one in the left hemisphere and one in the right hemisphere; only the pineal gland is unpaired. Brain atlases include scores of names for particular structures or regions in the brain and detailed information about the structures or regions.

Some of these smaller structures may have special importance to human behavior. The nucleus accumbens, for example, is a small subcortical region in each hemisphere of the cerebrum that appears important for reward processing and motivation. In experiments with rats that received stimulation of this region in return for pressing a lever, the rats would press the lever almost to the exclusion of any other behavior, including eating. The nucleus accumbens in humans appears linked to appetitive motivation, responding in anticipation of primary rewards (such as pleasure from food and sex) and secondary rewards (such as money). Through interactions with the orbital frontal cortex and dopamine-generating neurons in the midbrain (including the ventral tegmental area), the nucleus accumbens is considered part of a “reward network.” With a hypothesized role in addictive behavior and in reward computations, more broadly, this putative reward network is a topic of considerable ongoing research.

All of these various locations, whether defined broadly by area or by the names of specific structures, can be further subdivided using directions: front and back, up and down, toward the middle, or toward the sides. Unfortunately, the directions often are not expressed in a straightforward manner, and several different terminological conventions exist. Locations toward the front or back of the brain can be referred to as either anterior or posterior or as rostral or caudal (literally, toward the nose, or beak, or the tail). Locations toward the bottom or top of the brain are termed inferior or superior or, alternatively, ventral or dorsal (toward the stomach or toward the back). A location toward the middle of the brain is called medial; one toward the side is called lateral.


Thus, different locations could be described, for example, as in the left anterior cingulate cortex, in the dorsal medial (or sometimes dorsomedial) prefrontal cortex, or in the posterior hypothalamus.

Finally, one other method often is used, a method created by Korbinian Brodmann in 1909. Brodmann, a neuroanatomist, divided the brain into about 50 different areas or regions (Figure 5). Each region was defined on the basis of the kinds of neurons found there and how those neurons are organized.

Figure 5.   Brodmann’s areas. Brodmann divided the cortex into different areas based on the cell types and how they were organized.


Source: Prof. Mark Dubin, University of Colorado.


A location described by Brodmann area may or may not correspond closely with a structural location. Other organizational schemes exist, but Brodmann’s remains the most widely used to describe the approximate locations of findings in modern human brain imaging studies.

C. Some Aspects of How the Brain Works

Most of neuroscience is dedicated to finding out how the brain works, but although much has been learned, considerably more remains unknown. We could use many different ways to describe what is known about how the brain works. This section discusses a few important aspects of brain function and makes several general points about the localization and distribution of functions, as well as brain plasticity, before commenting on the effects of hormones and other chemical influences on the brain.

Some brain functions are localized in, or especially dependent on, particular regions of the brain. This has been known for many years as a result of studies of people who, through traumatic injury, stroke, or cancer, have lost, or lost the use of, particular regions of their brains. For example, in the 1860s, French anatomist Paul Broca discovered through autopsies of patients that damage to a region in the left inferior frontal lobe (now known as Broca’s area) caused an inability to speak. It is now known that some functions cannot normally be performed when particular brain areas are damaged or missing. The visual cortex, located at the back of the brain in the occipital lobes, is as necessary for vision as the eyes are; the hippocampus is necessary for the creation of many kinds of memory; and the motor cortex is necessary for voluntary movements. The motor cortex and the parallel somatosensory cortex, which is essential for processing sensory information such as the sense of touch from the body, are further subdivided, with particular regions necessary for causing motion or sensing feelings from the legs, arms, fingers, face, and so on. Other brain regions also will be involved in these actions or sensations, but these regions are necessary to them.

At the same time, the fact that a region is necessary to a particular class of sensations, behaviors, or cognition does not mean either that it is not involved in other brain functions or that other brain regions do not also contribute to these particular abilities. The amygdala, for example, is involved in our feelings of fear, but it is also involved broadly in emotional reactions, both positive and negative. It also modulates learning, memory, and even sensory perception. Although some functions are localized, others are widely distributed. For example, the visual cortex is essential to vision, but actual visual perception involves many parts of the brain in addition to the occipital lobes. Memories appear to be stored over much of the cortex. Networks of brain regions participate in many of these functions.

For example, if you touch something very hot with your left index finger, your spinal cord, through a reflex loop, will cause you to pull your finger back very quickly.


Then the part of your right somatosensory cortex devoted to the index finger will be involved in receiving and initially interpreting the sensation. Other areas of your brain will recognize the stimulus as painful, your motor regions will be involved in waving your hand back and forth or bringing your finger to your mouth, widespread parts of your cortex may lead to your remembering other instances of burning yourself, and your hippocampus may play a role in making a new long-term memory of this incident. There is no brain region “for” burning your finger; many regions, both specific and general, contribute to the brain’s response.

In addition, brains are at least somewhat “plastic” or changeable on both small and large scales. Anyone who can see has a working visual cortex, and it is always located in the back of the brain (in the occipital lobe), but its exact borders will vary slightly from person to person. In other cases, the brain may adjust and change in response to a person’s behavior or changes in that person’s anatomy. For example, a right-handed violinist may develop an enlarged brain region for controlling the fingers of the left hand, used in fingering the violin. If a person loses an arm to amputation, the parts of the motor and somatosensory cortices that had dealt with that arm may be “taken over” by other body parts. In some cases, this brain plasticity can be extreme. A young child who has lost an entire hemisphere of his or her brain may grow up to have normal or nearly normal functionality as the remaining hemisphere takes on the tasks of the missing hemisphere. Unfortunately, the possibilities of this kind of extreme plasticity do diminish with age, but rehabilitation after stroke in adults sometimes does show changes in the brain functions undertaken by particular brain regions.

The picture of the brain as a set of interconnected neurons that fire in networks or patterns in response to stimuli is useful but not complete. In addition to neuron firings, other factors affect how the brain works, particularly chemical factors.

Some of these are hormones, generated by the body either inside or outside the brain. They can affect how the brain functions, as well as how it develops. Sex hormones such as estrogen and testosterone can have both short-term and long-term effects on the brain. So can other hormones, such as cortisol, associated with stress, and oxytocin, associated with, among other things, trust and bonding. Endorphins, chemicals secreted by the pituitary gland in the brain, are associated with pain relief and a sense of well-being. Still other chemicals, brought in from outside the body, can have major effects on the brain, both in the short term and the long term. Examples include alcohol, caffeine, nicotine, morphine, and cocaine. These can trigger very specific brain reactions or can have broad effects.


III. Some Common Neuroscience Techniques

Neuroscientists use many techniques to study the brain. Some of them have been used for centuries, such as autopsies and the observation of patients with brain damage. Some, such as the intentional destruction of parts of the brain, can be used ethically only in research on nonhuman animals. Of course, research with nonhuman animals, although often helpful in understanding human brains, is of less value when examining behaviors that are uniquely developed among humans. The current revolution in neuroscience is largely the result of a revolution in the tools available to neuroscientists, as new methods have been developed to image and to intervene in living brains. These methods, particularly the imaging methods that allow more precise measurements of human brain structure and function in living people, are giving rise to increasing efforts to introduce neuroscientific evidence in court.

This section of this chapter focuses on several kinds of neuroimaging—computerized axial tomography (CAT) scans, positron emission tomography (PET) scans, single photon emission computed tomography (SPECT) scans, and magnetic resonance imaging (MRI), as well as an older method, electroencephalography (EEG), and its close relative, magnetoencephalography (MEG). Some of these methods show the structure of the brain, others show the brain’s functioning, and some do both. These are not the only important neuroscience techniques; several others are discussed briefly at the end of this section. Genetic analysis provides yet another technique for increasing our understanding of human brains and behaviors, but this chapter does not deal with the possible applications of human genetics to understanding behavior.

A. Neuroimaging

Traditional imaging technologies have not been very helpful in studying the brain. X-ray images are the shadows cast by dense objects. Not only is the brain surrounded by our very dense skulls, but there are no dense objects inside the brain to cast these shadows. Although a few features of the brain or its blood vessels could be seen through methods that involved the injection of air into some of the spaces in the brain or of contrast media into the blood, these provided limited information. The opportunity to see inside a living brain itself only goes back to about the 1970s, with the development of CAT scans. This ability has since exploded with the development of several new techniques, three of which, along with CAT, are discussed on the following pages.


1. CAT scans

The CAT scan is a multidimensional, computer-assisted X-ray machine. Instead of taking one X ray from a fixed location, in a CAT scan both the X-ray source and (180 degrees opposite the source) the X-ray detectors rotate around the person being scanned. Rather than exposing negatives to make “pictures” of dense objects, as in traditional X rays, the X-ray detectors produce data for computer analysis. A complete modern CAT scan includes data sufficient to reconstruct the scanned object in three dimensions. Computerized algorithms can then be used to produce an image of any particular slice through the object. The multiple angles and computer analysis make it possible to pick out the relatively small density differences within the brain that traditional X-ray technology could not distinguish and to use them to produce images of the soft tissue (Figure 6).
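For readers who want a concrete sense of that reconstruction step, the short Python sketch below builds a toy image, simulates projections of it from many angles, and then "smears" those projections back across the image plane. The example is illustrative only: the phantom image, the angles, and the function names are invented for this sketch, and real CT reconstruction uses more sophisticated filtered back-projection or iterative algorithms rather than the unfiltered version shown here.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles_deg):
    """Simulate attenuation profiles (a sinogram): rotate the image and sum along columns."""
    return np.array([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def back_project(sinogram, angles_deg, size):
    """Smear each profile back across the plane and sum (unfiltered back-projection)."""
    recon = np.zeros((size, size))
    for profile, a in zip(sinogram, angles_deg):
        smear = np.tile(profile, (size, 1))          # every row holds the same profile
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon

size = 64
phantom = np.zeros((size, size))
phantom[20:30, 35:45] = 1.0                          # a small dense region ("lesion")
angles = np.arange(0, 180, 2)                        # one projection every 2 degrees

recon = back_project(forward_project(phantom, angles), angles, size)
print("densest reconstructed voxel lies at:", np.unravel_index(recon.argmax(), recon.shape))
# The dense spot reappears near rows 20-30, columns 35-45, though blurred; filtering
# each profile before back-projection is what sharpens a real CT image.
```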

Figure 6.   CAT scan depicting axial sections of the human brain. The ventral-most (bottom) sections are at the upper left and the dorsal-most (top) sections are at the lower right.

img-780

Source: http://en.wikipedia.org/wiki/File:CT_of_brain_of_Mikael_H%C3%A4ggstr%C3%B6m_large.png. Image in the public domain.

The CAT scan provides a structural image of the brain. It is useful for showing some kinds of structural abnormalities, but it provides no direct information about the brain’s functioning.


A CAT scan brain image is not as precise as the image produced from an MRI, but because the procedure is both quick and (relatively) inexpensive, CAT scanners are common in hospitals. Medically, brain CAT scans are used mainly to look for bleeding or swelling inside the brain, although they also will record sizeable tumors or other large structural abnormalities. For neuroscience, the great advantage of the CAT scan was its ability, for the first time, to reveal some details inside the skull, an ability that has been largely superseded for research by MRI. CAT scans have been used in courts to argue that structural changes in the brain, shown on the CAT scan, are evidence of insanity or other mental impairments. Perhaps their most notable use was in 1982 in the trial of John Hinckley for the attempted assassination of President Ronald Reagan. A CAT scan of Hinckley’s brain that showed widened sulci (the “valleys” in the surface of the brain) was introduced into evidence to show that Hinckley suffered from organic brain damage in the form of shrinkage of his brain.4

2. PET scans and SPECT scans

Traditional X-ray machines and their more sophisticated descendant, the CAT scan, project X rays through the skull and create images based on how much of the X rays are blocked or absorbed. PET scans and SPECT scans operate very differently. In these methods, a substance that emits radiation is introduced into the body. That radiation then is detected from outside the body in a way that can determine the location of the radiation source. These scans generally are used not for determining the brain’s structure but for understanding how it is functioning. They are, however, particularly good at measuring one aspect of brain structure—the density of particular receptors, such as those for dopamine, at synapses in some areas of the brain, such as the frontal lobes.

Radioactive decay of atoms can take several forms, producing alpha, beta, or gamma radiation. PET scanners take advantage of isotopes of atoms that decay by giving off positive beta radiation. Beta decay usually involves the emission of an electron; positive beta decay involves the emission of a positron, the positively charged antimatter equivalent of an electron. When positrons (antimatter) meet electrons (matter), the two particles are annihilated and converted into two photons of gamma radiation with a known energy (511,000 electron volts) that follow directly opposite paths from the site of the annihilation. Inside the body, the collision between the positron and electron and the consequent production of the gamma radiation photons takes place within a short distance (a millimeter or two) of the site of the initial radioactive decay that produced the positron.

4. The effects of this evidence on the verdict are unclear. See Lincoln Caplan, The Insanity Defense and the Trial of John W. Hinckley, Jr. (1984) for a discussion of the case and its consequences for the law.


PET scans, therefore, start with the introduction into a person’s body of a radioactive tracer that decays by giving off a positron. One common tracer is fluorodeoxyglucose (FDG), a molecule that is almost identical to the simple sugar glucose, except that one of the oxygen atoms in glucose is replaced by an atom of fluorine-18, an isotope of the element fluorine with nine protons and nine neutrons. Fluorine normally found in nature is fluorine-19, with 9 protons and 10 neutrons, and is stable. Fluorine-18 is very unstable and decays quickly through positive beta decay, with about half of any given quantity decaying every 110 minutes (its half-life). The body treats FDG as though it were glucose, and so the FDG is concentrated where the body needs the energy supplied by glucose. A major clinical use of PET scans derives from the fact that tumor cells use energy, and hence glucose, at much higher rates than normal cells.
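To make the arithmetic of a 110-minute half-life concrete, the short Python sketch below applies the standard exponential-decay formula; the time points are chosen arbitrarily for illustration and do not come from any particular scan protocol.

```python
# Exponential decay of fluorine-18, which has a half-life of roughly 110 minutes.
# The fraction of the original tracer remaining after t minutes is 0.5 ** (t / 110).

HALF_LIFE_MINUTES = 110.0

def fraction_remaining(minutes: float) -> float:
    """Fraction of the fluorine-18 atoms that have not yet decayed."""
    return 0.5 ** (minutes / HALF_LIFE_MINUTES)

for t in (0, 55, 110, 220, 440):
    print(f"after {t:3d} minutes, about {fraction_remaining(t):.0%} of the tracer remains")
# After 440 minutes (four half-lives) only about 6% of the tracer remains,
# which is why the tracer must be produced shortly before it is used.
```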

After giving the FDG time to become concentrated in the body, which usually takes about an hour, the person is put inside the scanner itself. There, the person is entirely surrounded by a very sensitive radiation detector, tuned to respond to gamma radiation of the energy produced by annihilated positrons. When two “hits” are detected by two sensors at about the same time, the source is known to be located on a line connecting the two. Very small differences in the timing of when the radiation is detected can help determine where along that line the annihilation took place. In this way, as more gamma radiation from the decaying FDG is detected, the general location of the FDG within the body can be determined and, as a result, tissue that is using a lot of glucose, such as a tumor, can be located.
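The timing logic in the preceding paragraph can be illustrated with a one-dimensional sketch. Everything in it is invented for illustration (the detector layout, the 200-picosecond timing difference, and the function name); real scanners combine many coincidence events statistically, because individual timing measurements are much noisier than this.

```python
# One-dimensional sketch: two detectors face each other, and an annihilation event
# somewhere on the line between them sends a gamma photon toward each detector.
# The difference in arrival times reveals how far the event is from the midpoint.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def offset_from_midpoint(t_left_seconds: float, t_right_seconds: float) -> float:
    """Distance of the event from the midpoint of the detector pair, in meters.

    A positive result means the event was closer to the left detector,
    whose photon therefore arrived earlier.
    """
    return SPEED_OF_LIGHT * (t_right_seconds - t_left_seconds) / 2.0

# Suppose the right detector registers its photon 200 picoseconds after the left one.
distance = offset_from_midpoint(0.0, 200e-12)
print(f"the annihilation occurred about {distance * 100:.0f} cm left of the midpoint")
```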

In neuroscience research, PET scans also can be taken using different molecules that bind more specifically to particular tissues or cells. Some of these more specific ligands use fluorine-18, but others use a different radioactive tracer that also decays by emitting a positron—oxygen-15. This can be used to determine what parts of the brain are using more or less oxygen. Oxygen-15, however, has a much shorter half-life (2 minutes) and so is more difficult and expensive to use than FDG. Similarly, carbon-11, with a half-life of 20 minutes, also can be used. Carbon-11 atoms can be introduced into various molecules that bind to important receptors in the brain, such as receptors for dopamine, serotonin, or opioids. This allows the study of the distribution and function of these receptors, both in healthy people and in people with various mental illnesses or neurological diseases.

The result of a PET scan is a record of the locations of positron decay events in the brain. Computer visualization tools can then create cross-sectional images of the brain, showing higher and lower rates of decay, with differences in magnitude typically depicted through the use of different colors (Figure 7).

PET scans are excellent for showing the location of various receptors in normal and abnormal brains. PET scans are also very good for showing areas of different glucose use and, hence, of different levels of metabolism. This can be very useful, for example, in detecting some kinds of brain damage, such as the damage that occurs with Alzheimer’s disease, where certain regions of the brain become abnormally inactive, or in brain regions that have been damaged by a stroke.


Figure 7.   PET scan depicting an axial section of the human brain.


Source: http://en.wikipedia.org/wiki/Positron_emission_tomography. Image in the public domain.

In addition, the comparison (subtraction) of two PET scan measurements, one scan when a person is engaged in a task that is thought to require particular brain functions and a second control (or baseline) scan that is not thought to require these functions, allows researchers indirectly to measure brain function. PET scans were initially used in this way in research to show what areas of the brain were used when people experienced various stimuli or performed particular tasks. PET has been substantially superseded for this purpose by functional MRI, which is less expensive, does not involve radiation exposure, provides better spatial resolution, and allows a longer period of testing.

SPECT scans are similar to PET scans. Each can produce a three-dimensional model of the brain and display images of any cross section through the brain. Like PET scans, they require the injection of a radioactive tracer material; unlike PET scans, the radioactive tracer in SPECT directly emits gamma radiation rather than emitting positrons. These kinds of tracers are more stable, more accessible, and much cheaper than the positron-emitting tracers needed for PET scans. With a PET scan, the gamma detector entirely surrounds the person; with a SPECT scan, one to three gamma detectors are rotated around the body over about 15 to 20 minutes.


As with PET scans, the SPECT tracers can be used to measure brain metabolism or to attach to specific molecular receptors in the brain. The spatial resolution of a SPECT scan, however, is poorer than with a PET scan, with an uncertainty of about 1 cm.

Both PET and SPECT scans are most useful if coupled with good structural images. Contemporary PET and SPECT scanners often include a simultaneous CAT scan; there is some experimental work aimed at providing simultaneous PET and MRI scans.

3. MRI—structural and functional

MRI was developed in the 1970s, first came into wide use in the 1980s, and is currently the dominant neuroimaging technology for producing detailed images of the brain’s structure and for measuring aspects of brain function. MRI operates on completely different principles than either CAT scans or PET or SPECT scans; it does not rely on X rays passing through the brain or on the decay of radioactive tracer molecules inside the brain. Rather, MRI’s workings involve more complicated physics. This section discusses the general characteristics of MRI and then focuses on structural MRI, diffusion tensor imaging, and finally, functional MRI.

The power of an MRI scanner is measured by the strength of its magnetic field, measured in units called tesla (T). The magnetic field of a small bar magnet is about 0.01 T. The strength of the Earth’s magnetic field is about 0.00005 T. The MRI machines used for clinical purposes have magnetic fields of between 0.2 T and 3.0 T, with 1.5 T and 3.0 T systems being the most commonly used today. MRI machines for human research purposes have reached 9.4 T. In general, the stronger the magnetic field, the better the image, although higher fields also can create their own measurement difficulties, especially when imaging brain function. MRI machines achieve these high magnetic fields by using superconducting electromagnets, cooled with liquid helium to a temperature about 4 degrees Celsius above absolute zero. For this and other reasons, MRI systems are complicated, with higher initial and continuing maintenance costs compared with some other methods for functional imaging (e.g., electroencephalography; see infra Section III.B).

In most MRI systems (Figure 8), the subject, on an examination table, slides into a cylindrical opening in the machine so that the part of the body to be imaged is in the middle of the magnet. Depending on the kind of imaging performed, the examination or experiment can take from about 30 minutes to more than 2 hours; throughout the scanning process the subject needs to stay as motionless as possible to avoid corrupting the images. The main sensations for the subject are the loud thumping and buzzing noises made by the machine, as well as the machine’s vibration.

MRI examinations appear to involve minimal risk. Unlike the other neuroimaging technologies discussed above, MRI does not involve any high-energy radiation.


Figure 8.   MRI machine. Magnetic resonance imaging systems are used to acquire both structural and functional images of the brain.


Source: Courtesy of Anthony Wagner.

The magnetic field seems to be harmless, at least as long as magnetizable objects are kept away from it. MRI subjects need to remove most metal objects; people with some kinds of implanted metallic devices, with tattoos with metal in their ink, or with fragments of ferrous metal anywhere in their bodies cannot be scanned because of the dangerous effects of the field on those bits of metal.

When the subject is positioned in the MRI scanner, the scanner’s powerful main magnetic field causes the nuclei of atoms (usually the hydrogen nuclei of the body’s water molecules) to align with the direction of that field. Using a brief electromagnetic pulse, these aligned atoms are then “flipped” out of alignment with the main magnetic field, and, after the pulse stops, the nuclei rapidly realign with the strong main magnetic field.


Because the nuclei spin (like a top), they create an oscillating magnetic field that is measured by a receiver coil. During structural imaging, the strength of the signal generated partially depends on the relative density of hydrogen nuclei, which varies from point to point in the body according to the density of water. In this manner, MRI scanners can generate images of the body’s anatomy or of other scanned objects. Because an MRI scan can effectively distinguish between similar soft tissues, MRI can provide very-high-resolution images of the brain’s anatomy, which is, after all, made up of soft tissue.

Structural MRI scans produce very detailed images of the brain (Figure 9). They can be used to spot abnormalities, large and small, as well as to see normal variation in the size and shape of brain features. Structural MRI can be used, for example, to see how brain features change as a person ages. Previously, getting that kind of detailed information about a brain required an autopsy or, at a minimum, extensive neurosurgery. This ability makes structural MRI both an important clinical tool and a very useful technique for research that tries to correlate human differences, normal and abnormal, with differences in brain structure, as well as for research that seeks to understand brain development.

Another structural imaging application of brain MRI has become increasingly prevalent over the past decade: diffusion tensor imaging (DTI). As noted above, neuronal tissue in the brain can be divided roughly into gray matter (the bodies of neurons) and white matter (neuronal axons that transmit signals over distance). DTI uses MRI to see what direction water diffuses through brain tissue. Tracts of white matter are made up of bundles of axons coated with fatty myelin. Water will diffuse through that white matter along the direction of the axons and not, generally, across them. This method can be used, therefore, to trace the location of these bundles of white matter and hence the long-distance connections between different parts of the brain. Abnormal patterns of these connections may be associated with various conditions, from Alzheimer’s disease to dyslexia, some of which may have legal implications.
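A minimal sketch of the underlying computation may help. In DTI, each small volume of tissue is summarized by a 3-by-3 "diffusion tensor"; the direction in which water diffuses most freely is the tensor's principal eigenvector, and the degree to which diffusion is directional is often summarized as "fractional anisotropy." The tensor values in the Python sketch below are invented for illustration and do not come from any real scan.

```python
import numpy as np

# Hypothetical diffusion tensor for a single white-matter voxel (arbitrary units).
# Diffusion is much easier along one axis than across it, as it would be along a
# coherent bundle of myelinated axons.
D = np.array([[1.7, 0.1, 0.0],
              [0.1, 0.3, 0.0],
              [0.0, 0.0, 0.3]])

eigenvalues, eigenvectors = np.linalg.eigh(D)          # decomposition of a symmetric matrix
principal_direction = eigenvectors[:, np.argmax(eigenvalues)]

# Fractional anisotropy ranges from 0 (diffusion equal in all directions)
# to values near 1 (diffusion strongly aligned with one direction).
mean_diffusivity = eigenvalues.mean()
fa = np.sqrt(1.5 * np.sum((eigenvalues - mean_diffusivity) ** 2) / np.sum(eigenvalues ** 2))

print("principal diffusion direction:", np.round(principal_direction, 2))
print("fractional anisotropy:", round(float(fa), 2))   # roughly 0.8 for these made-up values
```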

Functional MRI (fMRI) is perhaps the most exciting use of MRI in neuroscience for understanding brain function. This technique shows what regions of the brain are more or less active in response to the performance of particular tasks or the presentation of particular stimuli. It does not measure brain activity (the firing of neurons) directly but, instead, looks at how blood flow changes in response to brain activity and uses those changes, through the so-called BOLD response (the blood-oxygen-level dependent response), to allow the researcher to infer patterns of brain activity.

Structural MRI generally creates its images through detecting the density of hydrogen atoms in the subject and flipping them with radio pulses. For fMRI, the scanner detects changes in the ratio of oxygenated hemoglobin (oxyhemoglobin) and deoxygenated hemoglobin (deoxyhemoglobin) in particular locations in the brain. Hemoglobin is the protein in red blood cells that carries oxygen from the lungs to the body. On the basis of metabolic demands, hemoglobin molecules supply oxygen for the body’s needs.


Figure 9.   Brain MRI scan depicting an axial (upper), coronal (lower left), and sagittal (lower right) image of the human brain.


Source: Courtesy of Anthony Wagner.


Accordingly, “fresher” blood will have a higher ratio of oxyhemoglobin to deoxyhemoglobin than more “used” blood. Importantly, because deoxyhemoglobin (which is found at a higher level in “used” blood) causes the fMRI signal to decay, a higher ratio of oxyhemoglobin to deoxyhemoglobin will produce a stronger fMRI signal.

Neural activity is energy intensive for neurons, and neurons do not contain any significant reserves of oxygen or glucose. Therefore, the brain’s blood vessels respond quickly to increases in activity in any one region of the brain by sending more fresh blood to that area. This is the basis of the BOLD response, which measures changes in the ratio of oxyhemoglobin to deoxyhemoglobin in a brain region several seconds after activity in that region. In particular, when a brain region becomes more active, there is first, perhaps more intuitively, a decline in the ratio of oxyhemoglobin to deoxyhemoglobin immediately after activity in the region, apparently corresponding to the depletion of oxygen in the blood at the site of the activity. This decline, however, is very small and very hard to detect with fMRI. Immediately after this decrease, there is an infusion of fresh (oxyhemoglobin-rich) blood, which can take several seconds to reach maximum; it is this infusion that results in the increase in the oxy/deoxyhemoglobin ratio that is measured in BOLD fMRI studies. Because even this subsequent increase is relatively small and variable, fMRI experiments typically involve many trials of the same task or class of stimuli in order to be able to see the signal amidst the noise.

Thus, in a typical fMRI experiment the subject will be placed in the scanner and the researchers will measure differences in the BOLD response throughout his or her brain between different conditions. A subject might, for example, be told to look at a video screen on which images of places alternate with images of faces. For purposes of the experiment, the computer will impose a spatial map on the subject’s brain, dividing it into thousands of little cubes, each a few cubic millimeters in size, referred to as “voxels.” Either while the data are being collected (so-called “real-time fMRI”5) or after an entire dataset has been gathered, a computerized program will compare the BOLD signal for each voxel when the screen was showing places to that when the screen contained faces. Regions that showed a statistically significant increase in the BOLD response several seconds after the face was on the video screen compared with the effects several seconds after a screen showing a place appeared will be said to have been “activated” by seeing the face. The researchers will infer that those regions were, in some way, involved in how the brain processes images of faces. The results typically will be shown as a structural brain image on which areas of more or less activation, as shown by a statistical test, will be shown by different colors (Figure 10).6
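The voxel-by-voxel comparison described above can be sketched in a few lines of Python. Everything in the sketch is simulated: the "face" and "place" signals are random numbers with an artificial difference built into a subset of voxels, and a real fMRI analysis would also model the delayed hemodynamic response, correct for head motion, and adjust the statistical threshold for the many thousands of comparisons being made.

```python
import numpy as np
from scipy import stats

# Simulated BOLD measurements: rows are trials, columns are voxels.
rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 1000

face_trials = rng.normal(size=(n_trials, n_voxels))
place_trials = rng.normal(size=(n_trials, n_voxels))
face_trials[:, :50] += 0.8        # pretend the first 50 voxels respond more to faces

# For every voxel, test whether the mean signal differs between the two conditions.
t_values, p_values = stats.ttest_ind(face_trials, place_trials, axis=0)

activated = p_values < 0.001      # an arbitrary, uncorrected threshold for illustration
print(f"voxels labeled 'activated' by faces: {int(activated.sum())} of {n_voxels}")
```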

5. Use of this “real-time” fMRI has been increasing, but it is not yet clear whether the claims for it will stand up.

6. This example is actually a simplified version of experiments performed by Professor Nancy Kanwisher at MIT in the early 2000s that explored a region of the brain called the fusiform face area, which is particularly involved in processing visions of faces. See Kathleen M. O’Craven & Nancy Kanwisher, Mental Imagery of Faces and Places Activates Corresponding Stimulus-Specific Brain Regions, 12 J. Cog. Neurosci. 1013 (2000).


Figure 10. fMRI image. Functional MRI data reveal regions associated with cognition and behavior. Here, regions of the frontal and parietal lobes that are more active when remembering past events relative to detecting novel stimuli are depicted.


Source: Courtesy of Anthony Wagner.
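The voxel-by-voxel comparison described above can be illustrated with a short, simulated sketch in Python using the standard numpy and scipy libraries. The sketch is illustrative only: the data are randomly generated, and the number of voxels, the number of trials, the size of the simulated effect, and the significance threshold are assumptions chosen for the example, not values drawn from any actual study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels = 5000          # each voxel is a small cube of brain tissue
n_trials = 40            # number of face trials and of place trials

# Simulated BOLD estimates for each trial and voxel; most voxels respond the
# same way to both conditions, but a small subset responds more to faces.
faces = rng.normal(0.0, 1.0, (n_trials, n_voxels))
places = rng.normal(0.0, 1.0, (n_trials, n_voxels))
faces[:, :100] += 0.8    # 100 "face-responsive" voxels

# Compare the two conditions voxel by voxel with a two-sample t-test.
t_vals, p_vals = stats.ttest_ind(faces, places, axis=0)

# Voxels passing the (arbitrary, uncorrected) threshold are the ones a
# researcher would describe as "activated" by faces relative to places.
activated = p_vals < 0.001
print(f"{activated.sum()} of {n_voxels} voxels labeled as face-activated")

In a real analysis the functional data would first be aligned to a structural image and corrected for head motion and other artifacts, and the statistical threshold would be chosen with the issues discussed in Section IV in mind.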

Functional MRI was first proposed in 1990, and the first research results using BOLD-contrast fMRI in humans were published in 1992. The past decade has seen an explosive increase in the number of research articles based on fMRI, with nearly 2500 articles published in 2008—compared with about 450 in 1998.7


7. See the census of fMRI articles from 1993 to 2008 in Carole A. Federico et al., Intersecting Complexities in Neuroimaging and Neuroethics, in Oxford Handbook of Neuroethics (J. Illes & B.J. Sahakian eds., 2011). This continued an earlier census from 1993 to 2001. Judy Illes et al., From Neuroimaging to Neuroethics, 5 Nature Neurosci. 205 (2003).


MRI (functional and structural) is quite safe, and MRI machines are widespread in developed countries, largely for clinical use but increasingly for research use as well. Although fMRI research is subject to many questions and controversies (discussed infra Section IV), this technique has been responsible for most of the recent interest in applying neuroscience to law, from criminal responsibility to lie detection.

B. EEG and MEG

EEG is the measurement of the brain’s electrical activity as exhibited on the scalp; MEG is the measurement of the small magnetic fields generated by the brain’s electrical activity. The roots of EEG go back into the nineteenth century, but its use increased dramatically in the 1930s and 1940s.

The process uses electrodes attached to the subject’s head with an electrically conductive substance (a paste or a gel) to record electrical currents at the surface of the scalp. Multiple electrodes are used; clinical recordings commonly involve 20 to 25 electrodes, although arrays of more than 200 are available. (In MEG, superconducting “SQUIDs”8 are positioned over the scalp to detect the brain’s tiny magnetic signals.) The electrical currents are generated by neurons throughout the brain, although EEG is more sensitive to currents arising from neurons closer to the skull. It is therefore more challenging to use EEG to reveal the functioning of structures deep in the brain.

Because EEG and MEG directly measure neural activity, in contrast to the measures of blood flow in fMRI, the timing of the neural activity can be measured with great precision (the temporal resolution), down to milliseconds. On the other hand, in comparison to fMRI, EEG and MEG are poor at determining the location of the sources of the currents (the spatial resolution). The EEG/MEG signal is a summation of the activity of thousands to millions of neurons at any one time. Any one pattern of EEG or MEG signal at the scalp has an infinite number of possible source patterns, making the problem of determining the brain source of measured EEG/MEG signal particularly challenging and the results less precise.

The results of clinical EEG and MEG tests can be very useful for detecting some kinds of brain conditions, notably epilepsy, and are also part of the process of diagnosing brain death. EEG and MEG are also used for research, particularly in the form of event-related potentials, which correlate the size or pattern of the EEG or MEG signal with the performance of particular tasks or the presentation of particular stimuli. Thus, as with the hypothetical fMRI experiment described above, one could look for any consistent changes in the EEG or MEG signal when a subject sees faces rather than a blank screen. Apart from the determination of brain death, where EEG is already used, the most discussed possible legally relevant uses of EEG have been lie detection and memory detection.

8. SQUID stands for superconducting quantum interference device (and has nothing to do with the marine animal). This device can measure extremely small magnetic fields, including those generated by various processes in living organisms, and so is useful in biological studies.



EEG is safe, cheap, quiet, and portable. MEG is safe and quiet, but the technology is considerably more expensive than EEG and is not easily portable. EEG methods can tolerate much more head movement by the subject than PET or MRI techniques, although movement is often a challenge for MEG. EEG and MEG have good temporal resolution, distinguishing between milliseconds, which makes them very attractive for research, but their spatial resolution is inadequate for many research questions. As a result, some researchers use a combination of methods, integrating MRI and EEG or MEG data (acquired simultaneously or at different times) using sophisticated data analysis techniques.

C. Other Techniques

Functional neuroimaging (especially fMRI) and EEG seem to be the techniques that are most likely to lead to efforts to introduce neuroscience-based evidence in court, but several other neuroscience techniques also might have legal applications. This section briefly describes four other methods that may be discussed in court: lesion studies, transcranial magnetic stimulation, deep brain stimulation, and implanted microelectrode arrays.

1. Lesion studies

One powerful way to test whether particular brain regions are associated with particular mental processes is to study mental processes after those brain regions have been destroyed or damaged. Observations of the consequences of such lesions, created by accidents or disease, were, in fact, the main way in which localization of brain function was originally understood.

For ethical reasons, the experimental destruction of brain tissue is limited to nonhuman animals. Nonetheless, in addition to accidental damage, on occasion human brains will need to be intentionally damaged for clinical purposes. Tumors may have to be removed or, in some cases, epilepsy may have to be treated by removing the region of the brain that is the focus for the seizures. Valuable knowledge may be gained from following these subjects.

Our understanding of the role of the hippocampus in creating memories, as one example, was greatly aided by study of a patient known as H.M.9 When he was 27 years old, H.M. was treated for intractable epilepsy, undergoing an experimental procedure that surgically removed his left and right medial temporal lobes, including most of his two hippocampi. The surgery was successful, but from that time until his death in 2008, H.M. could not form new long-term memories, either of events or of facts. His short-term memory, also known as working memory, was intact, and he could learn new motor, perceptual, and (some) cognitive skills (his “procedural memory” still functioned). He also could remember his life’s events from before his surgery, although his memories were weaker the closer the events were to the surgery. Those brain regions were clearly involved in making new long-term memories for facts or events, but not in storing old ones.

9. H.M.’s name, not publicly released until his death, was Henry Gustav Molaison. Details of his life can be found in several obituaries, including Benedict Carey, H.M., An Unforgettable Amnesiac, Dies at 82, N.Y. Times, Dec. 4, 2008, at A1, and H.M., A Man Without Memories, The Economist, Dec. 20, 2008. The first scientific report of his case was W.B. Scoville & Brenda Milner, Loss of Recent Memory After Bilateral Hippocampal Lesions, 20 J. Neurol., Neurosurg. Psychiatry 11 (1957).



2. Transcranial magnetic stimulation (TMS)

TMS is a noninvasive method of creating a temporary, reversible functional brain “lesion.” Using this technique, researchers disrupt the organized activity of the brain’s neurons by applying an electrical current. The current is formed by a rapidly changing magnetic field that is generated by a coil held next to the subject’s skull. The field penetrates the scalp and skull easily and causes a small current in a roughly conical portion of the brain below the coil. This current induces a change in the typical responses of the neurons, which can block the normal functioning of that part of the brain.

TMS can be done in a number of ways. In some approaches, TMS happens at the same time as the subject performs the task to be studied. These concurrent approaches include single pulses or paired pulses as well as rapid (more than once per second) repetitive TMS that is delivered during task performance. Another method uses TMS for an extended period, often several minutes, before the task is performed. This sequential TMS uses slow (less than once per second) repetitive TMS.

The effects of single-pulse/paired-pulse and concurrent repetitive TMS are present while the coil is generating the magnetic field, and can extend for a few tens of milliseconds after the stimulation is turned off. By contrast, the effects of pretask repetitive TMS are thought to last for a few minutes (about half as long as the actual stimulation). When TMS is repeated regularly in nonhumans, long-term effects have been observed. Therefore, guidelines regarding how much stimulation can be applied in humans have been established.

The Food and Drug Administration (FDA) has approved TMS as a treatment for otherwise untreatable depression. The neuroscience research value of TMS stems from its ability to alter brain function in a relatively small area (about 2 cm) in an otherwise healthy brain, thus allowing for targeted testing of the role of a particular brain region for a particular class of cognitive abilities. By blocking normal functioning of the affected neurons, this can be equivalent, in effect, to a temporary lesion of that area of the brain. TMS appears to have minimal risks, but its long-term effects are not known.


3. Deep brain stimulation (DBS)

DBS is an FDA-approved treatment for several neurological conditions affecting movement, notably Parkinson’s disease, essential tremor, and dystonia. The device used in DBS includes a lead that is implanted into a specific brain region, a pulse generator (generally implanted under the shoulder or in the abdomen), and a wire connecting the two. The pulse generator sends an electric current to the electrodes in the lead, which in turn affect the functioning of neurons in an area around the electrodes.

The precise manner by which DBS affects brain function remains unclear. Even for Parkinson’s disease, for which it is widely used, individual patients sometimes benefit in unpredictable ways from placement of the lead in different locations and from different frequency or power of the stimulation.

Researchers are continuing to experiment with DBS for other conditions, such as depression, minimally conscious state, chronic pain, and overeating that leads to morbid obesity. The results are sometimes surprising. In a Canadian trial of DBS for appetite control, the obese patient did not ultimately lose weight but did suddenly develop a remarkable memory. That research group is now starting a trial of DBS for dementia.10 Other surprises have included some negative side effects from DBS, such as compulsive gambling, hypersexuality, and hallucinations. These kinds of unexpected consequences from DBS make it of continuing broader research interest.

4. Implanted microelectrode arrays

Ultimately, to understand the brain fully one would like to know what each of its 100 billion neurons is doing at any given time, analyzed in terms of their collective patterns of activity.11 No current technology comes close to that kind of resolution. For example, although fMRI has a voxel size of a few cubic millimeters, it is looking at the blood flow responding to thousands or millions of neurons at each point in the brain. Conversely, while direct electrical recordings allow individual neurons to be examined, and manipulated, it is not easy to record from many neurons at once. While still on a relatively small scale, recent developments now offer one method for recording from multiple neurons simultaneously by using an implanted microelectrode array.

A chip containing many tiny electrodes can be implanted directly into brain tissue. Some of those electrodes will make useable connections with neurons and can then be used either to record the activity of that neuron (when it is firing or not) or to stimulate the neuron to fire. These kinds of implants have been used in research on motor function, both in monkeys and in occasional human patients. The research has aimed at understanding better what neuronal activity leads to motion and hence, in the long run, perhaps to a method of treating quadriplegia or other motion disorders.

10. See Clement Hamani et al., Memory Enhancement Induced by Hypothalamic/Fornix Deep Brain Stimulation, 63 Annals Neurol. 119 (2008).

11. See the discussion in Emily R. Murphy & Henry T. Greely, What Will Be the Limits of Neuroscience-Based Mindreading in the Law? in The Oxford Handbook of Neuroethics (J. Illes & B.J. Sahakian eds., 2011).



These arrays have several disadvantages as research tools. Arrays require neurosurgery for their implantation, with all of its consequent risks of infection or damage. They also have a limited lifespan, because the brain’s defenses eventually degrade the electrical connection between the electrode and the neuron, usually over the span of a few months. Finally, the arrays can reach only a tiny number of the billions of neurons in the brain; current arrays have about 100 microelectrodes.

IV. Issues in Interpreting Study Results

Lawyers trying to introduce neuroscience evidence will almost always be arguing that, when interpreted in the light of some preexisting research study, some kind of neuroscience-based test of the brain of a person in the case—usually a party, though sometimes a witness—is relevant to the case. It might be a claim that a PET scan shows that a criminal defendant was likely to have been legally insane at the time of the crime; it could be a claim that an fMRI of a witness demonstrates that she is lying. The judge will have to determine whether the scientific evidence is admissible at all under the Federal Rules of Evidence, and particularly under Rule 702. If the evidence is admissible, the finder of fact will need to consider the validity and strength of the underlying scientific finding, the accuracy of the particular test performed on the party or witness, and the application of the former to the latter.

Neuroscience-based evidence will commonly raise several scientific issues relevant to both the initial admissibility decision and the eventual determination of the weight to be given the evidence. This section of the reference guide examines seven of these issues: replication, experimental design, the number and diversity of subjects, the application of group averages to individuals, technical accuracy, statistical issues, and countermeasures. The discussion focuses on fMRI-based evidence, because that seems likely to be the method used most frequently in the coming years, but most of the seven issues apply more broadly.

One general point is absolutely crucial. The various techniques discussed in Section III, supra, are generally accepted scientific procedures, both for use in research and, in most cases, in clinical care. Each one is a good scientific tool in general. The crucial issue is not likely to be whether the techniques meet the requirements for admissibility when used for some purposes, but whether the techniques—when used for the purpose for which they are offered—meet those requirements. Sometimes proponents of fMRI-based lie detection, for example, have argued that the technique should be accepted because fMRI is the subject of more than 12,000 peer-reviewed publications. That is true, but irrelevant—the question is the application of fMRI to lie detection, which is the subject of far fewer, and much less definitive, publications.



A. Replication

A good general rule of thumb in science is never to rely on any experimental finding until it has been independently replicated. This may be particularly true with fMRI experiments, not because of fraud or negligence on the part of the experimenters, but because, for reasons discussed below, these experiments are very complicated. Replication builds confidence that those complications have not led to false results.

In many scientific fields, including much of fMRI research, replication is not as common as it should be. A scientist often is not rewarded for replicating (or failing to replicate) another’s work. Grants, tenure, and awards tend to go to people doing original research. The rise of fMRI has meant that such original experiments are easy to conceive and to attempt—anyone with experimental expertise, access to research subjects (often undergraduates), and access to an MRI scanner (found at any major medical facility) can try his or her own experiments and, if the study design and logic are sound and the results are statistically significant, may well end up with published results. Experiments replicating, or failing to replicate, another’s work are neither as exciting nor as publishable.

For example, as discussed in more detail below, more than 15 different laboratories have collectively published 20 to 30 peer-reviewed articles finding some statistically significant relationship between fMRI-measured brain activity and deception. None of the studies is an independent replication of another laboratory’s work. Each laboratory used its own experimental design, its own scanner, and its own method of analysis. Interestingly, the published results implicate many different areas of the brain as being activated when a subject lies. A few of the brain regions are found to be important in most of the studies, but many of the other brain regions showing a correlation with deception differ from publication to publication. Only a few of the laboratories have published replications of their own work; some of those laboratories have actually published findings with different results from those in their earlier publications.

That a finding has been replicated does not mean it is correct; different laboratories can make the same mistakes. Neither does failure of replication mean that a result is wrong. Nonetheless, the existence of independent replication is important support for a finding.

B. Problems in Experimental Design

The most important part of an fMRI experiment is not the MRI scanner, but the design of the underlying experiment being examined in the scanner. A poorly designed experiment may yield no useful information, and even a well-designed experiment may lead to information of uncertain relevance.



A well-designed experiment must focus on the particular mental state or brain process of interest while minimizing any systematic biases. This can be especially difficult with fMRI studies. After all, these studies are measuring blood flow in the brain associated with neuronal responses in particular regions. If, for example, in an experiment trying to assess how the brain reacts to pain, the experimental subjects are consistently distracted at one point in the experiment by thinking about something else, the areas of brain activation will include the areas activated by the distraction. One of the earliest published lie detection experiments was designed so that the experimental subjects pushed a button for “yes” only when saying (honestly) that they held the card displayed; they pushed the “no” button both when they did not hold the card displayed and when they did hold it but were following instructions to lie. They were to say “yes” only 24 times out of 432 trials.12 The resulting differences might have come from the differences in thinking about telling the truth or telling a lie—but they also may have come from the differences in thinking about pressing the “no” button (the most common action) and pressing the “yes” button (the less frequent response). The results themselves cannot distinguish between the two explanations.

Designing good experiments is difficult, but in some respects the better the experiment, the less relevant it may prove to a real situation. A laboratory experiment attempts to minimize distractions and differences among subjects, but such factors will be common in real-world settings. Perhaps more important, for some kinds of experiments it will be difficult, if not impossible, to reproduce in the laboratory the conditions of interest in the real world. As an extreme example, if one is interested in how a murderer’s brain functions during a murder, one cannot conduct an experiment that involves having the subject commit a murder in the scanner. For ethical reasons, that condition of interest cannot be tested in the experiment.

The problem of trying to detect deception provides a different example. All published laboratory-based experiments involve people who know that they are taking part in a research project. Most of them are students and are being paid to participate in the project. They have received detailed information about the experiment and have signed a consent form. Typically, they are instructed to “lie” about a particular matter. Sometimes they are told what the lie should be (to deny that they see a particular playing card, such as the seven of clubs, on a screen in the scanner); sometimes they are told to make up a lie (about their most recent vacation, for example). In either case, they are following instructions—doing what they should be doing—when they tell the “lie.”

12. Daniel D. Langleben et al., Telling Truth from Lie in Individual Subjects with Fast Event-Related fMRI, 26 Human Brain Mapping 262 (2005). See discussion in Nancy Kanwisher, The Use of fMRI in Lie Detection: What Has Been Shown and What Has Not, in Emilio Bizzi et al., Using Imaging to Identify Deceit: Scientific and Ethical Questions (2009), at 10, and in Anthony Wagner, Can Neuroscience Identify Lies? in A Judge’s Guide to Neuroscience, supra note 1, at 30.



This situation is different from the realistic use of lie detection, when a guilty person needs to tell a convincing story to avoid a high-stakes outcome such as arrest or conviction—and even an innocent person will be genuinely nervous about the possibility of an incorrect finding of deception. In an attempt to parallel these real-world characteristics, some laboratory-based studies have tried to give subjects some incentive to lie successfully; for example, the subjects may be told (falsely) that they will be paid more if they “fool” the experimenters. Although this may increase the perceived stakes, it seems unlikely that it creates a realistic level of stress. These differences between the laboratory and the real world do not mean that the experimental results of laboratory studies are unquestionably different from the results that would exist in a real-world situation, but they do raise serious questions about the extent to which the experimental data bear on detecting lies in the real world.

Few judges will be expert in the difficult task of designing valid experiments. Although judges may be able themselves to identify weaknesses in experimental design, more often they will need experts to address these questions. Judges will need to pay close attention to that expert testimony and the related argument, as “details” of experimental design may turn out to be absolutely crucial to the value of the experimental results.

C. The Number and Diversity of Subjects

Doing fMRI scans is expensive. The total cost of performing an hour-long research scan of a subject ranges from about $300 to $1000. Much fMRI research, particularly work without substantial medical implications, is not richly funded. As a result, studies tend to use only a small number of subjects—many fMRI studies use 10 to 20 subjects, and some use even fewer. In the lie detection literature, for example, the number of subjects used ranges from 4 to about 30.

It is unclear how representative such a small group would be of the general population. This is particularly true of the many studies that use university students as research subjects. Students typically are from a restricted age range, are likely to be of above-average intelligence and socioeconomic background, may not accurately reflect the country’s ethnic diversity, and typically will underrepresent people with serious mental conditions. To limit possible confounding variables, it can make sense for a study design to select, for example, only healthy, right-handed, native-English-speaking male undergraduates who are not using drugs. But the very process of selecting such a restricted group raises questions about whether the findings will be relevant to other groups of people. They may be directly relevant, or they may not be. At the early stages of any fMRI research, it may not be clear what kinds of differences among subjects will or will not be important.


D. Applying Group Averages to Individuals

Most fMRI-based research looks for statistically significant associations between particular mental states or tasks and particular patterns of brain activation across a number of subjects. It is highly unlikely that any fMRI pattern will be found always to occur under certain circumstances in every person tested, or even that it will always occur under those circumstances in any one person. Human brains and their responses are too complicated for that. Research is highly unlikely to show that brain pattern “A” follows stimulus “B” each and every time and in every single person, although it may show that A follows B most of the time.

Consider an experiment with 10 subjects that examines how brain activation varies with the sensation of pain. A typical approach to analyzing the data is to take the average brain activation patterns of all 10 subjects combined, looking for the regions that, across the group, have the greatest changes—the most statistically significant changes—when the painful stimulus is applied compared with when it is absent. Importantly, though, the most significant region showing increased activation on average may not be the region with the greatest increase in activation in any particular one of the 10 subjects. It may not even be the area with the greatest activation in any of the 10 subjects, but it may be the region that was most consistently active across the brains of the 10 subjects, even if the response was small in each person.
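This point can be made concrete with a small, invented example. In the Python sketch below, the numbers are fabricated for illustration: region B shows the larger signal change in every one of the 10 hypothetical subjects, yet region A, whose small change is highly consistent, produces the stronger group-level statistic.

import numpy as np
from scipy import stats

# Fabricated percent-signal-change values for 10 subjects in two regions.
region_A = np.array([0.20, 0.22, 0.19, 0.21, 0.18, 0.20, 0.23, 0.19, 0.21, 0.20])
region_B = np.array([1.50, 0.30, 1.40, 0.25, 1.60, 0.35, 1.30, 0.28, 1.45, 0.32])

for name, values in (("A", region_A), ("B", region_B)):
    # One-sample t-test: is the average change across subjects reliably nonzero?
    t_stat, p_val = stats.ttest_1samp(values, 0.0)
    print(f"region {name}: mean change = {values.mean():.2f}, t = {t_stat:.1f}, p = {p_val:.2g}")

# Region B has the larger change in every subject, but its variability across
# subjects is high; region A's small, consistent change yields the larger t.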

Although group averages are appropriate for many scientific questions, the problem is that the law, for the most part, is not concerned with “average” people, but with individuals. If these “averaged” brains show a particular pattern of brain activation in fMRI studies and a defendant’s brain does not, what, if anything, does that mean?

It may or may not mean anything—or, more accurately, the chances that it is meaningful will vary. The findings will need to be converted into an assessment of an individual’s likelihood of having a particular pattern of brain activation in response to a stimulus, and that likelihood can be measured in various ways.

Consider the following simplified example. Assume that 1000 people have been tested to see how their brains respond to a particular painful stimulus. Each is scanned twice, once when touched by a painfully hot metal rod and once when the rod is room temperature. Assume that all of them feel pain from the heated rod and that no one feels pain from the room temperature rod. And, finally, assume that 900 of the 1000 show a particular pattern of brain activation when touched with the hot rod, but only 50 of the 1000 show the same pattern when touched with the room temperature rod.

For these 1000 people, using the fMRI activation pattern as a test for the perception of this pain would have a sensitivity of 90% (90% of the 1000 who felt the pain would be correctly identified, and only 10% would be false negatives). Using the absence of the activation pattern as a test for the absence of the pain would have a specificity of 95% (95% of those who did not feel pain were correctly identified, and only 5% were false positives). Now ask, of all those who showed a positive test result, how many were actually positive? This percentage, the positive predictive value, would be 94.7%—900 out of 950. Depending on the planned use of the test, one might care more about one of these measures than another, and there are often tradeoffs between them. Making a test more sensitive (so that it misses fewer people with the sought characteristic) often means making it less specific (so that it picks up more people who do not have the characteristic in question). In any event, when more people are tested, these estimates of sensitivity, specificity, and positive predictive value become more accurate.


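The arithmetic in this hypothetical can be written out explicitly. The short Python calculation below simply reproduces the numbers given in the text; no additional data are assumed.

# Counts from the hypothetical 1000-person pain study described above.
true_positives = 900    # felt pain (hot rod) and showed the activation pattern
false_negatives = 100   # felt pain but did not show the pattern
false_positives = 50    # felt no pain (room-temperature rod) but showed the pattern
true_negatives = 950    # felt no pain and did not show the pattern

sensitivity = true_positives / (true_positives + false_negatives)               # 900/1000
specificity = true_negatives / (true_negatives + false_positives)               # 950/1000
positive_predictive_value = true_positives / (true_positives + false_positives) # 900/950

print(f"sensitivity = {sensitivity:.1%}")                       # 90.0%
print(f"specificity = {specificity:.1%}")                       # 95.0%
print(f"positive predictive value = {positive_predictive_value:.1%}")  # about 94.7%

Note that the positive predictive value also depends on how common the sought condition is among those tested; in this hypothetical, half of the scans involved actual pain. If the condition were much rarer in the tested group, the same sensitivity and specificity would produce a much lower positive predictive value.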

There are other ways of measuring the accuracy of a test of an individual, but the important point is that some such conversion is essential. A research paper that reveals that the average subject’s brain (more accurately, the “averaged subjects’ brain”) showed a particular reaction to a stimulus does not, in itself, say anything useful about how likely any one person is to have the same reaction to that stimulus. Further analyses are required to provide that information. Researchers, who are often more interested in identifying possible mechanisms of brain action than in creating diagnostic tests, will not necessarily have analyzed their data in ways that make them useful for application to individuals—or even have obtained enough data for that to be possible. At least in the near future, this is likely to be a major issue for applying fMRI studies to individuals, in the courtroom, or elsewhere.

E. Technical Accuracy and Robustness of Imaging Results

MRI machines are variable, complicated, and finicky. The machines come in several different field strengths, with machines used for clinical purposes ranging from 0.2 tesla (T) to 3.0 T and research scanners going as high as 9.4 T. Three companies dominate the market for MRI machines—General Electric, Siemens, and Philips—although several other companies also make them. Both the field strength and the manufacturer of an MRI system can make a substantial difference in the resulting data (and images). These variations can be more important with functional MRI (though they also apply to structural MRI), so that a result seen on a 1.5-T Siemens scanner might not appear on a 3.0-T General Electric machine. Similarly, results from one 3.0-T General Electric machine may differ from those obtained on an identical model.

Even the exact same MRI machine may behave differently from day to day or month to month. The machines frequently need maintenance or adjustments and sometimes can be inoperable for days or even weeks at a time. Comparing results from even the same machine before and after maintenance—or a system upgrade—can be difficult. This can make it hard to compare results across different studies or between the group average of one study and results from an individual subject.

These issues concern not only the quality of the scans done in research, but, even more importantly, the credibility of the individual scan sought to be introduced at trial. If different machines were used, care must be taken to ensure that the results are comparable. The individual scans also can have other problems. Any one scan is subject not only to machine-derived artifacts and other problems noted above, but also to human-generated artifacts, such as those caused by the subject’s movements during the scan.



Finally, another technical problem of a different kind comes from the nature of fMRI research itself. The scanner will record changes in the relative levels of oxyhemoglobin to deoxyhemoglobin for thousands of voxels throughout the brain. During data analysis, these signal changes will be tested to see if they show any change in the response between the experimental condition and the baseline or control condition. Importantly, with fMRI, there is no definitive way to quantify precisely how large a change there was in the neural response compared to baseline; hence, the researcher must set a somewhat arbitrary statistical cutoff value (a threshold) for saying that a voxel was activated or deactivated. A researcher who wants only to look at strong effects will require a large change from baseline; a researcher who wants to see a wide range of possible effects will allow smaller changes from baseline to count.

Neither way is “right”—we do not know whether there is some minimum change in the BOLD response that means an “important” amount of brain activation has taken place, and if such a true value exists, it is likely to differ across brain regions, across tasks, and across experimental contexts. What this means is that different choices of statistical cutoff values can produce enormous differences in the apparent results. And, of course, the cutoff values used in the studies and in the scan of the individual of interest must be consistent across repeated tests. This important fact often may not be known.
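The sensitivity of the reported results to the chosen cutoff can be shown with simulated data. The following Python sketch is illustrative only; the effect size, sample sizes, and thresholds are arbitrary assumptions, but it shows the point made in the text: the same data yield very different numbers of “activated” voxels depending on the threshold.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_voxels = 5000
baseline = rng.normal(0.0, 1.0, (30, n_voxels))
task = rng.normal(0.0, 1.0, (30, n_voxels))
task[:, :200] += 0.5                     # 200 voxels with a modest simulated effect

_, p_vals = stats.ttest_ind(task, baseline, axis=0)

# The same dataset, reported at three different (uncorrected) cutoffs.
for cutoff in (0.05, 0.01, 0.001):
    active = p_vals < cutoff
    true_hits = active[:200].sum()       # among the voxels with a real simulated effect
    false_hits = active[200:].sum()      # among the voxels with no real effect
    print(f"p < {cutoff}: {active.sum()} 'activated' voxels "
          f"({true_hits} with a real effect, {false_hits} by chance)")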

F. Statistical Issues

Interpreting fMRI results requires the application of complicated statistical methods.13 These methods are particularly difficult, and sometimes controversial, for fMRI studies, partly because of the thousands of voxels being examined. Fundamentally, most fMRI experiments look at many thousands of voxels and try to determine whether any of them are, on average, activated or deactivated as a result of the task or stimulus being studied. A simple test for statistical significance asks whether a particular result might have arisen by chance more than 1 time in 20 (or 5%): Is it significant at the .05 level? If a researcher is looking at the results for thousands of different voxels, it is likely that a number of voxels will show an effect above the threshold just by chance. There are statistical ways to control the rate of these false positives, but they need to be applied carefully. At the same time, rigid control of false positives through statistical correction (or the use of a very conservative threshold) can create another problem—an increase in the false-negative rate, which results in failing to detect true brain responses that are present in the data but that fall below the statistical threshold. The community of fMRI researchers recognizes that these issues of statistical significance are difficult to resolve.

13. For a broad discussion of statistics, see David H. Kaye & David A. Freedman, Reference Guide on Statistics, in this manual.


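The multiple-comparisons problem can also be illustrated with simulated data. In the Python sketch below there is no real effect in any voxel, yet an uncorrected p < .05 threshold labels hundreds of voxels as significant; a Bonferroni correction (one simple correction method among several in common use) eliminates them. The numbers of voxels and trials are arbitrary assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_voxels = 20000
# Two conditions of pure noise: no voxel has any true difference.
condition_a = rng.normal(0.0, 1.0, (25, n_voxels))
condition_b = rng.normal(0.0, 1.0, (25, n_voxels))

_, p_vals = stats.ttest_ind(condition_a, condition_b, axis=0)

uncorrected = (p_vals < 0.05).sum()             # roughly 5% of 20,000 voxels, by chance
bonferroni = (p_vals < 0.05 / n_voxels).sum()   # threshold corrected for 20,000 tests
print(f"'significant' voxels, uncorrected p < .05: {uncorrected}")
print(f"'significant' voxels, Bonferroni-corrected: {bonferroni}")

The corrected threshold removes the chance findings, but, as noted above, a correction this severe would also make it harder to detect real effects if any were present; methods such as false discovery rate control attempt to strike a different balance between the two kinds of error.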

Over the past decade, other statistical techniques have increasingly been used in neuroimaging research, including techniques that do not look at the statistical significance of changes in the BOLD response in individual voxels, but that instead examine changes in the distributed patterns of activation across many voxels in a region of the brain or across the whole brain. These techniques include methods known as principal component analysis, multivariate analysis, and related machine learning algorithms. These methods, the details of which are not reviewed in this chapter, are producing some of the most interesting results in the field. The techniques are fairly complex, and determining how to interpret their results can be controversial. Thus, these methods alone may require substantial and potentially confusing expert testimony in addition to all the other expert testimony about the underlying neuroscience evidence.
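A minimal sketch of the pattern-based approach, using simulated data and the widely used scikit-learn library, appears below. The data, the number of voxels, and the size of the simulated effect are assumptions made for illustration; the sketch is not a description of any particular published method.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_voxels = 120, 500
X = rng.normal(0.0, 1.0, (n_trials, n_voxels))   # activation pattern on each trial
y = rng.integers(0, 2, n_trials)                 # which of two conditions each trial was
X[y == 1, :50] += 0.3                            # a weak signal spread across 50 voxels

# Train a linear classifier on part of the data and test it on the rest,
# repeating the split five times (cross-validation). Accuracy reliably above
# 50% indicates that the distributed pattern carries information about the
# condition, even though no single voxel is decisive on its own.
classifier = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(classifier, X, y, cv=5)
print(f"mean decoding accuracy: {accuracy.mean():.2f}")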

G. Possible Countermeasures

When neuroimaging is being used to compare the brain of one individual—a defendant, plaintiff, or witness, for example—to others, the individual undergoing neuroimaging might be able to use countermeasures to make the results unusable or misleading. And at least some of those countermeasures may prove especially hard to detect.

Subjects can disrupt almost any kind of scanning, whether done for structural or functional purposes, by moving in the scanner. Unwilling subjects could ruin scans by moving their bodies or heads or, possibly, even their tongues. Blatant movements to disrupt the scan would be apparent, both from watching the subject in the scanner and from seeing the results, and could lead to a negative inference that the person was trying to interfere with the scan. Nonetheless, the scan itself would be useless.

More interesting are possible countermeasures for functional scans. Polygraphy may provide a useful comparison. Countermeasures have long been tried in polygraphy with some evidence of efficacy. Polygraphy typically looks at the differences in physiological measurements of the subject when asked anxiety-provoking questions or benign control questions. Subjects can use drugs or alcohol to try to dampen their body reactions when asked anxiety-provoking questions. They can try to use mental measures to control or affect their physiological reactions, calming themselves during anxiety-provoking questions and increasing their emotional reaction to control questions. And, when asked control questions, they can try to increase the physiological signs the polygraph measures through physical means.


For example, subjects might bite their tongues, step on tacks hidden in their shoes, or tighten various muscles to try to increase their blood pressure, galvanic skin response, and so on. The National Academy of Sciences report on polygraphs concluded that

Basic science and polygraph research give reason for concern that polygraph test accuracy may be degraded by countermeasures, particularly when used by major security threats who have a strong incentive and sufficient resources to use them effectively. If these measures are effective, they could seriously undermine any value of polygraph security screening.14

Some of the countermeasures used by polygraph subjects can be detected by, for example, drug or alcohol tests or by carefully watching the subject’s body. But purely mental actions cannot be detected. These kinds of countermeasures may be especially useful to subjects seeking to beat neuroscience-based lie detection. For example, some argue that deception produces different activation patterns than telling the truth because it is mentally harder to tell a lie—more of the brain needs to work to decide whether to lie and what lie to tell. If so, two mental countermeasures immediately suggest themselves: make the lie easier to tell (through, perhaps, memorization or practice) or make the brain work harder when telling the truth (through, perhaps, counting backward from 100 by sevens).

Countermeasures are not, of course, potentially useful only in the context of lie detection. A neuroimaging test to determine whether a person was having the subjective feeling of pain might be fooled by the subject remembering, in great detail, past experiences of pain. The possible uses of countermeasures in neuroimaging have yet to be extensively explored, but at this point they cast additional doubt on the reliability of neuroimaging in investigations or in litigation.

V.  Questions About the Admissibility and the Creation of Neuroscience Evidence

The admissibility of neuroscience evidence will depend on many issues, some of them arising from the rules of evidence, some from the U.S. Constitution, and some from other legal provisions. Another often-overlooked reality is that judges may have to decide whether to order this kind of evidence to be created. Certainly, judges may be called upon to pass on requests by criminal defendants (or convicts seeking postconviction relief) to be allowed to use neuroimaging. They may also have to decide motions in civil or criminal cases to compel neuroimaging. One could even imagine requests for warrants to “search the brains” of possible witnesses for evidence. This guide does not seek to resolve any of these questions, but points out some of the problems that are likely to be raised about admitting neuroscience evidence in court.

14. See National Research Council, The Polygraph and Lie Detection 5 (2003). This report is an invaluable resource for discussions of not just the scientific evidence about the reliability of the polygraph, but also for general background about the application of science to lie detection.



A. Evidentiary Rules

This discussion looks at the main evidentiary issues that are likely to be raised in cases involving neuroscience evidence. Note, though, that judges will not always be governed by the rules of evidence. In criminal sentencing or in probation hearings, among other things, the Federal Rules of Evidence do not apply,15 and they apply with limitations in other contexts.16 Nonetheless, even in those circumstances, many of the principles behind the Rules, discussed below, will be important.

1. Relevance

The starting point for all evidentiary questions must be relevance. If evidence is not relevant to the questions at hand, no other evidentiary concerns matter. This basic reminder may be particularly useful with respect to neuroscience evidence. Evidence offered, for example, to demonstrate that a criminal defendant had suffered brain damage sometime before the alleged crime is not, in itself, relevant. The proffered fact of the defendant’s brain damage must be relevant to some issue in the case. It may be relevant, for example, to whether the defendant could have formed the necessary criminal intent, to whether the defendant should be found not guilty by reason of insanity, to whether the defendant is currently competent to stand trial, or to mitigation in sentencing. It must, however, be relevant to something in order to be admissible at all, and specifying its relevance will help focus the evidentiary inquiry. The question, for example, would not be whether PET scans meet the evidentiary requirements to be admitted to demonstrate brain damage, but whether they have “any tendency to make the existence of any fact that is of consequence to the determination of the action more probable or less probable than it would be without the evidence.”17 The brain damage may be relevant to a fact, but that fact must be “of consequence to the determination of the action.”

2. Rule 702 and the admissibility of scientific evidence

Neuroscience evidence will almost always be “scientific…knowledge” governed by Rule 702 of the Federal Rules of Evidence, as interpreted in Daubert v. Merrell Dow Pharmaceuticals18 and its progeny, both before and after the amendments to Rule 702 in 2000. Rule 702 allows the testimony of a qualified expert “if (1) the testimony is based upon sufficient facts or data, (2) the testimony is the product of reliable principles and methods, and (3) the witness has applied the principles and methods reliably to the facts of the case.” In Daubert, the Supreme Court listed several nonexclusive guidelines for trial court judges considering testimony under Rule 702. The Committee that proposed the 2000 Amendments to Rule 702 summarized these factors as follows:

15. Fed. R. Evid. 1101(d).

16. Fed. R. Evid. 1101(e).

17. Fed. R. Evid. 401.

18. Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579 (1993).



The specific factors explicated by the Daubert Court are (1) whether the expert’s technique or theory can be or has been tested—that is, whether the expert’s theory can be challenged in some objective sense, or whether it is instead simply a subjective, conclusory approach that cannot reasonably be assessed for reliability; (2) whether the technique or theory has been subject to peer review and publication; (3) the known or potential rate of error of the technique or theory when applied; (4) the existence and maintenance of standards and controls; and (5) whether the technique or theory has been generally accepted in the scientific community.19

The tests laid out in Daubert and in the evidentiary rules governing expert testimony have been the subjects of enormous discussion, both by commentators and by courts. And, to the extent some neuroscience evidence has been admitted in federal courts (and the courts of states that follow Rule 702 or Daubert), it has passed those tests. We do not attempt to analyze those tests in detail here; we merely point out a few aspects that seem especially relevant to neuroscience evidence.

Neuroscience evidence often can be subjected to testing, as long as the point of the neuroscience evidence is kept in mind. An fMRI scan might provide evidence that someone was having auditory hallucinations, but it could not prove that someone was not guilty by reason of insanity. The latter is a legal conclusion, not a scientific finding. The evidence might be relevant to the question of insanity, but one cannot plausibly conduct a scientific test of whether a particular pattern of brain activation is always associated with legal insanity. One might offer neuroimaging evidence about whether a person is likely to have unusual difficulty controlling his or her impulses, but that is not, in itself, proof that the person acted recklessly. The idea of testing helps separate the conclusions that neuroscience might be able to reach from the legal conclusions that will be beyond it.

Daubert’s stress on the presence of peer review and publication corresponds nicely to scientists’ perceptions. If something is not published in a peer-reviewed journal, it scarcely counts. Scientists only begin to have confidence in findings after peers, both those involved in the editorial process and, more important, those who read the publication, have had a chance to dissect them and to search intensively for errors either in theory or in practice. It is crucial, however, to recognize that publication and peer review are not in themselves enough. The publications need to be compared carefully to the evidence that is proffered.

19. Fed. R. Evid. 702 advisory committee’s note.


First, the published, peer-reviewed articles must establish the specific scientific fact being offered. An (accurate) assertion that fMRI has been the basis of more than 12,000 peer-reviewed publications will help establish that fMRI can be used in ways that the scientific community finds reliable. By themselves, however, those publications do not establish any particular use of fMRI. If fMRI is being offered as proof of deception, the 20 or 30 peer-reviewed articles concerning its ability to detect deception are most important, not the 11,980 articles involving fMRI for other purposes.

Second, the existence of several peer-reviewed publications on the same general method does not support the accuracy of any one approach if those publications are mutually inconsistent. There are now about 20 to 30 peer-reviewed publications that, using fMRI, find statistically significant differences in patterns of brain activation depending on whether the subjects were telling the truth or (typically) telling a lie when instructed to do so. Many of those publications find patterns that are different from, and often inconsistent with, the patterns described in the other publications. Multiple inconsistent publications do not add weight, and may indeed subtract it, from a scientific method or theory.

Third, the peer-reviewed publication needs to describe in detail the method about which the expert plans to testify. A commercial firm might, for example, claim that its method is “based on” some peer-reviewed publications, but unless the details of the firm’s methods were included in the publication, those details were neither published nor peer reviewed. A proprietary algorithm used to generate a finding published in the peer-reviewed literature is not adequately supported by that literature.

The error rate is also crucial to most neuroscience evidence, in two different senses. One is the degree to which the machines used to produce the evidence make errors. Although these kinds of errors may balance out in a large sample used in published literature, any scan of any one individual may well be affected by errors in the scanning process. Second, and more important, neuroscience evidence will almost never give an absolute answer, but will give a probabilistic one. For example, a certain brain structure or activation pattern will be found in some percentage of people with a particular mental condition or state. These group averages will have error rates when they are applied to individuals. Those rates need to be known and presented.

The issue of standards and controls also is important in neuroscience. This area is new and has not undergone the kind of standardization seen, for example, in forensic DNA analysis. When trying to apply neuroscience findings to an individual, evidence from the individual needs to have been acquired in the same way, with the same standards and conditions, as the evidence from which the scientific conclusions were drawn—or, at least, in ways that can be made readily comparable. For example, there is no one standard in fMRI research for what statistical threshold should be used for a change in the BOLD signal to “count” as a meaningful activation or deactivation. An individual’s scan would need to be analyzed under the same definition for activation as was used in the research supporting the method, and the effects of the chosen threshold on finding a false positive or false negative must be considered.



The final consideration, general acceptance in the scientific community, also needs to be applied carefully. There is clearly general acceptance in the scientific community that fMRI can provide scientifically and sometimes clinically useful information about the workings of human brains, but that does not mean there is general acceptance of any particular fMRI application. Similarly, there may be general acceptance that fMRI can provide some general information about the physical correlates of a particular mental state, but without general acceptance that it can do so reliably in an individual case.

3. Rule 403

Rule 702 is not the only test that neuroscience evidence will need to pass to be admitted in court. Even evidence admissible under that rule must still escape the exclusion provided by Rule 403:

Although relevant, evidence may be excluded if its probative value is substantially outweighed by the danger of unfair prejudice, confusion of the issues, or misleading the jury, or by considerations of undue delay, waste of time, or needless presentation of cumulative evidence.

As discussed in detail in a recent article,20 Rule 403 may be particularly important with some attempted applications of neuroscience evidence because of the balance it requires between the value of evidence to the decisionmaker and its costs.

The probative value of such evidence may often be questioned. Neuroscience evidence will rarely, if ever, be definitive. It is likely to have a range of uncertainties, from the effectiveness of the method in general, to questions of its proper application in this case, to whether any given individual’s reactions are the same as those previously tested.

The other side of Rule 403, however, is even more troublesome. The time necessary to introduce such evidence, and to educate the jury (and judge) about it, will usually be extensive. The possibilities for confusion are likely to be great. And there is at least some evidence that jurors (or, to be precise, “mock jurors”) are particularly likely to overestimate the power of neuroscience evidence.21

20. Teneille Brown & Emily Murphy, Through a Scanner Darkly: Functional Neuroimaging as Evidence of a Criminal Defendant’s Past Mental States, 62 Stan. L. Rev. 1119 (2010).

21. See Deena Skolnick Weisberg et al., The Seductive Allure of Neuroscience Explanations, 20 J. Cog. Neurosci. 470 (2008); David P. McCabe & Alan D. Castel, Seeing Is Believing: The Effect of Brain Images on Judgments of Scientific Reasoning, 107 Cognition 343 (2008). These articles are discussed in Brown & Murphy, supra note 20, at 1199–1202. But see N.J. Schweitzer et al., Neuroimages as Evidence in a Mens Rea Defense: No Impact, Psychol. Pub. Pol’y L. (in press) (reporting experimental results that seem to indicate that showing neuroimages to mock jurors does not affect their decisions).

A high-tech “picture” of a living brain, complete with brain regions shown in bright orange and deep purple (colors not seen in an actual brain), may have an unjustified appeal to a jury. In each case, judges will need to weigh possibilities of confusion or prejudice, along with the near certainty of lengthy testimony, against the claimed probative value of the evidence.

4. Other potentially relevant evidentiary issues

Neuroscience evidence will, of course, be subject in individual cases to all evidentiary rules, whether from the Federal Rules of Evidence or otherwise, and could be affected by many of them. Four examples follow in which the application of such rules to this kind of evidence may raise interesting issues; there are undoubtedly many others.

First, in June 2009 the U.S. Supreme Court decided Melendez-Diaz v. Massachusetts,22 where the five-justice majority held that the Confrontation Clause required the prosecution to present the testimony at trial of state laboratory analysts who had identified a substance as cocaine. This would seem to apply to any use by the prosecution in criminal cases of neuroscience evidence about a scanned defendant or witness, although it is not clear who would have to testify. Would testimony be required from the person who observed the procedure, the person who analyzed the results of the procedure, or both? If the results were analyzed by a computerized algorithm, would the individual (or group) that wrote that algorithm have to testify? These questions, and others, are not unique to neuroscience evidence, of course, but will have to be sorted out generally after Melendez-Diaz.

Second, the Federal Rules of Evidence put special limits on the admissibility of evidence of character and, in some cases, of predisposition.23 In some cases, neuroscience evidence offered for the purpose of establishing a regular behavior of the person might be viewed as evidence of character24 or predisposition (or

22. 129 S. Ct. 2527 (2009).

23. Fed. R. Evid. 404, 405, 412–415, 608.

24. Evidence about lie detection has sometimes been viewed as “character evidence,” introduced to bolster a witness’s credibility. The Canadian Supreme Court has held that polygraph evidence is inadmissible in part because it violates the rule limiting character evidence.

“What is the consequence of this rule in relation to polygraph evidence? Where such evidence is sought to be introduced, it is the operator who would be called as the witness, and it is clear, of course, that the purpose of his evidence would be to bolster the credibility of the accused and, in effect, to show him to be of good character by inviting the inference that he did not lie during the test. In other words, it is evidence not of general reputation but of a specific incident, and its admission would be precluded under the rule. It would follow, then, that the introduction of evidence of the polygraph test would violate the character evidence rule.” R. v. Béland, 60 C.R. (3d) 1, ¶¶ 71–72 (1987).

The Canadian court also held that polygraph evidence violated another rule concerning character evidence, the rule against “oath-helping.”

“From the foregoing comments, it will be seen that the rule against oath-helping, that is, adducing evidence solely for the purpose of bolstering a witness’s credibility, is well grounded in authority. It

lack of predisposition). Whether such evidence could be admitted might hinge on whether it was offered in a civil case or a criminal case, and, if in a criminal case, by the prosecution or the defendant.

Third, Federal Rule of Evidence 406 allows the admission of evidence about a habit or routine practice to prove that the relevant person’s actions conformed to that habit or routine practice. It is conceivable that neuroscience evidence might be used to describe “habits of mind” and thus be offered under this rule.

The fourth example applies to neuroscience-based lie detection. Although New Mexico is the only U.S. jurisdiction that generally allows the introduction of polygraph evidence,25 several jurisdictions allow polygraph evidence in two specific situations. First, polygraph evidence is sometimes allowed when both parties have stipulated to its admission in advance of the performance of the test. (This does lead one to wonder whether a court would allow evidence from a psychic or from a fortune-telling toy, like the Magic Eight Ball, if both parties stipulated to it.) Second, polygraph evidence is sometimes allowed to impeach or to corroborate a witness’s testimony.26 If a neuroscience-based lie detection technique were found to be as reliable as the polygraph, presumably those jurisdictions would have to consider whether to extend these exceptions to such neuroscience evidence.

B. Constitutional and Other Substantive Rules

In many contexts, courts will be asked to admit neuroscience evidence or to order, allow, or punish its creation. Such actions may implicate a surprisingly large number of constitutional rights, as well as other substantive legal provisions. Most of these would be rights against the creation or use of neuroscience evidence, although some would be possible rights to its use. And one constitutional provision, the Fourth Amendment, might cut both ways. Again, this section will not seek to discuss all possible such claims or to resolve any of them, but only to raise some of the most interesting issues.

1. Possible rights against neuroscience evidence

a. The Fifth Amendment privilege against self-incrimination

Could a person be forced to “give evidence” through a neuroscience technology, or would that violate his or her privilege against self-incrimination? This has

is apparent that, since the evidence of the polygraph examination has no other purpose, its admission would offend the well-established rule.” R. v. Béland, 60 C.R. (3d) 1, ¶ 67(1) (The court also ruled against polygraph evidence as violating the rule against prior consistent statements and, because the jury needs no help in assessing credibility, the rule on the use of expert witnesses).

25. Lee v. Martinez, 96 P.3d 291 (N.M. 2004).

26. See, e.g., United States v. Piccinonna, 885 F.2d 1529 (11th Cir. 1989) (en banc). See also United States v. Allard, 464 F.3d. 529 (5th Cir. 2006); Thornburg v. Mullin, 422 F.3d 1113 (10th Cir. 2005).

already begun to be discussed by legal scholars in the context of lie detection.27 One issue is whether the neuroscience evidence is “testimonial evidence.” If it were held to be “testimonial” it would be subject to the privilege, but if it were nontestimonial, it would, under current law, not be. Examples of nontestimonial evidence for purposes of the privilege against self-incrimination include incriminating information from a person’s private diaries, a blood alcohol test, or medical X rays. An fMRI scan is nothing more than a computer record of radio waves emitted by molecules in the brain. It does not seem like “testimony.” On the other hand, fMRI-based lie detection currently involves asking the subject questions to which he or she gives answers, either orally, by pressing buttons, or by some other form of communication. Perhaps those answers would make the resulting evidence “testimonial.”

It is possible, however, that answers may not be necessary. Two EEG-based systems claim to be able to determine whether a person either recognizes or has “experiential knowledge” of an event (a memory derived from experience as opposed to being told about it).28 Very substantial scientific questions exist about each system, but, assuming they were to be admitted as reliable, they would raise this question more starkly because they do not require the subject of the procedure to communicate. The subject is shown photographs of relevant locations or read a description of the events while hooked up to an EEG. The brain waves,

27. The law review literature, by both faculty and students, discussing the Fifth Amendment and neuroscience-based lie detection is already becoming voluminous. See, e.g., Nita Farahany, Incriminating Thoughts, 64 Stan. L. Rev., Paper No. 11-17, available at SSRN: http://ssrn.com/abstract=1783101 (2011); Dov Fox, The Right to Silence as Protecting Mental Control, 42 Akron L. Rev. 763 (2009); Matthew Baptiste Holloway, One Image, One Thousand Incriminating Words: Images of Brain Activity and the Privilege Against Self-Incrimination, 27 Temp. J. Sci. Tech. & Envtl. L. 141 (2008); William Federspiel, Neuroscience Evidence, Legal Culture, and Criminal Procedure, 16 Wm. & Mary Bill Rts. J. 865 (2008); Sarah E. Stoller & Paul Root Wolpe, Emerging Neurotechnologies for Lie Detection and the Fifth Amendment, 33 Am. J.L. & Med. 359 (2007); Michael S. Pardo, Neuroscience Evidence, Legal Culture, and Criminal Procedure, 33 Am. J. Crim. L. 301 (2006); and Erich Taylor, A New Wave of Police Interrogation? “Brain Fingerprinting,” the Constitutional Privilege Against Self-Incrimination, and Hearsay Jurisprudence, U. Ill. J.L. Tech. & Pol’y 287 (2006).

28. The first system is the so-called Brain Fingerprinting, developed by Dr. Larry Farwell. This method was introduced successfully in evidence at the trial court level in a postconviction relief case in Iowa; the use of the method in that case is discussed briefly in the Iowa Supreme Court’s decision on appeal, Harrington v. Iowa, 659 N.W.2d 509, 516 n.6 (2003). (The Court expressed no view on whether that evidence was properly admitted. See id. at 516.) The method is discussed on the Web site of Farwell’s company, Brain Fingerprinting Laboratories, www.brainwavescience.com. It is criticized from a scientific perspective in J. Peter Rosenfeld, “Brain Fingerprinting”: A Critical Analysis, 4 Sci. Rev. Mental Health Practice 20 (2005). See also the brief discussion in Henry T. Greely & Judy Illes, Neuroscience-Based Lie Detection: The Urgent Need for Regulation, 33 Am. J.L. & Med. 377, 387–88 (2007).

The second system is called Brain Electrical Oscillation Signature (BEOS) and was developed in India, where it has been introduced in trials and has been important in securing criminal convictions. See Anand Giridharadas, India’s Novel Use of Brain Scans in Courts Is Debated, N.Y. Times, Sept. 15, 2008, at A10.

it is asserted, demonstrate whether the subject recognizes the photographs or has “experiential knowledge” of the events—no volitional communication is necessary. It might be harder to classify these EEG records as “testimonial.”

b. Other possible general constitutional protections against compulsory neuroscience procedures

Even if the privilege against self-incrimination applies to neuroscience methods of obtaining evidence, it applies only where someone invokes it. The courts and other government bodies force people to answer questions all the time, often under penalty of criminal or civil sanctions or of the court’s contempt power. For example, a plaintiff in a civil case alleging damage to his health can be compelled to undergo medical testing at a defendant’s appropriate request. In that case, the plaintiff can refuse, but only at the risk of seeing his case dismissed. Presumably, a party could similarly demand that an opposing party, or a witness, undergo a neuroimaging examination, looking for either structural or functional aspects of the person’s brain relevant to the case. If the privilege against self-incrimination is not available, or is available but not attractive, could the person asked have any other protection?

The answer is not clear. One might try to argue, along the lines of Rochin v. California,29 that such a procedure violates the Due Process Clause of the Fifth and Fourteenth Amendments because it intrudes on the person in a manner that “shocks the conscience.” Alternatively, one might argue that a “freedom of the brain” is a part of the fundamental liberty or the right to privacy protected by the Due Process Clause.30 Or one might try to use language in some U.S. Supreme Court First Amendment cases that talk about “freedom of thought” to argue that the First Amendment’s freedoms of religion, speech, and the press encompass a broader protection of the contents of the mind. The Court never seems to have decided a case on that point. The closest case might be Stanley v. Georgia,31 where the Court held that Georgia could not criminalize a man’s private possession of pornography for his own use. None of these arguments is, in itself, strongly supported, but each draws some appeal from a belief that we should be able to keep our thoughts, and, by extension, the workings of our brain, to ourselves.

c. Other substantive rights against neuroscience evidence

At least one form of possible neuroscience evidence may already be covered by statutory provisions limiting its creation and use—lie detection. In 1988, Congress

29. 342 U.S. 165 (1952).

30. See Paul Root Wolpe, Is My Mind Mine? Neuroethics and Brain Imaging, in The Penn Center Guide to Bioethics (Arthur L. Caplan et al. eds., 2009).

31. 394 U.S. 557 (1969). In the context of finding that the First Amendment forbids criminalizing mere possession of pornography, in the home, for an adult’s private use, the Court wrote “Our whole constitutional heritage rebels at the thought of giving government the power to control men’s minds.” The leap from that language, or that holding, to some kind of mental privacy, is not small.

passed the federal Employee Polygraph Protection Act (EPPA).32 Under this Act, almost all employers are forbidden to “directly or indirectly,…require, request, suggest, or cause any employee or prospective employee to take or submit to any lie detector test” or to “use, accept, refer to, or inquire concerning the results of any lie detector test of any employee or prospective employee.”33 The Act defines a “lie detector” broadly, as “a polygraph, deceptograph, voice stress analyzer, psychological stress evaluator, or any other similar device (whether mechanical or electrical) that is used, or the results of which are used, for the purpose of rendering a diagnostic opinion regarding the honesty or dishonesty of an individual.”34 The Department of Labor can punish violators with civil fines, and those injured have a private right of action for damages.35 The Act does provide narrow exceptions for polygraph tests in some circumstances.36

In addition to the federal statute, many states have passed their own versions of the EPPA, either before or after the federal act. The laws passed after EPPA generally apply similar prohibitions to some employers not covered by the federal act (such as state and local governments), but with their own idiosyncratic set of exceptions. Many states have also passed laws regulating lie detection services. Most of these seem clearly aimed at polygraphy, but, in some states, the language used is quite broad and may well encompass neuroscience-based lie detection.37

States also may provide protection against neuroscience evidence that goes beyond lie detection and could prevent involuntary neuroscience procedures. Some states have constitutional or statutory rights of privacy that could be read to include a broad freedom for mental privacy. And in some states, such as California, such privacy rights apply not just to state action but to private actors as well.38 Most employment cases would be covered by EPPA and its state equivalents, but such state privacy protections might be used to help decide whether courts could

32. Employee Polygraph Protection Act of 1988, Pub. L. No. 100-347, § 2, 102 Stat. 646 (codified as 29 U.S.C. §§ 2001–2009 (2006)). See generally the discussion of federal and state laws in Greely & Illes, supra note 28, at 405–10, 421–31.

33. 29 U.S.C. § 2002 (1)–(2) (2006) (The section also prohibits employers from taking action against employees because of their refusal to take a test, because of the results of such a test, or for asserting their rights under the Act); and id. § 2001 (3)–(4) (2006).

34. Id. § 2001(3) (2006).

35. Id. § 2005 (2006).

36. Id. § 2001(6) (2006).

37. See generally Greely & Illes, supra note 28, at 409–10, 421–31 (for both state laws on employee protection and state laws more broadly regulating polygraphy).

38. “All people are by nature free and independent and have inalienable rights. Among these are enjoying and defending life and liberty, acquiring, possessing, and protecting property, and pursuing and obtaining safety, happiness, and privacy.” (emphasis added). Calif. Const. art. I, § 1. The words, “and privacy” were added by constitutional amendment in 1972. The California Supreme Court has applied these privacy protections in suits against private actors: “In summary, the Privacy Initiative in article I, section 1 of the California Constitution creates a right of action against private as well as government entities.” Hill v. Nat’l Collegiate Athletic Ass’n, 865 P.2d 633, 644 (Cal. 1994).

compel neuroimaging scans or whether they could be required in nonemployment relationships, such as school/student or parent/child.

d. Neuroscience evidence and the Sixth and Seventh Amendment rights to trial by jury

One might also argue that some kinds of neuroscience evidence could be excluded from evidence as a result of the federal constitutional rights to trial by jury in criminal and most civil cases. In United States v. Scheffer,39 the Supreme Court upheld an express ban in the Military Rules of Evidence on the admission of any polygraph evidence against a criminal defendant’s claimed Sixth Amendment right to introduce the evidence in his defense. Justice Thomas wrote the opinion of the Court holding that the ban was justified by the questionable reliability of the polygraph. Justice Thomas continued, however, in a portion of the opinion joined only by Chief Justice Rehnquist and Justices Scalia and Souter, to hold that the Rule could also be justified by an interest in the role of the jury:

It is equally clear that Rule 707 serves a second legitimate governmental interest: Preserving the jury’s core function of making credibility determinations in criminal trials. A fundamental premise of our criminal trial system is that “the jury is the lie detector.” United States v. Barnard, 490 F.2d 907, 912 (CA9 1973) (emphasis added), cert. denied, 416 U.S. 959, 40 L. Ed. 2d 310, 94 S. Ct. 1976 (1974). Determining the weight and credibility of witness testimony, therefore, has long been held to be the “part of every case [that] belongs to the jury, who are presumed to be fitted for it by their natural intelligence and their practical knowledge of men and the ways of men.” Aetna Life Ins. Co. v. Ward, 140 U.S. 76, 88, 35 L. Ed. 371, 11 S. Ct. 720 (1891).40

The other four justices in the majority, and Justice Stevens in dissent, disagreed that the role of the jury justified this rule, but the question remains open. Justice Thomas’s opinion did not argue that exclusion was required as part of the rights to jury trials in criminal and civil cases under the Sixth and Seventh Amendments, respectively, but one might try to extend his statements of the importance of the jury as “the lie detector” to such an argument.41

39. 523 U.S. 303 (1998).

40. Id. at 312–13.

41. The Federal Rules of Criminal Procedure effectively give the prosecution a right to a jury trial, by allowing a criminal defendant to waive such a trial only with the permission of both the prosecution and the court. Fed. R. Crim. P. 23(a). Many states allow a criminal defendant to waive a jury trial unilaterally, thus depriving the prosecution of an effective “right” to a jury.

2. Possible rights to the creation or use of neuroscience evidence

a. The Eighth Amendment right to present evidence of mitigating circumstances in capital cases

In one of the many ways in which “death is different,” the U.S. Supreme Court held in Lockett v. Ohio42 that the Eighth Amendment guarantees a convicted defendant in a capital case a sentencing hearing in which the sentencing authority must be able to consider any mitigating factors. In Rupe v. Wood,43 the Ninth Circuit, in an appeal from the defendant’s successful habeas corpus proceeding, applied that holding to find that a capital defendant had a constitutional right to have polygraph evidence admitted as mitigating evidence in his sentencing hearing. The court agreed that totally unreliable evidence, such as astrology, would not be admissible, but concluded that the district court had properly ruled that polygraph evidence was not that unreliable. (The Washington Supreme Court had previously decided that polygraph evidence should be admitted in the penalty phase of capital cases under some circumstances.44) Thus, capital defendants may argue that they have the right to present neuroscience evidence as mitigation even if it would not be admissible during the guilt phase.

b. The Sixth Amendment right to present a defense

The Scheffer case arose in the context of another right guaranteed by the Sixth Amendment, the right of a criminal defendant to present a defense. It seems likely that neuroscience evidence will first be offered by parties who have been its voluntary subjects and who will argue that it strengthens their cases. In fact, the main use of neuroimaging in the courts so far, at least in criminal cases, has been by defendants seeking to demonstrate through the scans some element of a defense or mitigation. If jurisdictions were to exclude such evidence categorically, they might face a similar Sixth Amendment challenge.

The Supreme Court has held that some prohibitions on evidence in criminal cases violate the right to present a defense. Thus, in Rock v. Arkansas,45 the Court struck down a per se rule in Arkansas against the admission of hypnotically refreshed testimony, holding that it was “arbitrary or disproportionate to the purposes [it is] designed to serve.” The Scheffer case probably provides the model for how arguments about exclusions of neuroscience evidence would play out. Eight of the Justices in Scheffer agreed that the reliability of polygraphy was sufficiently

42. 438 U.S. 586 (1978).

43. 93 F.3d 1434, 1439–41 (9th Cir. 1996). But see United States v. Fulks, 454 F.3d 410, 434 (4th Cir. 2006). See generally Christopher Domin, Mitigating Evidence? The Admissibility of Polygraph Results in the Penalty Phase of Capital Trials, 41 U.C. Davis L. Rev. 1461 (2010), which argues that the Supreme Court should resolve the resulting circuit split by adopting the Ninth Circuit’s position.

44. State v. Bartholomew, 101 Wash. 2d 631, 636, 683 P.2d 1079 (1984).

45. 483 U.S. 44, 56, 97 L. Ed. 2d 37, 107 S. Ct. 2704 (1987).

questionable as to justify the per se ban on its use. Justice Stevens, however, dissented, finding polygraphy sufficiently reliable to invalidate its per se exclusion.

3. The Fourth Amendment

The Fourth Amendment raises some particularly interesting questions. It provides, of course, that,

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

On the one hand, an involuntary neuroscience examination would seem to be a search or seizure, and thus “unreasonable” neuroscience examinations are prohibited. To that extent, the Fourth Amendment would appear to be a protection against compulsory neuroscience testing.

On the other hand, if, say, an fMRI scan or an EEG were viewed as a “search or seizure” for purposes of the Fourth Amendment, presumably courts could issue a warrant for such a search or seizure, given probable cause and the relevant procedural requirements. The use of such a warrant might (or might not) be limited by the privilege against self-incrimination or by some constitutional privacy right, but, if such rights did not apply, would such warrants allow our brains to be searched? This is, in a way, the ultimate result of the revolution in neuroscience, which identifies our incorporeal “mind” with our physical “brain” and allows us to begin to draw inferences from the brain to the mind. If the brain is a physical thing or a place, it could be searchable, even if the goal in searching it is to find out something about the mind, something that, as a practical matter, had never itself been directly searchable.

VI. Examples of the Possible Uses of Neuroscience in the Courts

Neuroscience may end up in court wherever someone’s mental state or condition is relevant, which means it may be relevant to a vast array of cases. There are very few cases, civil or criminal, where the mental states of the parties are not at least theoretically relevant on issues of competency, intent, motive, recklessness, negligence, good or bad faith, or others. And even if the parties’ own mental states were not relevant, the mental states of witnesses almost always will be potentially relevant—are they telling the truth? Are they biased against one party or another? The mental states of jurors and even of judges occasionally may be called into question.

There are some important limitations on the use of neuroscience in the courtroom. First, it is unlikely to be used that often, particularly if it remains expensive.

With the possible exception of lie detection or bias detection, most cases will not present a practical use for it. The garden-variety breach of contract or assault and battery case is not likely to provide a plausible context for convincing neuroscience evidence, especially if there is no evidence that the actor or the actions were odd or bizarre. And many cases will not provide, or justify, the resources necessary for a scan. Those costs could come down, but it seems unlikely that such evidence would commonly be admitted without expert testimony, and the costs of that seem likely to remain high.

Second, neuroscience evidence usually has a “time machine” problem. Neuroscience seems unlikely ever to be able to discern a person’s state of mind in the past. Unless the legally relevant action took place inside an MRI scanner or other neuroscience tool, the best it may be able to do is to say that, based on your current mental condition or state, as shown by the current structure or functioning of your brain, you are more or less likely than average to have had a particular mental state or condition at the time of the relevant event. If the time of the relevant event is the time of trial (or shortly before trial)—as would be the case with the truthfulness of testimony, the existence of bias, or the existence of a particular memory—that would not be a problem, but otherwise it would be.

Nonetheless, neuroscience evidence seems likely to be offered into evidence for several issues, and in many of them, it already has been offered and even accepted. In some cases it will be, and has been, offered as evidence of “legislative facts,” of realities relevant to a broader legal issue than the mental state of any particular party or witness. Thus, amicus briefs in two Supreme Court cases involving the punishment of juveniles—one about capital punishment and one about life imprisonment without possibility of parole—and to some extent the Court itself, have discussed neuroscience findings about adolescent brains.46 Three of

46. In Roper v. Simmons, 543 U.S. 551 (2005), the Court held that the death penalty could not constitutionally be imposed for crimes committed while a defendant was a juvenile. Two amicus briefs argued that behavioral and neuroscience evidence supported this position. See Brief of Amicus Curiae American Medical Association et al., Roper v. Simmons; and Brief of Amicus Curiae American Psychological Association and the Missouri Psychological Association Supporting Respondent (No. 03-633).

In Roper, the Court itself did not substantially rely on the neuroscientific evidence and does not cite those amicus briefs. The Court’s opinion noted the scientific evidence only in passing as one part of three relevant differences between adults and juveniles: “First, as any parent knows and as the scientific and sociological studies respondent and his amici cite tend to confirm, ‘[a] lack of maturity and an underdeveloped sense of responsibility are found in youth more often than in adults and are more understandable among the young. These qualities often result in impetuous and ill-considered actions and decisions.’ [citation omitted]” 543 U.S. at 569. Justice Scalia, however, did take the majority to task for even this limited invocation of science and sociology. 543 U.S. at 616–18.

In Graham v. Florida, 2010 U.S. Lexis 3881, 130 S. Ct. 2011, 176 L. Ed. 2d 825 (2010), the Court held that defendants could not be sentenced to life without the possibility of parole for nonhomicide crimes committed while they were juveniles. Two amicus briefs similar to those discussed in Roper were filed. See Brief of Amicus Curiae American Medical Association (No. 08-7412) and Brief Amicus Curiae American Academy of Child & Adolescent Psychiatry (No. 08-7621) Supporting Neither

the handful of published cases in which fMRI evidence was offered in court concerned challenges to state laws requiring warning labels on violent videogames.47 The states sought, without success, to use fMRI studies of the effects of violent videogames on the brains of children playing the games to support their statutes.48

These “wholesale” uses of neuroscience may (or may not) end up affecting the law, but the courts would be more affected if various “retail” uses of neuroscience become common, where a party or a witness is subjected to neuroscience procedures to determine something relevant only to that particular case. An incomplete list of some of the most plausible categories for such retail uses includes the following:

  • Issues of responsibility, certainly criminal and likely also civil;
  • Predicting future behavior for sentencing;
  • Mitigating (or potentially aggravating) factors on sentencing;

Party; and Brief of Amicus Curiae American Psychological Association et al. Supporting Petitioners (Nos. 08-7412, 08-7621). The Court did refer more directly to the scientific findings in Graham, directly citing the amicus briefs:

“No recent data provide reason to reconsider the Court’s observations in Roper about the nature of juveniles. As petitioner’s amici point out, developments in psychology and brain science continue to show fundamental differences between juvenile and adult minds. For example, parts of the brain involved in behavior control continue to mature through late adolescence.” See Brief for American Medical Association et al. as Amici Curiae 16–24; Brief for American Psychological Association et al. as Amici Curiae 22–27.

Justice Thomas, in a dissent joined by Justice Scalia, reviewed some of the evidence from these amicus briefs:

“In holding that the Constitution imposes such a ban, the Court cites ‘developments in psychology and brain science’ indicating that juvenile minds ‘continue to mature through late adolescence,’ ante, at 17 (citing Brief for American Medical Association et al. as Amici Curiae 16–24; Brief for American Psychological Association et al. as Amici Curiae 22–27 (hereinafter APA Brief)), and that juveniles are ‘more likely [than adults] to engage in risky behaviors,’” id. at 7. But even if such generalizations from social science were relevant to constitutional rulemaking, the Court misstates the data on which it relies.

47. Entm’t Software Ass’n v. Hatch, 443 F. Supp. 2d 1065 (D. Minn. 2006); Entm’t Software Ass’n v. Blagojevich, 404 F. Supp. 2d 1051 (N.D. Ill. 2005); Entm’t Software Ass’n v. Granholm, 404 F. Supp. 2d 978 (E.D. Mich. 2005). Each of the three courts held that the state statutes violated the First Amendment.

48. The courts, sitting in equity and so without juries, all considered the scientific evidence and concluded that it was insufficient to sustain the statutes’ constitutionality. In Blagojevich the court heard testimony for the state directly from Dr. Kronenberger, the author of some of the fMRI-based articles on which the state relied, as well as from Dr. Howard Nusbaum, for the plaintiffs, who attacked Dr. Kronenberger’s study. After a substantial discussion of the scientific arguments, the district court judge, Judge Matthew Kennelly, found that “Dr. Kronenberger’s studies cannot support the weight he attempts to put on them via his conclusions,” and did not provide a basis for the statute. Blagojevich, 404 F. Supp. 2d at 1063–67. Judge Kennelly’s discussion of this point may be a good example of the kind of analysis neuroscience evidence may force upon judges.

  • Competency, now or in the past, to take care of one’s affairs, to enter into agreements or make wills, to stand trial, to represent oneself, and to be executed;
  • Deception in current statements;
  • Existence or nonexistence of a memory of some event and, possibly, some information about the status of that memory (true, false; new, old, etc.);
  • Presence of the subjective sensation of pain;
  • Presence of the subjective sensation of remorse; and
  • Presence of bias against a party.

Many, but not all, of these issues have begun to be discussed in the literature. A few of them, such as criminal responsibility, mitigation, memory detection, and lie detection, are appearing in courtrooms; others, such as pain detection, have reached the edge of trial. This chapter does not discuss all of these topics and does not discuss any of them in great depth, but it will describe three of them—criminal responsibility, detection of pain, and lie detection—in order to provide a flavor of the possibilities.

A. Criminal Responsibility

Neuroscience may raise some deep questions about criminal responsibility. Suppose we had excellent scientific evidence that a defendant could not help but commit the criminal acts because of a specific brain abnormality.49 Should that affect the defendant’s guilt and, if so, how? Should it affect his sentence or other subsequent treatment? The moral questions may prove daunting. Currently the law is not very interested in such deep questions of free will, but that may change.

Already, though, criminal law is concerned with the mental state of the defendant in many more specific contexts. A conviction generally requires both an actus reus and a mens rea—a “guilty act” and a “guilty mind.” An unconscious person cannot “act,” but even a conscious act is often not enough. Specific crimes often require specific intents, such as acting with a particular purpose or in a knowing or reckless fashion. Some crimes require even more defined mental states, such as a requirement for premeditation in some murder statutes. And almost all crimes can be excused by legal insanity. In these and other ways the mental state of the defendant may be relevant to a criminal case.

Neuroscience may provide evidence in some cases to support a defendant’s claim of nonresponsibility. For example, a defendant who claims to have been insane at the time of the crime might try to support his or her claim by alleging that he or she experiences visual and auditory hallucinations. Neuroimaging may be able

49. See Henry T. Greely, Neuroscience and Criminal Responsibility: Proving “Can’t Help Himself” as a Narrow Bar to Criminal Liability, in Law and Neuroscience, Current Legal Issues 13 (Michael Freeman ed., 2011).

to provide some evidence about whether the defendant is, in fact, hallucinating, at least at the time when he or she is in the scanner. Such imaging might show that the defendant had a stroke or tumor in a particular part of the brain, which then could be used to argue in some way against the defendant’s criminal responsibility.50

Neuroimaging has been used more broadly in some criminal cases. For example, as noted above, in the trial of John Hinckley for the attempted assassination of President Reagan, the defense used CAT scans of Hinckley’s brain to support the argument, based largely on his bizarre behavior, that he suffered from schizophrenia. The scientific basis for that conclusion, offered early in the history of brain CAT scans, was questionable at the time and has become even weaker since, but Hinckley was found not guilty by reason of insanity. More recently, in November 2009, testimony about an fMRI scan was introduced in the penalty phase of a capital case as mitigating evidence that the defendant suffered from psychopathy. The defendant was sentenced to death, but after longer jury deliberations than defense counsel expected.51 (This appears to have been the first time fMRI results were introduced in a criminal case.52)

Neuroscience evidence also may be relevant in wider arguments about criminal justice. Evidence about the development of adolescent brains has been referred to in appellate cases concerning the punishments appropriate for people who committed crimes while under age, including, as noted above, U.S. Supreme Court decisions. More broadly, some have urged that neuroscience will undercut much of the criminal justice system. The argument is that neuroscience ultimately will prove that no one—not even the sanest defendant—has free will and that this will fatally weaken the retributive aspect of criminal justice.53

50. In at least one fascinating case, a man who was convicted of sexual abuse of a child was found to have a large tumor pressing into his brain. When the tumor was removed, his criminal sexual impulses disappeared. When his impulses returned, so had his tumor. The tumor was removed a second time and, again, his impulses disappeared. J.M. Burns & R.H. Swerdlow, Right Orbitofrontal Tumor with Pedophilia Symptom and Constructional Apraxia Sign, 60 Arch. Neurology 437 (2003); Doctors Say Pedophile Lost Urge After Tumor Removed, USA Today, July 28, 2003. See Greely, Neuroscience and Criminal Responsibility, supra note 49 (offering a longer discussion of this case).

51. The defendant in this Illinois case, Brian Dugan, confessed to the murder but sought to avoid the death penalty. See Virginia Hughes, Head Case, 464 Nature 340 (2010) (providing an excellent discussion of this case).

52. Other forms of neuroimaging, particularly PET and structural MRI scans, have been more widely used in criminal cases. Dr. Ruben Gur at the University of Pennsylvania estimates that he has used neuroimaging in testimony for criminal defendants about 30 times. Id.

53. See, e.g., Robert M. Sapolsky, The Frontal Cortex and the Criminal Justice System, in Law and the Brain (Semir Zeki & Oliver Goodenough eds., 2006); Joshua Greene & Jonathan Cohen, For the Law, Neuroscience Changes Nothing and Everything, in Law and the Brain (Semir Zeki & Oliver Goodenough eds., 2006).

This argument has been forcefully attacked by Professor Stephen Morse. See, e.g., Stephen J. Morse, Determinism and the Death of Folk Psychology: Two Challenges to Responsibility from Neuroscience, 9 Minn. J.L. Sci. & Tech. 1 (2008); Stephen J. Morse, The Non-Problem of Free Will in Forensic Psychiatry

The application of neuroscience evidence to individual claims of a lack of criminal responsibility should prove challenging.54 Such claims will suffer from the time machine problem—the brain scan will almost always be from after, usually long after, the crime was committed and so cannot directly show the defendant’s brain state (and hence, by inference, his or her mental state) at or before the time of the crime. Similarly, most of the neuroscience evidence will be from associations, not from experiments. It is hard to imagine an ethical experiment that would scan people when they are, or are not, committing particular crimes, leaving only indirect experiments. Evidence that, for example, more convicted rapists than nonrapists had particular patterns of brain activation when viewing sexual material might somehow be relevant to criminal responsibility, but it also might not.

Careful neuroscience studies, either structural or functional, of the brains of criminals are rare. It seems highly unlikely that a “responsibility region” will ever be found, one that is universally activated in law-abiding people and that is deactivated in criminals (or vice versa). At most, the evidence is likely to show that people with particular brain structures or patterns of brain functioning commit crimes more frequently than people without such structures or patterns. Applying this group evidence to individual cases will be difficult, if not impossible. All of the problems of technical and statistical analysis of neuroimaging data, discussed in Section IV, apply. And it is possible that the to-be-scanned defendants will be able to implement countermeasures to “fool” the expert analyzing the scan.
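
A small, entirely hypothetical example (the percentages are invented for illustration) shows why such group differences translate so poorly into proof about an individual defendant: even a brain pattern genuinely more common among offenders may shift the odds in a particular case only modestly.

    # Hypothetical numbers only: how a group-level difference translates into
    # evidence about a single individual, expressed as a likelihood ratio.
    p_pattern_in_offenders = 0.30     # assumed prevalence of the pattern among offenders
    p_pattern_in_nonoffenders = 0.20  # assumed prevalence among everyone else

    likelihood_ratio = p_pattern_in_offenders / p_pattern_in_nonoffenders
    print(f"likelihood ratio = {likelihood_ratio:.2f}")  # 1.50

    # If the other evidence put the odds of the disputed fact at 1 to 4 (20%),
    # this finding moves them only to 1.5 to 4, or about 27%.
    prior_odds = 1 / 4
    posterior_odds = prior_odds * likelihood_ratio
    posterior_probability = posterior_odds / (1 + posterior_odds)
    print(f"posterior probability = {posterior_probability:.0%}")  # about 27%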

The use of neuroscience to undermine criminal responsibility faces another problem—identifying a specific legal argument. It is not generally a defense to a criminal charge to assert that one has a predisposition to commit a crime, or even a very high statistical likelihood, as a result of social and demographic variables, of committing a crime. It is not clear whether neuroscience would, in any more than a very few cases,55 provide evidence that was not equivalent to predisposition evidence. (And, of course, prudent defense counsel might think twice before presenting evidence to the jury that his or her client was strongly predisposed to commit crimes.)

We are at an early stage in our understanding of the brain and of the brain states related to the mental states involved in criminal responsibility. At this point, about all that can be said is that at least some criminal defense counsel, seeking to represent their clients zealously, will watch neuroscience carefully for arguments they could use to relieve their clients from criminal responsibility.

and Psychology, 25 Behav. Sci. & L. 203 (2007); Stephen J. Morse, Moral and Legal Responsibility and the New Neuroscience, in Neuroethics: Defining the Issues in Theory, Practice, and Policy (Judy Illes ed., 2006); Stephen J. Morse, Brain Overclaim Syndrome and Criminal Responsibility: A Diagnostic Note, 3 Ohio St. J. Crim. L. 397 (2005).

54. A good short discussion of these challenges can be found in Helen Mayberg, Does Neuroscience Give Us New Insights into Criminal Responsibility? in A Judge’s Guide to Neuroscience, supra note 1.

55. See Greely, Neuroscience and Criminal Responsibility, supra note 49 (arguing for a very narrow neuroscience-based defense).

B. Lie Detection

The use of neuroscience methods for lie detection probably has received more attention than any other issue raised in this chapter.56 This is due in part to the cultural interest in lie detection, dating back in its technological phase nearly 90 years to the invention of the polygraph.57 But it is also due to the fact that two commercial firms are currently offering fMRI-based lie detection services for sale in the United States: Cephos and No Lie MRI.58 As far as we know,

56. For a technology whose results have yet to be admitted in court, the legal and ethical issues raised by fMRI-based lie detection have been discussed in an amazingly long list of scholarly publications from 2004 to the present. An undoubtedly incomplete list follows: Nita Farahany, supra note 27; Brown & Murphy, supra note 20; Anthony D. Wagner, supra note 12; Frederick Schauer, Can Bad Science Be Good Evidence?: Neuroscience, Lie-Detection, and the Mistaken Conflation of Legal and Scientific Norms, 95 Cornell L. Rev. 1191 (2010); Frederick Schauer, Neuroscience, Lie-Detection, and the Law: A Contrarian View, 14 Trends Cog. Sci. 101 (2010); Emilio Bizzi et al., Using Imaging to Identify Deceit: Scientific and Ethical Questions (2009); Joelle Anne Moreno, The Future of Neuroimaged Lie Detection and the Law, 42 Akron L. Rev. 717 (2009); Julie Seaman, Black Boxes: fMRI Lie Detection and the Role of the Jury, 42 Akron L. Rev. 931 (2009); Jane Campbell Moriarty, Visions of Deception: Neuroimages and the Search for Truth, 42 Akron L. Rev. 739 (2009); Dov Fox, supra note 27; Benjamin Holley, It’s All in Your Head: Neurotechnological Lie Detection and the Fourth and Fifth Amendments, 28 Dev. Mental Health L. 1 (2009); Brian Reese, Comment: Using fMRI as a Lie Detector–Are We Lying to Ourselves? 19 Alb. L.J. Sci. & Tech. 205 (2009); Cooper Ellenberg, Student Article: Lie Detection: A Changing of the Guard in the Quest for Truth in Court? 33 Law & Psychol. Rev. 139 (2009); Julie Seaman, Black Boxes, 58 Emory L.J. 427 (2008); Matthew Baptiste Holloway, supra note 27; William Federspiel, supra note 27; Greely & Illes, supra note 28; Sarah E. Stoller & Paul R. Wolpe, supra note 27; Mark Pettit, FMRI and BF Meet FRE: Brain Imaging and the Federal Rules of Evidence, 33 Am. J.L. & Med. 319 (2007); Jonathan H. Marks, Interrogational Neuroimaging in Counterterrorism: A “No-Brainer” or a Human Rights Hazard? 33 Am. J.L. & Med. 483 (2007); Leo Kittay, Admissibility of fMRI Lie Detection: The Cultural Bias Against “Mind Reading” Devices, 72 Brook. L. Rev. 1351, 1355 (2007); Jeffrey Bellin, The Significance (if Any) for the Federal Criminal Justice System of Advances in Lie Detector Technology, Temp. L. Rev. 711 (2007); Henry T. Greely, The Social Consequences of Advances in Neuroscience: Legal Problems; Legal Perspectives, in Neuroethics: Defining the Issues in Theory, Practice and Policy 245 (Judy Illes ed., 2006); Charles N.W. Keckler, Cross-Examining the Brain: A Legal Analysis of Neural Imaging for Credibility Impeachment, 57 Hastings L.J. 509 (2006); Archie Alexander, Functional Magnetic Resonance Imaging Lie Detection: Is a “Brainstorm” Heading Toward the “Gatekeeper”? 7 Hous. J. Health L. & Pol’y (2006); Michael S. Pardo, supra note 27; Erich Taylor, supra note 27; Paul R. Wolpe et al., Emerging Neurotechnologies for Lie-Detection: Promises and Perils, 5 Am. J. Bioethics 38, 42 (2005); Henry T. Greely, Premarket Approval Regulation for Lie Detection: An Idea Whose Time May Be Coming, 5 Am. J. Bioethics 50–52 (2005); Sean Kevin Thompson, Note: The Legality of the Use of Psychiatric Neuroimaging in Intelligence Interrogation, 90 Cornell L. Rev. 1601 (2005); Henry T. Greely, Prediction, Litigation, Privacy, and Property: Some Possible Legal and Social Implications of Advances in Neuroscience, in Neuroscience and the Law: Brain, Mind, and the Scales of Justice 114–56 (Brent Garland ed., 2004); and Judy Illes, A Fish Story? Brain Maps, Lie Detection, and Personhood, 6 Cerebrum 73 (2004).

57. An interesting history of the polygraph can be found in Ken Alder, The Lie Detectors: The History of an American Obsession (2007). Perhaps the best overall discussion of the polygraph, including some discussion of its history, is found in the National Research Council report, supra note 14, commissioned in the wake of the Wen Ho Lee case, on the use of the technology for screening.

58. The Web sites for the two companies are at Cephos, www.cephoscorp.com (last visited July 3, 2010); and No Lie MRI, http://noliemri.com (last visited July 3, 2010).

evidence from fMRI-based lie detection has not been admitted in any court, but it was offered—and rejected—in two cases, United States v. Semrau59 and Wilson v. Corestaff Services, L.P.,60 in May 2010.61 This section will begin by analyzing the issues raised for courts by this technology and then will discuss these two cases, before ending with a quick look at possible uses of this kind of technology outside the courtroom.

1. Issues involved in the use of fMRI-based lie detection in litigation

Published research on fMRI and detecting deception dates back to about 2001.62 As noted above, to date between 20 and 30 peer-reviewed articles from about 15 laboratories have appeared claiming to find statistically significant correlations between patterns of brain activation and deception. Only a handful of the published studies have looked at the accuracy of determining deception in individual subjects as opposed to group averages. Those studies generally claim accuracy rates of between about 75% and 90%. No Lie MRI has licensed the methods used by one laboratory, that of Dr. Daniel Langleben at the University of Pennsylvania; Cephos has licensed the method used by another laboratory, that of Dr. Frank A. Kozel, first at the Medical University of South Carolina and then at the University of Texas Southwestern Medical Center. (The method used by a British researcher, Dr. Sean Spence, has been used on a British reality television show.)

All of these studies rely on research subjects, typically but not always college students, who are recruited for a study of deception. They are instructed to answer some questions truthfully in the scanner and to answer other questions inaccurately.63 In the Langleben studies, for example, right-handed, healthy, male

59. No. 07-10074 M1/P, Report and Recommendation (W.D. Tenn. May 31, 2010).

60. 2010 NY slip op. 20176, 1 (N.Y. Super. Ct. 2010); 900 N.Y.S.2d 639; 2010 N.Y. Misc. LEXIS 1044 (2010).

61. In early 2009, a motion to admit fMRI-based lie detection evidence, provided by No Lie MRI, was made, and then withdrawn, in a child custody case in San Diego. The case is discussed in a prematurely entitled article, Alexis Madrigal, MRI Lie Detection to Get First Day in Court, WIRED SCI. (Mar. 16, 2009), available at http://blog.wired.com/wiredscience/2009/03/noliemri.html (last visited July 3, 2010). A somewhat similar method of using EEG to look for signs of “recognition” in the brain was admitted into one state court hearing for postconviction relief at the trial court level in Iowa in 2001, and both it and another EEG-based method have been used in India. As far as we know, evidence from the use of EEG for lie detection has not been admitted in any other U.S. cases. See supra note 28.

62. The most recent reviews of the scientific literature on this subject are Anthony D. Wagner, supra note 12; and S.E. Christ et al., The Contributions of Prefrontal Cortex and Executive Control to Deception: Evidence from Activation Likelihood Estimate Meta-Analyses, 19 Cerebral Cortex 2557 (2009). See also Greely & Illes, supra note 28 (for discussion of the articles through early 2007). The following discussion is based largely on those sources.

63. At least one fMRI study has attempted to investigate self-motivated lies, told by subjects who were not instructed to lie, but who chose to lie for personal gain. Joshua D. Greene & Joseph M. Paxton, Patterns of Neural Activity Associated with Honest and Dishonest Moral Decisions, 106 Proc. Nat’l Acad. Sci. 12,506 (2009). The experiment was designed to make it easy for subjects to realize they

University of Pennsylvania undergraduates were shown images of playing cards while in the scanner and asked to indicate whether they saw a particular card. They were instructed to answer truthfully except when they saw one particular card. Some of Kozel’s studies used a different experimental paradigm, in which the subjects were put in a room and told to take either a watch or a ring. When asked in the scanner separately whether they had taken the watch and then whether they had taken the ring, they were to reply “no” in both cases—truthfully once and falsely the other time. When analyzed in various ways, the fMRI results showed statistically different patterns of brain activation (small changes in BOLD response) when the subjects were lying and when they were telling the truth.
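
The general logic of these analyses, greatly simplified, is a statistical comparison of the BOLD signal recorded during “lie” trials with the signal recorded during “truth” trials. The sketch below illustrates that basic idea with made-up numbers; it is not the analysis pipeline actually used in the published studies, which involves far more elaborate modeling, spatial processing, and correction for multiple comparisons.

    # Greatly simplified illustration using synthetic data: comparing average
    # BOLD responses during "lie" trials and "truth" trials in one brain region.
    # Real analyses model the full time series and correct across many voxels.
    import random
    from math import sqrt
    from statistics import mean, stdev

    random.seed(0)
    truth_trials = [random.gauss(mu=0.00, sigma=0.25) for _ in range(40)]  # percent signal change
    lie_trials = [random.gauss(mu=0.20, sigma=0.25) for _ in range(40)]    # higher by construction

    # Two-sample t statistic (equal-variance form), computed by hand.
    n1, n2 = len(truth_trials), len(lie_trials)
    pooled_var = ((n1 - 1) * stdev(truth_trials) ** 2 + (n2 - 1) * stdev(lie_trials) ** 2) / (n1 + n2 - 2)
    t_stat = (mean(lie_trials) - mean(truth_trials)) / sqrt(pooled_var * (1 / n1 + 1 / n2))

    print(f"mean truth = {mean(truth_trials):.3f}, mean lie = {mean(lie_trials):.3f}, t = {t_stat:.2f}")
    # A large t value suggests the two conditions differ for this group of trials;
    # it says nothing, by itself, about whether any single response was a lie.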

In general, these studies are not guided by a consistent hypothesis about which brain regions should be activated or deactivated during truth or deception. The results are empirical; the researchers observe particular patterns of activation that differ between the truth state and the lie state. Some have argued that the patterns show greater mental effort when deception is involved; others have argued that they show more impulse control when lying.

Are fMRI-based lie detection methods accurate? As a class of experiments, these studies are subject to all the general problems discussed in Section IV regarding fMRI scans that might lead to neuroscience evidence. So far there are only a few studies involving a limited number of subjects. (The method used by No Lie MRI seems ultimately to have been based on the responses of four right-handed, healthy, male University of Pennsylvania undergraduates.64) There have been, to date, no independent replications of any group’s findings.
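
The small samples matter statistically as well as scientifically. As a rough illustration (the sample sizes below are assumptions, not figures from the cited studies), an accuracy estimate based on a few dozen subjects carries a wide margin of uncertainty, which shrinks only slowly as more subjects are tested.

    # Rough illustration with assumed sample sizes: the uncertainty around a
    # reported accuracy rate shrinks slowly as the number of tested subjects grows.
    from math import sqrt

    def wilson_interval(successes, n, z=1.96):
        """Approximate 95% confidence interval for a proportion (Wilson score)."""
        p = successes / n
        center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
        half = (z / (1 + z**2 / n)) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return center - half, center + half

    # Suppose a study reports 90% accuracy. How precise is that figure?
    for n in (20, 100, 1000):
        low, high = wilson_interval(round(0.9 * n), n)
        print(f"n = {n:4d}: observed 90% accuracy, 95% CI roughly {low:.0%} to {high:.0%}")

    # With 20 subjects the interval runs from about 70% to 97%; with 1000 it
    # narrows to roughly 88% to 92%.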

The experience of the research subjects in these fMRI studies of deception seems to be different from “lying” as the court system would perceive it. The subjects knew they were involved in research, they were following orders to lie, and they knew that the most harm that could come to them from being detected in a lie might be lesser payment for taking part in the experiment. This seems hard to compare to a defendant lying about participating in a murder. More fundamentally, it is not clear how one could conduct ethical but realistic experiments with lie detection. Research subjects cannot credibly be threatened with jail if they do not convince the researcher of the truth of their lies.

Only a handful of researchers have published studies showing reported accuracy rates with individual subjects and only with a small number of subjects.65

64. Daniel D. Langleben et al., Telling Truth from Lie in Individual Subjects with Fast Event-Related fMRI, 26 Hum. Brain Mapping 262, 267 (2005).

65. See discussion in Anthony D. Wagner, supra note 12, at 29–35. Wagner analyzes 11 peer-reviewed, published papers. Seven come from Kozel’s laboratory; three come from Langleben’s. The remaining paper comes from John Cacioppo’s group, which concludes “[A]lthough fMRI may permit investigation of the neural correlates of lying, at the moment it does not appear to provide a very accurate marker of lying that can be generalized across individuals or even perhaps across types of lies by the same individuals.” G. Monteleone et al., Detection of Deception Using fMRI: Better Than Chance, But Well Below Perfection, 4 Soc. Neurosci. 528 (2009). However, that study only looked at one brain region at a time, and it did not test combinations or patterns, which might have improved the predictive power.

Some of the studies used complex and somewhat controversial statistical techniques. And although subjects in at least one experiment were invited to try to use countermeasures against being detected, no specific countermeasures were tested.

Beyond the scientific validity of these techniques lie a host of legal questions. How accurate is accurate enough for admissibility in court or for other legal system uses? What are the implications of admissible and accurate lie detection for the Fourth, Fifth, Sixth, and Seventh Amendments? Would jurors be allowed to consider the failure, or refusal, of a party to take a lie detector test? Would lie detection be available in discovery? Would each side get to do its own tests—and who would pay?
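
The first of these questions, how accurate is accurate enough, is harder than it sounds, because a single laboratory "accuracy" figure does not by itself reveal how often a flagged statement is actually a lie; that depends on how common lying is among the people being tested. The arithmetic below is purely hypothetical (the 90% figures and the base rates are illustrative assumptions, not results from any published study); it shows how quickly the meaning of a positive result changes as the base rate falls.

```python
# Illustrative arithmetic only (hypothetical numbers, not results from any
# published fMRI study): how a laboratory "accuracy" figure translates into
# the chance that a person flagged as lying really is lying, which depends on
# the base rate of lying in the tested population.
def positive_predictive_value(sensitivity: float, specificity: float,
                              base_rate: float) -> float:
    """P(actually lying | test says lying), by Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Suppose, hypothetically, a test catches 90% of lies (sensitivity) and
# correctly clears 90% of truthful statements (specificity).
for base_rate in (0.50, 0.10, 0.01):
    ppv = positive_predictive_value(0.90, 0.90, base_rate)
    print(f"base rate {base_rate:.0%}: P(lying | flagged) = {ppv:.0%}")
```

Under these assumed figures, a positive result is right 90% of the time when half the tested statements are lies, but only half the time at a 10% base rate, and roughly 8% of the time at a 1% base rate.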

Accurate lie detection could make the justice system much more accurate. Incorrect convictions might become rare; so might incorrect acquittals. Accurate lie detection also could make the legal system much more efficient. It seems likely that far fewer cases would go to trial if the witnesses could expect to have their veracity accurately determined.

Inaccurate lie detection, on the other hand, holds the potential of ruining the innocent and immunizing the guilty. It is at least daunting to remember some of the failures of the polygraph, such as the case of Aldrich Ames, a Soviet (and then Russian) mole in the Central Intelligence Agency, who passed two Agency polygraph tests while serving as a paid spy.66 The courts already have begun to decide whether and how to use these new methods of lie detection in the judicial process; the rest of society also will soon be forced to decide on their uses and limits.

2. Two cases involving fMRI-based lie detection

On May 31, 2010, U.S. Magistrate Judge Tu M. Pham of the Western District of Tennessee issued a 39-page report and recommendation on the prosecution’s motion to exclude evidence from an fMRI-based lie detection report by Cephos in the case of United States v. Semrau.67 The report came after a hearing on May 13–14 featuring testimony from Steve Laken, CEO of Cephos, in favor of admission, and from two experts arguing against admission. (The district judge adopted the magistrate’s report during the trial.)

The defendant in this case, a health professional accused of defrauding Medicare, offered as evidence a report from Cephos stating that he was being truthful when he answered a set of questions about his actions and knowledge concerning the alleged crimes.

66. See Senate Select Committee on Intelligence, Assessment of the Aldrich H. Ames Espionage Case and Its Implications for U.S. Intelligence (1994).

67. See supra note 59. The district court judge assigned to the case had a scheduling conflict on the date of the hearing on the prosecution’s motion, and so the hearing was held before a magistrate judge from that district.

Judge Pham first analyzed the motion under Rule 702, using the Daubert criteria. He concluded that the technique was testable and had been the subject of peer-reviewed publications. On the other hand, he concluded that the error rates for its use in realistic situations were unknown. Furthermore, he found there were no standards for its appropriate use. To the extent that the publications relied on by Cephos to establish its reliability constituted such standards, those standards had not actually been followed in the tests of the defendant. Cephos actually scanned Dr. Semrau twice in one day, asking questions about one aspect of the criminal charges during the first scan and then about another aspect in the second scan. The company’s subsequent analysis of those scans indicated that the defendant had been truthful in the first scan but deceptive in the second scan. Cephos then scanned him a third time, several days later, on the second subject but with revised questions, and concluded that he was telling the truth that time. Nothing in the publications relied upon by Cephos indicated that the third scan was appropriate. Finally, Judge Pham found that the method was not generally accepted in the relevant scientific community as sufficiently reliable for use in court, citing several publications, including some written by the authors whose methods Cephos used.

The magistrate judge then examined the motion under Rule 403 and found that the potential prejudicial effect of the evidence outweighed its probative value. He noted that the test had been conducted without the government’s knowledge or participation, in a context where the defendant risked nothing by taking the test—a negative result would never be disclosed. He noted the jury’s central role in determining credibility and considered the likelihood that the lie detection evidence would be a lengthy and complicated distraction from the jury’s central mission. Finally, he noted that the probative value of the evidence was greatly reduced because the report only gave a result concerning the defendant’s general truthfulness when responding to more than 10 questions about the events but did not even purport to say whether the defendant was telling the truth about any particular question.

Earlier that month, a state trial court judge in Brooklyn excluded another Cephos lie detection report in a civil case, Wilson v. Corestaff Services, L.P.68 This case involved a claim by a former employee under state law that she had been subject to retaliation for reporting sexual harassment. The plaintiff offered evidence from a Cephos report finding that her main witness was truthful when he described how defendant’s management said it would retaliate against the plaintiff.

That case did not involve an evidentiary hearing or, indeed, any expert testimony. The judge decided the lie detection evidence was not appropriate under New York’s version of the Frye test, noting that, in New York, “courts have advised that the threshold question under Frye in passing on the admissibility of expert’s testimony is whether the testimony is ‘within the ken of the typical juror.’”69 Because credibility is a matter for the jury, the judge concluded that this kind of evidence was categorically excluded under New York’s version of Frye. He also noted that “even a cursory review of the scientific literature demonstrates that the plaintiff is unable to establish that the use of the fMRI test to determine truthfulness or deceit is accepted as reliable in the relevant scientific community.”70

68. Wilson v. Corestaff Services, L.P., supra note 60.

3. fMRI-based lie detection outside the courtroom

Lie detection might have applications to litigation without ever being introduced in trials. As is the case today with the polygraph, the fact that it is not generally admissible in court might not stop the police or the prosecutors from using it to investigate alleged crimes. Similarly, defense counsel might well use it to attempt to persuade the authorities that their clients should not be charged or should be charged with lesser offenses. One could imagine the same kinds of pretrial uses of lie detection in civil cases, as the parties seek to affect each other’s perceptions of the merits of the case.

Such lie detection efforts could also affect society, and the law, outside of litigation. One could imagine prophylactic lie detection at the beginning of contractual relations, seeking to determine whether the other side honestly had the present intention of complying with the contract’s terms. One can also imagine schools using lie detection as part of investigations of student misconduct or parents seeking to use lie detection on their children. The law more broadly may have to decide whether and how private actors can use lie detection, determining whether, for example, to extend to other contexts—or to weaken or repeal—the Employee Polygraph Protection Act.71

The current fMRI-based methods of lie detection provide one kind of protection for possible subjects—they are obvious. No one is going to be put into an MRI for an hour and asked to respond, repeatedly, to questions without realizing something important is going on. Should researchers develop less obtrusive or obvious methods of neuroscience-based lie detection, we will have to deal with the possibilities of involuntary and, indeed, surreptitious lie detection.

C. Detection of Pain

No matter where an injury occurs and no matter where it seems to hurt, pain is felt in the brain.72 Without sensory nerves leading to the brain from a body region, there is usually no experience of pain. Without the brain machinery and functioning to process the signal, no pain is perceived.

69. Id. at 6, citing People v. Cronin, 60 N.Y.2d 430, 458 N.E.2d 351, 470 N.Y.S.2d 110 (1983).

70. See Wilson, supra note 60, at 7.

71. See supra text accompanying note 32.

72. See Brain Facts, supra note 2, at 19–21, 49–50, which includes a useful brief description of the neuroscience of pain.

Pain turns out to be complicated—even the common pain that is experienced from an acute injury to, say, an arm. Neurons near the site of the injury, called nociceptors, transmit the pain signal to the spinal cord, which relays it to the brain. But other neurons near the site of the injury will, over time, adapt in ways that affect the pain signal. Cells in the spinal cord can also modulate the pain signal that is sent to the brain, making it stronger or weaker. The brain, in turn, sends signals down to the spinal cord that cause or, at least, affect these modulations. And the actual sensation of pain—the “ouch”—takes place in the brain.

The immediate and localized sensation is processed in the somatosensory cortex, the brain region that takes sensory inputs from different body parts (with each body part getting its own portion of the somatosensory cortex) and processes them into a perceived sensation. The added knowledge that the sensation is painful seems to require the participation of other regions of the brain. Using fMRI and other techniques, some researchers have identified what they call the “pain matrix” in the brain, regions that are activated when experimental subjects, in scanners, are exposed to painful stimuli. The brain regions identified as part of the so-called pain matrix vary from researcher to researcher, but generally include the thalamus, the insula, parts of the anterior cingulate cortex, and parts of the cerebellum.73

Researchers have run experiments with subjects in the scanner receiving painful or not painful stimuli and have attempted to find activation patterns that appear when pain is perceived and that do not appear when pain is absent. (The subjects usually are given nonharmful painful stimuli such as having their skin touched with a hot metal rod or coated with a pepper-derived substance that causes a burning sensation.) Some have reported substantial success, detecting pain in more than 80% of the cases.74 Other studies have found a positive correlation between the degree of activation in the pain matrix and the degree of subjective pain, both as reported by the subject and as possibly indicated by the heat of the rod or the amount of the painful substance—the higher the temperature or the concentration of the painful substance, the greater the average activation in the pain matrix.75

Other neuroscience studies of individual pain look not at brain function during painful episodes but at brain structure. Some researchers, for example, claim that different regions of the brain have different average size and neuron densities in patients who have had long-term chronic pain than in those who have not had such pain.76

73. A good review article on the uses of fMRI in studying pain is found in David Borsook & Lino R. Becerra, Breaking Down the Barriers: fMRI Applications in Pain, Analgesia and Analgesics, 2 Molecular Pain 30 (2006).

74. See, e.g., Irene Tracey, Imaging Pain, 101 Brit J. Anaesth. 32 (2008).

75. See, e.g., Robert C. Coghill et al., Neural Correlates of Interindividual Differences in the Subjective Experience of Pain, 100 Proc. Nat’l Acad. Sci. 8538 (2003).

Pain is clearly complicated. Placebos, distractions, or great need can sometimes cause people not to sense, or perhaps not to notice, pain that could otherwise be overwhelming. Similarly, some people can become hypersensitive to pain, reporting severe pain when the stimulus normally would be benign. Amputees with phantom pain—the feeling of pain in a limb that has been gone for years—have been scanned while reporting this phantom pain. They show activation in the pain matrix. In some fMRI studies, people who have been hypnotized to feel pain, even when there is no painful stimulus, show activation in the pain matrix.77 And in one fMRI study, subjects who reported feeling emotional distress, as a result of apparently being excluded from a “game” being played among research subjects, also showed, on average, statistically significant activation of the pain matrix.78

Pain also plays an enormous role in the legal system.79 The existence and extent of pain are matters for trial in hundreds of thousands of injury cases each year. Perhaps more importantly, pain figures into uncounted workers’ compensation claims and Social Security disability claims. Pain is often difficult to prove, and the uncertainty of a jury’s response to claimed pain probably keeps much litigation alive. We know that the tests for pain currently presented to jurors, judges, and other legal decisionmakers are not perfect. Anecdotal reports and the assessments of pain experts both suggest that some nontrivial percentage of successful claimants are malingering and only pretending to feel pain; a much greater percentage may be exaggerating their pain.

A good test for whether a person is feeling pain, and, even better, a “scientific” way to measure the amount of that pain—at least compared to other pains felt by that individual, if not to pain as perceived by third parties—could help resolve a huge number of claims each year. If such pain detection were reliable, it would make justice both more accurate and more certain, leading to faster, and cheaper, resolution of many claims involving pain. The legal system, as well as the honest plaintiffs and defendants within it, would benefit.

76. See, e.g., Vania Apkarian et al., Chronic Back Pain Is Associated with Decreased Prefrontal and Thalamic Gray Matter Density, 24 J. Neurosci. 10,410 (2004); see also Arne May, Chronic Pain May Change the Structure of the Brain, 137 Pain 7 (2008); Karen D. Davis, Recent Advances and Future Prospects in Neuroimaging of Acute and Chronic Pain, 1 Future Neurology 203 (2006).

77. Stuart W. Derbyshire et al., Cerebral Activation During Hypnotically Induced and Imagined Pain, 23 NeuroImage 392 (2004).

78. Naomi I. Eisenberger et al., Does Rejection Hurt? An fMRI Study of Social Exclusion, 302 Science 290 (2003).

79. The only substantial analysis of the legal implications of using neuroimaging to detect pain is found in Adam J. Kolber, Pain Detection and the Privacy of Subjective Experience, 33 Am. J.L. & Med. 433 (2007). Kolber expands on that discussion in interesting ways in Adam J. Kolber, The Experiential Future of the Law, 60 Emory L.J. 585, 595–601 (2011). The possibility of such pain detection was briefly discussed earlier in two different 2006 publications: Henry T. Greely, Prediction, Litigation, Privacy, and Property: Some Possible Legal and Social Implications of Advances in Neuroscience, supra note 56, at 141–42; and Charles Keckler, Cross-Examining the Brain, supra note 56, at 544.

A greater understanding of pain also might lead to broader changes in the legal system. For example, emotional distress often is treated less favorably than direct physical pain. If neuroscience were to show that, in the brain, emotional distress seemed to be the same as physical pain, the law might change. Perhaps more likely, if neuroscience could provide assurance that sincere emotional pain could be detected and faked emotional distress would not be rewarded, the law again might change. Others have argued that even our system of criminal punishment might change if we could measure, more accurately, how much pain different punishments caused defendants, allowing judges to let the punishment fit the criminal, if not the crime.80 A “pain detector” might even change the practice of medicine in legally relevant ways, by giving physicians a more certain way to check whether their patients are seeking controlled substances to relieve their own pain or whether they are seeking them to abuse or to sell for someone else to abuse.

In at least one case, a researcher who studies the neuroscience of pain was retained as an expert witness to testify regarding whether neuroimaging could provide evidence that a claimant was, in fact, feeling pain. The case settled before the hearing.81 In another case, a prominent neuroscientist was approached about being a witness against the admissibility of fMRI-based evidence of pain, but, before she had decided whether to take part, the party seeking to introduce the evidence changed its mind. This issue has not, as of the time of this writing, reached the courts yet, but lawyers clearly are thinking about these uses of neuroscience. (And note that in some administrative contexts, the evidentiary rules will not apply in their full rigor, possibly making the admission of such evidence more likely.)

Do either functional or structural methods of detecting pain work and, if so, how well? We do not know. These studies share many of the problems outlined in Section IV. The studies are few in number, with few subjects (and usually sets of subjects that are not very diverse). The experiments—usually involving giving college students a painful stimulus—are different from the experience of, for example, older people who claim to have low back pain. Independent replication is rare, if it exists at all. The experiments almost always report that, on average, the group shows a statistically significant pattern of activation that differs depending on whether they are receiving the painful stimulus, but the group average does not in itself tell us about the sensitivity or specificity of such a test when applied to individuals. And the statistical and technical issues are daunting.
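
The distinction between a group-level finding and an individual-level test deserves emphasis, and a small simulation can make it concrete. The numbers below are synthetic and are not drawn from any pain study; they illustrate only the general statistical point that a group difference can be highly significant while a cutoff applied to individuals still misclassifies many of them.

```python
# Illustrative simulation only (synthetic numbers, not data from any pain
# study): a clear group-level difference can coexist with weak individual
# classification when the two distributions overlap heavily.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200  # hypothetical subjects per condition

pain = rng.normal(loc=0.5, scale=1.0, size=n)     # simulated activation, painful stimulus
no_pain = rng.normal(loc=0.0, scale=1.0, size=n)  # simulated activation, no stimulus

t, p = stats.ttest_ind(pain, no_pain)
print(f"group-level t = {t:.1f}, p = {p:.1e}")

threshold = 0.25  # classify an individual as "in pain" if activation exceeds this
sensitivity = np.mean(pain > threshold)
specificity = np.mean(no_pain <= threshold)
print(f"individual sensitivity ~ {sensitivity:.0%}, specificity ~ {specificity:.0%}")
```

With these assumed distributions, the group comparison is highly significant, yet the threshold classifies only about 60% of individuals correctly, little better than a coin flip.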

In the area of pain, the issue of countermeasures may be the most interesting, particularly in light of the experiments conducted with hypnotized subjects. Does remembered pain look the same in an fMRI scan as currently experienced pain? Does the detailed memory of a kidney stone pain look any different from the present sensation of low back pain? Can a subject effectively convince himself that he is feeling pain and so appear to the scanner to be experiencing pain? The answers to these questions are clear—we do not yet know.

80. Adam J. Kolber, How to Improve Empirical Desert, 75 Brook. L. Rev. 429 (2009).

81. Greg Miller, Brain Scans of Pain Raise Questions for the Law, 323 Science 195 (2009).

Pain detection also would raise legal questions. Could a plaintiff be forced to undergo a “pain scan”? If a plaintiff offered a pain scan in evidence, could the defendant compel the plaintiff to undergo such a scan with the defendant’s machine and expert? Would it matter if the scan were itself painful or even dangerous? Who would pay for these scans and for the experts to interpret them?

Detecting pain would be a form of neuroscience evidence with straightforward and far-reaching applications to the legal system. Whether it can be done, and, if so, how accurately it can be done, remain to be seen. So does the legal system’s reaction to this possibility.

VII. Conclusion

Atomic physicist Niels Bohr is credited with having said “It is always hard to predict things, especially the future.”82 It seems highly likely that the massively increased understanding of the human brain that neuroscience is providing will have significant effects on the law and, more specifically, on the courts. Just what those effects will be cannot be accurately predicted, but we hope that this guide will provide some useful background to help judges cope with whatever neuroscience evidence comes their way.

82. This quotation has been attributed to many people, especially Yogi Berra, but Bohr seems to be the most likely candidate, even though it does not appear in anything he published. See discussion in Henry T. Greely, Trusted Systems and Medical Records: Lowering Expectations, 52 Stan. L. Rev. 1585, 1591–92 n.9 (2000). One of the authors, however, recently had a conversation with a scientist from Denmark, who knew the phrase (in Danish) as an old Danish saying and not something original with Bohr.
