Will brain science be used by the government to access the most private of spaces — our minds — against our wills? Such scientific tools would have tremendous privacy implications if the government suddenly used brain science to more effectively read minds during police interrogations, criminal trials, and even routine traffic stops. Pundits and scholars alike have thus explored the constitutional protections that citizens, defendants, and witnesses would require to be safe from such mind searching.
Future-oriented thinking about where brain science may lead us can make for great entertainment and can also be useful for forward-thinking policy development. But only to a point. In this Article, I reconsider these concerns about the use of brain science to infer mental functioning. The primary message of this Article is straightforward: “Don’t panic!” Current constitutional protections are sufficiently nimble to allow for protection against involuntary government machine-aided neuroimaging mind reading. The chief challenge emerging from advances in brain science is not the insidious collection of brain data, but how brain data is (mis)used and (mis)interpreted in legal and policy settings by the government and private actors alike.
The Article proceeds in five parts. Part I reviews the use of neuroscientific information in legal settings generally, discussing both the recent rise of neurolaw as well as an often overlooked history of brain science and law that stretches back decades. Part II evaluates concerns about mental privacy and argues for distinguishing between the inferences to be drawn from the data and the methods by which the data is collected. Part III assesses current neuroscience techniques for lie detection and mind reading. Part IV then evaluates the relevant legal protections available in the criminal justice system. I argue that the weight of scholarly opinion is correct: The Fourth Amendment and Fifth Amendment likely both provide protections against involuntary use of machine-aided neuroimaging mind reading evidence. Part V explores other possible machine-aided neuroimaging mind reading contexts where these protections might not apply in the same way. The Article then briefly concludes.
This is a chapter in a book, Constitution 3.0: Freedom and Technological Change, edited by Jeffrey Rosen and Benjamin Wittes and published by Brookings. It considers whether likely advances in neuroscience will fundamentally alter our conceptions of human agency, of what it means to be a person, and of responsibility for action. I argue that neuroscience poses no such radical threat now or in the immediate future, and that it is unlikely ever to pose such a threat unless it or other sciences decisively resolve the mind-body problem. I suggest that until that happens, neuroscience might contribute to the reform of doctrines that do not accurately reflect truths about human behavior, to the resolution of individual cases, and to the efficient operation of various legal practices. If the power to predict and prevent dangerous behavior becomes sufficiently advanced, however, traditional notions of responsibility and guilt might simply become irrelevant.
Owen D. Jones, Vanderbilt University Law School & Department of Biological Sciences, and Francis X. Shen, Tulane University Law School & The Murphy Institute, have published Law and Neuroscience in the United States, in International Neurolaw: A Comparative Analysis 349 (T.M. Spranger ed., Springer Verlag, 2012).
Neuroscientific evidence is increasingly reaching United States courtrooms in a number of legal contexts. And the emerging field of Law and Neuroscience is being built on a foundation that joins: a) rapidly developing technologies and techniques of neuroscience; b) quickly expanding legal scholarship on the implications of neuroscience; and c) neuroscientific research designed specifically to explore legally relevant topics.
Despite the sharply increasing interest in neuroscientific evidence, it remains unclear how the legal system – at the courtroom, regulatory, and policy levels – will resolve the many challenges that new neuroscience applications raise.
This chapter – part of an edited volume surveying neurolaw in 18 countries – provides an overview of notable neurolaw developments in the United States through 2011. The chapter proceeds in six parts. Section 1 introduces the development of law and neuroscience in the U.S. Section 2 then considers several of the evidentiary contexts in which neuroscientific evidence has been, and likely will be, introduced. Sections 3 and 4 discuss the implications of neuroscience for the criminal and civil systems, respectively. Section 5 reviews three special topics: lie detection, memory, and legal decision making. Section 6 concludes with brief thoughts about the future of law and neuroscience in the United States.
The Scientist reports that according to a new Royal Society study, the emerging discipline of law and neuroscience may not be the magic technology that detects lies, at least not as far as the courtroom is concerned. fMRI scans may help identify people who are deliberately deceptive. But detecting whether someone is telling "the truth" when that person believes he is being truthful has always been a problem: such witnesses are credible; their testimony simply differs from the facts. If a witness sincerely believes he is recounting actual events, even though his version differs from what happened, fMRI scans don't seem to help much, if at all, in identifying that witness.
As far as neuroscience in court goes, the study notes that many lawyers and judges have no training in neuroscience and do not understand its applications and limitations. Undergraduates do not learn how law and neuroscience applies in society. Lawyers and scientists have no systematic or official way to work together to discuss research in the field. More information, including links to a press briefing and the report in pdf, e-reader, and Kindle versions, is available here.
The Royal Society study seems to confirm what other studies have been suggesting for a while. See this blog's index term "neuroscience" for more posts.
Oliver R. Goodenough, Vermont Law School & Berkman Center for Internet & Society, and Micaela Tucker, Vermont Law School, have published Law and Cognitive Neuroscience at 6 Annual Review of Law and Social Science 61 (2010). Here is the abstract.
Law and neuroscience (sometimes neurolaw) has become a recognized field of study. The advances of neuroscience are proving useful in solving some perennial challenges of legal scholarship and are leading to applications in law and policy. While caution is appropriate in considering neurolaw approaches, the new knowledge should - and will - be put to use. Areas of special attention in current neurolaw scholarship include (a) techniques for the objective investigation of subjective states such as pain, memory, and truth-telling; (b) evidentiary issues for admitting neuroscience facts and approaches into a court proceeding; (c) free will, responsibility, moral judgment, and punishment; (d) juvenile offenders; (e) addiction; (f) mental health; (g) bias; (h) emotion; and (i) the neuroeconomics of decision making and cooperation. The future of neurolaw will be more productive if challenges to collaboration between lawyers and scientists can be resolved.
As the capabilities of cognitive neuroscience, in particular functional magnetic resonance imaging (fMRI) 'brain scans,' have become more advanced, some have claimed that fMRI-based lie-detection can and should be used at trials and for other forensic purposes to determine whether witnesses and others are telling the truth. Although some neuroscientists have promoted such claims, most aggressively resist them, arguing that the research on neuroscience-based lie-detection is deeply flawed in numerous ways. These neuroscientists have accordingly resisted any attempt to use such methods in litigation, insisting that poor science has no place in the law. But although the existing studies have serious problems of validity when measured by the standards of science, and although the reliability of such methods is significantly lower than their advocates claim, it is nevertheless an error to assume that the distinction between good and bad science, whether as a matter of validity or of reliability, is dispositive for law. Law is not only about putting criminals in jail, and numerous uses of evidence in various contexts in the legal system require a degree of probative value far short of proof beyond a reasonable doubt. And because legal and scientific norms, standards, and goals are different, good science may still not be good enough for some legal purposes, and, conversely, some examples of bad science may, in some contexts, still be good enough for law. Indeed, the exclusion of substandard science, when measured by scientific standards, may have the perverse effect of lowering the accuracy and rigor of legal fact-finding, because the exclusion of flawed science will only increase the importance of the even more flawed non-science that now dominates legal fact-finding. And thus the example of neuroscience-based lie detection, while timely and important in its own right, is even more valuable as a case study suggesting that Daubert v. Merrell Dow Pharmaceuticals may have sent the legal system down a false path. By inappropriately importing scientific standards into legal decision-making with little modification, Daubert confused the goals of science with those of law, a mistake that it is not too late for the courts to correct.
Download the essay from SSRN at the link.
Professor Schauer says in part:
Because the criteria that judges and juries traditionally employ to evaluate the veracity of witnesses have been notoriously unreliable, the quest for a scientific way of distinguishing the truth teller from the liar has been with us for generations. Indeed, the Frye test, which for many years was the prevailing legal standard for determining the admissibility of scientific evidence, arose in 1923 in the context of an unsuccessful attempt to admit into evidence a rudimentary lie-detection machine invented by William Moulton Marston -- perhaps better known as the creator of the comic book character Wonder Woman, whose attributes included possession of a magic lasso, forged from the Magic Girdle of Aphrodite, which would make anyone it encircled tell the truth without fail. The device at issue in Frye was a simple polygraph and not a magic lasso, but Frye did not just set the standard for the admission of scientific evidence for more than a half-century; its exclusion of lie-detection technology also paved the way for the continuing exclusion, with few exceptions, of lie-detection evidence in American courts.
The historical use of science in the search for truth has posed consistent evidentiary problems of definition, causation, validity, accuracy, inferential conclusions unsupported by data, and complications of real-world applications. As the Innocence Project exoneration data show and the National Academy of Science Report on Forensic Science suggests, our reach in this area may well exceed our grasp. This article argues that the neuroimaging of deception - focusing primarily on the functional magnetic resonance imaging (fMRI) studies done to date - may well include all of these problems. This symposium article reviews briefly the types of neuroimaging used to detect deception, describes some of the specific criticisms leveled at the science, and explains why this small group of studies is not yet courtroom-ready. Arguing that the studies meet neither the general acceptance nor reliability standards of evidence, the article urges courts to act with restraint, allowing time for further studies, further robust criticism of the studies, additional replication studies, and sufficient time for moral, ethical, and jurisprudential rumination about whether the legal system really wants this type of evidence.
The literature on neuroimaging and its use in the courtroom is expanding furiously. Many legal scholars are weighing in on its uses, particularly on its possible use as a "magic cure" lie detector. Here's what Jennifer Bard, of Texas Tech University School of Law, thinks of neuroimaging scans and their use as evidence.
Any law student who has taken Evidence has read about, or better yet experienced, an experiment in which a man bursts into a crowded classroom, runs through shouting, and then leaves. When questioned directly after the event, the witnesses strongly disagree as to what the man was saying, what he was wearing, and whether or not he had a gun. Based on the work of psychologist Elizabeth Loftus, now on the faculty of the University of California at Irvine Law School, this experience, more than any dry article about cognitive science, demonstrates the inherent unreliability of human memory and of the conviction of eyewitnesses about what they have seen. Lawyers involved in the Innocence Project, which seeks to challenge wrongful convictions based on eyewitness testimony by examining conflicting DNA evidence, have further brought these findings to public attention. As they explain, 'Research shows that the human mind is not like a tape recorder; we neither record events exactly as we see them, nor recall them like a tape that has been rewound.' Yet despite what has become common knowledge about the malleability of human memory, the idea that it is possible to access the brain directly to find out whether a witness is telling the truth is being put forward by companies that seek to profit from research suggesting that new imaging technology can detect when a human is telling a lie. These companies are advertising this technology as a tool for law enforcement and promoting its use in U.S. trials as a way of helping juries assess the credibility of witnesses.
This article explores these claims that neuroimaging scans can be used to detect lies, claims which far exceed those made by responsible scientists, and also puts them in the context of a series of U.S. Supreme Court cases that have dramatically changed how scientific (forensic) evidence can be presented to the jury in criminal trials. See Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 113 S. Ct. 2786, 125 L. Ed. 2d 469 (1993) (establishing new criteria for admission of scientific evidence); Crawford v. Washington, 541 U.S. 36 (2004) (requiring that defendants be able to confront their accusers directly). It also addresses the significant criticisms brought against what has often been the incautious adoption of unreliable techniques. See 'Strengthening Forensic Science in the United States: A Path Forward' (National Research Council 2009).
In this article I argue that promises of lie detection are not only based on false premises, but are harmful to the integrity of the legal system because they seek to substitute a technology, one that is not just undeveloped and inadequately tested but inherently flawed, for the judgment of the fact-finder, judge or jury, in a criminal trial. I conclude that even if there were neuroimaging technology that could provide direct access to human thought, the result would share the inaccuracies and subjectivity that we already know are inherent features of human memory. Moreover, because this technology promises to do something that jurors know they cannot - determine when a person is lying - there is a substantial risk that it will prejudice defendants because jurors will substitute the results of the technology for their own collective judgment.
Download the full text of the paper from SSRN at the link.