I recently read with great interest an article, “The Quest to Build the Perfect Lie Detector” (a condensed excerpt from Lone Frank’s recent book, “The Neurotourist: Postcards from the Edge of Brain Science”), on the use of neuroimaging to advance the current state of lie detector technology. The caveat that “they’re getting close and it’s scary” may be a bit off the mark in some ways, but spot-on in others. While the technology is not as close as implied to being able to “scan brains to read minds” – and particularly to detect deception – I think that the truly scary issue lies in the fact that many believe it to be “close enough.”
Scanning the brain does not equal reading minds. We cannot provide one-to-one maps of higher-order mental processes (like deception) from patterns of brain activity, because of the inherent limitations of neuroimaging technology and the complex nature of the neural functions that are involved in cognition, emotions and behaviors (hence the critical importance of Stephen Kosslyn’s “prism” in our continued research).
While it may one day (and not necessarily a day too far off in the future) be conceivable to exploit computational resources to build databanks large enough to allow us to match an individual’s brain images to a repository of image patterns that reflect particular types of mental activities (such as categories of thoughts), we’re not there yet. Even taken together with other forms of neurotechnology, such as neurogenetics and the expression of certain brain proteins (i.e., neuroproteomics), current forms of neuroimaging do not have the technical capabilities for such fine-grained discrimination. What’s more, putting people in a 3-Tesla magnet is a pretty contrived situation, and thus lacks “real world utility” – what is known as “ecological validity.”
It’s important to remain stringent in our recognition and appreciation of what neuroscience and neurotechnology (so-called “neuroS&T”) can and cannot do, and not bastardize the field’s capabilities and/or findings to fit our expectations or aspirations. Certainly, scrutiny is needed when looking to, and relying upon, neuroS&T for the determination of legal judgments – especially regarding culpability. Extant criteria (e.g., the Frye and Daubert standards) are changeable and can reflect (and often contribute to) the current scientific, social and economic “climate” in which various techniques and technologies are regarded, embraced and utilized.
There is the implication (and maybe the expectation) that neuroS&T could be used to define predispositions to types of behavior, and thus may have some predictive value and could be employed to allow preemptive interventions to deter the commission of crimes. For sure, this has “Minority Report” overtones, but as distasteful as this seems at face value, the recent shootings in Oslo (and those in Tucson, and Columbine, among others) prompt renewed calls to use the science and technologies we have at hand to “do something” to ensure that such events do not happen again.
And perhaps we should. But the question is how we maximize the benefit of the tools we possess while not overstepping the boundaries of science or corrupting ethico-legal probity. The use of neuroS&T to predict who among us is most likely to enact harmful behavior would indeed be a powerful tool, and with such power comes the potential for misappropriation or frank misuse. Moreover, the labels that we place on those with certain predispositions for neuropsychiatric conditions can lead to neurocentric norms and “neuro-ontologies” that can have dire social, economic and legal consequences (e.g., “…I’m sorry sir, we cannot hire you/insure you/admit you to our school, as your co-registered brain image and neurogenetic data indicate that you are a prodromal psychopath who we’re sure will do something horrible in the next few years…”). On at least some level, we “want to have our neuro-cake, and eat it too.” Can we realistically get both? In other words, how can we garner the benefits without simultaneously incurring the risks?
To be sure, neuroS&T are a big part of – and will likely shape – the present and future social scene, and there is a growing trend to use neuroS&T in ways that meet socio-political and economic agendas. While the tendency to use any S&T in such ways is certainly not new, the extent and profundity of what neuroS&T implies (i.e., about the nature of cognition, emotion and behaviors, self-control, identity, and intention) mandate pragmatic review and discernment. These techniques and technologies have real benefit and utility, but it’s critical that we “stay within the lines” of what’s realistic if we are to maintain any meaningful sense of responsibility about the ways that neuroscientific research is conducted, and how we use the tools and information that such research provides.
Just because we’re “not there yet” doesn’t mean we’re not on the road, and it’s also important to understand exactly “where we really are” and the value of both our current position and the destinations we seek. We must be aware of agendas to employ neuroscience in a variety of ways, and must be prepared to confront the realities. To do so will require clear assessment of the capacities and limitations of neuroS&T, as well as analyses and guidelines to establish how to engage neuroS&T in ways that are scientifically and technically rigorous, and ethically and legally sound. This calls for a discursive ethics that brings together not only scientists and engineers, but also philosophers, ethicists, policy makers, sociologists and the public, as stake- and shareholders in the process, its means, and its ends.