I recently read with great interest the article “The Quest to Build the Perfect Lie Detector” (a condensed excerpt from Lone Frank’s recent book, “The Neurotourist: Postcards from the Edge of Brain Science”) on the use of neuroimaging to advance the current state of lie detector technology. The caveat that “they’re getting close and it’s scary” may be a bit off the mark in some ways, but spot-on in others. While the technology is not as close as implied to being able to “scan brains to read minds,” and in particular to detect deception, I think the truly scary issue lies in the fact that many believe it to be “close enough.”
There is much that we can do with neuroscience, its techniques, and its technological tools, but in every case it is important to consider what should be done, given the attractiveness and limits of neuroscientific knowledge, socio-cultural realities, extant and newly developing moral constructs, and the potential of any scientific and technological tool to effect good or harm. At the fore is the need to regard neuroscience as a human endeavor: we are responsible for the relative rightness or wrongness of the ways that neuroscientific knowledge and interventions are employed. Brain research and neurotechnology extend the boundaries of self-understanding, and may alter the way we view and treat both humans and non-human beings (e.g., animals and artificially intelligent machines). Furthermore, neuroscience and neurotechnology provide means to control cognition, emotion, and behavior. While beneficent motives may drive the use of such capabilities, neuroscience is not enacted in a social vacuum, and thus such interventions and manipulations are subject to the often-capricious influences of the market and political power. Accordingly, we must ask how these goods and resources will be employed and distributed, and what effects this will have on individuals, groups, and society.