Recently, Adrian Carter discussed the move toward adopting a disease model of addiction. A disease model can be useful in that it often substantiates and compels the search for prevention, cure, or at least some form of effective management. Of course, it’s presumed that any such treatments would be developed and rendered in accordance with the underlying moral imperative of medical care to act in patients’ best interests. But this fosters the need for a more fine-grained assessment of exactly what constitutes the “good” of medical care, given the variety of levels and domains that reflect and involve patients’ values, goals, burdens, risks, and harms.
The employment of basic neuroscientific research (what are known in government parlance as “6.1 Level” studies) in translational development (so-called “6.2 Level” work) and test-and-evaluation applications (“6.3 Level” uses) is not always a straightforward sequence of events. Some well-done and very interesting basic neuroscientific findings hint at translational and applied utility, and the recent demonstration that rats lack a neurological mechanism for finely tuned vertical orientation may be an example of such a study. Recent research by Robin Hayman, Madeleine Verriotis, Aleksandar Jovalekic, Andre Fenton, and Kathryn Jeffery (“Anisotropic encoding of three-dimensional space by place cells and grid cells”) suggests that the rat brain does not process vertical-space information as efficiently or adeptly as horizontal and lateral field information, and this may have a number of implications – both for an understanding of brain-environment interactions, and for future research.
Neuro – see below
Lalia – from the Latin lallare, to sing “la la”; the use of language
It was with great interest that I read Deric Bownds’ recent MindBlog re-post about representation of inner lives, and his current post about the utility of being vague. I think that, taken together, these two concepts well describe the state of the field of neuroscience, and nicely frame how neuroscience and the use of neurotechnology can affect the public mindset.
Larissa MacFarquhar’s profile of Paul and Patricia Churchland in a February edition of The New Yorker magazine stated that the first family of neurophilosophy “…like to speculate about a day when whole chunks of English are replaced by scientific words that call a thing by its proper name, rather than some outworn metaphor.” I’m all for that, and I respect most of Paul and Pat Churchland’s work as being spot on. But we might need to be careful about replacing one metaphor with another, lest we engage in this vocabulary exercise prematurely and/or get too carried away. There’s a lot going on in the neural networks that make up the peripheral and central nervous systems, and while some of this is a kind of “toe bone leads to foot bone leads to leg bone” arrangement, such straightforward descriptions get dicey once we get inside the head bones and into the brain.
I recently read with great interest an article, “The quest to build the perfect lie detector” (a condensed excerpt from Lone Frank’s recent book, “The Neurotourist: Postcards from the Edge of Brain Science”), on the use of neuroimaging to advance the current state of lie-detector technology. The caveat that “they’re getting close and it’s scary” may be a bit off the mark in some ways, but spot-on in others. While the technology is not as close as implied to being able to “scan brains to read minds” – and particularly to detect deception – I think that the truly scary issue lies in the fact that many believe it to be “close enough.”
There is much that we can do with neuroscience, its techniques and technological tools, but in each and every case it is important to consider what should be done, given the attractiveness and limits of neuroscientific knowledge, socio-cultural realities, extant – and newly developing – moral constructs, and the potential to use any scientific and technological tool to effect good or harm. At the fore is the need to regard neuroscience as a human endeavor: we are responsible for the relative rightness and/or wrongness of the ways that neuroscientific knowledge and interventions are employed. Brain research and neurotechnology extend the boundaries of self-understanding, and may alter the way we view and treat both humans and non-human beings (e.g., animals, artificially intelligent machines, etc.). Furthermore, neuroscience and neurotechnology provide means to control cognition, emotion, and behavior. While beneficent motives may drive the use of such capabilities, neuroscience is not enacted in a social vacuum, and thus such interventions and manipulations are subject to the often-capricious influences of the market and political power. Accordingly, we must ask how these goods and resources will be employed and distributed, and what effects this will have on individuals, groups, and society.