Stirring Neuroscientific Knowledge in the Social Crucible

In my last blog, I raised what I referred to as the real questions arising from the nature and implications of neurocentric criteria of normality and diversity, ontological status (e.g., of embryos, the profoundly brain-damaged, non-human animals, etc.), and the ways we form and formulate beliefs, policies and laws. The “take home” questions were 1) whether (and how) insight(s) into the neuroscience of painience and sentience (or the translation of neuroscientific information and technology to create organisms that are sentient and/or painient) could provide a metric for moral and social regard and treatment, 2) whether we will be sufficiently sensitive to, and wise enough to appropriately weigh and act upon, such knowledge, and 3) if and how such information can – and should – be used to inform ethics, policies and laws. If numbers speak to trends in interest and involvement, the approximately 33,000 attendees at this month’s Society for Neuroscience meeting and the almost 200 attendees at the International Neuroethics Society meeting in Washington, DC attest to the growth of these fields, both within the professional sphere and in the public eye. Without doubt, neuroscience and the neuroethical issues it spawns are ever expanding, frequently “hot” – as in ‘hot off the press’, ‘hot’ as in seductive, and ‘hot’ as in controversial – and arguably important ingredients that simmer in the crucible of social sentiment, action and change…and this can be an often unpredictable, if not volatile, brew.

For sure, neuroscience has enabled deeper and wider insight into putative substrates and mechanisms of consciousness, mind, self and personhood. Despite (genuine recognition of) current limitations in the type and extent of such information, the knowledge gained to date has initiated moves from longstanding, dogmatic notions of self and person toward a broader construct of what constitutes the self and a person – one more inclusive of the possibility, if not probability, of animal “persons” and machine “selves.” Of course, differing viewpoints exist, not only within the field of brain-mind studies (including its disciplines of neuroscience, psychology, philosophy, etc.), but also between various camps within the sciences and humanities, and even within the public sphere. In the main, these differences reflect and/or stem from various epistemological and anthropological positions that continue to pose questions for both scientific inquiry and social conduct. Indeed, transformations in the construct of self and personhood are certain to affect ethico-legal considerations, policy decisions, and ecological trends, if not the human condition at large.

In our group, researchers Nicholas Fitz and Dan Howlader are focusing upon the ways that the continuing advancement of neuroscience and neurotechnology – and society’s growing reliance upon them – may change current and longstanding ideas of self and personhood, and foster re-examination of more neurocentrically oriented views of animals, fetuses, the obtunded, computers, and hybrid human-machine beings. Working from the premise that it’s not so much a question of if such epistemic shifts will occur, but when, Fitz and Howlader are asking what society will do with this new information and its potential implications for policy and law.

Our general position is that a deepening understanding of the ways that nervous systems and brains are involved in (or evoke) those characteristics that we value as individuals, groups, and a species should compel and sustain the ways that we regard and treat the organisms whose nervous systems give rise to such characteristics. Moreover, neuroscience has demonstrated – and will likely continue to demonstrate – that despite a wide array of individual differences, there are features common to nervous systems, and to the organisms in which they are embodied.

Simply put, we must ask whether, and in what ways, neuroscience might demonstrate how we are alike and how we differ. Is it possible that neuroscience might afford both purchase and leverage to reconcile apparent differences between individuals, religions, cultures, and even species? On some level, I think so, but perhaps a bigger and more important question is whether we as individuals, groups, cultures and a species will in fact embrace such knowledge to prompt positive change in our views, values, regard and actions toward those things that “have a brain and are a mind”.

Working with philosopher John Shook, Fitz and Howlader are examining whether current ethico-legal concepts and criteria are adequate to deal with the contingencies posed by today’s neuroscientific and neurotechnological challenges, or whether ethical and legal concepts and systems need to be adapted – or even developed anew – to sufficiently account for and meet the epistemological, anthropological and socio-cultural (and economic) changes that neuroscience fosters.

I’ve stated in this blog before, and unapologetically do so again here, that we call for frank, pragmatic assessment of neuroscientific and neurotechnological capabilities and limitations, and an openness to revising scientific facts, philosophical doctrine, and social constructs in preparation for – and recognition of – the potential proximate, intermediate, and distal effects that such new knowledge, and values, are likely to engender.

Given the reciprocal relationship of knowledge, technology, and culture, it will be critical to develop ethical, legal, and political systems that appropriately reflect scientific advancements, apprehend the realities of social effect(s), and aptly guide, if not govern, the use and manifestations of science in the public sphere. Knowledge both brings considerable power and mandates increasing responsibility. To accept one without the other is a recipe for failure.


Creative Machines: Tomorrow’s Possibilities, Today’s Responsibilities

The issue that lurks right over the horizon of possibility is whether increasing complexification in generatively encoded “intelligent machines” could instantiate some form of consciousness. I argue that the most probable answer is “yes”: the system would become auto-referential and, in this way, acquire a “sense of self.” Leaving aside more deeply philosophical discussions of the topic, at the most basic level this means that the system would develop an awareness of its internal state and of external conditions, and be able to discriminate between itself and things that are not itself. This is an important step, as it would lead to relationality – a set of functions that provide resonance or dissonance with particular types of (internal and/or external) environmental conditions, and reinforcement and reward for achieving certain goals or states – and in this way a sense of what neuroscientist Antonio Damasio has called “a feeling of what happens”; in other words, a form of consciousness (and self-consciousness).
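One minimal way such self/non-self discrimination is cashed out in robotics is the forward-model (efference copy) scheme: the system predicts the sensory consequences of its own motor commands, and input that matches the prediction is tagged as self-generated. The toy sketch below is purely illustrative – the linear “body model” and the error threshold are my assumptions, not anything drawn from a particular machine-consciousness project.

```python
# Toy forward model: tag sensory input as "self" when it matches the
# predicted consequence of the system's own motor command (efference
# copy), and as "external" otherwise. The linear body model and the
# threshold are arbitrary, illustrative assumptions.

def predict_sensation(motor_command, gain=2.0):
    """Forward model: predicted sensory feedback for a motor command."""
    return gain * motor_command

def classify_signal(motor_command, observed_sensation, threshold=0.5):
    """Low prediction error -> self-generated; high error -> external."""
    error = abs(observed_sensation - predict_sensation(motor_command))
    return "self" if error < threshold else "external"

# The system issues a command (1.0) and senses feedback of 2.1:
print(classify_signal(1.0, 2.1))   # prediction error 0.1 -> "self"
# The same feedback arriving with no motor command at all:
print(classify_signal(0.0, 2.1))   # prediction error 2.1 -> "external"
```

The point of the sketch is only that discriminating “me” from “not-me” need not be mysterious at this basic level: it can begin as a comparison between predicted and observed consequences of one’s own actions.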


Creative Machines: Self-Made Machines and Machine-Made Selves

Could robotic systems create environments and bodies for themselves? To answer this question, let’s start with something simple (and most probable), and then open our discussion to include a somewhat more sublime, more futuristic vision. Let’s also lay down some basic presumptions about how a paradigm for such physically intelligent robots would be initiated and sustained. The establishment of a neurally modeled, physically intelligent system capable of generative encoding would need to enable the acquisition of data, information, and therefore some type of “knowledge” about both the system itself (i.e., interoceptive understanding) and the environments in which the system is embedded and engaged (i.e., exteroceptive understanding).
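“Generative encoding” here means that the genome is not a literal list of connection weights but a compact function that produces weights from the geometry of the network, so a small description can generate an arbitrarily large, regular pattern of connectivity. A minimal sketch follows; the hand-written weight function below is my own stand-in for the evolved pattern-producing networks such systems actually use, not code from any real project.

```python
import math

# Generative encoding in miniature: rather than storing every weight,
# a single compact function maps neuron coordinates to a connection
# weight. The particular function here is hand-written purely for
# illustration.

def weight_function(x1, y1, x2, y2):
    """Stand-in 'genome': weight depends only on geometric relations."""
    distance = math.hypot(x2 - x1, y2 - y1)
    return math.cos(distance) * math.exp(-distance)

def build_substrate(n):
    """Generate all weights of an n*n grid of neurons from the one
    compact function above (n*n neurons -> (n*n)**2 connections)."""
    coords = [(x, y) for x in range(n) for y in range(n)]
    return {(a, b): weight_function(*a, *b) for a in coords for b in coords}

net = build_substrate(4)   # 16 neurons yield 256 generated weights
print(len(net))            # 256
```

Note the leverage: the “genome” is a few lines, yet it scales to any substrate size, and regularities in the function (here, dependence on distance alone) automatically yield regularities in the generated network.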


Spare the Tune When Shooting the Piano Player

The blogosphere is buzzing with lots of vitriol for Martin Lindstrom’s piece on the ‘neuroscience’ of loving your iPhone. To be sure, there’s plenty to spew about, and many of my colleagues in neuroscience, neurotechnology and neuroethics have brought the issues to the fore: misrepresentation of neuroscience; miscommunication of neuroscientific theory and findings; fallacious thinking, both as regards the ways that neuroimaging can and should be used (e.g., the fallacy of false cause/post hoc ergo propter hoc – assuming that what merely precedes an outcome must have caused it) and the conceptualization of structure-function relations in the brain (what Bennett and Hacker have called the mereological fallacy: attributing the function of the whole solely to one of its constituent parts); and last, but certainly not least, plain misuse of terms and constructs (e.g., “synesthesia”).


Creative Machines: On the Cusp of Consciousness?

I recently had the opportunity to chat with Lakshmi Sandhana as she was preparing her article, “Darwin’s Robots,” which appeared in last week’s New Scientist. Lakshmi specifically addresses the work of Jeffrey Clune of the HyperNEAT Project at Cornell University’s Creative Machines Lab. Clune’s work is cutting-edge and provocative in its focus upon the possibility and implications of “creative” and “intelligent,” if not “conscious,” machines. But it’s this last point about consciousness in a machine that really opens up a proverbial “can of worms.” As a neuroscientist, I believe that it’s not a question of if this will happen, but when…and perhaps, more appropriately, how soon, and whether we will be ready for it when it does; and as a neuroethicist, I can guarantee that the idea – and reality – of conscious machines stirs up a brew of moral, ethical, legal and social contentiousness. But let’s put these ethical issues and questions aside for the moment, and look into some of the possibilities spawned by neuro-robotic engineering.


Neuroscience as a Social Force: The Baby and the Bathwater

Recently, Adrian Carter discussed the move toward adopting a disease model of addiction. A disease model can be useful in that it often substantiates and compels the search for prevention, cure, or at least some form of effective management. Of course, it’s presumed that any such treatments would be developed and rendered in accordance with the underlying moral imperative of medical care to act in patients’ best interests. But this fosters the need for a more finely grained assessment of exactly what constitutes – and what is entailed by – the “good” of medical care, given the variety of levels and domains that reflect and involve patients’ values, goals, burdens, risks and harms.


Icarus’ Folly: On the Need to Steward Neuroscientific Information…”Out of the Lab and into the Public Sphere”

The employment of basic neuroscientific research (what are known in government parlance as “6.1 level” studies) in translational development (so-called “6.2 level” work) and in test-and-evaluation applications (“6.3 level” uses) is not always a straightforward sequence of events. There are some well-done and very interesting basic neuroscientific findings that hint at translational and applied utility, and the recent demonstration that rats do not have a neurological mechanism to allow finely tuned vertical orientation may be an example of such a study. Recent research by Robin Hayman, Madeleine Verriotis, Aleksandar Jovalekic, Andre Fenton, and Kathryn Jeffery (“Anisotropic encoding of three-dimensional space by place cells and grid cells”) suggests that the rat brain does not process vertical-space information as efficiently or adeptly as horizontal and lateral field information, and this may have a number of implications – both for an understanding of brain-environment interactions, and for future research.
