A Bird’s Eye View of Cans of Worms…

A quick note of thanks to you, the readers of this blog, for hanging in there for a few months while I took a bit of a sabbatical from blog-writing to focus on projects undertaken while I was at the Human Science Center (HWZ) of Ludwig Maximilians Universität, Munich, Germany. Over the next few weeks, I’ll provide some reports on that ongoing work, both in particular and in relation to larger issues and other endeavors in neuroscience, neurotechnology and neurobioethics. I’m pleased and honored that I’ll be returning to Bad Tölz and Munich on a regular basis to head up a program in neurotechnology and neurobioethics, having been newly appointed to the HWZ-LMU. I’m also pleased to announce that the blogs will be researched and contributed to by our growing staff of resident scholars at the Center for Neurotechnology, including Daniel Howlader, Dan Degerman, and Misti Ault Anderson. Thanks to you, our readers, for your continued interest and support.

The tail-end of 2011 and dawn of 2012 provided a surge of information about the ways that neurotechnology can be used in the public sphere and for national defense – and all of the ethical issues such possible uses stir up. The Nuffield Council on Bioethics has released a consultation paper on novel neurotechnologies and brain intervention, and is currently soliciting responses from academic and industry experts with experience using neurotechnologies – as well as patients, and individuals who have used such devices “in recreational settings.” The report divides its summaries and questions into three categories of neurotechnology: brain-computer interfaces (BCIs), neurostimulation, and neural stem cell therapy. The first section references both medical and non-medical applications of BCIs, including neuroprosthetic interfaces and assistance for patients with forms of locked-in syndrome. The non-medical applications of BCIs include recreational uses (for example, in video games and toys) and military applications that range from performance enhancement of individual soldiers to telepresence and EEG-based communication between personnel. These military applications of novel and emerging neurotechnologies are particularly pertinent given the recent release of the third of the Royal Society’s Brain Waves modules, “Neuroscience, conflict and security.” The Royal Society report discusses, amongst other things, a number of aspects of neuro- and cognitive science that can be applied toward national security, intelligence and defense (NSID) situations, including brain imaging and stimulation and aspects of weaponization, and devotes a chapter to training and enhancement. This opens a proverbial can of ethico-legal and socio-political worms – not least of which are the questions of 1) whether neuroscience should be used in NSID agendas at all (and its corollary: if not, then how realistically to prevent this?); 2) how neuroscience and neurotechnology can and should be used in NSID; 3) who should address these questions; and 4) how they should be addressed – and answered (or whether they can be, given the momentum of current research and use).

Here’s a hint: let’s get question number one settled. Neuroscience and neurotechnology can be, are, and will be employed for NSID – by someone, somewhere, sometime – and the time for deliberation and responsible action is not at some vague point in the future, but now. Of course, what responsible action means, and what forms it assumes, can vary, and there’s room on the table for discussion and debate about the merits of each. On some level, that’s the easy part, and I’m encouraged by most of the discourse to date, including a recent workshop at the National Institutes of Health, a panel at last year’s International Neuroethics Society meeting in Washington DC, and our NELSI-3 Symposium. Nice start. But the hard part is confronting questions 2–4 in ways that are realistic, well-balanced, cosmopolitan to the extent possible, and prudent, given both the uncertainties and contingencies of the science and technology, and insight into human history and socio-cultural, geo-political and economic trends and tendencies.

What’s needed is a frank depiction of what the science and technology can and cannot do, a deep dive into anthropological analyses of human values and social conduct, and a well-grounded, multi-partite discourse about which moral precepts can and should be used to guide and govern the employment of neuroscience and neurotechnology for national agendas. It’s a work in progress, and my hope is that its pace quickens and its depth increases before – and not in response to – an international event that is irrecoverable or unforgivable, and that thus challenges our capabilities and ingenuity.

The Nuffield report’s section on neurostimulation primarily focuses on transcranial magnetic stimulation and deep brain stimulation. Here too, the spectre of security – on both an individual and a national level – is cast. But the training and learning benefits of such neuroscience and technology are applicable beyond the realms of the NSID communities. For example, using some of these techniques in classrooms would enable educators to more accurately differentiate the strengths and weaknesses of students, and this information could then be used to create better learning environments. The use of electroencephalography (EEG) to group students of similar cognitive function may be more pedagogically effective and scientifically sound than simply grouping students by chronological age, or even by test scores (as tends to occur in elementary education). Grouping students according to individual patterns of neuro-cognitive functions, capacities, talents, and limitations could allow for an improved, more cognitively-tailored curriculum that is adapted to these skills and needs, and could also enable more functionally cohesive group dynamics among students (and their teachers).
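
To make the idea concrete, here is a minimal, purely hypothetical sketch of such grouping – clustering students on invented EEG band-power features rather than on age or test scores. The feature names, numbers, and cohort count are illustrative assumptions, not drawn from the Nuffield report or any real study.

```python
# Hypothetical sketch: cluster students by EEG band-power features
# rather than by chronological age or test scores. All values invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)

# Pretend per-student features: mean power in theta, alpha, and beta
# bands (arbitrary units), e.g., averaged over a resting-state recording.
n_students = 30
eeg_features = rng.normal(loc=[4.0, 10.0, 20.0], scale=[1.0, 2.0, 4.0],
                          size=(n_students, 3))

# Group students into cohorts of broadly similar neuro-cognitive profile.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
cohorts = kmeans.fit_predict(eeg_features)

for cohort_id in range(3):
    members = np.where(cohorts == cohort_id)[0]
    print(f"Cohort {cohort_id}: students {members.tolist()}")
```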

This is not new. Michael Posner’s work has long spoken to assessing the brain to educate and enrich the mind, and M. Layne Kalbfleisch’s ongoing work in this area is of note, as Layne specifically addresses the education/enhancement issue in much of her writing.

On a more fundamental level, an understanding of regulatory neurophysiological processes – such as bodily responses to the amount and quality of light, sound, olfactory (i.e., smell) and temperature cues – can be used to create “smart classrooms” that optimize conditions for student concentration, emotional stability and learning. This was a major aspect of the work being done by my colleagues Dr. Herbert Plischke, Niko Kohls, Sebastian Sauer and Astrid Schülke-Hazzam at the Generation Research Program-Bad Tölz of the Human Science Center of Ludwig Maximilians Universität, Munich, Germany. Moreover, this approach – developing and using neurophysiologically-based adaptive ambient technology (AAT) – is not limited to the classroom, but can be employed in workspaces, hospitals and living spaces (for children, adults, and seniors) to create neuroergonomic, responsive environments that monitor and facilitate function and decrease various detriments and risks. The goal is to use what we know to sustain and optimize safe and effective spaces for human flourishing.
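
As a toy illustration of the AAT concept – and only that – consider a feedback loop that nudges ambient settings toward targets inferred from a simple physiological cue. The sensor, targets, and gains below are invented for the sake of the sketch, not features of any system described above.

```python
# Toy adaptive-ambient-technology (AAT) control loop. All parameters
# (arousal index, target, gain, setpoint) are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AmbientState:
    light_level: float   # 0.0 (dark) .. 1.0 (bright)
    temperature: float   # degrees Celsius

def adapt(state: AmbientState, arousal: float, target_arousal: float = 0.5,
          gain: float = 0.1) -> AmbientState:
    """One control step: if a measured arousal proxy (e.g., a normalized
    heart-rate or EEG-derived index, 0..1) is below target, raise the
    lighting slightly; if above target, lower it. Temperature drifts
    toward a fixed comfort setpoint."""
    error = target_arousal - arousal
    new_light = min(1.0, max(0.0, state.light_level + gain * error))
    new_temp = state.temperature + 0.5 * (21.0 - state.temperature)
    return AmbientState(light_level=new_light, temperature=new_temp)

# Example: a drowsy classroom (low arousal) gradually gets brighter light.
state = AmbientState(light_level=0.4, temperature=23.0)
for measured_arousal in [0.2, 0.25, 0.3, 0.4, 0.5]:
    state = adapt(state, measured_arousal)
    print(f"light={state.light_level:.2f}  temp={state.temperature:.1f}")
```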

Sounds good, right? Still, there are objections to this type of educational and life-space enablement, with opposition ranging from parents and community members who think it undesirable to further differentiate children from each other, to those who fear neuro-stereotyping, neuro-ghettoization and/or neuro-doping in the classroom, workspace, and living (or bed) room! These fears are at least somewhat valid, insofar as we as a society do not want to excessively categorize or stigmatize if it can be avoided. However, this type of differentiation already occurs in schools in the forms of intelligence quotient and standardized testing – and those data are arguably more arbitrary than what neuro-cognitive measures could provide. The application of some of these neuroscientific techniques in classrooms could allow for more targeted (and thus more efficient) learning by students. And as far as work- and living spaces are concerned, we need only look around at our lights, computers, phones, microwaves, smartphones and host of other goodies to see the trend in play.

The use of neurophysiologically-based AATs is a novel development in an ongoing march to create and utilize the tools at our disposal to improve the quality of life and the ways that we live it. I think that it’d be foolish not to dip into the most current knowledge that neuroscience can provide to develop neurotechnologies that are more in tune with human physiology and ecology. But ideas of what constitutes flourishing and the good life can be slippery, and just as with national security issues, questions of personal and individual security and of ethico-legal probity need to be addressed before and during the development and use of neurotechnologies – not just after the fact – lest situations and effects wiggle wildly out of our control. Indeed, it’s an early bird that must catch the worms from the can we’ve already opened.

Stirring Neuroscientific Knowledge in the Social Crucible

In my last blog, I raised the issue of what I referred to as the real questions arising from the nature and implications of neurocentric criteria of normality and diversity, ontological status (e.g., of embryos, the profoundly brain-damaged, non-human animals, etc.), and the ways we form and formulate beliefs, policies and laws. The “take home” questions were 1) whether (and how) insight into the neuroscience of painience and sentience (or the translation of neuroscientific information and technology to create organisms that are sentient and/or painient) could provide a metric for moral and social regard and treatment; 2) whether we will be sufficiently sensitive to, and wise enough to appropriately weigh and act upon, such knowledge; and 3) if and how such information can – and should – be used to inform ethics, policies and laws. If numbers speak to trends in interest and involvement, the approximately 33,000 attendees at this month’s Society for Neuroscience meeting and almost 200 attendees at the International Neuroethics Society meeting in Washington DC attest to the growth of these fields, both within the professional sphere and in the public eye. Without doubt, neuroscience and the neuroethical issues it spawns are ever expanding, frequently “hot” – as in ‘hot off the press’, ‘hot’ as in seductive, and ‘hot’ as in controversial – and arguably important ingredients that simmer in the crucible of social sentiment, action and change…and this can be an often unpredictable, if not volatile, brew.

For sure, neuroscience has enabled deeper and wider insight into putative substrates and mechanisms of consciousness, mind, self and personhood. Despite (genuine recognition of) current limitations in the type and extent of such information, the knowledge gained to date has initiated moves from longstanding, dogmatic notions of self and person toward a broader construct of what constitutes the self and a person – one that is more inclusive of the possibility, if not probability, of animal “persons” and machine “selves.” Of course, differing viewpoints exist, not only within the field of brain-mind studies (including its disciplines of neuroscience, psychology, philosophy, etc.), but also between various camps within the sciences and humanities, and even within the public sphere. In the main, these differences reflect and/or stem from various epistemological and anthropological positions that continue to pose questions for both scientific inquiry and social conduct. Indeed, transformations in the construct of self and personhood are certain to impact ethico-legal considerations, policy decisions and ecological trends, if not the human condition at large.

Working in our group, researchers Nicholas Fitz and Dan Howlader are focusing upon the ways that the increasing advancement of neuroscience and neurotechnology – and society’s growing reliance upon them – may change current and longstanding ideas of self and personhood, and foster re-examination of more neurocentrically-oriented views of animals, fetuses, the obtunded, computers, and hybrid human-machine beings. Working from the premise that it’s not so much a question of if such epistemic shifts will occur, but when, Fitz and Howlader are asking what society will do with this new information, and what its potential implications are for policy and law.

Our general position is that a deepening understanding of the ways that nervous systems and brains are involved in (or evoke) those characteristics that we value as individuals, groups, and a species should compel and sustain the ways that we regard and treat the organisms whose nervous systems give rise to such characteristics. Moreover, neuroscience has demonstrated – and will likely continue to demonstrate – that despite a wide array of individual differences, there are features that are common to nervous systems, and to the organisms in which they are embodied.

Simply put, we must ask whether and in what ways neuroscience might demonstrate how we are alike and how we differ. Is it possible that neuroscience might afford both purchase and leverage to reconcile apparent differences between individuals, religions, cultures, and even species? On some level, I think so, but perhaps a bigger and more important question is whether we as individuals, groups, cultures and a species will in fact embrace such knowledge to prompt positive change in our views, values, regard and actions toward those things that “have a brain and are a mind.”

Working with philosopher John Shook, Fitz and Howlader are examining whether current ethico-legal concepts and criteria are adequate to deal with the contingencies posed by today’s neuroscientific and neurotechnological challenges, or whether ethical and legal concepts and systems need to be adapted, or even developed anew, to sufficiently account for and meet the epistemological, anthropological and socio-cultural (and economic) changes that neuroscience fosters.

I’ve stated in this blog before, and unapologetically do so again here, that we call for frank, pragmatic assessment of neuroscientific and neurotechnological capabilities and limitations, and for an openness to revising scientific facts, philosophical doctrine, and social constructs in preparation for – and recognition of – the potential proximate, intermediate, and distal effects that such new knowledge, and values, are likely to engender.

Given the reciprocal relationship of knowledge, technology, and culture, it will be critical to develop ethical, legal, and political systems that appropriately reflect scientific advancements, apprehend the realities of social effect(s), and aptly guide, if not govern, the use and manifestations of science in the public sphere. Knowledge brings considerable power and mandates increasing responsibility; to accept one without the other is a recipe for failure.

Creative Machines: Tomorrow’s Possibilities, Today’s Responsibilities

The issue that lurks right over the horizon of possibility is whether increasing complexification in generatively encoded “intelligent machines” could instantiate some form of consciousness. I argue that the most probable answer is “yes”. The system would become auto-referential, and in this way acquire a “sense of self”. Leaving aside more deeply philosophical discussions on the topic, at the most basic level this means that the system would develop an awareness of its internal state and of external conditions, and be able to discriminate between itself and things that are not itself. This is an important step, as it would lead to relationality – a set of functions that provide resonance or dissonance with particular types of (internal and/or external) environmental conditions, and reinforcement and reward for achieving certain goals or states – and in this way to a sense of what neuroscientist Antonio Damasio has called “the feeling of what happens”; in other words, a form of consciousness (and self-consciousness).
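
As a purely speculative sketch of that self/non-self discrimination – under the assumption that the system holds a simple internal model of the sensory consequences of its own actions – consider the following. The linear self-model, numbers, and tolerance are invented for illustration only.

```python
# Speculative sketch: a system predicts the sensory consequences of its
# own actions and attributes unpredicted change to the external world --
# one rudimentary way to discriminate "self" from "not-self".

def self_model(action: float) -> float:
    """Predicted sensory change caused by the system's own action
    (here a trivially simple, assumed-linear internal model)."""
    return 2.0 * action

def attribute(action: float, observed_change: float,
              tolerance: float = 0.1) -> str:
    """Tag change the system can explain by its own action as 'self';
    the unexplained residual is tagged 'not-self' (the environment)."""
    residual = observed_change - self_model(action)
    return "self" if abs(residual) < tolerance else "not-self"

print(attribute(action=1.0, observed_change=2.05))  # -> self
print(attribute(action=1.0, observed_change=3.50))  # -> not-self
```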

Continue reading

Creative Machines: Self-Made Machines and Machine-Made Selves

Could robotic systems create environments and bodies for themselves? To answer this question, let’s start with something simple (and most probable), and then open our discussion to include a somewhat more sublime, and more futuristic, vision. Let’s also lay down some basic presumptions about how a paradigm for such physically intelligent robots would be initiated and sustained. The establishment of a neurally-modeled, physically intelligent system capable of generative encoding would need to enable the acquisition of data, information, and therefore some type of “knowledge” about both the system itself (i.e., interoceptive understanding) and the environments in which the system would be embedded and engaged (i.e., exteroceptive understanding).
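
A bare-bones sketch of that interoceptive/exteroceptive split might look like the following; the sensor names, thresholds, and the “recharge” goal are hypothetical placeholders, not features of any system described here.

```python
# Sketch: separate interoceptive (self-directed) and exteroceptive
# (world-directed) state, updated from different sensor streams.
class EmbodiedSystem:
    def __init__(self):
        self.interoceptive = {"battery": 1.0, "motor_temp": 20.0}
        self.exteroceptive = {"light": 0.0, "obstacle_distance": None}

    def sense_self(self, battery: float, motor_temp: float) -> None:
        """Update knowledge about the system's own internal state."""
        self.interoceptive.update(battery=battery, motor_temp=motor_temp)

    def sense_world(self, light: float, obstacle_distance: float) -> None:
        """Update knowledge about the environment the system inhabits."""
        self.exteroceptive.update(light=light,
                                  obstacle_distance=obstacle_distance)

    def needs_recharge(self) -> bool:
        # A simple, interoceptively driven "goal": preserve the self.
        return self.interoceptive["battery"] < 0.2

robot = EmbodiedSystem()
robot.sense_self(battery=0.15, motor_temp=35.0)
robot.sense_world(light=0.8, obstacle_distance=1.2)
print(robot.needs_recharge())  # -> True
```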

Continue reading

Spare the Tune When Shooting the Piano Player

The blogosphere is buzzing with lots of vitriol for Martin Lindstrom’s piece on the ‘neuroscience’ of loving your iPhone. To be sure, there’s plenty to spew about, and many of my colleagues in neuroscience, neurotechnology and neuroethics have brought the issues to the fore: misrepresentation of neuroscience; miscommunication of neuroscientific theory and findings; fallacious thinking, both as regards the ways that neuroimaging can and should be used (e.g., the fallacy of false cause/post hoc ergo propter hoc – assuming that because one thing follows another, the first must have caused the second) and as regards the conceptualization of structure-function relations in the brain (what Bennett and Hacker have called the mereological fallacy of attributing the function of the whole solely to one of its constituent parts); and last, but certainly not least, plain misuse of terms and constructs (e.g., “synesthesia”).

Continue reading

Creative Machines: On the Cusp of Consciousness?

I recently had the opportunity to chat with Lakshmi Sandhana as she was preparing her article, “Darwin’s Robots,” which appeared in last week’s New Scientist. Lakshmi specifically addresses the work of Jeffrey Clune of the HyperNEAT Project at Cornell University’s Creative Machines Lab. Clune’s work is cutting-edge and provocative in its focus upon the possibility and implications of “creative” and “intelligent,” if not “conscious,” machines. But it’s this last point about consciousness in a machine that really opens up a proverbial “can of worms.” As a neuroscientist, I believe that it’s not a question of if this will happen, but when…and perhaps, more appropriately, how soon, and whether we will be ready for it when it does; and as a neuroethicist, I can guarantee that the idea – and reality – of conscious machines stirs up a brew of moral, ethical, legal and social contentiousness. But let’s put these ethical issues and questions aside for the moment, and look into some of the possibilities spawned by neuro-robotic engineering.
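
To give a flavor of what “generative encoding” means in HyperNEAT-style systems – a compact genome that generates, rather than lists, a network’s connectivity – here is a hand-rolled caricature in which a small function of neuron coordinates produces every connection weight. It is emphatically not the actual HyperNEAT implementation; the function and grid are invented for illustration.

```python
# Caricature of generative encoding: a tiny "genome" (one function of
# neuron positions, standing in for a CPPN) generates all the weights of
# an arbitrarily large, geometrically regular network.
import math

def cppn(x1: float, y1: float, x2: float, y2: float) -> float:
    """Stand-in compositional pattern-producing network: the weight is a
    smooth, symmetric function of the two neurons' positions."""
    distance = math.hypot(x2 - x1, y2 - y1)
    return math.sin(3.0 * distance) * math.exp(-distance)

# Lay out a 5x5 grid of "neurons" and query the function for every
# pairwise weight: 25 neurons, 625 weights, all from one compact rule.
coords = [(x / 4.0, y / 4.0) for x in range(5) for y in range(5)]
weights = {(i, j): cppn(*coords[i], *coords[j])
           for i in range(len(coords)) for j in range(len(coords))}
print(f"{len(coords)} neurons -> {len(weights)} generated weights")
```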

Continue reading

Neuroscience as a Social Force: The Baby and the Bathwater

Recently, Adrian Carter discussed the move toward adopting a disease model of addiction. A disease model can be useful in that it often substantiates and compels the search for prevention, cure, or at least some form of effective management. Of course, it’s presumed that any such treatments would be developed and rendered in accordance with the underlying moral imperative of medical care to act in patients’ best interests. But this fosters the need for a more finely-grained assessment of exactly what constitutes – and what is entailed by – the “good” of medical care, given the variety of levels and domains that reflect and involve patients’ values, goals, burdens, risks and harms.

Continue reading

Neurolalia: Can We Talk Our Way Through the Forest and Trees of Neuroscience?

Neuro – see below
Lalia – from the Greek lalia, speech or chatter (cf. the Latin lallare, to sing “la la”); the use of language

It was with great interest that I read Deric Bownds’ recent MindBlog re-post about representation of inner lives, and his current post about the utility of being vague. I think that, taken together, these two concepts well describe the state of the field of neuroscience, and nicely frame how neuroscience and the use of neurotechnology can affect the public mindset.

Larissa MacFarquhar’s profile of Paul and Patricia Churchland in a February edition of The New Yorker stated that the first family of neurophilosophy “…like to speculate about a day when whole chunks of English are replaced by scientific words that call a thing by its proper name, rather than some outworn metaphor.” I’m all for that, and I respect most of Paul and Pat Churchland’s work as being spot on. But we might need to be careful about replacing one metaphor with another, lest we engage in this vocabulary exercise prematurely and/or get too carried away. There’s a lot of stuff going on in the neural networks that make up the peripheral and central nervous systems, and while some of this is kind of a “toe bone leads to foot bone leads to leg bone” arrangement, such straightforward descriptions get dicey once we get inside the head bones and into the brain.

Continue reading

Prologue to Minority Report: Protecting the Majority from the Validity and Risks of Predictive Neurotechnology

I recently read with great interest an article, “The quest to build the perfect lie detector” (a condensed excerpt from Lone Frank’s recent book, The Neurotourist: Postcards from the Edge of Brain Science), on the use of neuroimaging to advance the current state of lie detector technology. The caveat that “they’re getting close and it’s scary” may be a bit off the mark in some ways, but spot-on in others. While the technology is not as close as implied to being able to “scan brains to read minds” – and particularly to detect deception – I think that the truly scary issue lies in the fact that many believe it to be “close enough.”
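
One way to see why “close enough” is worrisome is a bit of back-of-envelope arithmetic (mine, not the article’s): even an impressively accurate detector yields mostly false alarms when actual deception is rare in the screened population. All numbers below are assumptions chosen only to make the point.

```python
# Bayes' rule at a low base rate: how often is a "flagged" person
# actually lying? Sensitivity, specificity, and base rate are assumed.
def positive_predictive_value(sensitivity: float, specificity: float,
                              base_rate: float) -> float:
    """P(actually lying | test says lying)."""
    true_pos = sensitivity * base_rate
    false_pos = (1.0 - specificity) * (1.0 - base_rate)
    return true_pos / (true_pos + false_pos)

# A "90% accurate" scanner screening a population where 1 in 100 lies:
ppv = positive_predictive_value(sensitivity=0.90, specificity=0.90,
                                base_rate=0.01)
print(f"Chance a flagged person is actually lying: {ppv:.1%}")  # ~8.3%
```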

Continue reading

Neuro-Enablement: Unique Issues Between the Scylla of Treatment and the Charybdis of Enhancement

The National Core for Neuroethics at the University of British Columbia recently put out a piece about cognitive enhancement in the military on their fine blog “Neuroethics at the Core.” I will offer some brief comments on the complicated “treatment v. enhancement” debate:

Continue reading