A Bird’s Eye View of Cans of Worms…

A quick note of thanks to you, the readers of this blog, for hanging in there for a few months, while I took a bit of a sabbatical from blog-writing to focus on projects undertaken while I was at the Human Science Center (HWZ) of Ludwig Maximilians Universität, Munich, Germany. Over the next few weeks, I’ll provide some reports on that ongoing work, both in particular and in relation to larger issues and other endeavors in neuroscience, neurotechnology and neurobioethics. I’m pleased and honored that I’ll be returning to Bad Tölz and Munich on a regular basis to head up a program in neurotechnology and neurobioethics, having been newly appointed to the HWZ-LMU. I’m also pleased to announce that the blogs will be researched and contributed to by our growing staff of resident scholars at the Center for Neurotechnology, including Daniel Howlader, Dan Degerman, and Misti Ault Anderson. Thanks to you, our readers, for your continued interest and support.

The tail end of 2011 and dawn of 2012 provided a surge of information about the ways that neurotechnology can be used in the public sphere and for national defense – and all of the ethical issues such possible uses stir up. The Nuffield Council on Bioethics has released a consultation paper on novel neurotechnologies and brain intervention, and is currently soliciting responses from academic and industry experts with experience using neurotechnologies – as well as from patients and individuals who have used such devices “in recreational settings.” The report divides its summaries and questions into three categories of neurotechnology: brain-computer interfaces (BCIs), neurostimulation, and neural stem cell therapy. The first section references both medical and non-medical applications of BCIs, including neuroprosthetic interfaces and assistance for patients with forms of locked-in syndrome. The non-medical applications of BCIs include recreational uses (for example, in video games and toys) and military applications that range from performance enhancement of individual soldiers to telepresence and EEG-based communication between personnel. These military applications of novel and emerging neurotechnologies are particularly pertinent given the recent release of the Royal Society’s third Brain Waves module, “Neuroscience, conflict and security.” The Royal Society report discusses, among other things, a number of aspects of the neuro- and cognitive sciences that can be applied to national security, intelligence and defense (NSID) situations, including brain imaging and stimulation and aspects of weaponization, and devotes a chapter to training and enhancement.
This opens a proverbial can of ethico-legal and socio-political worms – not least of which are questions of 1) whether neuroscience should be used in NSID agendas at all (and its corollary: “if not, then how to realistically prevent this?”); 2) how neuroscience and neurotechnology can and should be used in NSID; 3) who should address these questions; and 4) how these questions should be addressed – and answered (or whether they can be, given the momentum of current research and use).

Here’s a hint: let’s get question number one settled. Neuroscience and neurotechnology can be, are, and will be employed for NSID, by someone, somewhere, sometime, and the time for deliberation and responsible action is not at some vague point in the future, but now. Of course, what responsible action means, and what forms it assumes, can vary, and there’s room on the table for discussion and debate about the merits of each. On some level, that’s the easy part, and I’m encouraged by most of the discourse to date, including a recent workshop at the National Institutes of Health, a panel at last year’s International Neuroethics Society meeting in Washington, DC, and our NELSI-3 Symposium. Nice start. But the hard part is confronting questions 2–4 in ways that are realistic, well balanced, cosmopolitan to the extent possible, and prudent, given both the uncertainties and contingencies of the science and technology – and insight into human history and socio-cultural, geo-political and economic trends and tendencies.

What’s needed is a frank depiction of what the science and technology can and cannot do, a deep dive into anthropological analyses of human values and social conduct, and a well-grounded, multi-partite discourse about which moral precepts can and should be used to guide and govern the employment of neuroscience and neurotechnology for national agendas. It’s a work in progress, and my hope is that its pace and depth continue to grow – before, and not in response to, an international event that is irrecoverable or unforgivable, and that thus challenges our capabilities and ingenuity.

The Nuffield report’s section on neurostimulation primarily focuses on transcranial magnetic stimulation and deep brain stimulation. Here, too, the spectre of security – on both an individual and a national level – is cast. But the training and learning benefits of such neuroscience and technology are applicable beyond the realms of the NSID communities. For example, using some of these techniques in classrooms would enable educators to more accurately differentiate the strengths and weaknesses of students, and this information could then be used to create better learning environments. For instance, the use of electroencephalography (EEG) to group students of similar cognitive function may be more pedagogically effective and scientifically sound than simply grouping students by chronological age, or even by test scores (as tends to occur in elementary education). Grouping students according to individual patterns of neuro-cognitive functions, capacities, talents, and limitations could allow for an improved, more cognitively tailored curriculum adapted to these skills and needs, and could also enable more functionally cohesive group dynamics among students (and their teachers).

This is not new. Michael Posner’s work has long spoken to assessing the brain to educate and enrich the mind, and M. Layne Kalbfleisch’s ongoing work in this area is of note as Layne specifically addresses the education/enhancement issue in much of her writing.

On a more fundamental level, an understanding of regulatory neurophysiological processes – such as bodily responses to the amount and quality of light, sound, olfactory (i.e., smell) and temperature cues – can be used to create “smart classrooms” that optimize conditions for student concentration, emotional stability and learning. This was a major aspect of the work being done by my colleagues Dr. Herbert Plischke, Niko Kohls, Sebastian Sauer and Astrid Schülke-Hazzam at the Generation Research Program-Bad Tölz of the Human Science Center of Ludwig Maximilians Universität, Munich, Germany. Moreover, this approach – developing and using neurophysiologically based adaptive ambient technology (AAT) – is not limited to the classroom; it can be employed in workspaces, hospitals and living spaces (for children, adults, and seniors) to create neuroergonomic, responsive environments that monitor and facilitate function and decrease various detriments and risks. The goal is to use what we know to sustain and optimize safe and effective spaces for human flourishing.

Sounds good, right? Still, there are objections to this type of educational and life-space enablement, with opposition ranging from parents and community members who think it undesirable to further differentiate children from one another, to those who fear neuro-stereotyping, neuro-ghettoization and/or neuro-doping in the classroom, workspace, and living (or bed) room! These fears are at least somewhat valid, insofar as we as a society do not want to excessively categorize or stigmatize if it can be avoided. However, this type of differentiation already occurs in schools in the forms of intelligence-quotient and standardized testing – and those data are arguably more arbitrary than what neuro-cognitive measures could provide. The application of some of these neuroscientific techniques in classrooms would allow for more targeted (and thus more efficient) learning by students. And as far as work- and living spaces are concerned, we need only look around at our lights, computers, phones, microwaves, smartphones and a host of other goodies to see the trend in play.

The use of neurophysiologically based AATs is a novel development in an ongoing march to create and utilize the tools at our disposal to improve the quality of life and the ways that we live it. I think it would be foolish not to dip into the most current knowledge that neuroscience can provide to develop neurotechnologies that are more in tune with human physiology and ecology. But ideas of what constitutes flourishing and the good life can be slippery, and just as with national security issues, questions of personal and individual security, and of ethico-legal probity, need to be addressed before and during the development and use of neurotechnologies, not just after the fact, lest situations and effects wiggle wildly out of our control. Indeed, it’s an early bird that must catch the worms from the can we’ve already opened.

Neuro-Enablement: Unique Issues Between the Scylla of Treatment and the Charybdis of Enhancement

The National Core for Neuroethics at the University of British Columbia recently put out a piece about cognitive enhancement in the military on their fine blog, “Neuroethics at the Core.” I will offer some brief comments on the complicated “treatment v. enhancement” debate.
