The issue that lurks just over the horizon of possibility is whether increasing complexification in generatively encoded “intelligent machines” could instantiate some form of consciousness. I argue that the most probable answer is “yes”. The system would become auto-referential and, in this way, acquire a “sense of self”. Leaving aside more deeply philosophical discussions of the topic, at the most basic level this means that the system would develop an awareness of its internal state and of external conditions, and be able to discriminate between itself and things that are not itself. This is an important step, as it would lead to relationality – a set of functions that provide resonance or dissonance with particular types of (internal and/or external) environmental conditions, and reinforcement and reward for achieving certain goals or states – and in this way a sense of what the neuroscientist Antonio Damasio has called “a feeling of what happens”; in other words, a form of consciousness (and self-consciousness).
Could robotic systems create environments and bodies for themselves? To answer this question, let’s start with something simple (and most probable), and then open our discussion to include a somewhat more sublime, more futuristic vision. Let’s also lay down some basic presumptions about how a paradigm for such physically intelligent robots would be initiated and sustained. The establishment of a neurally modeled, physically intelligent system capable of generative encoding would need to enable the acquisition of data, information, and therefore some type of “knowledge” about both the system itself (i.e., interoceptive understanding) and the environments in which the system would be embedded and engaged (i.e., exteroceptive understanding).
I recently had the opportunity to chat with Lakshmi Sandhana as she was preparing her article, “Darwin’s Robots,” which appeared in last week’s New Scientist. Lakshmi specifically addresses the work of Jeffrey Clune of the HyperNEAT Project at Cornell University’s Creative Machines Lab. Clune’s work is cutting-edge and provocative in its focus upon the possibility and implications of “creative” and “intelligent,” if not “conscious,” machines. But it’s this last point about consciousness in a machine that really opens up a proverbial “can of worms”. As a neuroscientist, I believe that it’s not a question of if this will happen, but when…and perhaps, more appropriately, how soon, and whether we will be ready for it when it does; and as a neuroethicist, I can guarantee that the idea – and reality – of conscious machines stirs up a brew of moral, ethical, legal and social contentiousness. But let’s put these ethical issues and questions aside for the moment, and look into some of the possibilities spawned by neuro-robotic engineering.
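To make “generative encoding” a bit more concrete: in HyperNEAT-style systems, a compact genome defines a pattern-producing function that “paints” connection weights across the geometry of a much larger network, rather than listing each weight directly. Below is a toy sketch of that idea; the genome, the particular function, and the grid are my own simplifications for illustration, not Clune’s actual code or the HyperNEAT implementation.

```python
import math

def cppn(x1, y1, x2, y2, genome):
    """Toy pattern-producing function: a three-number genome decides
    the weight of the connection between positions (x1, y1) and (x2, y2)."""
    a, b, c = genome
    return math.tanh(a * math.sin(b * (x1 - x2)) + c * (y1 * y2))

def generate_weights(grid, genome):
    """Generatively encode a network: assign a weight to every
    connection between positions on the grid from the small genome."""
    return {
        ((x1, y1), (x2, y2)): cppn(x1, y1, x2, y2, genome)
        for (x1, y1) in grid for (x2, y2) in grid
    }

# A 3x3 grid of neuron positions yields 81 connections,
# all specified by just three genome parameters.
grid = [(x / 2.0, y / 2.0) for x in range(3) for y in range(3)]
weights = generate_weights(grid, (0.8, 2.0, 0.5))
print(len(weights))  # 81
```

The point of the sketch is the compression: mutating three numbers reshapes the whole weight pattern at once, which is what lets evolutionary search over such genomes scale to large, regular network structures.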
“…still I look to find a reason to believe…”
Recently, Mercier and Sperber reported on the role of reason in human cognition, social behavior, and the formulation of epistemological capital. In an evolutionary-developmental (evo-devo) neuroscientific light, this comports well with a bio-psychosocial model of both individual and cultural cognitive capability. As a species (and like many other species), we tend to augment our existing capabilities and skills, and compensate for those we lack. In this way, the ability to reason may afford particular cognitive capacities that facilitate our social interactions and compensate for the limitations and restrictions imposed by a single point of view. It’s sort of a combination of the “there’s power in numbers” and “two heads are better than one” approaches to social cognition. I’m fond of referring to the late George Bugliarello’s concept of BioSoMa as an interesting model to depict the engagement of social interaction and the use of tools (e.g., machination) in response to our biological abilities and limitations. As Mercier and Sperber note, it seems that reasoning is based upon a set of fundamental cognitive constructs and intuitions, and provides a mechanism with which to navigate the nuances of an issue. But the human ability to reason is not reason to expect a lack of bias in thought and action; rather, quite the opposite – reason provides a way to approach a situation and/or problem by engaging our subjective cognitive and emotional perspective in comparison (and perhaps contest) with the ideas of others. And frequently, it’s a case of “let the best biases win”.
Recently, Adrian Carter discussed the move toward adopting a disease model of addiction. A disease model can be useful in that it often substantiates and compels the search for prevention, cure, or at least some form of effective management. Of course, it’s presumed that any such treatments would be developed and rendered in accordance with the underlying moral imperative of medical care to act in patients’ best interests. But this fosters the need for a more finely grained assessment of exactly what the “good” of medical care obtains and entails, given the variety of levels and domains that reflect and involve patients’ values, goals, burdens, risks, and harms.
The employment of basic neuroscientific research (what are known in government parlance as “6.1 Level” studies) in translational development (so-called “6.2 Level” work) and in test-and-evaluation applications (“6.3 Level” uses) is not always a straightforward sequence of events. There are some well-done and very interesting basic neuroscientific findings that hint at translational and applied utility, and a recent demonstration that rats lack a neurological mechanism for finely tuned vertical orientation may be an example of such a study. Recent research by Robin Hayman, Madeleine Verriotis, Aleksandar Jovalekic, Andre Fenton, and Kathryn Jeffery (“Anisotropic encoding of three-dimensional space by place cells and grid cells”) suggests that the rat brain does not process vertical-space information as efficiently or adeptly as horizontal and lateral field information, and this may have a number of implications – both for an understanding of brain-environment interactions, and for future research.
Neuro – see below
Lalia – from the Latin lallare, to sing “la la”; the use of language
It was with great interest that I read Deric Bownds’ recent MindBlog re-post about the representation of inner lives, and his current post about the utility of being vague. I think that, taken together, these two concepts well describe the state of the field of neuroscience, and nicely frame how neuroscience and the use of neurotechnology can affect the public mindset.
Larissa MacFarquhar’s profile of Paul and Patricia Churchland in a February edition of The New Yorker magazine stated that the first family of neurophilosophy “…like to speculate about a day when whole chunks of English are replaced by scientific words that call a thing by its proper name, rather than some outworn metaphor.” I’m all for that, and I respect most of Paul and Pat Churchland’s work as being spot-on the mark. But we might need to be careful about replacing one metaphor with another, lest we engage this vocabulary exercise prematurely and/or get too carried away. There’s a lot going on in the neural networks that make up the peripheral and central nervous systems, and while some of this is kind of a “toe bone leads to foot bone leads to leg bone” arrangement, such straightforward descriptions get dicey once we move inside the head bones and into the brain.