The issue that lurks just over the horizon of possibility is whether increasing complexity in generatively encoded “intelligent machines” could instantiate some form of consciousness. I argue that the most probable answer is “yes”. The system would become auto-referential and, in this way, acquire a “sense of self”. Leaving aside more deeply philosophical discussions of the topic, at the most basic level this means that the system would develop an awareness of its internal state and of external conditions, and be able to discriminate between itself and things that are not itself. This is an important step, as it would lead to relationality: a set of functions that provide resonance or dissonance with particular types of (internal and/or external) environmental conditions, and reinforcement and reward for achieving certain goals or states, and in this way a sense of what neuroscientist Antonio Damasio has called “a feeling of what happens”; in other words, a form of consciousness (and self-consciousness).
Could robotic systems create environments and bodies for themselves? To answer this question, let’s start with something simple (and most probable), and then open our discussion to include a somewhat more sublime, more futuristic vision. Let’s also lay down some basic presumptions about how a paradigm for such physically intelligent robots would be initiated and sustained. A neurally modeled, physically intelligent system capable of generative encoding would need to be able to acquire data, information, and therefore some type of “knowledge” about both the system itself (i.e., interoceptive understanding) and the environments in which the system is embedded and engaged (i.e., exteroceptive understanding).
The blogosphere is buzzing with vitriol over Martin Lindstrom’s piece on the ‘neuroscience’ of loving your iPhone. To be sure, there’s plenty to spew about, and many of my colleagues in neuroscience, neurotechnology and neuroethics have brought the issues to the fore: misrepresentation of neuroscience, miscommunication of neuroscientific theory and findings, fallacious thinking both as regards the ways that neuroimaging can and should be used (e.g., the fallacy of false cause/post hoc ergo propter hoc: treating what merely precedes an event as its cause) and as regards the conceptualization of structure-function relations in the brain (what Bennett and Hacker have called the mereological fallacy: attributing the function of the whole solely to one of its constituent parts), and last, but certainly not least, plain misuse of terms and constructs (e.g., “synesthesia”).