Creative Machines: On the Cusp of Consciousness?

I recently had the opportunity to chat with Lakshmi Sandhana as she was preparing her article, “Darwin’s Robots,” which appeared in last week’s New Scientist. Lakshmi specifically addresses the work of Jeffrey Clune of the HyperNEAT Project at Cornell University’s Creative Machines Lab. Clune’s work is cutting-edge and provocative in its focus upon the possibility and implications of “creative” and “intelligent,” if not “conscious,” machines. But it’s this last point about consciousness in a machine that really opens up a proverbial “can of worms”. As a neuroscientist, I believe that it’s not a question of if this will happen, but when…and, perhaps more appropriately, how soon, and whether we will be ready for it when it does. And as a neuroethicist, I can guarantee that the idea – and reality – of conscious machines stirs up a brew of moral, ethical, legal and social contentiousness. But let’s put these ethical issues and questions aside for the moment, and look into some of the possibilities spawned by neuro-robotic engineering.

I think it’s highly likely that in the not-too-distant future, robots that possess neurally-modeled sensory, information-processing, decision-making, and motor systems will rapidly progress to increased levels of complexity and capacity, and in so doing acquire some type and/or level of consciousness. Research in neurally-based robotic systems is expanding: efforts to create robots that incorporate complex, dynamic, neural-like sensory acquisition, information integration and synthesis, and motor output systems are both an intuitive and, many would say, predictable trajectory in the fusion of neural and robotic engineering. The approach relies upon, and also provides, a very useful set of heuristics. First, it is based upon the “tools-to-theory” heuristics of neuroscience, which have allowed significant progress in understanding the structure and basic functions of neural systems. Second, this has enabled “theory-to-tools” heuristics that have been actualized in the development and use of a variety of neurotechnologies, including neuro-interventional devices (e.g. transcranial and deep brain stimulation applications, nanoplatforms for pharmacological delivery, etc.), brain-machine interfaces, and neuro-cognitive systems, such as those being created for these “next-generation” robots.

I believe that such “reverse-engineered” neural models of brain-like structures and functions will ultimately be a key to unraveling the enigma of consciousness. This will close the heuristic loop through the re-engagement of a “tools-to-theory” approach. This very concept of utilizing tools to understand and create complex dynamical systems is fundamental to the engineering of neurally-competent robots. While we’d create the general template for the neural system and robot, it is the tool itself (i.e. the neural system “embodied” in the robot) that would develop the techniques and implements needed to identify the features of its physical system that need to be fortified, modified, or discarded, based upon acquired information about the environment in which it exists and the tasks necessary to act under changing conditions within such environments.

So, simply put, the system could acquire a form of “physical intelligence” rather quickly, and then iteratively adapt its neural functions and physical features to optimize inputs and outputs. We’re already on this path: a number of labs are working to engineer systems that are predisposed to “learn” and to adapt their structures and functions so as to maximize both continued learning and a set of performance outputs in the environments in which they operate.
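
To make that loop concrete, here is a minimal sketch, in Python, of a system that repeatedly perturbs its own “neural” and “physical” parameters and keeps whatever improves performance. It is purely illustrative: the parameters, the mutation rule, and the stand-in fitness measure are all invented for the example, and this is not any lab’s actual code.

```python
import random

def evaluate(controller, morphology):
    """Placeholder fitness: reward parameter settings near an arbitrary
    target value, standing in for real task performance."""
    target = 0.5
    error = sum((w - target) ** 2 for w in controller)
    error += sum((m - target) ** 2 for m in morphology)
    return -error  # higher is better

def mutate(params, rate=0.1):
    """Small random perturbation of each parameter."""
    return [p + random.gauss(0.0, rate) for p in params]

# Start from the arbitrary "template" that human designers provide...
controller = [random.random() for _ in range(8)]   # e.g. neural weights
morphology = [random.random() for _ in range(3)]   # e.g. limb lengths, sensor gains
best = evaluate(controller, morphology)

# ...then let the system adapt itself: perturb, test, keep what works.
for step in range(1000):
    new_c, new_m = mutate(controller), mutate(morphology)
    score = evaluate(new_c, new_m)
    if score > best:
        controller, morphology, best = new_c, new_m, score

print(f"best score after adaptation: {best:.4f}")
```

Trivial as it is, the loop captures the key point: once the perturb-test-keep cycle is running, the designers no longer specify the final configuration; the system finds it.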

In many ways, such generative encoding represents what is called an autopoietic – or self-constructing – system, and as such operates both developmentally and somewhat “evolutionarily”: first to modify itself (i.e. developmentally), and then to affect others of its type to progressively adapt (i.e. evolutionarily) to ever more complex levels of information acquisition, use, and activity. On a number of levels, such autofunctional systems might be seen as desirable, because they have “build ’em and leave ’em” qualities, and thus humans would assume the role of Richard Dawkins’ proverbial “blind watchmaker”. These kinds of systems would learn what they’d need to know and do to achieve a set of tasks and goals that we define and describe – at least initially.
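
For readers wondering what “generative encoding” looks like in practice, here is a toy sketch in the HyperNEAT spirit. The coordinates, the three-parameter genome, and the weight rule are my own inventions for illustration, not Clune’s actual CPPN machinery; the idea they convey is that, instead of listing every connection weight explicitly, a small “recipe” maps the geometric positions of two neurons to the weight connecting them, so the same few parameters can grow an arbitrarily large network.

```python
import math

def generative_weight(genome, src, dst):
    """Map the positions of two neurons to the weight connecting them,
    using only a handful of genome parameters (the compact 'recipe')."""
    a, b, c = genome
    dx, dy = dst[0] - src[0], dst[1] - src[1]
    # Smooth, geometry-aware rule: nearby neurons get stronger links,
    # modulated by a periodic term along one axis.
    return a * math.exp(-b * (dx * dx + dy * dy)) * math.cos(c * dx)

# Three genome parameters "grow" a full weight matrix over a 2-D grid of
# neuron positions; the same genome would scale to a much larger grid.
genome = (1.0, 2.0, math.pi)
grid = [(x / 4.0, y / 4.0) for x in range(5) for y in range(5)]

weights = {
    (i, j): generative_weight(genome, grid[i], grid[j])
    for i in range(len(grid))
    for j in range(len(grid))
    if i != j
}

print(f"{len(genome)} genome parameters generated {len(weights)} connection weights")
```

The particular formula matters less than the indirection: tweak the recipe slightly and the entire grown network changes in coordinated, organism-like ways, which is why such encodings lend themselves to the developmental and evolutionary behavior described above.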

But, as I told Lakshmi, the neural networks and bodies that we create for the system are not necessarily those that the system would develop for itself (think about that “special gift” someone gave you on your last birthday – the tie with the purple flying pigs on it, or the lime green blazer with red stitching; often, what others think you need and want tends to reflect their tastes and wants more than your own). Recall that what’s being toyed with here is the creation not only of real cybernetic entities (in the strict sense of the word – a system of progressively adaptive feed-forward/feedback mechanisms, as defined by Norbert Wiener), but of complex, dynamical cybernetic systems, with all the features, bells and whistles that such systems entail.
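
As a simple illustration of what a feedback loop in Wiener’s sense amounts to, consider a system that senses its own output, compares it to a goal, acts to reduce the error, and, in a crude nod to “progressively adaptive,” tunes its own feedback gain along the way. Again, this is a toy sketch with an invented plant, goal, and gains, not a model of any actual robot.

```python
goal = 10.0          # the setpoint the system is trying to reach
state = 0.0          # the system's current output
gain = 0.1           # feedback gain, itself subject to adaptation
prev_error = goal - state

for step in range(100):
    error = goal - state        # feedback: measure the deviation from the goal
    state += gain * error       # act on the error to close the gap
    # Progressively adapt the loop itself: push harder while corrections
    # keep falling short, back off once a correction overshoots.
    if (error > 0) == (prev_error > 0):
        gain *= 1.05
    else:
        gain *= 0.5
    prev_error = error

print(f"final output {state:.3f} vs goal {goal}; adapted gain {gain:.3f}")
```

Real cybernetic systems chain many such loops together, which is where the sensitivity and complexity discussed next come from.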

These systems are very sensitive to initial and changing conditions, and are responsive – and adapt – to various attractors and constraints that might not be readily apparent to humans from outside the system. So, the neural system could, and likely would, rather quickly establish its own heuristics for what works and what doesn’t, and employ parts-to-whole (i.e. “bottom-up”) and whole-to-parts (i.e. “top-down”) self-assessments to provide a sort of “inside-out” perspective that “teaches” its builders what it structurally requires to optimize its functions. But we must ask whether we are really ready to learn what a machine is trying to teach us, and what we can – and should – do with this knowledge.

Next: Bypassing the human middleman: Neurally-modeled machines that “create” themselves?
