Creative Machines: Tomorrow’s Possibilities, Today’s Responsibilities

The issue that lurks right over the horizon of possibility is whether increasing complexification in generatively encoded “intelligent machines” could instantiate some form of consciousness. I argue that the most probable answer is “yes”. The system would become auto-referential, and in this way acquire a “sense of self”. Leaving aside more deeply philosophical discussions of the topic, at the most basic level this means that the system would develop an awareness of its internal state and of external conditions, and be able to discriminate between itself and things that are not itself. This is an important step, as it would lead to relationality – a set of functions that provide resonance or dissonance with particular types of (internal and/or external) environmental conditions, and reinforcement and reward for achieving certain goals or states – and in this way a sense of what neuroscientist Antonio Damasio has called “the feeling of what happens”; in other words, a form of consciousness (and self-consciousness).

The question is, what then? What will we make of this? I posit that if neuroscience is to have any value as a human endeavor, then the information it yields must be leveraged in both understanding and action. It’s not just what neuroscience informs and teaches; it’s about what we do with the knowledge we acquire. The discovery that an entity is painient and sentient is not esoteric, but rather means something both about that organism and about the ways it should be considered. Neurocentric criteria – namely, whether a being manifests the capacity for pain/suffering, some form of emotion, and awareness of self, and the type and extent of these properties – are arguably important for the way we morally regard – and ethically, legally and socially treat – other beings. These issues – and the questions they spawn – get particularly dicey given the capacity of neurally modeled robots to self-assess, manifest awareness, and self-develop and/or replicate. Yet the very fact that there is realistic discussion about our moral consideration of and for machines represents a shift in our epistemology and ethical paradigm.

And this prompts questions of whether, and in what ways, we can be prepared for the implications of new information. Let’s face it: the likelihood of conscious machines – as exciting as it may be – is still years away, even by the most optimistic estimate. So while a bit of “what if” speculation about mindful machines can rattle the girders of extant ethical constructs, I offer that ‘what if’ scenarios take a back seat to the real ‘what about’ questions raised by the nature and implications of neurocentric criteria for notions of normality and diversity, ontological status (e.g., of embryos, the profoundly brain-damaged, non-human animals, etc.), and the ways we form and formulate beliefs, policies and laws. Can neuroscience provide a metric for how we assign moral regard? Will insights into the neural basis of moral cognition, beliefs and action afford a foundation upon which to structure ethics, policies and laws? Perhaps, at least in some ways, but I prefer not to work in absolutes.

Rather, my take is that neuroscience can – and should – contribute to knowledge about the nature of human and non-human beings, and what being is all about. Is neuroscience an answer? For sure. Is it the only answer? Surely not, because I believe that any realistic approach to neuroscience must acknowledge the contextual basis of the embodied brain and the embeddedness of individuals in the spatiotemporal contingencies of society and culture. In other words, neuroscience works best within a (neuro)bio-psychosocial orientation – not in some esoteric or “new-agey” sense, but as an accurate depiction of the reciprocal interactions of the systems that make up organisms and their environments. In this way, neuroscience can provide purchase with which to probe ever deeper into existing questions of consciousness, cognition, beliefs, biases and behaviors, and to raise new questions – both about brain~mind and about the ways we employ the knowledge we possess to guide our consideration and treatment of (both human and non-human) others in what philosopher Owen Flanagan has called “ethics as human ecology”.

But a healthy measure of modesty is called for – neuroscience and its technologies are powerful tools, but like any tools, the responsibility to use them (and the knowledge and capabilities they bring) in the right ways rests in our hands. Let’s not overestimate their power, either. There’s much we still do not know about the brain, consciousness, and how the biological, psychological and social domains interact. And this takes me back to our musings about conscious machines… it’s fun to speculate on what neuroscience holds for the future, and the element of speculation imparts a flair of the fictional. But it’s folly not to critically assess what this science holds for the present, foolhardy not to recognize the promise – and perils – that such science and technology may entail, and frighteningly dangerous not to devote time, effort and resources to studying, and developing ways to prudently guide, each and every step ahead.
