The issue that lurks just over the horizon of possibility is whether increasing complexification in generatively encoded “intelligent machines” could instantiate some form of consciousness. I argue that the most probable answer is “yes”. The system would become auto-referential and, in this way, acquire a “sense of self”. Leaving aside more deeply philosophical discussions of the topic, at the most basic level this means that the system would develop an awareness of its internal state and of external conditions, and would be able to discriminate between itself and things that are not itself. This is an important step, as it would lead to relationality – a set of functions that provide resonance or dissonance with particular types of (internal and/or external) environmental conditions, and reinforcement and reward for achieving certain goals or states. In this way, the system would acquire a sense of what the neuroscientist Antonio Damasio has called “a feeling of what happens”; in other words, a form of consciousness (and self-consciousness).
Recently, Adrian Carter discussed the move toward adopting a disease model of addiction. A disease model can be useful in that it often motivates and compels the search for prevention, cure, or at least some form of effective management. Of course, it is presumed that any such treatments would be developed and rendered in accordance with the underlying moral imperative of medical care to act in patients’ best interests. But this fosters the need for a more finely grained assessment of exactly what constitutes and entails the “good” of medical care, given the variety of levels and domains that reflect and involve patients’ values, goals, burdens, risks, and harms.