Recently, Mercier and Sperber reported on the role of reason in human cognition, social behavior, and the formulation of epistemological capital. In an evolutionary-developmental (evo-devo) neuroscientific light, this comports well with a bio-psychosocial model of both individual and cultural cognitive capability. As a species (and like many other species), we tend to augment our existing capabilities and skills and compensate for those we lack. In this way, the ability to reason may afford particular cognitive capacities that facilitate our social interactions and compensate for the limitations and restrictions imposed by a single point of view. It is sort of a combination of the "there's power in numbers" and "two heads are better than one" approaches to social cognition. I'm fond of referring to the late George Bugliarello's concept of BioSoMa as an interesting model to depict the engagement of social interaction and the use of tools (i.e., machines) in response to our biological abilities and limitations. As Mercier and Sperber note, reasoning seems to be based upon a set of fundamental cognitive constructs and intuitions, and it provides a mechanism with which to navigate the nuances of an issue. But the human ability to reason is no reason to expect a lack of bias in thought and action; rather, quite the opposite: reason provides a way to approach a situation and/or problem by engaging our subjective cognitive and emotional perspective in comparison (and perhaps contest) with the ideas of others. And frequently, it's a case of "let the best biases win."
Recently, Adrian Carter discussed the move toward adopting a disease model of addiction. A disease model can be useful in that it often substantiates and compels the search for prevention, cure, or at least some form of effective management. Of course, it's presumed that any such treatments would be developed and rendered in accordance with the underlying moral imperative of medical care to act in patients' best interests. But this fosters the need for a more finely grained assessment of exactly what constitutes and entails the "good" of medical care, given the variety of levels and domains that reflect and involve patients' values, goals, burdens, risks, and harms.
The employment of basic neuroscientific research (what are known in government parlance as "6.1 Level" studies) in translational development (so-called "6.2 Level" work) and in test and evaluation applications ("6.3 Level" uses) is not always a straightforward sequence of events. There are some well-done and very interesting basic neuroscientific findings that hint at translational and applied utility, and a recent demonstration that rats lack a neurological mechanism for finely tuned vertical orientation may be an example of such a study. Recent research by Robin Hayman, Madeleine Verriotis, Aleksandar Jovalekic, Andre Fenton, and Kathryn Jeffery ("Anisotropic encoding of three-dimensional space by place cells and grid cells") suggests that the rat brain does not process vertical-space information as efficiently or adeptly as horizontal and lateral field information. This may have a number of implications, both for an understanding of brain-environment interactions and for future research.
Neuro – see below
Lalia – from the Latin lallare, to sing "la la"; the use of language
It was with great interest that I read Deric Bownds' recent MindBlog re-post about the representation of inner lives, and his current post about the utility of being vague. I think that, taken together, these two concepts well describe the state of the field of neuroscience, and they nicely frame how neuroscience and the use of neurotechnology can affect the public mindset.
Larissa MacFarquhar's profile of Paul and Patricia Churchland in a February edition of The New Yorker magazine stated that the first family of neurophilosophy "…like to speculate about a day when whole chunks of English are replaced by scientific words that call a thing by its proper name, rather than some outworn metaphor." I'm all for that, and I respect most of Paul and Pat Churchland's work as being spot on the mark. But we might need to be careful about replacing one metaphor with another, lest we engage in this vocabulary exercise prematurely and/or get too carried away. There's a lot going on in the neural networks that make up the peripheral and central nervous systems, and while some of it is kind of a "toe bone leads to foot bone leads to leg bone" arrangement, such straightforward descriptions get dicey once we get inside the head bones and into the brain.